fix: DockerSecretsResolver - don't normalize absolute paths like /var/www/html/...
Some checks failed
Deploy Application / deploy (push) Has been cancelled
21
deployment/legacy/stacks/stacks/gitea/.env.example
Normal file
@@ -0,0 +1,21 @@
# Gitea Configuration
# Copy this file to .env and adjust values

# Timezone
TZ=Europe/Berlin

# Gitea Domain
GITEA_DOMAIN=git.michaelschiemer.de

# MySQL Configuration
MYSQL_ROOT_PASSWORD=<generate-strong-password>
MYSQL_DATABASE=gitea
MYSQL_USER=gitea
MYSQL_PASSWORD=<generate-strong-password>

# Redis Configuration
REDIS_PASSWORD=<generate-strong-password>

# Gitea Settings
# Set to false to allow user registration
DISABLE_REGISTRATION=true
560
deployment/legacy/stacks/stacks/gitea/README.md
Normal file
@@ -0,0 +1,560 @@
# Gitea Stack - Self-Hosted Git Server

## Overview

Gitea acts as the central Git server with integrated CI/CD capabilities through Gitea Actions, handling:
- Git repository hosting
- User and organization management
- Pull requests and code reviews
- Issue tracking
- Gitea Actions for CI/CD (runner runs on development machine)
- API for automation

## Services

- **git.michaelschiemer.de** - Gitea Web Interface
- **git.michaelschiemer.de:2222** - SSH for Git operations
- **MySQL 8.0** - Database backend
- **Redis 7** - Cache, session, and queue storage

## Prerequisites

1. **Traefik Stack Running**
   ```bash
   cd ../traefik
   docker compose up -d
   ```

2. **DNS Configuration**
   Point `git.michaelschiemer.de` to your server IP (94.16.110.151)

3. **SSH Port Availability**
   Ensure port 2222 is open in your firewall for Git SSH operations

## Configuration

### 1. Create Environment File

```bash
cp .env.example .env
```

### 2. Generate Strong Passwords

```bash
# MySQL root password
openssl rand -base64 32

# MySQL gitea password
openssl rand -base64 32

# Redis password
openssl rand -base64 32
```

Update `.env` with generated passwords:
```env
MYSQL_ROOT_PASSWORD=<generated-password-1>
MYSQL_PASSWORD=<generated-password-2>
REDIS_PASSWORD=<generated-password-3>
```
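
The three `openssl` calls and the manual `.env` edit can be combined into one step. A minimal sketch, assuming `.env` was copied from `.env.example` and still contains the `<generate-strong-password>` placeholders (the self-contained fallback heredoc is only for demonstration):

```shell
#!/bin/bash
# Sketch: fill the password placeholders in .env with generated values.
# Assumes .env exists (cp .env.example .env); for a self-contained demo
# a minimal .env is created if it is missing.
set -euo pipefail

if [ ! -f .env ]; then
  cat > .env <<'EOF'
MYSQL_ROOT_PASSWORD=<generate-strong-password>
MYSQL_PASSWORD=<generate-strong-password>
REDIS_PASSWORD=<generate-strong-password>
EOF
fi

for var in MYSQL_ROOT_PASSWORD MYSQL_PASSWORD REDIS_PASSWORD; do
  # Strip /, + and = so the value is safe inside the sed expression
  pw="$(openssl rand -base64 32 | tr -d '/+=')"
  sed -i "s|^${var}=.*|${var}=${pw}|" .env
done

grep -E '^(MYSQL_ROOT_PASSWORD|MYSQL_PASSWORD|REDIS_PASSWORD)=' .env
```

Each run replaces all three values, so run it only before the first deployment (rotating passwords afterwards also requires updating MySQL and Redis).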

### 3. Adjust Configuration (Optional)

Edit `.env` for:
- Domain customization
- User registration settings
- Database configuration

## Deployment

### Initial Setup

```bash
# Deploy stack
docker compose up -d

# Check logs
docker compose logs -f

# Wait for MySQL initialization (30-60 seconds)
docker compose logs mysql | grep "ready for connections"

# Verify services are healthy
docker compose ps
```

### First Time Configuration

**Option 1: Automated Setup (Recommended)**

The Gitea initial setup can be automated using Ansible:

```bash
cd deployment/ansible

# 1. Set Gitea admin credentials in vault
ansible-vault edit secrets/production.vault.yml --vault-password-file secrets/.vault_pass

# Add these variables:
# vault_gitea_admin_username: "admin"
# vault_gitea_admin_password: "your-secure-password"
# vault_gitea_admin_email: "kontakt@michaelschiemer.de"

# 2. Run the setup playbook
ansible-playbook -i inventory/production.yml \
  playbooks/setup-gitea-initial-config.yml \
  --vault-password-file secrets/.vault_pass
```

The playbook will:
- Check if Gitea is already configured
- Generate `app.ini` with `INSTALL_LOCK = true` to skip the initial setup page
- Copy the configuration file to the Gitea container
- Create admin user via Gitea CLI with credentials from vault
- Use database settings from environment variables

**How it works:**
The playbook creates a complete `app.ini` configuration file with `INSTALL_LOCK = true` in the `[security]` section. This tells Gitea to skip the initial setup wizard. The admin user is then created using the `gitea admin user create` command.

**Option 2: Manual Setup**

1. **Access Gitea**: https://git.michaelschiemer.de

2. **Initial Setup Wizard**:
   - Database settings are pre-configured via environment variables
   - Set up admin account:
     - Username: `admin` (or your preference)
     - Email: `kontakt@michaelschiemer.de`
     - Password: Strong password
   - Server and third-party settings: Use defaults
   - Click "Install Gitea"

3. **Verify SSH Access**:
   ```bash
   # Test SSH connection (replace 'git' with your username after setup)
   ssh -T -p 2222 git@git.michaelschiemer.de
   ```

## Usage

### Creating a Repository

**Option 1: Automated Setup (Recommended)**

Use Ansible to automatically create the repository and configure Git remote:

```bash
cd deployment/ansible

ansible-playbook -i inventory/production.yml \
  playbooks/setup-gitea-repository.yml \
  --vault-password-file secrets/.vault_pass \
  -e "repo_name=michaelschiemer" \
  -e "repo_owner=michael" \
  -e "repo_private=false"
```

The playbook will:
- Create repository in Gitea via API
- Configure Git remote automatically
- Use credentials from Ansible Vault

**Option 2: Manual Setup**

1. Log in to https://git.michaelschiemer.de
2. Click "+" → "New Repository"
3. Fill in repository details
4. Clone via HTTPS or SSH:
   ```bash
   # HTTPS
   git clone https://git.michaelschiemer.de/username/repo.git

   # SSH
   git clone ssh://git@git.michaelschiemer.de:2222/username/repo.git
   ```
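
If the code already exists locally, cloning is unnecessary; the new Gitea repository can be attached as a remote instead. A small sketch (`demo-repo` and `username/repo` are illustrative placeholders):

```shell
#!/bin/bash
# Sketch: initialise a local repository and point it at the Gitea remote.
# "demo-repo" and "username/repo" are placeholders, not real names.
set -euo pipefail

mkdir -p demo-repo
git -C demo-repo init -q
git -C demo-repo remote add origin \
  ssh://git@git.michaelschiemer.de:2222/username/repo.git
git -C demo-repo remote -v
```

After this, `git push -u origin main` publishes the existing history to Gitea.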

### Configuration File

Gitea configuration is managed via the `app.ini` file:
- **Template**: `deployment/ansible/templates/gitea-app.ini.j2` (Ansible template)
- **Production**: Generated from template and deployed via Ansible playbook `setup-gitea-initial-config.yml`
- The `app.ini` is copied to the container at `/data/gitea/conf/app.ini`
- **Important**: `app.ini` is a minimal configuration. Cache, Session, Queue, and other settings are controlled via `GITEA__...` environment variables in `docker-compose.yml` which override `app.ini` settings on every container start.
- Configuration is based on the official Gitea example: https://github.com/go-gitea/gitea/blob/main/custom/conf/app.example.ini

**Key Configuration Sections:**
- `[server]`: Domain, ports, SSH settings
- `[database]`: PostgreSQL connection
- `[actions]`: Actions enabled, no GitHub dependency
- `[service]`: Registration settings
- `[cache]` / `[session]` / `[queue]`: Storage configuration

### Gitea Actions

Gitea Actions (GitHub Actions compatible) are enabled by default. To use them:

1. **Create `.gitea/workflows/` directory** in your repository
2. **Add workflow YAML files** (e.g., `deploy.yml`)
3. **Register a Runner** (see Runner setup section below)

**Note**: The Gitea Actions Runner should run on your **development machine**, not on the production server. See Stack 9 documentation for runner setup.
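
A minimal workflow is enough to confirm the runner wiring end to end. The file name, job name, and runner label below are illustrative; the `runs-on` label must match one advertised by your registered runner:

```yaml
# .gitea/workflows/ci.yml - minimal smoke test (illustrative)
name: CI
on: [push]

jobs:
  smoke:
    # Label must match one advertised by your registered runner
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Show commit
        run: git log -1 --oneline
```

Push the file and check the repository's Actions tab: if the job runs, the runner registration and network path are working.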

### User Management

**Disable Registration** (Default):
- Set `DISABLE_REGISTRATION=true` in `.env` (already default)
- Create users via Admin Panel

**Enable Registration**:
- Set `DISABLE_REGISTRATION=false` in `.env`
- Restart: `docker compose restart gitea`

### Organizations and Teams

1. Navigate to Organizations
2. Create organization
3. Add repositories to organization
4. Manage teams and permissions

## API Access

Gitea provides a comprehensive API:

```bash
# Generate API token
# Settings → Applications → Generate New Token

# Example: List repositories
curl -H "Authorization: token YOUR_TOKEN" \
  https://git.michaelschiemer.de/api/v1/user/repos
```

**API Documentation**: https://git.michaelschiemer.de/api/swagger

## Backup & Recovery

### Manual Backup

```bash
#!/bin/bash
# Backup script (run on production server)
BACKUP_DIR="/backups/gitea"
DATE=$(date +%Y%m%d_%H%M%S)

# Create backup directory
mkdir -p $BACKUP_DIR

# Backup Gitea data
docker run --rm \
  -v gitea-data:/data \
  -v $BACKUP_DIR:/backup \
  alpine tar czf /backup/gitea-data-$DATE.tar.gz -C /data .

# Backup MySQL database
# MYSQL_ROOT_PASSWORD must be set in this shell (e.g. source the stack's .env first)
docker exec gitea-mysql mysqldump \
  -u root -p$MYSQL_ROOT_PASSWORD \
  --all-databases \
  --single-transaction \
  --quick \
  --lock-tables=false \
  > $BACKUP_DIR/gitea-mysql-$DATE.sql

# Backup Redis data
docker run --rm \
  -v gitea-redis-data:/data \
  -v $BACKUP_DIR:/backup \
  alpine tar czf /backup/gitea-redis-$DATE.tar.gz -C /data .

echo "Backup completed: $BACKUP_DIR/*-$DATE.*"
```

### Restore from Backup

```bash
# Stop services
docker compose down

# Restore Gitea data
docker run --rm \
  -v gitea-data:/data \
  -v /backups/gitea:/backup \
  alpine tar xzf /backup/gitea-data-YYYYMMDD_HHMMSS.tar.gz -C /data

# Restore MySQL
cat /backups/gitea/gitea-mysql-YYYYMMDD_HHMMSS.sql | \
  docker exec -i gitea-mysql mysql -u root -p$MYSQL_ROOT_PASSWORD

# Restore Redis
docker run --rm \
  -v gitea-redis-data:/data \
  -v /backups/gitea:/backup \
  alpine tar xzf /backup/gitea-redis-YYYYMMDD_HHMMSS.tar.gz -C /data

# Start services
docker compose up -d
```

### Automated Backups

Add to crontab on production server:

```bash
# Daily backup at 2 AM
0 2 * * * /path/to/backup-gitea.sh

# Keep only last 7 days
0 3 * * * find /backups/gitea -type f -mtime +7 -delete
```
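
The retention line deletes files unconditionally, so it is worth dry-running the `find` expression before putting it in cron. A small sketch that exercises the same expression against throwaway files (the `retention-demo` directory and file names are illustrative):

```shell
#!/bin/bash
# Sketch: verify the retention expression against scratch files
# before trusting it with real backups.
set -euo pipefail

dir=./retention-demo
mkdir -p "$dir"
touch "$dir/gitea-mysql-new.sql"
touch -d '10 days ago' "$dir/gitea-mysql-old.sql"   # GNU touch

# Dry run: -print lists only what would be deleted (the old file)
find "$dir" -type f -mtime +7 -print

# Same expression as the cron entry, now with -delete
find "$dir" -type f -mtime +7 -delete
```

Note that `-mtime +7` matches files whose age rounds down to more than 7 whole 24-hour periods, i.e. strictly older than 8 days.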

## Monitoring

### Health Checks

```bash
# Check service health
docker compose ps

# Gitea health endpoint
curl -f https://git.michaelschiemer.de/api/healthz

# MySQL health
docker exec gitea-mysql mysqladmin ping -h localhost -u root -p$MYSQL_ROOT_PASSWORD

# Redis health
docker exec gitea-redis redis-cli -a $REDIS_PASSWORD ping
```

### Logs

```bash
# All services
docker compose logs -f

# Gitea only
docker compose logs -f gitea

# MySQL only
docker compose logs -f mysql

# Redis only
docker compose logs -f redis

# MySQL slow queries
docker exec gitea-mysql tail -f /var/log/mysql/slow-queries.log
```

### Resource Usage

```bash
# Container stats
docker stats gitea gitea-mysql gitea-redis

# Disk usage
docker system df -v | grep gitea
```

## Troubleshooting

### Gitea Not Starting

```bash
# Check logs
docker compose logs gitea

# Common issues:
# 1. MySQL not ready - wait 30-60 seconds
# 2. Database connection failed - check MYSQL_PASSWORD in .env
# 3. Redis connection failed - check REDIS_PASSWORD
```

### SSH Not Working

```bash
# Verify port 2222 is open
sudo ufw status | grep 2222

# Open if needed
sudo ufw allow 2222/tcp

# Test SSH connection
ssh -T -p 2222 git@git.michaelschiemer.de

# Check Gitea SSH settings
# Admin Panel → Configuration → Server and Other Services → SSH Server Domain
```

### Database Connection Issues

```bash
# Verify MySQL is running and healthy
docker compose ps mysql

# Test database connection
docker exec gitea-mysql mysql -u gitea -p$MYSQL_PASSWORD -e "SELECT 1;"

# Check MySQL logs
docker compose logs mysql | grep -i error
```

### Redis Connection Issues

```bash
# Verify Redis is running
docker compose ps redis

# Test Redis connection
docker exec gitea-redis redis-cli -a $REDIS_PASSWORD ping

# Check Redis logs
docker compose logs redis
```

### Performance Issues

**If Gitea has frequent outages or connection issues:**

1. **Update Gitea Configuration** (Recommended):
   ```bash
   cd deployment/ansible
   ansible-playbook -i inventory/production.yml \
     playbooks/update-gitea-config.yml \
     --vault-password-file secrets/.vault_pass
   ```

   This playbook will:
   - Enable Redis cache for better performance and persistence
   - Configure database connection pooling
   - Set connection limits to prevent "Connection reset by peer" errors

2. **Manual Troubleshooting**:
   ```bash
   # Check PostgreSQL slow queries
   docker exec gitea-postgres psql -U gitea -d gitea -c "SELECT * FROM pg_stat_activity;"

   # Check container resource usage
   docker stats gitea gitea-postgres gitea-redis

   # Check Gitea logs for errors
   docker compose logs --tail 100 gitea | grep -i error

   # Check Redis connection
   docker exec gitea-redis redis-cli -a $REDIS_PASSWORD ping
   ```

### Known Issues

**Bad Gateway after many rapid requests (15-20 reloads):**
- **Status**: Known issue, non-critical
- **Symptoms**: Gitea returns "Bad Gateway" after 15-20 rapid page reloads, recovers after a few seconds
- **Impact**: Low - Gitea is functional for normal usage
- **Possible causes**:
  - Container restart during high load
  - Connection pool exhaustion (mitigated with increased limits)
  - Traefik service discovery delay in host network mode
- **Workarounds**:
  - Wait a few seconds and retry
  - Use Redis cache (already enabled) for better performance
  - Consider adding rate limiting if needed (see Traefik middlewares)
- **Future improvements**:
  - Monitor and optimize connection pool usage
  - Consider adding rate limiting middleware for Gitea
  - Investigate Traefik service discovery in host network mode

### Reset Admin Password

```bash
# Connect to Gitea container
docker exec -it gitea bash

# Change admin password
gitea admin user change-password --username admin --password new-password
```

## Security

### Security Best Practices

1. **Disable User Registration**: Set `DISABLE_REGISTRATION=true`
2. **Strong Passwords**: Use generated passwords for all services
3. **Regular Updates**: Keep Gitea, MySQL, and Redis updated
4. **SSH Keys**: Prefer SSH keys over HTTPS for Git operations
5. **2FA**: Enable two-factor authentication for admin accounts
6. **API Token Security**: Rotate tokens regularly
7. **Firewall**: Only expose ports 80, 443, and 2222

### Update Stack

```bash
# Pull latest images
docker compose pull

# Recreate containers
docker compose up -d

# Verify
docker compose ps
```

### Security Headers

Security headers are applied via Traefik's `default-chain@file` middleware:
- HSTS
- Content-Type Nosniff
- XSS Protection
- Frame Deny
- CSP

## Integration with Other Stacks

### Docker Registry (Stack 3)

Gitea Actions can push built images to the private Docker Registry:

```yaml
# .gitea/workflows/deploy.yml
- name: Push to Registry
  run: |
    docker login registry.michaelschiemer.de -u ${{ secrets.REGISTRY_USER }} -p ${{ secrets.REGISTRY_PASS }}
    docker push registry.michaelschiemer.de/myapp:latest
```

### Application Stack (Stack 4)

Deploy applications via Gitea Actions + Ansible:

```yaml
- name: Deploy to Production
  run: |
    ansible-playbook -i inventory/production deploy.yml
```

## Performance Tuning

### MySQL Optimization

Adjust `mysql/conf.d/gitea.cnf`:
- `innodb_buffer_pool_size`: Increase for more RAM
- `max_connections`: Increase for more concurrent users
- `slow_query_log`: Monitor slow queries

### Redis Optimization

```bash
# Add to docker-compose.yml redis command:
# --maxmemory 512mb --maxmemory-policy allkeys-lru
```

### Gitea Configuration

Edit via Admin Panel → Configuration or `app.ini`:
- Enable caching for static assets
- Adjust session timeout
- Configure queue workers for Actions

## Additional Resources

- **Gitea Documentation**: https://docs.gitea.io/
- **Gitea Actions**: https://docs.gitea.io/en-us/usage/actions/overview/
- **API Documentation**: https://git.michaelschiemer.de/api/swagger
- **MySQL Tuning**: https://dev.mysql.com/doc/refman/8.0/en/optimization.html
121
deployment/legacy/stacks/stacks/gitea/docker-compose.yml
Normal file
@@ -0,0 +1,121 @@
services:
  gitea:
    image: gitea/gitea:1.25
    container_name: gitea
    restart: unless-stopped
    depends_on:
      - postgres
      - redis
    networks:
      - traefik-public
      - gitea-internal
    environment:
      # Container-specific settings only
      - TZ=Europe/Berlin
      - USER_UID=1000
      - USER_GID=1000
      # Postgres password for postgres container (not for Gitea config)
      - POSTGRES_PASSWORD=gitea_password
      # All Gitea configuration is now in app.ini (deployed via Ansible)
      # Environment variables removed for better reliability and maintainability
      # Migration benefits:
      # - Cache now works correctly (environment variables had a bug in Gitea 1.25)
      # - All settings are versioned in Git
      # - Better documentation and maintainability
    volumes:
      - gitea-data:/data
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    labels:
      - "traefik.enable=true"

      # HTTP Router configuration
      - "traefik.http.routers.gitea.rule=Host(`git.michaelschiemer.de`)"
      - "traefik.http.routers.gitea.entrypoints=websecure"
      - "traefik.http.routers.gitea.tls=true"
      - "traefik.http.routers.gitea.tls.certresolver=letsencrypt"
      - "traefik.http.routers.gitea.priority=100"
      # Service configuration (Docker provider uses port, not url)
      - "traefik.http.services.gitea.loadbalancer.server.port=3000"
      # ServersTransport for longer timeouts (prevents 504 for SSE/Long-Polling like /user/events)
      # Temporarily removed to test if this is causing the service discovery issue
      # - "traefik.http.services.gitea.loadbalancer.serversTransport=gitea-transport@docker"
      # - "traefik.http.serverstransports.gitea-transport.forwardingtimeouts.dialtimeout=10s"
      # - "traefik.http.serverstransports.gitea-transport.forwardingtimeouts.responseheadertimeout=120s"
      # - "traefik.http.serverstransports.gitea-transport.forwardingtimeouts.idleconntimeout=180s"
      # - "traefik.http.serverstransports.gitea-transport.maxidleconnsperhost=100"
      # X-Forwarded-Proto header (helps with redirects/cookies)
      - "traefik.http.middlewares.gitea-headers.headers.customrequestheaders.X-Forwarded-Proto=https"
      - "traefik.http.routers.gitea.middlewares=gitea-headers@docker"
      # Explicitly reference the service (like MinIO does)
      - "traefik.http.routers.gitea.service=gitea"
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/api/healthz"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 30s

  postgres:
    image: postgres:16-alpine
    container_name: gitea-postgres
    restart: unless-stopped
    networks:
      - gitea-internal
    environment:
      - TZ=Europe/Berlin
      - POSTGRES_DB=gitea
      - POSTGRES_USER=gitea
      - POSTGRES_PASSWORD=gitea_password
    command: >
      postgres
      -c max_connections=300
      -c authentication_timeout=180
      -c statement_timeout=30000
      -c idle_in_transaction_session_timeout=30000
    volumes:
      - postgres-data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER:-gitea} -d ${POSTGRES_DB:-gitea}"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 30s

  redis:
    image: redis:7-alpine
    container_name: gitea-redis
    restart: unless-stopped
    networks:
      - gitea-internal
    environment:
      - TZ=Europe/Berlin
    command: >
      redis-server
      --requirepass ${REDIS_PASSWORD:-gitea_redis_password}
      --appendonly yes
      --maxmemory 512mb
      --maxmemory-policy allkeys-lru
    volumes:
      - redis-data:/data
    healthcheck:
      test: ["CMD", "redis-cli", "--raw", "incr", "ping"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 10s

volumes:
  gitea-data:
    name: gitea-data
  postgres-data:
    name: gitea-postgres-data
  redis-data:
    name: gitea-redis-data

networks:
  traefik-public:
    external: true
  gitea-internal:
    name: gitea-internal
    driver: bridge
||||
17
deployment/legacy/stacks/stacks/minio/.env.example
Normal file
@@ -0,0 +1,17 @@
# MinIO Object Storage Stack Configuration
# Copy this file to .env and adjust values

# Timezone
TZ=Europe/Berlin

# MinIO Root Credentials
# Generate secure password with: openssl rand -base64 32
MINIO_ROOT_USER=minioadmin
MINIO_ROOT_PASSWORD=<generate-with-openssl-rand-base64-32>

# Domain Configuration
# API endpoint (S3-compatible)
MINIO_API_DOMAIN=minio-api.michaelschiemer.de

# Console endpoint (Web UI)
MINIO_CONSOLE_DOMAIN=minio.michaelschiemer.de
657
deployment/legacy/stacks/stacks/minio/README.md
Normal file
@@ -0,0 +1,657 @@
|
||||
# MinIO Object Storage Stack - S3-compatible Object Storage
|
||||
|
||||
## Overview
|
||||
|
||||
MinIO ist ein hochperformanter, S3-kompatibler Object Storage Service für private Cloud- und Edge-Computing-Umgebungen.
|
||||
|
||||
**Features**:
|
||||
- S3-kompatible API (Port 9000)
|
||||
- Web-basierte Console für Management (Port 9001)
|
||||
- SSL via Traefik
|
||||
- Persistent storage
|
||||
- Health checks und Monitoring
|
||||
- Multi-Tenant Bucket Management
|
||||
|
||||
## Services
|
||||
|
||||
- **minio-api.michaelschiemer.de** - S3-kompatible API Endpoint
|
||||
- **minio.michaelschiemer.de** - Web Console (Management UI)
|
||||
|
||||
## Prerequisites
|
||||
|
||||
1. **Traefik Stack Running**
|
||||
```bash
|
||||
cd ../traefik
|
||||
docker compose up -d
|
||||
```
|
||||
|
||||
2. **DNS Configuration**
|
||||
Point these domains to your server IP (94.16.110.151):
|
||||
- `minio-api.michaelschiemer.de`
|
||||
- `minio.michaelschiemer.de`
|
||||
|
||||
## Configuration
|
||||
|
||||
### 1. Create Environment File
|
||||
|
||||
```bash
|
||||
cp .env.example .env
|
||||
```
|
||||
|
||||
### 2. Generate MinIO Root Password
|
||||
|
||||
```bash
|
||||
openssl rand -base64 32
|
||||
```
|
||||
|
||||
Update `.env`:
|
||||
```env
|
||||
MINIO_ROOT_PASSWORD=<generated-password>
|
||||
```
|
||||
|
||||
**Important**: Change default `MINIO_ROOT_USER` in production!
|
||||
|
||||
### 3. Adjust Domains (Optional)
|
||||
|
||||
Edit `.env` to customize domains:
|
||||
```env
|
||||
MINIO_API_DOMAIN=storage-api.example.com
|
||||
MINIO_CONSOLE_DOMAIN=storage.example.com
|
||||
```
|
||||
|
||||
## Deployment
|
||||
|
||||
### Initial Setup
|
||||
|
||||
```bash
|
||||
# Ensure Traefik is running
|
||||
docker network inspect traefik-public
|
||||
|
||||
# Start MinIO
|
||||
docker compose up -d
|
||||
|
||||
# Check logs
|
||||
docker compose logs -f
|
||||
|
||||
# Verify health
|
||||
docker compose ps
|
||||
```
|
||||
|
||||
### Verify Deployment
|
||||
|
||||
```bash
|
||||
# Test API endpoint
|
||||
curl -I https://minio-api.michaelschiemer.de/minio/health/live
|
||||
|
||||
# Expected: HTTP/2 200
|
||||
|
||||
# Access Console
|
||||
open https://minio.michaelschiemer.de
|
||||
```
|
||||
|
||||
## Usage
|
||||
|
||||
### Web Console Access
|
||||
|
||||
1. Navigate to: https://minio.michaelschiemer.de
|
||||
2. Login with:
|
||||
- **Access Key**: Value of `MINIO_ROOT_USER`
|
||||
- **Secret Key**: Value of `MINIO_ROOT_PASSWORD`
|
||||
|
||||
### Create Bucket via Console
|
||||
|
||||
1. Login to Console
|
||||
2. Click "Create Bucket"
|
||||
3. Enter bucket name (e.g., `my-bucket`)
|
||||
4. Configure bucket settings:
|
||||
- Versioning
|
||||
- Object Locking
|
||||
- Quota
|
||||
- Retention policies
|
||||
|
||||
### S3 API Access
|
||||
|
||||
#### Using AWS CLI
|
||||
|
||||
```bash
|
||||
# Install AWS CLI (if not installed)
|
||||
pip install awscli
|
||||
|
||||
# Configure MinIO endpoint
|
||||
aws configure set default.s3.signature_version s3v4
|
||||
aws configure set default.s3.addressing_style virtual
|
||||
|
||||
# Set credentials
|
||||
export AWS_ACCESS_KEY_ID=minioadmin
|
||||
export AWS_SECRET_ACCESS_KEY=<MINIO_ROOT_PASSWORD>
|
||||
|
||||
# Test connection
|
||||
aws --endpoint-url https://minio-api.michaelschiemer.de s3 ls
|
||||
|
||||
# Create bucket
|
||||
aws --endpoint-url https://minio-api.michaelschiemer.de s3 mb s3://my-bucket
|
||||
|
||||
# Upload file
|
||||
aws --endpoint-url https://minio-api.michaelschiemer.de s3 cp file.txt s3://my-bucket/
|
||||
|
||||
# Download file
|
||||
aws --endpoint-url https://minio-api.michaelschiemer.de s3 cp s3://my-bucket/file.txt ./
|
||||
|
||||
# List objects
|
||||
aws --endpoint-url https://minio-api.michaelschiemer.de s3 ls s3://my-bucket/
|
||||
|
||||
# Delete object
|
||||
aws --endpoint-url https://minio-api.michaelschiemer.de s3 rm s3://my-bucket/file.txt
|
||||
|
||||
# Delete bucket
|
||||
aws --endpoint-url https://minio-api.michaelschiemer.de s3 rb s3://my-bucket
|
||||
```
|
||||
|
||||
#### Using MinIO Client (mc)
|
||||
|
||||
```bash
|
||||
# Install MinIO Client
|
||||
wget https://dl.min.io/client/mc/release/linux-amd64/mc
|
||||
chmod +x mc
|
||||
sudo mv mc /usr/local/bin/
|
||||
|
||||
# Configure alias
|
||||
mc alias set minio https://minio-api.michaelschiemer.de minioadmin <MINIO_ROOT_PASSWORD>
|
||||
|
||||
# Test connection
|
||||
mc admin info minio
|
||||
|
||||
# List buckets
|
||||
mc ls minio
|
||||
|
||||
# Create bucket
|
||||
mc mb minio/my-bucket
|
||||
|
||||
# Upload file
|
||||
mc cp file.txt minio/my-bucket/
|
||||
|
||||
# Download file
|
||||
mc cp minio/my-bucket/file.txt ./
|
||||
|
||||
# List objects
|
||||
mc ls minio/my-bucket/
|
||||
|
||||
# Remove object
|
||||
mc rm minio/my-bucket/file.txt
|
||||
|
||||
# Remove bucket
|
||||
mc rb minio/my-bucket
|
||||
```
|
||||
|
||||
#### Using cURL
|
||||
|
||||
```bash
|
||||
# List buckets
|
||||
curl -X GET https://minio-api.michaelschiemer.de \
|
||||
-u "minioadmin:<MINIO_ROOT_PASSWORD>"
|
||||
|
||||
# Upload file (using presigned URL from Console or SDK)
|
||||
```
|
||||
|
||||
### Programmatic Access
|
||||
|
||||
#### PHP (Using AWS SDK)
|
||||
|
||||
```php
|
||||
use Aws\S3\S3Client;
|
||||
use Aws\Exception\AwsException;
|
||||
|
||||
$s3Client = new S3Client([
|
||||
'version' => 'latest',
|
||||
'region' => 'us-east-1',
|
||||
'endpoint' => 'https://minio-api.michaelschiemer.de',
|
||||
'use_path_style_endpoint' => true,
|
||||
'credentials' => [
|
||||
'key' => 'minioadmin',
|
||||
'secret' => '<MINIO_ROOT_PASSWORD>',
|
||||
],
|
||||
]);
|
||||
|
||||
// Create bucket
|
||||
$s3Client->createBucket(['Bucket' => 'my-bucket']);
|
||||
|
||||
// Upload file
|
||||
$s3Client->putObject([
|
||||
'Bucket' => 'my-bucket',
|
||||
'Key' => 'file.txt',
|
||||
'Body' => fopen('/path/to/file.txt', 'r'),
|
||||
]);
|
||||
|
||||
// Download file
|
||||
$result = $s3Client->getObject([
|
||||
'Bucket' => 'my-bucket',
|
||||
'Key' => 'file.txt',
|
||||
]);
|
||||
echo $result['Body'];
|
||||
```
|
||||
|
||||
#### JavaScript/Node.js
|
||||
|
||||
```javascript
|
||||
const AWS = require('aws-sdk');
|
||||
const s3 = new AWS.S3({
|
||||
endpoint: 'https://minio-api.michaelschiemer.de',
|
||||
accessKeyId: 'minioadmin',
|
||||
secretAccessKey: '<MINIO_ROOT_PASSWORD>',
|
||||
s3ForcePathStyle: true,
|
||||
signatureVersion: 'v4',
|
||||
});
|
||||
|
||||
// Create bucket
|
||||
s3.createBucket({ Bucket: 'my-bucket' }, (err, data) => {
|
||||
if (err) console.error(err);
|
||||
else console.log('Bucket created');
|
||||
});
|
||||
|
||||
// Upload file
|
||||
const params = {
|
||||
Bucket: 'my-bucket',
|
||||
Key: 'file.txt',
|
||||
Body: require('fs').createReadStream('/path/to/file.txt'),
|
||||
};
|
||||
s3.upload(params, (err, data) => {
|
||||
if (err) console.error(err);
|
||||
else console.log('File uploaded:', data.Location);
|
||||
});
|
||||
```

#### Python (boto3)

```python
import boto3
from botocore.client import Config

s3_client = boto3.client(
    's3',
    endpoint_url='https://minio-api.michaelschiemer.de',
    aws_access_key_id='minioadmin',
    aws_secret_access_key='<MINIO_ROOT_PASSWORD>',
    config=Config(signature_version='s3v4'),
    region_name='us-east-1'
)

# Create bucket
s3_client.create_bucket(Bucket='my-bucket')

# Upload file
s3_client.upload_file('/path/to/file.txt', 'my-bucket', 'file.txt')

# Download file
s3_client.download_file('my-bucket', 'file.txt', '/path/to/downloaded.txt')

# List objects
response = s3_client.list_objects_v2(Bucket='my-bucket')
for obj in response.get('Contents', []):
    print(obj['Key'])
```

## User Management

### Create Access Keys via Console

1. Log in to the Console: https://minio.michaelschiemer.de
2. Navigate to "Access Keys" → "Create Access Key"
3. Assign policies (read-only, read-write, admin)
4. Save the Access Key and Secret Key (shown only once!)

### Create Access Keys via mc CLI

```bash
# Create a new user
mc admin user add minio myuser mypassword

# Create an access key (service account) for the user
mc admin user svcacct add minio myuser --name my-access-key

# The output shows the Access Key and Secret Key
```

### Policy Management

```bash
# List policies
mc admin policy list minio

# Create a custom policy
mc admin policy add minio readwrite-policy /path/to/policy.json

# Assign a policy to a user
mc admin policy set minio readwrite-policy user=myuser

# Remove a policy from a user
mc admin policy remove minio readwrite-policy user=myuser
```

**Example Policy** (`policy.json`):
```json
|
||||
{
|
||||
"Version": "2012-10-17",
|
||||
"Statement": [
|
||||
{
|
||||
"Effect": "Allow",
|
||||
"Action": [
|
||||
"s3:GetObject",
|
||||
"s3:PutObject",
|
||||
"s3:DeleteObject"
|
||||
],
|
||||
"Resource": ["arn:aws:s3:::my-bucket/*"]
|
||||
},
|
||||
{
|
||||
"Effect": "Allow",
|
||||
"Action": ["s3:ListBucket"],
|
||||
"Resource": ["arn:aws:s3:::my-bucket"]
|
||||
}
|
||||
]
|
||||
}
|
||||
```
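To make the statement semantics concrete, here is a minimal, illustrative evaluator for this example policy (Allow-only statements, exact action match, trailing-`*` resource wildcard). It is a sketch for understanding, not MinIO's actual IAM engine:

```python
def allows(policy, action, resource):
    """Check whether any Allow statement in the policy permits the request.

    Supports only exact action names and a '*' suffix wildcard on
    resources -- enough to reason about the example policy above."""
    for stmt in policy["Statement"]:
        if stmt["Effect"] != "Allow":
            continue
        if action not in stmt["Action"]:
            continue
        for res in stmt["Resource"]:
            if res.endswith("*"):
                if resource.startswith(res[:-1]):
                    return True
            elif resource == res:
                return True
    return False

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow",
         "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
         "Resource": ["arn:aws:s3:::my-bucket/*"]},
        {"Effect": "Allow",
         "Action": ["s3:ListBucket"],
         "Resource": ["arn:aws:s3:::my-bucket"]},
    ],
}

print(allows(policy, "s3:GetObject", "arn:aws:s3:::my-bucket/file.txt"))  # True
print(allows(policy, "s3:ListBucket", "arn:aws:s3:::other-bucket"))       # False
```

Note how object actions match only under `my-bucket/*`, while `s3:ListBucket` needs the bucket ARN itself; keeping the two statements separate is what makes the example policy work.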

## Integration with Application Stack

### Environment Variables

Add to `application/.env`:

```env
# MinIO Object Storage
MINIO_ENDPOINT=https://minio-api.michaelschiemer.de
MINIO_ACCESS_KEY=<access-key>
MINIO_SECRET_KEY=<secret-key>
MINIO_BUCKET=<bucket-name>
MINIO_USE_SSL=true
MINIO_REGION=us-east-1
```

### PHP Integration

```php
// Use the AWS SDK for PHP (or the MinIO PHP SDK)
use Aws\S3\S3Client;

$s3Client = new S3Client([
    'version' => 'latest',
    'region' => $_ENV['MINIO_REGION'],
    'endpoint' => $_ENV['MINIO_ENDPOINT'],
    'use_path_style_endpoint' => true,
    'credentials' => [
        'key' => $_ENV['MINIO_ACCESS_KEY'],
        'secret' => $_ENV['MINIO_SECRET_KEY'],
    ],
]);
```
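Services in other languages map the same variables onto an S3 client configuration in a uniform way. The stdlib-only Python helper below is an illustrative sketch (the function name and defaults are assumptions, the variable names are the ones from `application/.env` above) of the parsing an SDK needs:

```python
import os
from urllib.parse import urlparse

def minio_config_from_env(env=os.environ):
    """Assemble an S3-client configuration from the MINIO_* variables.

    Derives the hostname from MINIO_ENDPOINT and turns MINIO_USE_SSL
    into a boolean, mirroring what an S3 SDK expects."""
    endpoint = env.get("MINIO_ENDPOINT", "https://minio-api.michaelschiemer.de")
    parsed = urlparse(endpoint)
    return {
        "endpoint_url": endpoint,
        "host": parsed.hostname,
        "use_ssl": env.get("MINIO_USE_SSL", "true").lower() == "true",
        "access_key": env.get("MINIO_ACCESS_KEY", ""),
        "secret_key": env.get("MINIO_SECRET_KEY", ""),
        "bucket": env.get("MINIO_BUCKET", ""),
        "region": env.get("MINIO_REGION", "us-east-1"),
    }

cfg = minio_config_from_env({"MINIO_ACCESS_KEY": "k", "MINIO_SECRET_KEY": "s", "MINIO_BUCKET": "b"})
print(cfg["host"], cfg["use_ssl"], cfg["region"])
```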

## Backup & Recovery

### Manual Backup

```bash
#!/bin/bash
# backup-minio.sh

BACKUP_DIR="/backups/minio"
DATE=$(date +%Y%m%d_%H%M%S)

mkdir -p "$BACKUP_DIR"

# Back up MinIO data
docker run --rm \
  -v minio-data:/data \
  -v "$BACKUP_DIR":/backup \
  alpine tar czf /backup/minio-data-$DATE.tar.gz -C /data .

echo "Backup completed: $BACKUP_DIR/minio-data-$DATE.tar.gz"
```

### Restore from Backup

```bash
# Stop MinIO
docker compose down

# Restore data
docker run --rm \
  -v minio-data:/data \
  -v /backups/minio:/backup \
  alpine tar xzf /backup/minio-data-YYYYMMDD_HHMMSS.tar.gz -C /data

# Start MinIO
docker compose up -d
```

### Automated Backups

Add to crontab:

```bash
# Daily backup at 3 AM
0 3 * * * /path/to/backup-minio.sh

# Keep only the last 30 days
0 4 * * * find /backups/minio -type f -mtime +30 -delete
```
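The cron rule prunes by file modification time. The same "keep the last 30 days" logic, expressed against the timestamp embedded in the backup names produced by `backup-minio.sh`, looks like this (an illustrative Python sketch; the function name is an assumption):

```python
from datetime import datetime, timedelta

def backups_to_delete(filenames, now, keep_days=30):
    """Mirror of the `find -mtime +30` rule: given backup names like
    minio-data-YYYYMMDD_HHMMSS.tar.gz, return those older than keep_days."""
    cutoff = now - timedelta(days=keep_days)
    stale = []
    for name in filenames:
        stamp = name.removeprefix("minio-data-").removesuffix(".tar.gz")
        if datetime.strptime(stamp, "%Y%m%d_%H%M%S") < cutoff:
            stale.append(name)
    return stale

names = ["minio-data-20240101_030000.tar.gz", "minio-data-20240310_030000.tar.gz"]
print(backups_to_delete(names, datetime(2024, 3, 15)))  # → ['minio-data-20240101_030000.tar.gz']
```

Pruning by the embedded timestamp rather than `mtime` is slightly more robust if backups are ever copied between hosts, since copying resets modification times.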

### Bucket-Level Backup (Recommended)

Use MinIO's built-in replication or external tools:

```bash
# Sync a bucket to external storage
mc mirror minio/my-bucket /backup/my-bucket/

# Or sync to another MinIO instance
mc mirror minio/my-bucket remote-minio/my-bucket/
```

## Monitoring

### Health Checks

```bash
# Check MinIO health
docker compose ps

# API health endpoint
curl -f https://minio-api.michaelschiemer.de/minio/health/live

# Check storage usage
docker exec minio du -sh /data
```

### Logs

```bash
# View logs
docker compose logs -f

# Check for errors
docker compose logs minio | grep -i error

# Monitor API access
docker compose logs -f minio | grep "GET /"
```

### Storage Statistics

```bash
# Check volume size
docker volume inspect minio-data

# Check disk usage
docker system df -v | grep minio

# List buckets via mc
mc ls minio
```

### Metrics (via mc)

```bash
# Server info
mc admin info minio

# Service status
mc admin service status minio

# Trace operations
mc admin trace minio
```

## Performance Tuning

### MinIO Configuration

For high-traffic scenarios, edit `docker-compose.yml`:

```yaml
environment:
  # Cache tiering to speed up repeated reads
  - MINIO_CACHE_DRIVES=/mnt/cache1,/mnt/cache2
  - MINIO_CACHE_QUOTA=80
  - MINIO_CACHE_AFTER=0
  - MINIO_CACHE_WATERMARK_LOW=70
  - MINIO_CACHE_WATERMARK_HIGH=90
```

### Storage Optimization

```bash
# Monitor storage growth
du -sh /var/lib/docker/volumes/minio-data/

# Compression is handled automatically by MinIO

# Set bucket quotas via the Console or mc
mc admin quota set minio/my-bucket --hard 100GB
```

## Troubleshooting

### Cannot Access Console

```bash
# Check the service is running
docker compose ps

# Check Traefik routing
docker exec traefik cat /etc/traefik/traefik.yml

# Check the network
docker network inspect traefik-public | grep minio

# Test from the server
curl -k https://localhost:9001
```

### Authentication Failed

```bash
# Verify environment variables
docker exec minio env | grep MINIO_ROOT

# Check logs
docker compose logs minio | grep -i auth

# Reset the root password (stop container, remove volume, restart)
```

### SSL Certificate Issues

```bash
# Verify the Traefik certificate
docker exec traefik cat /acme.json | grep minio

# Test SSL
openssl s_client -connect minio-api.michaelschiemer.de:443 \
  -servername minio-api.michaelschiemer.de < /dev/null
```

### Storage Issues

```bash
# Check the volume mount
docker exec minio df -h /data

# Check for corrupted data
docker exec minio find /data -type f -name "*.json" | head

# Check disk space
df -h /var/lib/docker/volumes/minio-data/
```

### API Connection Errors

```bash
# Verify the endpoint URL
curl -I https://minio-api.michaelschiemer.de/minio/health/live

# Test with credentials
curl -u minioadmin:<password> https://minio-api.michaelschiemer.de

# Check CORS settings (if needed for web apps)
mc admin config set minio api cors_allow_origin="https://yourdomain.com"
```

## Security

### Security Best Practices

1. **Strong Credentials**: Use strong passwords for the root user and access keys
2. **Change Default Root User**: Don't use `minioadmin` in production
3. **SSL Only**: Always use HTTPS (enforced via Traefik)
4. **Access Key Rotation**: Rotate access keys regularly
5. **Policy-Based Access**: Use IAM policies to limit permissions
6. **Bucket Policies**: Configure bucket-level policies
7. **Audit Logging**: Enable audit logging for compliance
8. **Encryption**: Enable encryption at rest and in transit

### Enable Audit Logging

```bash
# Configure audit logging
mc admin config set minio audit_webhook \
  endpoint=https://log-service.example.com/webhook \
  auth_token=<token>
```

### Enable Encryption

```bash
# Configure server-side encryption keys (KMS-backed)
mc admin config set minio encryption \
  sse-s3 enable \
  kms endpoint=https://kms.example.com \
  kms-key-id=<key-id>
```

### Update Stack

```bash
# Pull latest images
docker compose pull

# Recreate containers
docker compose up -d

# Verify
docker compose ps
```

### Security Headers

Security headers are applied via Traefik's `default-chain@file` middleware:
- HSTS
- Content-Type Nosniff
- XSS Protection
- Frame Deny

## Additional Resources

- **MinIO Documentation**: https://min.io/docs/
- **S3 API Compatibility**: https://min.io/docs/minio/linux/reference/minio-mc/mc.html
- **MinIO Client (mc)**: https://min.io/docs/minio/linux/reference/minio-mc.html
- **MinIO Erasure Coding**: https://min.io/docs/minio/linux/operations/concepts/erasure-coding.html
- **MinIO Security**: https://min.io/docs/minio/linux/operations/security.html
50
deployment/legacy/stacks/stacks/minio/docker-compose.yml
Normal file
@@ -0,0 +1,50 @@
services:
  minio:
    image: minio/minio:latest
    container_name: minio
    restart: unless-stopped
    networks:
      - traefik-public
    environment:
      - TZ=Europe/Berlin
      - MINIO_ROOT_USER=${MINIO_ROOT_USER:-minioadmin}
      - MINIO_ROOT_PASSWORD=${MINIO_ROOT_PASSWORD}
    command: server /data --console-address ":9001"
    volumes:
      - minio-data:/data
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 10s
    labels:
      - "traefik.enable=true"

      # API Router (S3-compatible endpoint)
      - "traefik.http.routers.minio-api.rule=Host(`${MINIO_API_DOMAIN:-minio-api.michaelschiemer.de}`)"
      - "traefik.http.routers.minio-api.entrypoints=websecure"
      - "traefik.http.routers.minio-api.tls=true"
      - "traefik.http.routers.minio-api.tls.certresolver=letsencrypt"
      - "traefik.http.routers.minio-api.service=minio-api"
      - "traefik.http.routers.minio-api.middlewares=default-chain@file"
      - "traefik.http.services.minio-api.loadbalancer.server.port=9000"

      # Console Router (Web UI)
      - "traefik.http.routers.minio-console.rule=Host(`${MINIO_CONSOLE_DOMAIN:-minio.michaelschiemer.de}`)"
      - "traefik.http.routers.minio-console.entrypoints=websecure"
      - "traefik.http.routers.minio-console.tls=true"
      - "traefik.http.routers.minio-console.tls.certresolver=letsencrypt"
      - "traefik.http.routers.minio-console.service=minio-console"
      - "traefik.http.routers.minio-console.middlewares=default-chain@file"
      - "traefik.http.services.minio-console.loadbalancer.server.port=9001"

volumes:
  minio-data:
    name: minio-data

networks:
  traefik-public:
    external: true
21
deployment/legacy/stacks/stacks/monitoring/.env.example
Normal file
@@ -0,0 +1,21 @@
# Monitoring Stack Environment Configuration
# Copy to .env and configure with your actual values

# Domain Configuration
DOMAIN=michaelschiemer.de

# Grafana Configuration
GRAFANA_ADMIN_USER=admin
GRAFANA_ADMIN_PASSWORD=changeme_secure_password

# Grafana Plugins (comma-separated)
# Common useful plugins:
# - grafana-clock-panel
# - grafana-piechart-panel
# - grafana-worldmap-panel
GRAFANA_PLUGINS=

# Prometheus BasicAuth
# Generate with: htpasswd -nb admin password
# Format: username:hashed_password
PROMETHEUS_AUTH=admin:$$apr1$$xyz...
751
deployment/legacy/stacks/stacks/monitoring/README.md
Normal file
@@ -0,0 +1,751 @@
# Stack 6: Monitoring (Portainer + Grafana + Prometheus)

Comprehensive monitoring stack for infrastructure and application observability.

## Overview

This stack provides complete monitoring and visualization capabilities for the entire infrastructure:

- **Prometheus**: Time-series metrics collection and alerting
- **Grafana**: Metrics visualization with pre-configured dashboards
- **Portainer**: Container management UI
- **Node Exporter**: Host system metrics (CPU, memory, disk, network)
- **cAdvisor**: Container resource usage metrics
- **Alertmanager**: Alert routing and management (via Prometheus)

## Features

### Prometheus
- Multi-target scraping (node-exporter, cAdvisor, Traefik)
- 15-second scrape interval for near real-time metrics
- 15-day retention period
- Pre-configured alert rules for critical conditions
- Built-in alerting engine
- Service discovery via static configs
- HTTPS support with BasicAuth protection

### Grafana
- Pre-configured Prometheus datasource
- Three comprehensive dashboards:
  - **Docker Containers**: Container CPU, memory, network I/O, restarts
  - **Host System**: System CPU, memory, disk, network, uptime
  - **Traefik**: Request rates, response times, status codes, error rates
- Auto-provisioning (no manual configuration needed)
- HTTPS access via Traefik
- 30-second auto-refresh
- Dark theme for reduced eye strain

### Portainer
- Web-based Docker management UI
- Container start/stop/restart/logs
- Stack management and deployment
- Volume and network management
- Resource usage visualization
- HTTPS access via Traefik

### Node Exporter
- Host system metrics:
  - CPU usage by core and mode
  - Memory usage and available memory
  - Disk usage by filesystem
  - Network I/O by interface
  - System load averages
  - System uptime

### cAdvisor
- Container metrics:
  - CPU usage per container
  - Memory usage per container
  - Network I/O per container
  - Disk I/O per container
  - Container restart counts
  - Container health status
## Services

| Service | Domain | Port | Purpose |
|---------|--------|------|---------|
| Grafana | grafana.michaelschiemer.de | 3000 | Metrics visualization |
| Prometheus | prometheus.michaelschiemer.de | 9090 | Metrics collection |
| Portainer | portainer.michaelschiemer.de | 9000/9443 | Container management |
| Node Exporter | - | 9100 | Host metrics (internal) |
| cAdvisor | - | 8080 | Container metrics (internal) |

## Prerequisites

- Traefik stack deployed and running (Stack 1)
- Docker networks: `traefik-public`, `monitoring`
- Docker Swarm initialized (if using swarm mode)
- Domain DNS configured (grafana/prometheus/portainer subdomains)

## Directory Structure

```
monitoring/
├── docker-compose.yml               # Main stack definition
├── .env.example                     # Environment template
├── prometheus/
│   ├── prometheus.yml               # Prometheus configuration
│   └── alerts.yml                   # Alert rules
├── grafana/
│   ├── provisioning/
│   │   ├── datasources/
│   │   │   └── prometheus.yml       # Auto-configured datasource
│   │   └── dashboards/
│   │       └── dashboard.yml        # Dashboard provisioning
│   └── dashboards/
│       ├── docker-containers.json   # Container metrics dashboard
│       ├── host-system.json         # Host metrics dashboard
│       └── traefik.json             # Traefik metrics dashboard
└── README.md                        # This file
```

## Configuration

### 1. Create Environment File

```bash
cp .env.example .env
```

### 2. Configure Environment Variables

Edit `.env` and set the following variables:

```bash
# Domain Configuration
DOMAIN=michaelschiemer.de

# Grafana Configuration
GRAFANA_ADMIN_USER=admin
GRAFANA_ADMIN_PASSWORD=<generate-strong-password>

# Prometheus Configuration
PROMETHEUS_USER=admin
PROMETHEUS_PASSWORD=<generate-strong-password>

# Portainer Configuration
PORTAINER_ADMIN_PASSWORD=<generate-strong-password>

# Network Configuration
TRAEFIK_NETWORK=traefik-public
MONITORING_NETWORK=monitoring
```

### 3. Generate Strong Passwords

```bash
# Generate random passwords
openssl rand -base64 32

# For Prometheus BasicAuth (bcrypt hash)
docker run --rm httpd:alpine htpasswd -nbB admin "your-password" | cut -d ":" -f 2
```

### 4. Update Traefik BasicAuth (Optional)

If using Prometheus BasicAuth, add the bcrypt hash to the Traefik labels in `docker-compose.yml`:

```yaml
- "traefik.http.middlewares.prometheus-auth.basicauth.users=admin:$$2y$$05$$..."
```
## Deployment

### Deploy Stack

```bash
cd /home/michael/dev/michaelschiemer/deployment/stacks/monitoring

# Deploy with Docker Compose
docker compose up -d

# Or with Docker Stack (Swarm mode)
docker stack deploy -c docker-compose.yml monitoring
```

### Verify Deployment

```bash
# Check running containers
docker compose ps

# Check service logs
docker compose logs -f grafana
docker compose logs -f prometheus

# Check Prometheus targets
curl -u admin:password https://prometheus.michaelschiemer.de/api/v1/targets
```

### Initial Access

1. **Grafana**: https://grafana.michaelschiemer.de
   - Login: `admin` / `<GRAFANA_ADMIN_PASSWORD>`
   - Dashboards are pre-loaded and ready to use

2. **Prometheus**: https://prometheus.michaelschiemer.de
   - BasicAuth: `admin` / `<PROMETHEUS_PASSWORD>`
   - Check targets at `/targets`
   - View alerts at `/alerts`

3. **Portainer**: https://portainer.michaelschiemer.de
   - First login: set the admin password
   - Connect to the local Docker environment
## Usage
|
||||
|
||||
### Grafana Dashboards
|
||||
|
||||
#### Docker Containers Dashboard
|
||||
Access: https://grafana.michaelschiemer.de/d/docker-containers
|
||||
|
||||
**Metrics Displayed**:
|
||||
- Container CPU Usage % (per container, timeseries)
|
||||
- Container Memory Usage (bytes per container, timeseries)
|
||||
- Containers Running (current count, stat)
|
||||
- Container Restarts in 5m (rate with thresholds, stat)
|
||||
- Container Network I/O (RX/TX per container, timeseries)
|
||||
|
||||
**Use Cases**:
|
||||
- Identify containers with high resource usage
|
||||
- Monitor container stability (restart rates)
|
||||
- Track network bandwidth consumption
|
||||
- Verify all expected containers are running
|
||||
|
||||
#### Host System Dashboard
|
||||
Access: https://grafana.michaelschiemer.de/d/host-system
|
||||
|
||||
**Metrics Displayed**:
|
||||
- CPU Usage % (historical and current)
|
||||
- Memory Usage % (historical and current)
|
||||
- Disk Usage % (root filesystem, historical and current)
|
||||
- Network I/O (RX/TX by interface)
|
||||
- System Uptime (seconds since boot)
|
||||
|
||||
**Thresholds**:
|
||||
- Green: < 80% usage
|
||||
- Yellow: 80-90% usage
|
||||
- Red: > 90% usage
|
||||
|
||||
**Use Cases**:
|
||||
- Monitor server health and resource utilization
|
||||
- Identify resource bottlenecks
|
||||
- Plan capacity upgrades
|
||||
- Track system stability (uptime)
|
||||
|
||||
#### Traefik Dashboard
|
||||
Access: https://grafana.michaelschiemer.de/d/traefik
|
||||
|
||||
**Metrics Displayed**:
|
||||
- Request Rate by Service (req/s, timeseries)
|
||||
- Response Time p95/p99 (milliseconds, timeseries)
|
||||
- HTTP Status Codes (2xx/4xx/5xx stacked, color-coded)
|
||||
- Service Status (Up/Down per service)
|
||||
- Requests per Minute (total)
|
||||
- 4xx Error Rate (percentage)
|
||||
- 5xx Error Rate (percentage)
|
||||
- Active Services (count)
|
||||
|
||||
**Thresholds**:
|
||||
- 4xx errors: Green < 5%, Yellow < 10%, Red ≥ 10%
|
||||
- 5xx errors: Green < 1%, Yellow < 5%, Red ≥ 5%
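The threshold bands translate directly into a small classification rule. As an illustrative Python sketch (the function names are assumptions; the cut-offs are the panel values above):

```python
def severity(rate_4xx, rate_5xx):
    """Classify 4xx/5xx error-rate percentages into the dashboard bands:
    4xx: green < 5%, yellow < 10%, red >= 10%
    5xx: green < 1%, yellow < 5%, red >= 5%."""
    def band(value, yellow, red):
        if value >= red:
            return "red"
        return "yellow" if value >= yellow else "green"
    return band(rate_4xx, 5, 10), band(rate_5xx, 1, 5)

print(severity(3.0, 0.5))   # → ('green', 'green')
print(severity(7.5, 4.0))   # → ('yellow', 'yellow')
```

Note that the 5xx bands are much tighter than the 4xx bands: client errors are expected background noise, while server errors at even a few percent indicate a real problem.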

**Use Cases**:
- Monitor HTTP traffic patterns
- Identify performance issues (high latency)
- Track error rates and types
- Verify service availability
### Prometheus Queries

#### Common PromQL Examples

**CPU Usage**:
```promql
# Overall CPU usage
100 - (avg(rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100)

# Per-core CPU usage
rate(node_cpu_seconds_total[5m]) * 100
```

**Memory Usage**:
```promql
# Memory usage percentage
100 - ((node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes) * 100)

# Memory available in GB
node_memory_MemAvailable_bytes / 1024 / 1024 / 1024
```

**Disk Usage**:
```promql
# Disk usage percentage
100 - ((node_filesystem_avail_bytes{mountpoint="/"} / node_filesystem_size_bytes{mountpoint="/"}) * 100)

# Disk I/O rate
rate(node_disk_io_time_seconds_total[5m])
```

**Container Metrics**:
```promql
# Container CPU usage
sum(rate(container_cpu_usage_seconds_total{name!~".*exporter.*"}[5m])) by (name) * 100

# Container memory usage
sum(container_memory_usage_bytes{name!~".*exporter.*"}) by (name)

# Container network I/O
rate(container_network_receive_bytes_total[5m])
rate(container_network_transmit_bytes_total[5m])
```

**Traefik Metrics**:
```promql
# Request rate by service
sum(rate(traefik_service_requests_total[5m])) by (service)

# Response time percentiles
histogram_quantile(0.95, sum(rate(traefik_service_request_duration_seconds_bucket[5m])) by (service, le))

# Error rate
sum(rate(traefik_service_requests_total{code=~"5.."}[5m])) / sum(rate(traefik_service_requests_total[5m])) * 100
```
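As a sanity check on the error-rate expression, the same computation over plain counters (an illustrative Python sketch, not PromQL; the function name is an assumption):

```python
def error_rate(status_counts):
    """Share of 5xx responses among all requests, as a percentage,
    mirroring the PromQL error-rate query above.

    status_counts maps an HTTP status code to a request count."""
    total = sum(status_counts.values())
    errors = sum(n for code, n in status_counts.items() if str(code).startswith("5"))
    return 100.0 * errors / total if total else 0.0

# 1000 requests, of which 15 + 5 = 20 are 5xx → 2.0%
print(error_rate({200: 950, 404: 30, 500: 15, 503: 5}))  # → 2.0
```

In PromQL the division happens on rates over a 5-minute window rather than on absolute counters, but the ratio is the same idea.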

### Alert Management

#### Configured Alerts

Alerts are defined in `prometheus/alerts.yml`:

1. **HostHighCPU**: CPU usage > 80% for 5 minutes
2. **HostHighMemory**: Memory usage > 80% for 5 minutes
3. **HostDiskSpaceLow**: Disk usage > 80%
4. **ContainerHighCPU**: Container CPU > 80% for 5 minutes
5. **ContainerHighMemory**: Container memory > 80% for 5 minutes
6. **ServiceDown**: Service unavailable
7. **HighErrorRate**: Error rate > 5% for 5 minutes
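
Since `alerts.yml` itself is not reproduced here, a sketch of what a rule like **HostHighCPU** typically looks like (illustrative; the group name and exact expression in the actual file may differ):

```yaml
groups:
  - name: host-alerts
    rules:
      - alert: HostHighCPU
        expr: 100 - (avg(rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100) > 80
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "Host CPU usage above 80% for more than 5 minutes"
```

The `for: 5m` clause is what implements the "for 5 minutes" condition: the expression must stay true for that long before the alert fires.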

#### View Active Alerts

```bash
# Via the Prometheus UI
https://prometheus.michaelschiemer.de/alerts

# Via the API
curl -u admin:password https://prometheus.michaelschiemer.de/api/v1/alerts

# Check alert rules
curl -u admin:password https://prometheus.michaelschiemer.de/api/v1/rules
```

#### Silence Alerts

Prometheus itself only evaluates and fires alerts; silences are managed by Alertmanager. During maintenance, create a silence via the Alertmanager UI or its `amtool` CLI:

```bash
# Silence an alert for one hour (amtool talks to Alertmanager, not Prometheus)
amtool silence add alertname=HostHighCPU --duration=1h --comment="maintenance"
```

### Portainer Usage

#### Container Management

1. Navigate to https://portainer.michaelschiemer.de
2. Select the "Local" environment
3. Go to the "Containers" section
4. Available actions:
   - Start/Stop/Restart containers
   - View logs (live stream)
   - Inspect container details
   - Execute commands in containers
   - View resource statistics

#### Stack Management

1. Go to the "Stacks" section
2. View deployed stacks
3. Actions available:
   - View stack definition
   - Update stack (edit compose file)
   - Stop/Start entire stack
   - Remove stack

#### Volume Management

1. Go to the "Volumes" section
2. View volume details and size
3. Browse volume contents
4. Backup/restore volumes
## Integration with Other Stacks

### Stack 1: Traefik
- Provides the HTTPS reverse proxy for Grafana, Prometheus, and Portainer
- Automatic SSL certificate management
- BasicAuth middleware for Prometheus

### Stack 2: Gitea
- Monitor Gitea container resources
- Track HTTP requests to Gitea via the Traefik dashboard
- Alert on Gitea service downtime

### Stack 3: Docker Registry
- Monitor registry container resources
- Track registry HTTP requests
- Alert on registry unavailability

### Stack 4: Application
- Monitor PHP-FPM, Nginx, Redis, and Worker containers
- Track application response times
- Monitor queue worker health

### Stack 5: PostgreSQL
- Monitor database container resources
- Track PostgreSQL metrics (if postgres_exporter is added)
- Alert on database unavailability
## Monitoring Best Practices

### 1. Regular Dashboard Review
- Check dashboards daily for anomalies
- Review error rates and response times
- Monitor resource utilization trends

### 2. Alert Configuration
- Tune alert thresholds based on baseline metrics
- Avoid alert fatigue (too many non-critical alerts)
- Document alert response procedures

### 3. Capacity Planning
- Review resource usage trends weekly
- Plan capacity upgrades before hitting limits
- Monitor growth rates for proactive scaling

### 4. Performance Optimization
- Identify containers with high resource usage
- Optimize slow endpoints (high p95/p99 latency)
- Balance load across services

### 5. Security Monitoring
- Monitor failed authentication attempts
- Track unusual traffic patterns
- Review service availability trends
## Troubleshooting

### Grafana Issues

#### Dashboard Not Loading
```bash
# Check Grafana logs
docker compose logs grafana

# Verify Grafana is healthy
curl http://localhost:3000/api/health

# Restart Grafana
docker compose restart grafana
```

#### Missing Metrics
```bash
# Check the Prometheus targets (from inside the monitoring network)
curl http://prometheus:9090/api/v1/targets

# Verify Prometheus is scraping
docker compose logs prometheus | grep "Scrape"

# Check network connectivity
docker compose exec grafana ping prometheus
```

### Prometheus Issues

#### Targets Down
```bash
# Check target status
curl -u admin:password https://prometheus.michaelschiemer.de/api/v1/targets

# Verify target services are running
docker compose ps

# Check the Prometheus configuration
docker compose exec prometheus cat /etc/prometheus/prometheus.yml

# Reload the configuration
curl -X POST -u admin:password https://prometheus.michaelschiemer.de/-/reload
```

#### High Memory Usage
```bash
# Check Prometheus memory
docker stats prometheus

# Reduce the retention period in docker-compose.yml:
# --storage.tsdb.retention.time=7d

# Increase the scrape interval in prometheus.yml:
# scrape_interval: 30s
```

### Node Exporter Issues

#### No Host Metrics
```bash
# Check node-exporter is running
docker compose ps node-exporter

# Test the metrics endpoint
curl http://localhost:9100/metrics

# Check Prometheus scraping
docker compose logs prometheus | grep node-exporter
```

### cAdvisor Issues

#### No Container Metrics
```bash
# Check cAdvisor is running
docker compose ps cadvisor

# Test the metrics endpoint
curl http://localhost:8080/metrics

# Verify the Docker socket mount
docker compose exec cadvisor ls -la /var/run/docker.sock
```

### Portainer Issues

#### Cannot Access UI
```bash
# Check Portainer is running
docker compose ps portainer

# Check Traefik routing
docker compose -f ../traefik/docker-compose.yml logs

# Verify network connectivity
docker network ls | grep monitoring
```

#### Cannot Connect to Docker
```bash
# Verify Docker socket permissions
ls -la /var/run/docker.sock

# Check Portainer logs
docker compose logs portainer

# Restart Portainer
docker compose restart portainer
```

## Performance Tuning
|
||||
|
||||
### Prometheus Optimization
|
||||
|
||||
#### Reduce Memory Usage
|
||||
```yaml
|
||||
# In docker-compose.yml, adjust retention:
|
||||
command:
|
||||
- '--storage.tsdb.retention.time=7d' # Reduce from 15d
|
||||
- '--storage.tsdb.retention.size=5GB' # Add size limit
|
||||
```
|
||||
|
||||
#### Optimize Scrape Intervals
|
||||
```yaml
|
||||
# In prometheus/prometheus.yml:
|
||||
global:
|
||||
scrape_interval: 30s # Increase from 15s for less load
|
||||
evaluation_interval: 30s
|
||||
```
|
||||
|
||||
#### Reduce Cardinality
|
||||
```yaml
|
||||
# In prometheus/prometheus.yml, add metric_relabel_configs:
|
||||
metric_relabel_configs:
|
||||
- source_labels: [__name__]
|
||||
regex: 'unused_metric_.*'
|
||||
action: drop
|
||||
```

### Grafana Optimization

#### Reduce Query Load
```json
// In dashboard JSON, adjust the refresh rate:
"refresh": "1m"  // Increase from 30s
```

#### Optimize Panel Queries
- Use recording rules for expensive queries
- Reduce the time range for heavy queries
- Use an appropriate resolution (step parameter)

### Storage Optimization

#### Prometheus Data Volume
```bash
# Check current size
du -sh volumes/prometheus/

# Clean up deleted series (requires --web.enable-admin-api on Prometheus)
docker compose exec prometheus curl -X POST http://localhost:9090/api/v1/admin/tsdb/clean_tombstones
```

#### Grafana Data Volume
```bash
# Check current size
du -sh volumes/grafana/

# Reset the admin password if you are locked out
docker compose exec grafana grafana-cli admin reset-admin-password <new-password>
```
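
The recording-rule suggestion above could look like the following Prometheus rules file. This is a sketch: the file path and rule names are assumptions, not part of this stack; the file would need to be mounted and listed under `rule_files` in `prometheus.yml` alongside `alerts.yml`.

```yaml
# prometheus/recording-rules.yml (hypothetical path)
groups:
  - name: container_recording_rules
    interval: 30s
    rules:
      # Pre-compute per-container CPU usage so dashboards can query the
      # cheap recorded series instead of re-evaluating the raw rate().
      - record: name:container_cpu_usage:rate5m
        expr: sum(rate(container_cpu_usage_seconds_total{name!~".*exporter.*"}[5m])) by (name) * 100
      - record: name:container_memory_usage:bytes
        expr: sum(container_memory_usage_bytes{name!~".*exporter.*"}) by (name)
```

Dashboard panels would then query `name:container_cpu_usage:rate5m` directly.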

## Security Considerations

### 1. Password Security
- Use strong, randomly generated passwords
- Store passwords securely (password manager)
- Rotate passwords regularly
- Use bcrypt hashes for Prometheus BasicAuth

### 2. Network Security
- The monitoring network is internal-only (except exporters)
- Traefik handles SSL/TLS termination
- BasicAuth protects the Prometheus UI
- Grafana requires login for dashboard access

### 3. Access Control
- Limit Grafana admin access
- Use Grafana organizations for multi-tenancy
- Configure Prometheus with read-only access where possible
- Restrict Portainer access to trusted users

### 4. Data Security
- Prometheus stores metrics unencrypted on disk
- Grafana encrypts passwords in its database
- Backup volumes contain sensitive data
- Secure backups with encryption

### 5. Container Security
- Use official Docker images
- Keep images updated (security patches)
- Run containers as non-root where possible
- Limit container capabilities
## Backup and Recovery

### Backup Prometheus Data
```bash
# Stop Prometheus
docker compose stop prometheus

# Backup the data volume
tar czf prometheus-backup-$(date +%Y%m%d).tar.gz -C volumes/prometheus .

# Restart Prometheus
docker compose start prometheus
```

### Backup Grafana Data
```bash
# Backup the Grafana database and dashboards
# (-T disables the pseudo-TTY so the binary stream is not garbled)
docker compose exec -T grafana tar czf - /var/lib/grafana > grafana-backup-$(date +%Y%m%d).tar.gz
```

### Restore from Backup
```bash
# Stop services
docker compose down

# Restore Prometheus data
tar xzf prometheus-backup-YYYYMMDD.tar.gz -C volumes/prometheus/

# Restore Grafana data
docker compose up -d grafana
docker compose exec -T grafana tar xzf - -C / < grafana-backup-YYYYMMDD.tar.gz

# Start all services
docker compose up -d
```
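
The date-stamped archives created above accumulate over time. A small retention helper can prune them; this is a sketch assuming the `prometheus-backup-YYYYMMDD.tar.gz` naming used above and GNU coreutils (`head -n -K`):

```shell
#!/bin/sh
# prune_backups DIR KEEP
# Deletes all but the KEEP newest date-stamped backup archives in DIR.
# The YYYYMMDD stamp makes a plain lexical sort chronological.
prune_backups() {
  dir=$1
  keep=$2
  # List archives oldest-first, drop the last KEEP lines, delete the rest.
  ls -1 "$dir"/prometheus-backup-*.tar.gz 2>/dev/null | sort | head -n "-$keep" |
  while IFS= read -r old; do
    rm -f -- "$old"
  done
}

# Example: keep the 7 newest backups in the current directory
# prune_backups . 7
```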

## Maintenance

### Regular Tasks

#### Daily
- Review dashboards for anomalies
- Check active alerts
- Verify all services are running

#### Weekly
- Review resource usage trends
- Check disk space usage
- Update passwords if needed

#### Monthly
- Review and update alert rules
- Optimize slow queries
- Clean up old data if needed
- Update Docker images

### Update Procedure

```bash
# Pull latest images
docker compose pull

# Recreate containers with new images
docker compose up -d

# Verify services are healthy
docker compose ps
docker compose logs -f
```
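
When reviewing alert rules (the monthly task above), entries in the mounted `prometheus/alerts.yml` follow this general shape. The specific alert below is an illustrative sketch, not one shipped with this stack:

```yaml
groups:
  - name: host-alerts
    rules:
      - alert: HostDiskAlmostFull
        # Fires when the root filesystem has been more than 90% full for 10 minutes
        expr: |
          100 - ((node_filesystem_avail_bytes{mountpoint="/",fstype!="rootfs"}
                  / node_filesystem_size_bytes{mountpoint="/",fstype!="rootfs"}) * 100) > 90
        for: 10m
        labels:
          severity: critical
        annotations:
          summary: "Disk usage above 90% on {{ $labels.instance }}"
```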

## Support

### Documentation
- Prometheus: https://prometheus.io/docs/
- Grafana: https://grafana.com/docs/
- Portainer: https://docs.portainer.io/

### Logs
```bash
# View all logs
docker compose logs

# Follow specific service logs
docker compose logs -f grafana
docker compose logs -f prometheus

# View the last 100 lines
docker compose logs --tail=100
```

### Health Checks
```bash
# Check service health
docker compose ps

# Test endpoints
curl http://localhost:9090/-/healthy    # Prometheus
curl http://localhost:3000/api/health   # Grafana

# Check metrics
curl http://localhost:9100/metrics      # Node Exporter
curl http://localhost:8080/metrics      # cAdvisor
```
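
The endpoint checks above can be wrapped in a small helper that reports failures in one pass. A sketch; the PASS/FAIL format and the endpoint list are choices made here, not part of the stack:

```shell
# check_endpoints reads "name url" pairs from stdin, probes each one with
# curl, prints PASS/FAIL per service, and returns the number of failures.
check_endpoints() {
  fails=0
  while read -r name url; do
    [ -n "$name" ] || continue
    if curl -fsS --max-time 5 "$url" >/dev/null 2>&1; then
      echo "PASS $name"
    else
      echo "FAIL $name"
      fails=$((fails + 1))
    fi
  done
  return "$fails"
}

# Example:
# printf '%s\n' \
#   'prometheus http://localhost:9090/-/healthy' \
#   'grafana http://localhost:3000/api/health' | check_endpoints
```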

---

**Stack Version**: 1.0
**Last Updated**: 2025-01-30
**Maintained By**: DevOps Team
@@ -0,0 +1,47 @@
# Local Development Override for Monitoring Stack
# Usage: docker compose -f docker-compose.yml -f docker-compose.local.yml up -d
#
# IMPORTANT: Must use LIST-format labels like the base config to prevent merging:
#   Dictionary format + list format = both remain (merged)
#   List format + list format       = properly merged/replaced
#
# This override file:
# - Changes entrypoints from 'websecure' to 'web' (HTTP only for local dev)
# - Removes TLS/Let's Encrypt configuration by NOT including those labels
# - Keeps all other configuration from docker-compose.yml

services:
  portainer:
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.portainer.rule=Host(`portainer.${DOMAIN}`)"
      - "traefik.http.routers.portainer.entrypoints=web"
      - "traefik.http.routers.portainer.tls=false"
      - "traefik.http.services.portainer.loadbalancer.server.port=9000"

  prometheus:
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.prometheus.rule=Host(`prometheus.${DOMAIN}`)"
      - "traefik.http.routers.prometheus.entrypoints=web"
      - "traefik.http.routers.prometheus.tls=false"
      - "traefik.http.routers.prometheus.middlewares=prometheus-auth@docker"
      - "traefik.http.middlewares.prometheus-auth.basicauth.users=${PROMETHEUS_AUTH}"
      - "traefik.http.services.prometheus.loadbalancer.server.port=9090"

  grafana:
    environment:
      # Update the root URL for HTTP
      - GF_SERVER_ROOT_URL=http://grafana.${DOMAIN}:8081
      - GF_SECURITY_ADMIN_USER=${GRAFANA_ADMIN_USER}
      - GF_SECURITY_ADMIN_PASSWORD=${GRAFANA_ADMIN_PASSWORD}
      - GF_USERS_ALLOW_SIGN_UP=false
      - GF_INSTALL_PLUGINS=${GRAFANA_PLUGINS}
      - GF_LOG_LEVEL=info
      - GF_ANALYTICS_REPORTING_ENABLED=false
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.grafana.rule=Host(`grafana.${DOMAIN}`)"
      - "traefik.http.routers.grafana.entrypoints=web"
      - "traefik.http.routers.grafana.tls=false"
      - "traefik.http.services.grafana.loadbalancer.server.port=3000"
141
deployment/legacy/stacks/stacks/monitoring/docker-compose.yml
Normal file
@@ -0,0 +1,141 @@
services:
  portainer:
    image: portainer/portainer-ce:latest
    container_name: portainer
    restart: unless-stopped
    networks:
      - traefik-public
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - portainer-data:/data
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.portainer.rule=Host(`portainer.${DOMAIN}`)"
      - "traefik.http.routers.portainer.entrypoints=websecure"
      - "traefik.http.routers.portainer.tls=true"
      - "traefik.http.routers.portainer.tls.certresolver=letsencrypt"
      - "traefik.http.services.portainer.loadbalancer.server.port=9000"

  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    restart: unless-stopped
    user: "65534:65534"
    networks:
      - traefik-public
      - app-internal
    volumes:
      - ./prometheus/prometheus.yml:/etc/prometheus/prometheus.yml:ro
      - ./prometheus/alerts.yml:/etc/prometheus/alerts.yml:ro
      - prometheus-data:/prometheus
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/prometheus'
      - '--storage.tsdb.retention.time=30d'
      - '--web.console.libraries=/usr/share/prometheus/console_libraries'
      - '--web.console.templates=/usr/share/prometheus/consoles'
      - '--web.enable-lifecycle'
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.prometheus.rule=Host(`prometheus.${DOMAIN}`)"
      - "traefik.http.routers.prometheus.entrypoints=websecure"
      - "traefik.http.routers.prometheus.tls=true"
      - "traefik.http.routers.prometheus.tls.certresolver=letsencrypt"
      - "traefik.http.routers.prometheus.middlewares=prometheus-auth@docker"
      - "traefik.http.middlewares.prometheus-auth.basicauth.users=${PROMETHEUS_AUTH}"
      - "traefik.http.services.prometheus.loadbalancer.server.port=9090"
    healthcheck:
      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:9090/-/healthy"]
      interval: 30s
      timeout: 10s
      retries: 3

  grafana:
    image: grafana/grafana:latest
    container_name: grafana
    restart: unless-stopped
    networks:
      - traefik-public
      - app-internal
    environment:
      - GF_SERVER_ROOT_URL=https://grafana.${DOMAIN}
      - GF_SECURITY_ADMIN_USER=${GRAFANA_ADMIN_USER}
      - GF_SECURITY_ADMIN_PASSWORD=${GRAFANA_ADMIN_PASSWORD}
      - GF_USERS_ALLOW_SIGN_UP=false
      - GF_INSTALL_PLUGINS=${GRAFANA_PLUGINS}
      - GF_LOG_LEVEL=info
      - GF_ANALYTICS_REPORTING_ENABLED=false
    volumes:
      - grafana-data:/var/lib/grafana
      - ./grafana/provisioning:/etc/grafana/provisioning:ro
      - ./grafana/dashboards:/var/lib/grafana/dashboards:ro
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.grafana.rule=Host(`grafana.${DOMAIN}`)"
      - "traefik.http.routers.grafana.entrypoints=websecure"
      - "traefik.http.routers.grafana.tls=true"
      - "traefik.http.routers.grafana.tls.certresolver=letsencrypt"
      - "traefik.http.services.grafana.loadbalancer.server.port=3000"
    depends_on:
      prometheus:
        condition: service_healthy
    healthcheck:
      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:3000/api/health"]
      interval: 30s
      timeout: 10s
      retries: 3

  node-exporter:
    image: prom/node-exporter:latest
    container_name: node-exporter
    restart: unless-stopped
    networks:
      - app-internal
    volumes:
      - /proc:/host/proc:ro
      - /sys:/host/sys:ro
      - /:/rootfs:ro
    command:
      - '--path.procfs=/host/proc'
      - '--path.sysfs=/host/sys'
      - '--collector.filesystem.mount-points-exclude=^/(sys|proc|dev|host|etc)($$|/)'
    healthcheck:
      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:9100/metrics"]
      interval: 30s
      timeout: 10s
      retries: 3

  cadvisor:
    image: gcr.io/cadvisor/cadvisor:latest
    container_name: cadvisor
    restart: unless-stopped
    privileged: true
    networks:
      - app-internal
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:ro
      - /sys:/sys:ro
      - /var/lib/docker/:/var/lib/docker:ro
      - /dev/disk/:/dev/disk:ro
    devices:
      - /dev/kmsg
    healthcheck:
      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:8080/healthz"]
      interval: 30s
      timeout: 10s
      retries: 3

volumes:
  portainer-data:
    name: portainer-data
  prometheus-data:
    name: prometheus-data
  grafana-data:
    name: grafana-data

networks:
  traefik-public:
    external: true
  app-internal:
    external: true
@@ -0,0 +1,397 @@
{
  "annotations": {
    "list": [
      {
        "builtIn": 1,
        "datasource": "-- Grafana --",
        "enable": true,
        "hide": true,
        "iconColor": "rgba(0, 211, 255, 1)",
        "name": "Annotations & Alerts",
        "type": "dashboard"
      }
    ]
  },
  "editable": true,
  "gnetId": null,
  "graphTooltip": 0,
  "id": null,
  "links": [],
  "panels": [
    {
      "datasource": "Prometheus",
      "fieldConfig": {
        "defaults": {
          "color": {"mode": "palette-classic"},
          "custom": {
            "axisLabel": "", "axisPlacement": "auto", "barAlignment": 0,
            "drawStyle": "line", "fillOpacity": 10, "gradientMode": "none",
            "hideFrom": {"tooltip": false, "viz": false, "legend": false},
            "lineInterpolation": "linear", "lineWidth": 1, "pointSize": 5,
            "scaleDistribution": {"type": "linear"}, "showPoints": "never",
            "spanNulls": false, "stacking": {"group": "A", "mode": "none"},
            "thresholdsStyle": {"mode": "off"}
          },
          "mappings": [],
          "thresholds": {"mode": "absolute", "steps": [{"color": "green", "value": null}]},
          "unit": "percent"
        },
        "overrides": []
      },
      "gridPos": {"h": 8, "w": 12, "x": 0, "y": 0},
      "id": 1,
      "options": {
        "legend": {"calcs": ["lastNotNull"], "displayMode": "table", "placement": "right"},
        "tooltip": {"mode": "single"}
      },
      "targets": [
        {
          "expr": "sum(rate(container_cpu_usage_seconds_total{name!~\".*exporter.*\"}[5m])) by (name) * 100",
          "refId": "A",
          "legendFormat": "{{name}}"
        }
      ],
      "title": "Container CPU Usage %",
      "type": "timeseries"
    },
    {
      "datasource": "Prometheus",
      "fieldConfig": {
        "defaults": {
          "color": {"mode": "palette-classic"},
          "custom": {
            "axisLabel": "", "axisPlacement": "auto", "barAlignment": 0,
            "drawStyle": "line", "fillOpacity": 10, "gradientMode": "none",
            "hideFrom": {"tooltip": false, "viz": false, "legend": false},
            "lineInterpolation": "linear", "lineWidth": 1, "pointSize": 5,
            "scaleDistribution": {"type": "linear"}, "showPoints": "never",
            "spanNulls": false, "stacking": {"group": "A", "mode": "none"},
            "thresholdsStyle": {"mode": "off"}
          },
          "mappings": [],
          "thresholds": {"mode": "absolute", "steps": [{"color": "green", "value": null}]},
          "unit": "bytes"
        },
        "overrides": []
      },
      "gridPos": {"h": 8, "w": 12, "x": 12, "y": 0},
      "id": 2,
      "options": {
        "legend": {"calcs": ["lastNotNull"], "displayMode": "table", "placement": "right"},
        "tooltip": {"mode": "single"}
      },
      "targets": [
        {
          "expr": "sum(container_memory_usage_bytes{name!~\".*exporter.*\"}) by (name)",
          "refId": "A",
          "legendFormat": "{{name}}"
        }
      ],
      "title": "Container Memory Usage",
      "type": "timeseries"
    },
    {
      "datasource": "Prometheus",
      "fieldConfig": {
        "defaults": {
          "color": {"mode": "thresholds"},
          "mappings": [
            {
              "options": {
                "0": {"color": "red", "index": 1, "text": "Down"},
                "1": {"color": "green", "index": 0, "text": "Up"}
              },
              "type": "value"
            }
          ],
          "thresholds": {
            "mode": "absolute",
            "steps": [{"color": "red", "value": null}, {"color": "green", "value": 1}]
          }
        },
        "overrides": []
      },
      "gridPos": {"h": 4, "w": 6, "x": 0, "y": 8},
      "id": 3,
      "options": {
        "colorMode": "background", "graphMode": "none", "justifyMode": "auto",
        "orientation": "auto",
        "reduceOptions": {"calcs": ["lastNotNull"], "fields": "", "values": false},
        "textMode": "auto"
      },
      "pluginVersion": "9.0.0",
      "targets": [
        {"expr": "count(container_last_seen{name!~\".*exporter.*\"}) > 0", "refId": "A"}
      ],
      "title": "Containers Running",
      "type": "stat"
    },
    {
      "datasource": "Prometheus",
      "fieldConfig": {
        "defaults": {
          "color": {"mode": "thresholds"},
          "mappings": [],
          "thresholds": {
            "mode": "absolute",
            "steps": [
              {"color": "green", "value": null},
              {"color": "yellow", "value": 3},
              {"color": "red", "value": 5}
            ]
          }
        },
        "overrides": []
      },
      "gridPos": {"h": 4, "w": 6, "x": 6, "y": 8},
      "id": 4,
      "options": {
        "colorMode": "background", "graphMode": "none", "justifyMode": "auto",
        "orientation": "auto",
        "reduceOptions": {"calcs": ["lastNotNull"], "fields": "", "values": false},
        "textMode": "auto"
      },
      "pluginVersion": "9.0.0",
      "targets": [
        {"expr": "sum(rate(container_restart_count[5m])) > 0", "refId": "A"}
      ],
      "title": "Container Restarts (5m)",
      "type": "stat"
    },
    {
      "datasource": "Prometheus",
      "fieldConfig": {
        "defaults": {
          "color": {"mode": "palette-classic"},
          "custom": {
            "axisLabel": "", "axisPlacement": "auto", "barAlignment": 0,
            "drawStyle": "line", "fillOpacity": 10, "gradientMode": "none",
            "hideFrom": {"tooltip": false, "viz": false, "legend": false},
            "lineInterpolation": "linear", "lineWidth": 1, "pointSize": 5,
            "scaleDistribution": {"type": "linear"}, "showPoints": "never",
            "spanNulls": false, "stacking": {"group": "A", "mode": "none"},
            "thresholdsStyle": {"mode": "off"}
          },
          "mappings": [],
          "thresholds": {"mode": "absolute", "steps": [{"color": "green", "value": null}]},
          "unit": "Bps"
        },
        "overrides": []
      },
      "gridPos": {"h": 8, "w": 12, "x": 12, "y": 8},
      "id": 5,
      "options": {
        "legend": {"calcs": ["lastNotNull"], "displayMode": "table", "placement": "right"},
        "tooltip": {"mode": "single"}
      },
      "targets": [
        {
          "expr": "sum(rate(container_network_receive_bytes_total{name!~\".*exporter.*\"}[5m])) by (name)",
          "refId": "A",
          "legendFormat": "{{name}} RX"
        },
        {
          "expr": "sum(rate(container_network_transmit_bytes_total{name!~\".*exporter.*\"}[5m])) by (name)",
          "refId": "B",
          "legendFormat": "{{name}} TX"
        }
      ],
      "title": "Container Network I/O",
      "type": "timeseries"
    }
  ],
  "refresh": "30s",
  "schemaVersion": 36,
  "style": "dark",
  "tags": ["docker", "containers"],
  "templating": {"list": []},
  "time": {"from": "now-1h", "to": "now"},
  "timepicker": {},
  "timezone": "",
  "title": "Docker Containers",
  "uid": "docker-containers",
  "version": 1
}
@@ -0,0 +1,591 @@
|
||||
{
|
||||
"annotations": {
|
||||
"list": [
|
||||
{
|
||||
"builtIn": 1,
|
||||
"datasource": "-- Grafana --",
|
||||
"enable": true,
|
||||
"hide": true,
|
||||
"iconColor": "rgba(0, 211, 255, 1)",
|
||||
"name": "Annotations & Alerts",
|
||||
"type": "dashboard"
|
||||
}
|
||||
]
|
||||
},
|
||||
"editable": true,
|
||||
"gnetId": null,
|
||||
"graphTooltip": 0,
|
||||
"id": null,
|
||||
"links": [],
|
||||
"panels": [
|
||||
{
|
||||
"datasource": "Prometheus",
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"color": {
|
||||
"mode": "palette-classic"
|
||||
},
|
||||
"custom": {
|
||||
"axisLabel": "",
|
||||
"axisPlacement": "auto",
|
||||
"barAlignment": 0,
|
||||
"drawStyle": "line",
|
||||
"fillOpacity": 10,
|
||||
"gradientMode": "none",
|
||||
"hideFrom": {
|
||||
"tooltip": false,
|
||||
"viz": false,
|
||||
"legend": false
|
||||
},
|
||||
"lineInterpolation": "linear",
|
||||
"lineWidth": 1,
|
||||
"pointSize": 5,
|
||||
"scaleDistribution": {
|
||||
"type": "linear"
|
||||
},
|
||||
"showPoints": "never",
|
||||
"spanNulls": false,
|
||||
"stacking": {
|
||||
"group": "A",
|
||||
"mode": "none"
|
||||
},
|
||||
"thresholdsStyle": {
|
||||
"mode": "line"
|
||||
}
|
||||
},
|
||||
"mappings": [],
|
||||
"thresholds": {
|
||||
"mode": "absolute",
|
||||
"steps": [
|
||||
{
|
||||
"color": "green",
|
||||
"value": null
|
||||
},
|
||||
{
|
||||
"color": "red",
|
||||
"value": 80
|
||||
}
|
||||
]
|
||||
},
|
||||
"unit": "percent"
|
||||
},
|
||||
"overrides": []
|
||||
},
|
||||
"gridPos": {
|
||||
"h": 8,
|
||||
"w": 12,
|
||||
"x": 0,
|
||||
"y": 0
|
||||
},
|
||||
"id": 1,
|
||||
"options": {
|
||||
"legend": {
|
||||
"calcs": ["lastNotNull", "mean", "max"],
|
||||
"displayMode": "table",
|
||||
"placement": "right"
|
||||
},
|
||||
"tooltip": {
|
||||
"mode": "single"
|
||||
}
|
||||
},
|
||||
"targets": [
|
||||
{
|
||||
"expr": "100 - (avg by(instance) (rate(node_cpu_seconds_total{mode=\"idle\"}[5m])) * 100)",
|
||||
"refId": "A",
|
||||
"legendFormat": "{{instance}}"
|
||||
}
|
||||
],
|
||||
"title": "CPU Usage %",
|
||||
"type": "timeseries"
|
||||
},
|
||||
{
|
||||
"datasource": "Prometheus",
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"color": {
|
||||
"mode": "palette-classic"
|
||||
},
|
||||
"custom": {
|
||||
"axisLabel": "",
|
||||
"axisPlacement": "auto",
|
||||
"barAlignment": 0,
|
||||
"drawStyle": "line",
|
||||
"fillOpacity": 10,
|
||||
"gradientMode": "none",
|
||||
"hideFrom": {
|
||||
"tooltip": false,
|
||||
"viz": false,
|
||||
"legend": false
|
||||
},
|
||||
"lineInterpolation": "linear",
|
||||
"lineWidth": 1,
|
||||
"pointSize": 5,
|
||||
"scaleDistribution": {
|
||||
"type": "linear"
|
||||
},
|
||||
"showPoints": "never",
|
||||
"spanNulls": false,
|
||||
"stacking": {
|
||||
"group": "A",
|
||||
"mode": "none"
|
||||
},
|
||||
"thresholdsStyle": {
|
||||
"mode": "line"
|
||||
}
|
||||
},
|
||||
"mappings": [],
|
||||
"thresholds": {
|
||||
"mode": "absolute",
|
||||
"steps": [
|
||||
{
|
||||
"color": "green",
|
||||
"value": null
|
||||
},
|
||||
{
|
||||
"color": "yellow",
|
||||
"value": 80
|
||||
},
|
||||
{
|
||||
"color": "red",
|
||||
"value": 90
|
||||
}
|
||||
]
|
||||
},
|
||||
"unit": "percent"
|
||||
},
|
||||
"overrides": []
|
||||
},
|
||||
"gridPos": {
|
||||
"h": 8,
|
||||
"w": 12,
|
||||
"x": 12,
|
||||
"y": 0
|
||||
},
|
||||
"id": 2,
|
||||
"options": {
|
||||
"legend": {
|
||||
"calcs": ["lastNotNull", "mean", "max"],
|
||||
"displayMode": "table",
|
||||
"placement": "right"
|
||||
},
|
||||
"tooltip": {
|
||||
"mode": "single"
|
||||
}
|
||||
},
|
||||
"targets": [
|
||||
{
|
||||
"expr": "100 - ((node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes) * 100)",
|
||||
"refId": "A",
|
||||
"legendFormat": "{{instance}}"
|
||||
}
|
||||
],
|
||||
"title": "Memory Usage %",
|
||||
"type": "timeseries"
|
||||
},
|
||||
{
|
||||
"datasource": "Prometheus",
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"color": {
|
||||
"mode": "palette-classic"
|
||||
},
|
||||
"custom": {
|
||||
"axisLabel": "",
|
||||
"axisPlacement": "auto",
|
||||
"barAlignment": 0,
|
||||
"drawStyle": "line",
|
||||
"fillOpacity": 10,
|
||||
"gradientMode": "none",
|
||||
"hideFrom": {
|
||||
"tooltip": false,
|
||||
"viz": false,
|
||||
"legend": false
|
||||
},
|
||||
"lineInterpolation": "linear",
|
||||
"lineWidth": 1,
|
||||
"pointSize": 5,
|
||||
"scaleDistribution": {
|
||||
"type": "linear"
|
||||
},
|
||||
"showPoints": "never",
|
||||
"spanNulls": false,
|
||||
"stacking": {
|
||||
"group": "A",
|
||||
"mode": "none"
|
||||
},
|
||||
"thresholdsStyle": {
|
||||
"mode": "line"
|
||||
}
|
||||
},
|
||||
"mappings": [],
|
||||
"thresholds": {
|
||||
"mode": "absolute",
|
||||
"steps": [
|
||||
{
|
||||
"color": "green",
|
||||
"value": null
|
||||
},
|
||||
{
|
||||
"color": "yellow",
|
||||
"value": 80
|
||||
},
|
||||
{
|
||||
"color": "red",
|
||||
"value": 90
|
||||
}
|
||||
]
|
||||
},
|
||||
"unit": "percent"
|
||||
},
|
||||
"overrides": []
|
||||
},
|
||||
"gridPos": {
|
||||
"h": 8,
|
||||
"w": 12,
|
||||
"x": 0,
|
||||
"y": 8
|
||||
},
|
||||
"id": 3,
|
||||
"options": {
|
||||
"legend": {
|
||||
"calcs": ["lastNotNull", "mean", "max"],
|
||||
"displayMode": "table",
|
||||
"placement": "right"
|
||||
},
|
||||
"tooltip": {
|
||||
"mode": "single"
|
||||
}
|
||||
},
|
||||
"targets": [
|
||||
{
|
||||
"expr": "100 - ((node_filesystem_avail_bytes{mountpoint=\"/\",fstype!=\"rootfs\"} / node_filesystem_size_bytes{mountpoint=\"/\",fstype!=\"rootfs\"}) * 100)",
|
||||
"refId": "A",
|
||||
"legendFormat": "{{instance}}"
|
||||
}
|
||||
],
|
||||
"title": "Disk Usage %",
|
||||
"type": "timeseries"
|
||||
},
|
||||
{
|
||||
"datasource": "Prometheus",
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"color": {
|
||||
"mode": "palette-classic"
|
||||
},
|
||||
"custom": {
|
||||
"axisLabel": "",
|
||||
"axisPlacement": "auto",
|
||||
"barAlignment": 0,
|
||||
"drawStyle": "line",
|
||||
"fillOpacity": 10,
|
||||
"gradientMode": "none",
|
||||
"hideFrom": {
|
||||
"tooltip": false,
|
||||
"viz": false,
|
||||
"legend": false
|
||||
},
|
||||
"lineInterpolation": "linear",
|
||||
"lineWidth": 1,
|
||||
"pointSize": 5,
|
||||
"scaleDistribution": {
|
||||
"type": "linear"
|
||||
},
|
||||
"showPoints": "never",
|
||||
"spanNulls": false,
|
||||
"stacking": {
|
||||
"group": "A",
|
||||
"mode": "none"
|
||||
},
|
||||
"thresholdsStyle": {
|
||||
"mode": "off"
|
||||
}
|
||||
},
|
||||
"mappings": [],
|
||||
"thresholds": {
|
||||
"mode": "absolute",
|
||||
"steps": [
|
||||
{
|
||||
"color": "green",
|
||||
"value": null
|
||||
}
|
||||
]
|
||||
},
|
||||
"unit": "Bps"
|
||||
},
|
||||
"overrides": []
|
||||
},
|
||||
"gridPos": {
|
||||
"h": 8,
|
||||
"w": 12,
|
||||
"x": 12,
|
||||
"y": 8
|
||||
},
|
||||
"id": 4,
|
||||
"options": {
|
||||
"legend": {
|
||||
"calcs": ["lastNotNull", "mean", "max"],
|
||||
"displayMode": "table",
|
||||
"placement": "right"
|
||||
},
|
||||
"tooltip": {
|
||||
"mode": "single"
|
||||
}
|
||||
},
|
||||
"targets": [
|
||||
{
|
||||
"expr": "rate(node_network_receive_bytes_total[5m])",
|
||||
"refId": "A",
|
||||
"legendFormat": "{{instance}} - {{device}} RX"
|
||||
},
|
||||
{
|
||||
"expr": "rate(node_network_transmit_bytes_total[5m])",
|
||||
"refId": "B",
|
||||
"legendFormat": "{{instance}} - {{device}} TX"
|
||||
}
|
||||
],
|
||||
"title": "Network I/O",
|
||||
"type": "timeseries"
|
||||
},
|
||||
{
|
||||
"datasource": "Prometheus",
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"color": {
|
||||
"mode": "thresholds"
|
||||
},
|
||||
"mappings": [],
|
||||
"thresholds": {
|
||||
"mode": "absolute",
|
||||
"steps": [
|
||||
{
|
||||
"color": "green",
|
||||
"value": null
|
||||
},
|
||||
{
|
||||
"color": "yellow",
|
||||
"value": 80
|
||||
},
|
||||
{
|
||||
"color": "red",
|
||||
"value": 90
|
||||
}
|
||||
]
|
||||
},
|
||||
"unit": "percent"
|
||||
},
|
||||
"overrides": []
|
||||
},
|
||||
"gridPos": {
|
||||
"h": 4,
|
||||
"w": 6,
|
||||
"x": 0,
|
||||
"y": 16
|
||||
},
|
||||
"id": 5,
|
||||
"options": {
|
||||
"colorMode": "background",
|
||||
"graphMode": "area",
|
||||
"justifyMode": "auto",
|
||||
"orientation": "auto",
|
||||
"reduceOptions": {
|
||||
"calcs": ["lastNotNull"],
|
||||
"fields": "",
|
||||
"values": false
|
||||
},
|
||||
"textMode": "auto"
|
||||
},
|
||||
"pluginVersion": "9.0.0",
|
||||
"targets": [
|
||||
{
|
||||
"expr": "100 - (avg by(instance) (rate(node_cpu_seconds_total{mode=\"idle\"}[5m])) * 100)",
|
||||
"refId": "A"
|
||||
}
|
||||
],
|
||||
"title": "Current CPU Usage",
|
||||
"type": "stat"
|
||||
},
|
||||
{
|
||||
"datasource": "Prometheus",
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"color": {
|
||||
"mode": "thresholds"
|
||||
},
|
||||
"mappings": [],
|
||||
"thresholds": {
|
||||
"mode": "absolute",
|
||||
"steps": [
|
||||
{
|
||||
"color": "green",
|
||||
"value": null
|
||||
},
|
||||
{
|
||||
"color": "yellow",
|
||||
"value": 80
|
||||
},
|
||||
{
|
||||
"color": "red",
|
||||
"value": 90
|
||||
}
|
||||
]
|
||||
},
|
||||
"unit": "percent"
|
||||
},
|
||||
"overrides": []
|
||||
},
|
||||
"gridPos": {
|
||||
"h": 4,
|
||||
"w": 6,
|
||||
"x": 6,
|
||||
"y": 16
|
||||
},
|
||||
"id": 6,
|
||||
"options": {
|
||||
"colorMode": "background",
|
||||
"graphMode": "area",
|
||||
"justifyMode": "auto",
|
||||
"orientation": "auto",
|
||||
"reduceOptions": {
|
||||
"calcs": ["lastNotNull"],
|
||||
"fields": "",
|
||||
"values": false
|
||||
},
|
||||
"textMode": "auto"
|
||||
},
|
||||
"pluginVersion": "9.0.0",
|
||||
"targets": [
|
||||
{
|
||||
"expr": "100 - ((node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes) * 100)",
|
||||
"refId": "A"
|
||||
}
|
||||
],
|
||||
"title": "Current Memory Usage",
|
||||
"type": "stat"
|
||||
},
|
||||
{
|
||||
"datasource": "Prometheus",
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"color": {
|
||||
"mode": "thresholds"
|
||||
},
|
||||
"mappings": [],
|
||||
"thresholds": {
|
||||
"mode": "absolute",
|
||||
"steps": [
|
||||
{
|
||||
"color": "green",
|
||||
"value": null
|
||||
},
|
||||
{
|
||||
"color": "yellow",
|
||||
"value": 80
|
          },
          { "color": "red", "value": 90 }
        ]
      },
      "unit": "percent"
    },
    "overrides": []
  },
  "gridPos": { "h": 4, "w": 6, "x": 12, "y": 16 },
  "id": 7,
  "options": {
    "colorMode": "background",
    "graphMode": "area",
    "justifyMode": "auto",
    "orientation": "auto",
    "reduceOptions": { "calcs": ["lastNotNull"], "fields": "", "values": false },
    "textMode": "auto"
  },
  "pluginVersion": "9.0.0",
  "targets": [
    {
      "expr": "100 - ((node_filesystem_avail_bytes{mountpoint=\"/\",fstype!=\"rootfs\"} / node_filesystem_size_bytes{mountpoint=\"/\",fstype!=\"rootfs\"}) * 100)",
      "refId": "A"
    }
  ],
  "title": "Current Disk Usage",
  "type": "stat"
},
{
  "datasource": "Prometheus",
  "fieldConfig": {
    "defaults": {
      "color": { "mode": "thresholds" },
      "mappings": [],
      "thresholds": { "mode": "absolute", "steps": [{ "color": "green", "value": null }] },
      "unit": "s"
    },
    "overrides": []
  },
  "gridPos": { "h": 4, "w": 6, "x": 18, "y": 16 },
  "id": 8,
  "options": {
    "colorMode": "value",
    "graphMode": "area",
    "justifyMode": "auto",
    "orientation": "auto",
    "reduceOptions": { "calcs": ["lastNotNull"], "fields": "", "values": false },
    "textMode": "auto"
  },
  "pluginVersion": "9.0.0",
  "targets": [
    { "expr": "time() - node_boot_time_seconds", "refId": "A" }
  ],
  "title": "System Uptime",
  "type": "stat"
}
],
"refresh": "30s",
"schemaVersion": 36,
"style": "dark",
"tags": ["host", "system"],
"templating": { "list": [] },
"time": { "from": "now-1h", "to": "now" },
"timepicker": {},
"timezone": "",
"title": "Host System",
"uid": "host-system",
"version": 1
}
@@ -0,0 +1,613 @@
{
  "annotations": {
    "list": [
      {
        "builtIn": 1,
        "datasource": "-- Grafana --",
        "enable": true,
        "hide": true,
        "iconColor": "rgba(0, 211, 255, 1)",
        "name": "Annotations & Alerts",
        "type": "dashboard"
      }
    ]
  },
  "editable": true,
  "gnetId": null,
  "graphTooltip": 0,
  "id": null,
  "links": [],
  "panels": [
    {
      "datasource": "Prometheus",
      "fieldConfig": {
        "defaults": {
          "color": { "mode": "palette-classic" },
          "custom": {
            "axisLabel": "", "axisPlacement": "auto", "barAlignment": 0,
            "drawStyle": "line", "fillOpacity": 10, "gradientMode": "none",
            "hideFrom": { "tooltip": false, "viz": false, "legend": false },
            "lineInterpolation": "linear", "lineWidth": 1, "pointSize": 5,
            "scaleDistribution": { "type": "linear" }, "showPoints": "never",
            "spanNulls": false,
            "stacking": { "group": "A", "mode": "none" },
            "thresholdsStyle": { "mode": "off" }
          },
          "mappings": [],
          "thresholds": { "mode": "absolute", "steps": [{ "color": "green", "value": null }] },
          "unit": "reqps"
        },
        "overrides": []
      },
      "gridPos": { "h": 8, "w": 12, "x": 0, "y": 0 },
      "id": 1,
      "options": {
        "legend": { "calcs": ["lastNotNull", "mean", "max"], "displayMode": "table", "placement": "right" },
        "tooltip": { "mode": "single" }
      },
      "targets": [
        { "expr": "sum(rate(traefik_service_requests_total[5m])) by (service)", "refId": "A", "legendFormat": "{{service}}" }
      ],
      "title": "Request Rate by Service",
      "type": "timeseries"
    },
    {
      "datasource": "Prometheus",
      "fieldConfig": {
        "defaults": {
          "color": { "mode": "palette-classic" },
          "custom": {
            "axisLabel": "", "axisPlacement": "auto", "barAlignment": 0,
            "drawStyle": "line", "fillOpacity": 10, "gradientMode": "none",
            "hideFrom": { "tooltip": false, "viz": false, "legend": false },
            "lineInterpolation": "linear", "lineWidth": 1, "pointSize": 5,
            "scaleDistribution": { "type": "linear" }, "showPoints": "never",
            "spanNulls": false,
            "stacking": { "group": "A", "mode": "none" },
            "thresholdsStyle": { "mode": "off" }
          },
          "mappings": [],
          "thresholds": { "mode": "absolute", "steps": [{ "color": "green", "value": null }] },
          "unit": "ms"
        },
        "overrides": []
      },
      "gridPos": { "h": 8, "w": 12, "x": 12, "y": 0 },
      "id": 2,
      "options": {
        "legend": { "calcs": ["lastNotNull", "mean", "max"], "displayMode": "table", "placement": "right" },
        "tooltip": { "mode": "single" }
      },
      "targets": [
        {
          "expr": "histogram_quantile(0.95, sum(rate(traefik_service_request_duration_seconds_bucket[5m])) by (service, le)) * 1000",
          "refId": "A",
          "legendFormat": "{{service}} p95"
        },
        {
          "expr": "histogram_quantile(0.99, sum(rate(traefik_service_request_duration_seconds_bucket[5m])) by (service, le)) * 1000",
          "refId": "B",
          "legendFormat": "{{service}} p99"
        }
      ],
      "title": "Response Time (p95/p99)",
      "type": "timeseries"
    },
    {
      "datasource": "Prometheus",
      "fieldConfig": {
        "defaults": {
          "color": { "mode": "palette-classic" },
          "custom": {
            "axisLabel": "", "axisPlacement": "auto", "barAlignment": 0,
            "drawStyle": "line", "fillOpacity": 10, "gradientMode": "none",
            "hideFrom": { "tooltip": false, "viz": false, "legend": false },
            "lineInterpolation": "linear", "lineWidth": 1, "pointSize": 5,
            "scaleDistribution": { "type": "linear" }, "showPoints": "never",
            "spanNulls": false,
            "stacking": { "group": "A", "mode": "normal" },
            "thresholdsStyle": { "mode": "off" }
          },
          "mappings": [],
          "thresholds": { "mode": "absolute", "steps": [{ "color": "green", "value": null }] },
          "unit": "reqps"
        },
        "overrides": [
          {
            "matcher": { "id": "byRegexp", "options": ".*2xx.*" },
            "properties": [{ "id": "color", "value": { "fixedColor": "green", "mode": "fixed" } }]
          },
          {
            "matcher": { "id": "byRegexp", "options": ".*4xx.*" },
            "properties": [{ "id": "color", "value": { "fixedColor": "yellow", "mode": "fixed" } }]
          },
          {
            "matcher": { "id": "byRegexp", "options": ".*5xx.*" },
            "properties": [{ "id": "color", "value": { "fixedColor": "red", "mode": "fixed" } }]
          }
        ]
      },
      "gridPos": { "h": 8, "w": 12, "x": 0, "y": 8 },
      "id": 3,
      "options": {
        "legend": { "calcs": ["lastNotNull", "sum"], "displayMode": "table", "placement": "right" },
        "tooltip": { "mode": "single" }
      },
      "targets": [
        { "expr": "sum(rate(traefik_service_requests_total{code=~\"2..\"}[5m])) by (service)", "refId": "A", "legendFormat": "{{service}} 2xx" },
        { "expr": "sum(rate(traefik_service_requests_total{code=~\"4..\"}[5m])) by (service)", "refId": "B", "legendFormat": "{{service}} 4xx" },
        { "expr": "sum(rate(traefik_service_requests_total{code=~\"5..\"}[5m])) by (service)", "refId": "C", "legendFormat": "{{service}} 5xx" }
      ],
      "title": "HTTP Status Codes",
      "type": "timeseries"
    },
    {
      "datasource": "Prometheus",
      "fieldConfig": {
        "defaults": {
          "color": { "mode": "thresholds" },
          "mappings": [
            {
              "options": {
                "0": { "color": "red", "index": 1, "text": "Down" },
                "1": { "color": "green", "index": 0, "text": "Up" }
              },
              "type": "value"
            }
          ],
          "thresholds": {
            "mode": "absolute",
            "steps": [
              { "color": "red", "value": null },
              { "color": "green", "value": 1 }
            ]
          }
        },
        "overrides": []
      },
      "gridPos": { "h": 8, "w": 12, "x": 12, "y": 8 },
      "id": 4,
      "options": {
        "colorMode": "background",
        "graphMode": "none",
        "justifyMode": "auto",
        "orientation": "auto",
        "reduceOptions": { "calcs": ["lastNotNull"], "fields": "", "values": false },
        "textMode": "auto"
      },
      "pluginVersion": "9.0.0",
      "targets": [
        { "expr": "traefik_service_server_up", "refId": "A", "legendFormat": "{{service}}" }
      ],
      "title": "Service Status",
      "type": "stat"
    },
    {
      "datasource": "Prometheus",
      "fieldConfig": {
        "defaults": {
          "color": { "mode": "thresholds" },
          "mappings": [],
          "thresholds": { "mode": "absolute", "steps": [{ "color": "green", "value": null }] },
          "unit": "short"
        },
        "overrides": []
      },
      "gridPos": { "h": 4, "w": 6, "x": 0, "y": 16 },
      "id": 5,
      "options": {
        "colorMode": "value",
        "graphMode": "area",
        "justifyMode": "auto",
        "orientation": "auto",
        "reduceOptions": { "calcs": ["lastNotNull"], "fields": "", "values": false },
        "textMode": "auto"
      },
      "pluginVersion": "9.0.0",
      "targets": [
        { "expr": "sum(rate(traefik_service_requests_total[5m])) * 60", "refId": "A" }
      ],
      "title": "Requests per Minute",
      "type": "stat"
    },
    {
      "datasource": "Prometheus",
      "fieldConfig": {
        "defaults": {
          "color": { "mode": "thresholds" },
          "mappings": [],
          "thresholds": {
            "mode": "absolute",
            "steps": [
              { "color": "green", "value": null },
              { "color": "yellow", "value": 5 },
              { "color": "red", "value": 10 }
            ]
          },
          "unit": "percent"
        },
        "overrides": []
      },
      "gridPos": { "h": 4, "w": 6, "x": 6, "y": 16 },
      "id": 6,
      "options": {
        "colorMode": "background",
        "graphMode": "area",
        "justifyMode": "auto",
        "orientation": "auto",
        "reduceOptions": { "calcs": ["lastNotNull"], "fields": "", "values": false },
        "textMode": "auto"
      },
      "pluginVersion": "9.0.0",
      "targets": [
        { "expr": "(sum(rate(traefik_service_requests_total{code=~\"4..\"}[5m])) / sum(rate(traefik_service_requests_total[5m]))) * 100", "refId": "A" }
      ],
      "title": "4xx Error Rate",
      "type": "stat"
    },
    {
      "datasource": "Prometheus",
      "fieldConfig": {
        "defaults": {
          "color": { "mode": "thresholds" },
          "mappings": [],
          "thresholds": {
            "mode": "absolute",
            "steps": [
              { "color": "green", "value": null },
              { "color": "yellow", "value": 1 },
              { "color": "red", "value": 5 }
            ]
          },
          "unit": "percent"
        },
        "overrides": []
      },
      "gridPos": { "h": 4, "w": 6, "x": 12, "y": 16 },
      "id": 7,
      "options": {
        "colorMode": "background",
        "graphMode": "area",
        "justifyMode": "auto",
        "orientation": "auto",
        "reduceOptions": { "calcs": ["lastNotNull"], "fields": "", "values": false },
        "textMode": "auto"
      },
      "pluginVersion": "9.0.0",
      "targets": [
        { "expr": "(sum(rate(traefik_service_requests_total{code=~\"5..\"}[5m])) / sum(rate(traefik_service_requests_total[5m]))) * 100", "refId": "A" }
      ],
      "title": "5xx Error Rate",
      "type": "stat"
    },
    {
      "datasource": "Prometheus",
      "fieldConfig": {
        "defaults": {
          "color": { "mode": "thresholds" },
          "mappings": [],
          "thresholds": { "mode": "absolute", "steps": [{ "color": "green", "value": null }] },
          "unit": "short"
        },
        "overrides": []
      },
      "gridPos": { "h": 4, "w": 6, "x": 18, "y": 16 },
      "id": 8,
      "options": {
        "colorMode": "value",
        "graphMode": "none",
        "justifyMode": "auto",
        "orientation": "auto",
        "reduceOptions": { "calcs": ["lastNotNull"], "fields": "", "values": false },
        "textMode": "auto"
      },
      "pluginVersion": "9.0.0",
      "targets": [
        { "expr": "count(traefik_service_server_up == 1)", "refId": "A" }
      ],
      "title": "Active Services",
      "type": "stat"
    }
  ],
  "refresh": "30s",
  "schemaVersion": 36,
  "style": "dark",
  "tags": ["traefik", "proxy"],
  "templating": { "list": [] },
  "time": { "from": "now-1h", "to": "now" },
  "timepicker": {},
  "timezone": "",
  "title": "Traefik",
  "uid": "traefik",
  "version": 1
}
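The PromQL expressions behind these panels can be tested outside Grafana against the Prometheus HTTP API. A minimal sketch of building such a query URL, reusing the "5xx Error Rate" expression from the dashboard above (the base URL `http://prometheus:9090` matches the datasource config in this stack; fetching the URL with `urllib.request` or `curl` is left to the reader):

```python
from urllib.parse import urlencode

# PromQL expression used by the "5xx Error Rate" stat panel above.
EXPR = ('(sum(rate(traefik_service_requests_total{code=~"5.."}[5m])) '
        '/ sum(rate(traefik_service_requests_total[5m]))) * 100')

def instant_query_url(base: str, expr: str) -> str:
    """Build a Prometheus /api/v1/query URL for an instant query."""
    return f"{base}/api/v1/query?{urlencode({'query': expr})}"

url = instant_query_url("http://prometheus:9090", EXPR)
print(url)
```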
@@ -0,0 +1,26 @@
[analytics]
reporting_enabled = false
check_for_updates = false

[plugins]
enable_alpha = false
plugin_admin_enabled = false
plugin_admin_external_manage_enabled = false
# Completely disable plugin catalog access
grafana_com_url =
# Disable background installer
allow_loading_unsigned_plugins =
# Disable plugin installation completely
plugin_catalog_hidden_plugins = *

[plugin_management]
check_for_updates = false
enable_background_install = false
# Disable all plugin management
enabled = false

[log]
level = info

[server]
# Will be overridden by environment variable GF_SERVER_ROOT_URL
@@ -0,0 +1,15 @@
# Grafana Dashboard Provisioning
# https://grafana.com/docs/grafana/latest/administration/provisioning/#dashboards

apiVersion: 1

providers:
  - name: 'Default'
    orgId: 1
    folder: ''
    type: file
    disableDeletion: false
    updateIntervalSeconds: 10
    allowUiUpdates: true
    options:
      path: /var/lib/grafana/dashboards
@@ -0,0 +1,17 @@
# Grafana Datasource Provisioning
# https://grafana.com/docs/grafana/latest/administration/provisioning/#data-sources

apiVersion: 1

datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus:9090
    isDefault: true
    editable: false
    jsonData:
      timeInterval: 15s
      queryTimeout: 60s
      httpMethod: POST
    version: 1
245
deployment/legacy/stacks/stacks/monitoring/prometheus/alerts.yml
Normal file
@@ -0,0 +1,245 @@
# Prometheus Alerting Rules
# https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/

groups:
  - name: infrastructure_alerts
    interval: 30s
    rules:
      # Host System Alerts
      - alert: HostHighCpuLoad
        expr: 100 - (avg by(instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100) > 80
        for: 5m
        labels:
          severity: warning
          category: infrastructure
        annotations:
          summary: "High CPU load on {{ $labels.instance }}"
          description: "CPU load is above 80% (current value: {{ $value }}%)"

      - alert: HostOutOfMemory
        expr: node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes * 100 < 10
        for: 2m
        labels:
          severity: critical
          category: infrastructure
        annotations:
          summary: "Host out of memory on {{ $labels.instance }}"
          description: "Available memory is below 10% (current value: {{ $value }}%)"

      - alert: HostOutOfDiskSpace
        expr: (node_filesystem_avail_bytes{mountpoint="/",fstype!="rootfs"} / node_filesystem_size_bytes{mountpoint="/",fstype!="rootfs"} * 100) < 10
        for: 2m
        labels:
          severity: critical
          category: infrastructure
        annotations:
          summary: "Host out of disk space on {{ $labels.instance }}"
          description: "Disk space is below 10% (current value: {{ $value }}%)"

      - alert: HostDiskSpaceWarning
        expr: (node_filesystem_avail_bytes{mountpoint="/",fstype!="rootfs"} / node_filesystem_size_bytes{mountpoint="/",fstype!="rootfs"} * 100) < 20
        for: 5m
        labels:
          severity: warning
          category: infrastructure
        annotations:
          summary: "Disk space warning on {{ $labels.instance }}"
          description: "Disk space is below 20% (current value: {{ $value }}%)"

      - alert: HostHighDiskReadLatency
        expr: rate(node_disk_read_time_seconds_total[1m]) / rate(node_disk_reads_completed_total[1m]) > 0.1
        for: 5m
        labels:
          severity: warning
          category: infrastructure
        annotations:
          summary: "High disk read latency on {{ $labels.instance }}"
          description: "Disk read latency is high (current value: {{ $value }}s)"

      # Container Alerts
      - alert: ContainerKilled
        expr: time() - container_last_seen{name!~".*exporter.*"} > 60
        for: 1m
        labels:
          severity: critical
          category: container
        annotations:
          summary: "Container killed: {{ $labels.name }}"
          description: "Container {{ $labels.name }} has disappeared"

      - alert: ContainerHighCpuUsage
        expr: (sum(rate(container_cpu_usage_seconds_total{name!~".*exporter.*"}[5m])) by (name) * 100) > 80
        for: 5m
        labels:
          severity: warning
          category: container
        annotations:
          summary: "High CPU usage in container {{ $labels.name }}"
          description: "Container CPU usage is above 80% (current value: {{ $value }}%)"

      - alert: ContainerHighMemoryUsage
        expr: (sum(container_memory_usage_bytes{name!~".*exporter.*"}) by (name) / sum(container_spec_memory_limit_bytes{name!~".*exporter.*"}) by (name) * 100) > 80
        for: 5m
        labels:
          severity: warning
          category: container
        annotations:
          summary: "High memory usage in container {{ $labels.name }}"
          description: "Container memory usage is above 80% (current value: {{ $value }}%)"

      - alert: ContainerVolumeUsage
        expr: (1 - (sum(container_fs_inodes_free) BY (instance) / sum(container_fs_inodes_total) BY (instance))) * 100 > 80
        for: 5m
        labels:
          severity: warning
          category: container
        annotations:
          summary: "Container volume usage on {{ $labels.instance }}"
          description: "Container volume usage is above 80% (current value: {{ $value }}%)"

      - alert: ContainerRestartCount
        expr: rate(container_restart_count[5m]) > 0
        for: 1m
        labels:
          severity: warning
          category: container
        annotations:
          summary: "Container restarting: {{ $labels.name }}"
          description: "Container {{ $labels.name }} is restarting frequently"

      # Prometheus Self-Monitoring
      - alert: PrometheusTargetDown
        expr: up == 0
        for: 1m
        labels:
          severity: critical
          category: prometheus
        annotations:
          summary: "Prometheus target down: {{ $labels.job }}"
          description: "Target {{ $labels.job }} on {{ $labels.instance }} is down"

      - alert: PrometheusConfigReloadFailure
        expr: prometheus_config_last_reload_successful == 0
        for: 1m
        labels:
          severity: critical
          category: prometheus
        annotations:
          summary: "Prometheus configuration reload failure"
          description: "Prometheus configuration reload has failed"

      - alert: PrometheusTooManyRestarts
        expr: changes(process_start_time_seconds{job=~"prometheus"}[15m]) > 2
        for: 1m
        labels:
          severity: warning
          category: prometheus
        annotations:
          summary: "Prometheus restarting frequently"
          description: "Prometheus has restarted more than twice in the last 15 minutes"

      - alert: PrometheusTargetScrapingSlow
        expr: prometheus_target_interval_length_seconds{quantile="0.9"} > 60
        for: 5m
        labels:
          severity: warning
          category: prometheus
        annotations:
          summary: "Prometheus target scraping slow"
          description: "Prometheus is scraping targets slowly (current value: {{ $value }}s)"

      # Traefik Alerts
      - alert: TraefikServiceDown
        expr: count(traefik_service_server_up) by (service) == 0
        for: 1m
        labels:
          severity: critical
          category: traefik
        annotations:
          summary: "Traefik service down: {{ $labels.service }}"
          description: "Traefik service {{ $labels.service }} is down"

      - alert: TraefikHighHttp4xxErrorRate
        expr: sum(rate(traefik_service_requests_total{code=~"4.."}[5m])) by (service) / sum(rate(traefik_service_requests_total[5m])) by (service) * 100 > 5
        for: 5m
        labels:
          severity: warning
          category: traefik
        annotations:
          summary: "High HTTP 4xx error rate for {{ $labels.service }}"
          description: "HTTP 4xx error rate is above 5% (current value: {{ $value }}%)"

      - alert: TraefikHighHttp5xxErrorRate
        expr: sum(rate(traefik_service_requests_total{code=~"5.."}[5m])) by (service) / sum(rate(traefik_service_requests_total[5m])) by (service) * 100 > 1
        for: 5m
        labels:
          severity: critical
          category: traefik
        annotations:
          summary: "High HTTP 5xx error rate for {{ $labels.service }}"
          description: "HTTP 5xx error rate is above 1% (current value: {{ $value }}%)"

  - name: database_alerts
    interval: 30s
    rules:
      # PostgreSQL Alerts (uncomment when postgres-exporter is deployed)
      # - alert: PostgresqlDown
      #   expr: pg_up == 0
      #   for: 1m
      #   labels:
      #     severity: critical
      #     category: database
      #   annotations:
      #     summary: "PostgreSQL down on {{ $labels.instance }}"
      #     description: "PostgreSQL instance is down"

      # - alert: PostgresqlTooManyConnections
      #   expr: sum by (instance) (pg_stat_activity_count) > pg_settings_max_connections * 0.8
      #   for: 5m
      #   labels:
      #     severity: warning
      #     category: database
      #   annotations:
      #     summary: "Too many PostgreSQL connections on {{ $labels.instance }}"
      #     description: "PostgreSQL connections are above 80% of max_connections"

      # - alert: PostgresqlDeadLocks
      #   expr: rate(pg_stat_database_deadlocks[1m]) > 0
      #   for: 1m
      #   labels:
      #     severity: warning
      #     category: database
      #   annotations:
      #     summary: "PostgreSQL deadlocks on {{ $labels.instance }}"
      #     description: "PostgreSQL has deadlocks"

      # Redis Alerts (uncomment when redis-exporter is deployed)
      # - alert: RedisDown
      #   expr: redis_up == 0
      #   for: 1m
      #   labels:
      #     severity: critical
      #     category: cache
      #   annotations:
      #     summary: "Redis down on {{ $labels.instance }}"
      #     description: "Redis instance is down"

      # - alert: RedisOutOfMemory
      #   expr: redis_memory_used_bytes / redis_memory_max_bytes * 100 > 90
      #   for: 5m
      #   labels:
      #     severity: critical
      #     category: cache
      #   annotations:
      #     summary: "Redis out of memory on {{ $labels.instance }}"
      #     description: "Redis memory usage is above 90%"

      # - alert: RedisTooManyConnections
      #   expr: redis_connected_clients > 100
      #   for: 5m
      #   labels:
      #     severity: warning
      #     category: cache
      #   annotations:
      #     summary: "Too many Redis connections on {{ $labels.instance }}"
      #     description: "Redis has too many client connections (current value: {{ $value }})"
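The threshold arithmetic in these rules is easy to sanity-check in isolation. A minimal sketch of the `HostOutOfMemory` condition (the function name and sample values are for illustration only):

```python
def host_out_of_memory(avail_bytes: float, total_bytes: float,
                       threshold_pct: float = 10.0) -> bool:
    """Mirrors the HostOutOfMemory rule above:
    node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes * 100 < 10"""
    return avail_bytes / total_bytes * 100 < threshold_pct

# 512 MiB available out of 8 GiB is 6.25% free, so the alert would fire.
print(host_out_of_memory(512 * 2**20, 8 * 2**30))  # → True
```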
@@ -0,0 +1,82 @@
# Prometheus Configuration
# https://prometheus.io/docs/prometheus/latest/configuration/configuration/

global:
  scrape_interval: 15s
  evaluation_interval: 15s
  external_labels:
    cluster: 'production'
    environment: 'michaelschiemer'

# Alertmanager configuration (optional)
# alerting:
#   alertmanagers:
#     - static_configs:
#         - targets:
#             - alertmanager:9093

# Load alerting rules
rule_files:
  - '/etc/prometheus/alerts.yml'

# Scrape configurations
scrape_configs:
  # Prometheus self-monitoring
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']
        labels:
          service: 'prometheus'

  # Node Exporter - Host system metrics
  - job_name: 'node-exporter'
    static_configs:
      - targets: ['node-exporter:9100']
        labels:
          service: 'node-exporter'
          instance: 'production-server'

  # cAdvisor - Container metrics
  - job_name: 'cadvisor'
    static_configs:
      - targets: ['cadvisor:8080']
        labels:
          service: 'cadvisor'

  # Traefik metrics
  - job_name: 'traefik'
    static_configs:
      - targets: ['traefik:8080']
        labels:
          service: 'traefik'

  # PostgreSQL Exporter (if deployed)
  # Uncomment if you add postgres-exporter to postgresql stack
  # - job_name: 'postgres'
  #   static_configs:
  #     - targets: ['postgres-exporter:9187']
  #       labels:
  #         service: 'postgresql'

  # Redis Exporter (if deployed)
  # Uncomment if you add redis-exporter to application stack
  # - job_name: 'redis'
  #   static_configs:
  #     - targets: ['redis-exporter:9121']
  #       labels:
  #         service: 'redis'

  # Application metrics endpoint (if available)
  # Uncomment and configure if your PHP app exposes Prometheus metrics
  # - job_name: 'application'
  #   static_configs:
  #     - targets: ['app:9000']
  #       labels:
  #         service: 'application'

  # Nginx metrics (if nginx-prometheus-exporter deployed)
  # - job_name: 'nginx'
  #   static_configs:
  #     - targets: ['nginx-exporter:9113']
  #       labels:
  #         service: 'nginx'
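Each `static_configs` entry above expands into one scrape target per address, with the listed labels attached to every series scraped from it. A simplified sketch of that expansion (the merge logic is a deliberate simplification of what Prometheus does internally; job names and targets are taken from the config above):

```python
# Active jobs from the scrape_configs section above.
SCRAPE_CONFIGS = [
    {"job_name": "prometheus", "targets": ["localhost:9090"],
     "labels": {"service": "prometheus"}},
    {"job_name": "node-exporter", "targets": ["node-exporter:9100"],
     "labels": {"service": "node-exporter", "instance": "production-server"}},
    {"job_name": "cadvisor", "targets": ["cadvisor:8080"],
     "labels": {"service": "cadvisor"}},
    {"job_name": "traefik", "targets": ["traefik:8080"],
     "labels": {"service": "traefik"}},
]

def resolved_targets(configs):
    """Yield one labelled target per static address, as Prometheus would."""
    for cfg in configs:
        for addr in cfg["targets"]:
            yield {"job": cfg["job_name"], "__address__": addr, **cfg["labels"]}

for tgt in resolved_targets(SCRAPE_CONFIGS):
    print(tgt)
```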
257
deployment/legacy/stacks/stacks/postgresql-production/README.md
Normal file
@@ -0,0 +1,257 @@
# PostgreSQL Production Stack - Production Database

## Overview

Production-ready PostgreSQL 16 database stack with automated backup system and performance optimization.

**Features**:
- PostgreSQL 16 Alpine (lightweight, secure)
- Automated daily backups with configurable retention
- Performance-optimized configuration (2GB memory allocation)
- Health checks and automatic recovery
- Persistent storage with named volumes
- Isolated `postgres-production-internal` network
- Resource limits for stability

## Services

- **postgres-production** - PostgreSQL 16 database server
- **postgres-production-backup** - Automated backup service with cron scheduling

## Prerequisites

1. **Docker and Docker Compose** installed
2. **Environment file** created (`.env`)

## Configuration

### 1. Create Environment File

```bash
cp .env.example .env
```

### 2. Generate Secure Password

```bash
openssl rand -base64 32
```

Update `.env`:
```env
POSTGRES_DB=michaelschiemer
POSTGRES_USER=postgres
POSTGRES_PASSWORD=<generated-password>
BACKUP_RETENTION_DAYS=7
BACKUP_SCHEDULE=0 2 * * *
```

## Deployment

### Initial Setup

```bash
# Create environment file
cp .env.example .env

# Generate and set password
openssl rand -base64 32
# Update POSTGRES_PASSWORD in .env

# Start services
docker compose up -d

# Check logs
docker compose logs -f

# Verify health
docker compose ps
```

### Verify Deployment

```bash
# Check PostgreSQL is running
docker exec postgres-production pg_isready -U postgres -d michaelschiemer

# Expected: postgres-production:5432 - accepting connections

# Check backup service
docker compose logs postgres-production-backup

# Expected: Initial backup completed successfully
```

## Integration with Production Application Stack

The Production Application Stack connects to this database via the `postgres-production-internal` network.

**Connection Configuration** in `deployment/stacks/production/.env`:
```env
DB_HOST=postgres-production
DB_PORT=5432
DB_DATABASE=michaelschiemer
DB_USERNAME=postgres
DB_PASSWORD=<same-as-postgres-production-password>
```

**Network Connection**: The Production Application Stack must be connected to the `postgres-production-internal` network.

## Usage

### Database Access

#### From Host Machine

```bash
# Connect to database
docker exec -it postgres-production psql -U postgres -d michaelschiemer

# Run SQL query
docker exec postgres-production psql -U postgres -d michaelschiemer -c "SELECT version();"
```

#### From Application Container

```bash
# Connection string format
postgresql://postgres:password@postgres-production:5432/michaelschiemer
```

### Backup Management

#### Manual Backup

```bash
# Trigger manual backup
docker exec postgres-production-backup /scripts/backup.sh

# List backups
ls -lh backups/

# Example output:
# postgres_michaelschiemer_20250130_020000.sql.gz
```

#### Restore from Backup

```bash
# List available backups
docker exec postgres-production-backup ls -lh /backups

# Restore specific backup
docker exec -it postgres-production-backup /scripts/restore.sh /backups/postgres_michaelschiemer_20250130_020000.sql.gz

# ⚠️ WARNING: This will DROP and RECREATE the database!
```
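For orientation, a hypothetical sketch of what a backup script like `/scripts/backup.sh` typically does (the actual script shipped in this stack may differ): a timestamped, compressed `pg_dump` plus pruning of backups older than the retention window.

```shell
# Hypothetical sketch of a backup script; the real /scripts/backup.sh may differ.
BACKUP_DIR=/backups
RETENTION_DAYS="${BACKUP_RETENTION_DAYS:-7}"
STAMP="$(date +%Y%m%d_%H%M%S)"
BACKUP_FILE="${BACKUP_DIR}/postgres_michaelschiemer_${STAMP}.sql.gz"

run_backup() {
    # Dump the database and compress it in one pass
    pg_dump -h postgres-production -U "$POSTGRES_USER" "$POSTGRES_DB" \
        | gzip > "$BACKUP_FILE"
    # Prune backups older than the retention window
    find "$BACKUP_DIR" -name 'postgres_*.sql.gz' -mtime +"$RETENTION_DAYS" -delete
}
```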
## Network Isolation

This stack uses its own isolated network:
- **Network**: `postgres-production-internal`
- **Purpose**: Isolate Production database from other services
- **Access**: Only services explicitly connected to this network can access the database

**Connecting Application Stack**:
```yaml
# In deployment/stacks/production/docker-compose.production.yml
networks:
  postgres-production-internal:
    external: true
    name: postgres-production-internal
```

## Security

### Network Isolation
- PostgreSQL only accessible via `postgres-production-internal` network
- No external ports exposed
- Service-to-service communication only

### Authentication
- Strong password required (generated with `openssl rand -base64 32`)
- No default passwords
- Password stored in environment variables only

## Monitoring

### Health Checks

```bash
# Check service health
docker compose ps

# Expected: Both services "healthy"

# Manual health check
docker exec postgres-production pg_isready -U postgres -d michaelschiemer
```

### Resource Usage

```bash
# Database container stats
docker stats postgres-production --no-stream

# Disk usage
docker exec postgres-production du -sh /var/lib/postgresql/data
```

### Logs

```bash
# PostgreSQL logs
docker compose logs postgres-production

# Backup logs
docker compose logs postgres-production-backup

# Real-time monitoring
docker compose logs -f
```

## Troubleshooting

### Database Won't Start

```bash
# Check logs
docker compose logs postgres-production

# Common issues:
# 1. Invalid configuration
docker exec postgres-production postgres --check

# 2. Permission issues
docker exec postgres-production ls -la /var/lib/postgresql/data
```

### Connection Refused from Application

```bash
# 1. Check PostgreSQL is running
docker compose ps postgres-production

# 2. Verify network
docker network inspect postgres-production-internal | grep postgres-production

# 3. Check if application is connected to network
docker network inspect postgres-production-internal | grep app
```

## Differences from Staging Stack

| Aspect | Production | Staging |
|--------|-----------|---------|
| **Container Name** | postgres-production | postgres-staging |
| **Network** | postgres-production-internal | postgres-staging-internal |
| **Volume** | postgres-production-data | postgres-staging-data |
| **Database** | michaelschiemer | michaelschiemer_staging |
| **Backup Retention** | 7 days (configurable) | 3 days (configurable) |

## Additional Resources

- **PostgreSQL Documentation**: https://www.postgresql.org/docs/16/
- **Performance Tuning**: https://wiki.postgresql.org/wiki/Performance_Optimization
- **Backup Best Practices**: https://www.postgresql.org/docs/16/backup.html
@@ -0,0 +1,70 @@
# PostgreSQL Configuration for Production
# Optimized for 2GB memory allocation

# Connection Settings
listen_addresses = '*'
max_connections = 100
superuser_reserved_connections = 3

# Memory Settings (for 2GB allocation)
shared_buffers = 512MB
effective_cache_size = 1536MB
maintenance_work_mem = 128MB
work_mem = 5MB

# Checkpoint Settings
checkpoint_completion_target = 0.9
wal_buffers = 16MB
default_statistics_target = 100

# Query Planner
random_page_cost = 1.1
effective_io_concurrency = 200

# Write-Ahead Logging
wal_level = replica
max_wal_size = 2GB
min_wal_size = 1GB

# Logging
log_destination = 'stderr'
logging_collector = on
log_directory = 'log'
log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'
log_rotation_age = 1d
log_rotation_size = 100MB
log_line_prefix = '%m [%p] %u@%d '
log_timezone = 'Europe/Berlin'

# What to Log
log_min_duration_statement = 1000
log_checkpoints = on
log_connections = on
log_disconnections = on
log_lock_waits = on
log_statement = 'ddl'

# Autovacuum
autovacuum = on
autovacuum_max_workers = 3
autovacuum_naptime = 1min

# Client Connection Defaults
datestyle = 'iso, mdy'
timezone = 'Europe/Berlin'
lc_messages = 'en_US.utf8'
lc_monetary = 'en_US.utf8'
lc_numeric = 'en_US.utf8'
lc_time = 'en_US.utf8'
default_text_search_config = 'pg_catalog.english'

# Lock Management
deadlock_timeout = 1s

# Statement Timeout (prevent long-running queries)
statement_timeout = 30000  # 30 seconds

# Parallel Query Execution
max_parallel_workers_per_gather = 2
max_parallel_workers = 4
max_worker_processes = 4
@@ -0,0 +1,72 @@
services:
  # PostgreSQL Production Database
  postgres-production:
    image: postgres:16-alpine
    container_name: postgres-production
    restart: unless-stopped
    networks:
      - postgres-production-internal
    environment:
      - TZ=Europe/Berlin
      - POSTGRES_DB=${POSTGRES_DB:-michaelschiemer}
      - POSTGRES_USER=${POSTGRES_USER:-postgres}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - PGDATA=/var/lib/postgresql/data/pgdata
    volumes:
      - postgres-production-data:/var/lib/postgresql/data
      - ./conf.d:/etc/postgresql/conf.d:ro
      - ./backups:/backups
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    command:
      - postgres
      - -c
      - config_file=/etc/postgresql/conf.d/postgresql.conf
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER:-postgres} -d ${POSTGRES_DB:-michaelschiemer}"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s
    shm_size: 256mb
    deploy:
      resources:
        limits:
          memory: 2G
        reservations:
          memory: 512M

  # Automated Backup Service for Production
  postgres-production-backup:
    image: postgres:16-alpine
    container_name: postgres-production-backup
    restart: unless-stopped
    networks:
      - postgres-production-internal
    environment:
      - TZ=Europe/Berlin
      - POSTGRES_HOST=postgres-production
      - POSTGRES_DB=${POSTGRES_DB:-michaelschiemer}
      - POSTGRES_USER=${POSTGRES_USER:-postgres}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - BACKUP_RETENTION_DAYS=${BACKUP_RETENTION_DAYS:-7}
      - BACKUP_SCHEDULE=${BACKUP_SCHEDULE:-0 2 * * *}
    volumes:
      - ./backups:/backups
      - ./scripts:/scripts:ro
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    entrypoint: ["/scripts/backup-entrypoint.sh"]
    depends_on:
      postgres-production:
        condition: service_healthy

volumes:
  postgres-production-data:
    name: postgres-production-data

networks:
  postgres-production-internal:
    name: postgres-production-internal
    driver: bridge
@@ -0,0 +1,23 @@
#!/bin/sh
set -e

echo "🔄 PostgreSQL Backup Service Starting..."
echo "📅 Backup Schedule: ${BACKUP_SCHEDULE}"
echo "🗑️ Retention: ${BACKUP_RETENTION_DAYS} days"

# Install dcron for scheduled backups
apk add --no-cache dcron

# Create cron job
echo "${BACKUP_SCHEDULE} /scripts/backup.sh >> /var/log/backup.log 2>&1" > /etc/crontabs/root

# Ensure backup directory exists
mkdir -p /backups

# Run initial backup
echo "🚀 Running initial backup..."
/scripts/backup.sh

# Start cron in foreground
echo "✅ Backup service ready - cron daemon starting"
crond -f -l 2
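The single crontab line the entrypoint generates can be previewed outside the container; a minimal sketch that reproduces it without touching the system crontab:

```shell
# Reproduce the line written to /etc/crontabs/root, without installing anything.
BACKUP_SCHEDULE="0 2 * * *"
CRON_LINE="${BACKUP_SCHEDULE} /scripts/backup.sh >> /var/log/backup.log 2>&1"
echo "${CRON_LINE}"
```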
55
deployment/legacy/stacks/stacks/postgresql-production/scripts/backup.sh
Executable file
@@ -0,0 +1,55 @@
#!/bin/sh
set -e

# Configuration
BACKUP_DIR="/backups"
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
BACKUP_FILE="${BACKUP_DIR}/postgres_${POSTGRES_DB}_${TIMESTAMP}.sql.gz"
RETENTION_DAYS=${BACKUP_RETENTION_DAYS:-7}

echo "======================================"
echo "PostgreSQL Backup - $(date)"
echo "======================================"

# Wait for PostgreSQL to be ready
until PGPASSWORD="${POSTGRES_PASSWORD}" pg_isready -h "${POSTGRES_HOST}" -U "${POSTGRES_USER}" -d "${POSTGRES_DB}"; do
    echo "⏳ Waiting for PostgreSQL..."
    sleep 5
done

echo "✅ PostgreSQL is ready"

# Create backup
echo "📦 Creating backup: ${BACKUP_FILE}"
PGPASSWORD="${POSTGRES_PASSWORD}" pg_dump \
    -h "${POSTGRES_HOST}" \
    -U "${POSTGRES_USER}" \
    -d "${POSTGRES_DB}" \
    --clean \
    --if-exists \
    --create \
    --no-owner \
    --no-privileges \
    | gzip > "${BACKUP_FILE}"

# Verify backup was created
if [ -f "${BACKUP_FILE}" ]; then
    BACKUP_SIZE=$(du -h "${BACKUP_FILE}" | cut -f1)
    echo "✅ Backup created successfully: ${BACKUP_SIZE}"
else
    echo "❌ Backup failed!"
    exit 1
fi

# Cleanup old backups
echo "🗑️ Cleaning up backups older than ${RETENTION_DAYS} days..."
find "${BACKUP_DIR}" -name "postgres_*.sql.gz" -type f -mtime +${RETENTION_DAYS} -delete

# List current backups
echo ""
echo "📊 Current backups:"
ls -lh "${BACKUP_DIR}"/postgres_*.sql.gz 2>/dev/null || echo "No backups found"

echo ""
echo "✅ Backup completed successfully"
echo "======================================"
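The cleanup step relies on `find -mtime +N`, which matches files whose age exceeds N whole days. A throwaway demonstration of the same expression in a scratch directory (GNU `touch -d` syntax assumed, as on typical Linux hosts):

```shell
# Simulate retention cleanup without touching real backups.
RETENTION_DAYS=7
DEMO_DIR=$(mktemp -d)
touch "${DEMO_DIR}/postgres_demo_new.sql.gz"
touch -d "10 days ago" "${DEMO_DIR}/postgres_demo_old.sql.gz"   # GNU touch syntax

# Same expression backup.sh uses: delete archives older than the retention window.
find "${DEMO_DIR}" -name "postgres_*.sql.gz" -type f -mtime +${RETENTION_DAYS} -delete

ls "${DEMO_DIR}"    # only the file inside the retention window remains
```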
55
deployment/legacy/stacks/stacks/postgresql-production/scripts/restore.sh
Executable file
@@ -0,0 +1,55 @@
#!/bin/sh
set -e

# Configuration
BACKUP_DIR="/backups"

if [ -z "$1" ]; then
    echo "Usage: ./restore.sh <backup-file>"
    echo ""
    echo "Available backups:"
    ls -lh "${BACKUP_DIR}"/postgres_*.sql.gz 2>/dev/null || echo "No backups found"
    exit 1
fi

BACKUP_FILE="$1"

if [ ! -f "${BACKUP_FILE}" ]; then
    echo "❌ Backup file not found: ${BACKUP_FILE}"
    exit 1
fi

echo "======================================"
echo "PostgreSQL Restore - $(date)"
echo "======================================"
echo "📦 Backup file: ${BACKUP_FILE}"
echo ""

# Wait for PostgreSQL to be ready
until PGPASSWORD="${POSTGRES_PASSWORD}" pg_isready -h "${POSTGRES_HOST}" -U "${POSTGRES_USER}" -d postgres; do
    echo "⏳ Waiting for PostgreSQL..."
    sleep 5
done

echo "✅ PostgreSQL is ready"
echo ""

# Warning
echo "⚠️ WARNING: This will DROP and RECREATE the database!"
echo "⚠️ Database: ${POSTGRES_DB}"
echo ""
echo "Press Ctrl+C to cancel, or wait 10 seconds to continue..."
sleep 10

echo ""
echo "🔄 Restoring database..."

# Restore backup
gunzip -c "${BACKUP_FILE}" | PGPASSWORD="${POSTGRES_PASSWORD}" psql \
    -h "${POSTGRES_HOST}" \
    -U "${POSTGRES_USER}" \
    -d postgres

echo ""
echo "✅ Database restored successfully"
echo "======================================"
263
deployment/legacy/stacks/stacks/postgresql-staging/README.md
Normal file
@@ -0,0 +1,263 @@
# PostgreSQL Staging Stack - Staging Database

## Overview

Staging PostgreSQL 16 database stack with automated backup system for testing and development.

**Features**:
- PostgreSQL 16 Alpine (lightweight, secure)
- Automated daily backups with shorter retention (3 days)
- Performance-optimized configuration (1GB memory allocation)
- Health checks and automatic recovery
- Persistent storage with named volumes
- Isolated `postgres-staging-internal` network
- Resource limits optimized for staging workloads

## Services

- **postgres-staging** - PostgreSQL 16 database server
- **postgres-staging-backup** - Automated backup service with cron scheduling

## Prerequisites

1. **Docker and Docker Compose** installed
2. **Environment file** created (`.env`)

## Configuration

### 1. Create Environment File

```bash
cp .env.example .env
```

### 2. Generate Secure Password

```bash
openssl rand -base64 32
```

Update `.env`:
```env
POSTGRES_DB=michaelschiemer_staging
POSTGRES_USER=postgres
POSTGRES_PASSWORD=<generated-password>
BACKUP_RETENTION_DAYS=3
BACKUP_SCHEDULE=0 3 * * *
```
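Rather than pasting the generated password by hand, it can be written into `.env` in one step. A sketch using a temporary file in place of the real `.env` (GNU `sed -i` assumed; the `|` delimiter is safe because it never occurs in base64 output):

```shell
# Replace the POSTGRES_PASSWORD line in an env file with a freshly generated secret.
ENV_FILE=$(mktemp)                       # stand-in for the stack's .env
echo "POSTGRES_PASSWORD=changeme" > "${ENV_FILE}"

NEW_PW=$(openssl rand -base64 32 | tr -d '\n')
sed -i "s|^POSTGRES_PASSWORD=.*|POSTGRES_PASSWORD=${NEW_PW}|" "${ENV_FILE}"

grep '^POSTGRES_PASSWORD=' "${ENV_FILE}" >/dev/null && echo "password updated"
```

This keeps the secret out of your shell history aside from the single generation call.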
**Note**: Staging uses shorter backup retention (3 days) and runs backups at 3 AM (1 hour after production) to reduce resource contention.

## Deployment

### Initial Setup

```bash
# Create environment file
cp .env.example .env

# Generate and set password
openssl rand -base64 32
# Update POSTGRES_PASSWORD in .env

# Start services
docker compose up -d

# Check logs
docker compose logs -f

# Verify health
docker compose ps
```

### Verify Deployment

```bash
# Check PostgreSQL is running
docker exec postgres-staging pg_isready -U postgres -d michaelschiemer_staging

# Expected: postgres-staging:5432 - accepting connections

# Check backup service
docker compose logs postgres-staging-backup

# Expected: Initial backup completed successfully
```

## Integration with Staging Application Stack

The Staging Application Stack connects to this database via the `postgres-staging-internal` network.

**Connection Configuration** in `docker-compose.staging.yml`:
```env
DB_HOST=postgres-staging
DB_PORT=5432
DB_DATABASE=michaelschiemer_staging
DB_USERNAME=postgres
DB_PASSWORD=<same-as-postgres-staging-password>
```

**Network Connection**: The Staging Application Stack must be connected to the `postgres-staging-internal` network.

## Usage

### Database Access

#### From Host Machine

```bash
# Connect to database
docker exec -it postgres-staging psql -U postgres -d michaelschiemer_staging

# Run SQL query
docker exec postgres-staging psql -U postgres -d michaelschiemer_staging -c "SELECT version();"
```

#### From Application Container

```bash
# Connection string format
postgresql://postgres:password@postgres-staging:5432/michaelschiemer_staging
```

### Backup Management

#### Manual Backup

```bash
# Trigger manual backup
docker exec postgres-staging-backup /scripts/backup.sh

# List backups
ls -lh backups/

# Example output:
# postgres_michaelschiemer_staging_20250130_030000.sql.gz
```

#### Restore from Backup

```bash
# List available backups
docker exec postgres-staging-backup ls -lh /backups

# Restore specific backup
docker exec -it postgres-staging-backup /scripts/restore.sh /backups/postgres_michaelschiemer_staging_20250130_030000.sql.gz

# ⚠️ WARNING: This will DROP and RECREATE the database!
```
## Network Isolation

This stack uses its own isolated network:
- **Network**: `postgres-staging-internal`
- **Purpose**: Isolate Staging database from Production and other services
- **Access**: Only services explicitly connected to this network can access the database

**Connecting Application Stack**:
```yaml
# In docker-compose.staging.yml
networks:
  postgres-staging-internal:
    external: true
    name: postgres-staging-internal
```

## Security

### Network Isolation
- PostgreSQL only accessible via `postgres-staging-internal` network
- No external ports exposed
- Service-to-service communication only

### Authentication
- Strong password required (generated with `openssl rand -base64 32`)
- No default passwords
- Password stored in environment variables only

**Note**: Staging passwords can be different from Production, but should still be secure.

## Monitoring

### Health Checks

```bash
# Check service health
docker compose ps

# Expected: Both services "healthy"

# Manual health check
docker exec postgres-staging pg_isready -U postgres -d michaelschiemer_staging
```

### Resource Usage

```bash
# Database container stats
docker stats postgres-staging --no-stream

# Disk usage
docker exec postgres-staging du -sh /var/lib/postgresql/data
```

### Logs

```bash
# PostgreSQL logs
docker compose logs postgres-staging

# Backup logs
docker compose logs postgres-staging-backup

# Real-time monitoring
docker compose logs -f
```

## Troubleshooting

### Database Won't Start

```bash
# Check logs
docker compose logs postgres-staging

# Common issues:
# 1. Invalid configuration
docker exec postgres-staging postgres --check

# 2. Permission issues
docker exec postgres-staging ls -la /var/lib/postgresql/data
```

### Connection Refused from Application

```bash
# 1. Check PostgreSQL is running
docker compose ps postgres-staging

# 2. Verify network
docker network inspect postgres-staging-internal | grep postgres-staging

# 3. Check if application is connected to network
docker network inspect postgres-staging-internal | grep staging-app
```

## Differences from Production Stack

| Aspect | Production | Staging |
|--------|-----------|---------|
| **Container Name** | postgres-production | postgres-staging |
| **Network** | postgres-production-internal | postgres-staging-internal |
| **Volume** | postgres-production-data | postgres-staging-data |
| **Database** | michaelschiemer | michaelschiemer_staging |
| **Memory Limit** | 2GB | 1GB |
| **Backup Retention** | 7 days | 3 days |
| **Backup Schedule** | 2 AM | 3 AM |

## Additional Resources

- **PostgreSQL Documentation**: https://www.postgresql.org/docs/16/
- **Performance Tuning**: https://wiki.postgresql.org/wiki/Performance_Optimization
- **Backup Best Practices**: https://www.postgresql.org/docs/16/backup.html
@@ -0,0 +1,70 @@
# PostgreSQL Configuration for Staging
# Optimized for 1GB memory allocation

# Connection Settings
listen_addresses = '*'
max_connections = 100
superuser_reserved_connections = 3

# Memory Settings (for 1GB allocation)
shared_buffers = 256MB
effective_cache_size = 768MB
maintenance_work_mem = 64MB
work_mem = 4MB

# Checkpoint Settings
checkpoint_completion_target = 0.9
wal_buffers = 16MB
default_statistics_target = 100

# Query Planner
random_page_cost = 1.1
effective_io_concurrency = 200

# Write-Ahead Logging
wal_level = replica
max_wal_size = 2GB
min_wal_size = 1GB

# Logging
log_destination = 'stderr'
logging_collector = on
log_directory = 'log'
log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'
log_rotation_age = 1d
log_rotation_size = 100MB
log_line_prefix = '%m [%p] %u@%d '
log_timezone = 'Europe/Berlin'

# What to Log
log_min_duration_statement = 1000
log_checkpoints = on
log_connections = on
log_disconnections = on
log_lock_waits = on
log_statement = 'ddl'

# Autovacuum
autovacuum = on
autovacuum_max_workers = 3
autovacuum_naptime = 1min

# Client Connection Defaults
datestyle = 'iso, mdy'
timezone = 'Europe/Berlin'
lc_messages = 'en_US.utf8'
lc_monetary = 'en_US.utf8'
lc_numeric = 'en_US.utf8'
lc_time = 'en_US.utf8'
default_text_search_config = 'pg_catalog.english'

# Lock Management
deadlock_timeout = 1s

# Statement Timeout (prevent long-running queries)
statement_timeout = 30000  # 30 seconds

# Parallel Query Execution
max_parallel_workers_per_gather = 2
max_parallel_workers = 4
max_worker_processes = 4
@@ -0,0 +1,72 @@
services:
  # PostgreSQL Staging Database
  postgres-staging:
    image: postgres:16-alpine
    container_name: postgres-staging
    restart: unless-stopped
    networks:
      - postgres-staging-internal
    environment:
      - TZ=Europe/Berlin
      - POSTGRES_DB=${POSTGRES_DB:-michaelschiemer_staging}
      - POSTGRES_USER=${POSTGRES_USER:-postgres}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - PGDATA=/var/lib/postgresql/data/pgdata
    volumes:
      - postgres-staging-data:/var/lib/postgresql/data
      - ./conf.d:/etc/postgresql/conf.d:ro
      - ./backups:/backups
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    command:
      - postgres
      - -c
      - config_file=/etc/postgresql/conf.d/postgresql.conf
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER:-postgres} -d ${POSTGRES_DB:-michaelschiemer_staging}"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s
    shm_size: 256mb
    deploy:
      resources:
        limits:
          memory: 1G
        reservations:
          memory: 256M

  # Automated Backup Service for Staging
  postgres-staging-backup:
    image: postgres:16-alpine
    container_name: postgres-staging-backup
    restart: unless-stopped
    networks:
      - postgres-staging-internal
    environment:
      - TZ=Europe/Berlin
      - POSTGRES_HOST=postgres-staging
      - POSTGRES_DB=${POSTGRES_DB:-michaelschiemer_staging}
      - POSTGRES_USER=${POSTGRES_USER:-postgres}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - BACKUP_RETENTION_DAYS=${BACKUP_RETENTION_DAYS:-3}
      - BACKUP_SCHEDULE=${BACKUP_SCHEDULE:-0 3 * * *}
    volumes:
      - ./backups:/backups
      - ./scripts:/scripts:ro
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    entrypoint: ["/scripts/backup-entrypoint.sh"]
    depends_on:
      postgres-staging:
        condition: service_healthy

volumes:
  postgres-staging-data:
    name: postgres-staging-data

networks:
  postgres-staging-internal:
    name: postgres-staging-internal
    driver: bridge
@@ -0,0 +1,23 @@
#!/bin/sh
set -e

echo "🔄 PostgreSQL Backup Service Starting..."
echo "📅 Backup Schedule: ${BACKUP_SCHEDULE}"
echo "🗑️ Retention: ${BACKUP_RETENTION_DAYS} days"

# Install dcron for scheduled backups
apk add --no-cache dcron

# Create cron job
echo "${BACKUP_SCHEDULE} /scripts/backup.sh >> /var/log/backup.log 2>&1" > /etc/crontabs/root

# Ensure backup directory exists
mkdir -p /backups

# Run initial backup
echo "🚀 Running initial backup..."
/scripts/backup.sh

# Start cron in foreground
echo "✅ Backup service ready - cron daemon starting"
crond -f -l 2
55
deployment/legacy/stacks/stacks/postgresql-staging/scripts/backup.sh
Executable file
@@ -0,0 +1,55 @@
#!/bin/sh
set -e

# Configuration
BACKUP_DIR="/backups"
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
BACKUP_FILE="${BACKUP_DIR}/postgres_${POSTGRES_DB}_${TIMESTAMP}.sql.gz"
RETENTION_DAYS=${BACKUP_RETENTION_DAYS:-7}

echo "======================================"
echo "PostgreSQL Backup - $(date)"
echo "======================================"

# Wait for PostgreSQL to be ready
until PGPASSWORD="${POSTGRES_PASSWORD}" pg_isready -h "${POSTGRES_HOST}" -U "${POSTGRES_USER}" -d "${POSTGRES_DB}"; do
    echo "⏳ Waiting for PostgreSQL..."
    sleep 5
done

echo "✅ PostgreSQL is ready"

# Create backup
echo "📦 Creating backup: ${BACKUP_FILE}"
PGPASSWORD="${POSTGRES_PASSWORD}" pg_dump \
    -h "${POSTGRES_HOST}" \
    -U "${POSTGRES_USER}" \
    -d "${POSTGRES_DB}" \
    --clean \
    --if-exists \
    --create \
    --no-owner \
    --no-privileges \
    | gzip > "${BACKUP_FILE}"

# Verify backup was created
if [ -f "${BACKUP_FILE}" ]; then
    BACKUP_SIZE=$(du -h "${BACKUP_FILE}" | cut -f1)
    echo "✅ Backup created successfully: ${BACKUP_SIZE}"
else
    echo "❌ Backup failed!"
    exit 1
fi

# Cleanup old backups
echo "🗑️ Cleaning up backups older than ${RETENTION_DAYS} days..."
find "${BACKUP_DIR}" -name "postgres_*.sql.gz" -type f -mtime +${RETENTION_DAYS} -delete

# List current backups
echo ""
echo "📊 Current backups:"
ls -lh "${BACKUP_DIR}"/postgres_*.sql.gz 2>/dev/null || echo "No backups found"

echo ""
echo "✅ Backup completed successfully"
echo "======================================"
55
deployment/legacy/stacks/stacks/postgresql-staging/scripts/restore.sh
Executable file
@@ -0,0 +1,55 @@
#!/bin/sh
set -e

# Configuration
BACKUP_DIR="/backups"

if [ -z "$1" ]; then
    echo "Usage: ./restore.sh <backup-file>"
    echo ""
    echo "Available backups:"
    ls -lh "${BACKUP_DIR}"/postgres_*.sql.gz 2>/dev/null || echo "No backups found"
    exit 1
fi

BACKUP_FILE="$1"

if [ ! -f "${BACKUP_FILE}" ]; then
    echo "❌ Backup file not found: ${BACKUP_FILE}"
    exit 1
fi

echo "======================================"
echo "PostgreSQL Restore - $(date)"
echo "======================================"
echo "📦 Backup file: ${BACKUP_FILE}"
echo ""

# Wait for PostgreSQL to be ready
until PGPASSWORD="${POSTGRES_PASSWORD}" pg_isready -h "${POSTGRES_HOST}" -U "${POSTGRES_USER}" -d postgres; do
    echo "⏳ Waiting for PostgreSQL..."
    sleep 5
done

echo "✅ PostgreSQL is ready"
echo ""

# Warning
echo "⚠️ WARNING: This will DROP and RECREATE the database!"
echo "⚠️ Database: ${POSTGRES_DB}"
echo ""
echo "Press Ctrl+C to cancel, or wait 10 seconds to continue..."
sleep 10

echo ""
echo "🔄 Restoring database..."

# Restore backup
gunzip -c "${BACKUP_FILE}" | PGPASSWORD="${POSTGRES_PASSWORD}" psql \
    -h "${POSTGRES_HOST}" \
    -U "${POSTGRES_USER}" \
    -d postgres

echo ""
echo "✅ Database restored successfully"
echo "======================================"
22
deployment/legacy/stacks/stacks/postgresql/.env.example
Normal file
@@ -0,0 +1,22 @@
# PostgreSQL Stack Configuration
# Copy this file to .env and adjust values

# Timezone
TZ=Europe/Berlin

# PostgreSQL Configuration
POSTGRES_DB=michaelschiemer
POSTGRES_USER=postgres
POSTGRES_PASSWORD=<generate-with-openssl-rand-base64-32>

# Backup Configuration
BACKUP_RETENTION_DAYS=7
BACKUP_SCHEDULE=0 2 * * *

# Connection Settings (for application integration)
# Use these in Stack 4 (Application) .env:
# DB_HOST=postgres
# DB_PORT=5432
# DB_NAME=michaelschiemer
# DB_USER=postgres
# DB_PASS=<same-as-POSTGRES_PASSWORD>
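For ad-hoc work on the host, the file's variables can be exported into the current shell by sourcing it with allexport enabled. A sketch against a temporary file (point `ENV_FILE` at the real `.env`; works only while no value contains unquoted shell metacharacters):

```shell
# Export every variable in an env file into the current shell session.
ENV_FILE=$(mktemp)                 # stand-in for the stack's .env
printf 'POSTGRES_DB=michaelschiemer\nPOSTGRES_USER=postgres\n' > "${ENV_FILE}"

set -a                             # auto-export everything assigned while sourcing
. "${ENV_FILE}"
set +a

echo "connecting as ${POSTGRES_USER} to ${POSTGRES_DB}"
```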
681
deployment/legacy/stacks/stacks/postgresql/README.md
Normal file
@@ -0,0 +1,681 @@
# PostgreSQL Stack - Production Database with Automated Backups

## Overview

Production-ready PostgreSQL 16 database with automated backup system and performance optimization.

**Features**:
- PostgreSQL 16 Alpine (lightweight, secure)
- Automated daily backups with configurable retention
- Performance-optimized configuration (2GB memory allocation)
- Health checks and automatic recovery
- Persistent storage with named volumes
- Isolated app-internal network
- Resource limits for stability

## Services

- **postgres** - PostgreSQL 16 database server
- **postgres-backup** - Automated backup service with cron scheduling

## Prerequisites

1. **Traefik Stack Running**
   ```bash
   cd ../traefik
   docker compose up -d
   ```

2. **App-Internal Network Created**
   ```bash
   docker network create app-internal
   ```
   (Created automatically by Stack 4 - Application)

## Configuration

### 1. Create Environment File

```bash
cp .env.example .env
```

### 2. Generate Secure Password

```bash
openssl rand -base64 32
```

Update `.env`:
```env
POSTGRES_PASSWORD=<generated-password>
```

### 3. Review Configuration

**Database Settings** (`.env`):
- `POSTGRES_DB` - Database name (default: michaelschiemer)
- `POSTGRES_USER` - Database user (default: postgres)
- `POSTGRES_PASSWORD` - Database password (REQUIRED)

**Backup Settings** (`.env`):
- `BACKUP_RETENTION_DAYS` - Keep backups for N days (default: 7)
- `BACKUP_SCHEDULE` - Cron expression (default: `0 2 * * *` = 2 AM daily)
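The five space-separated cron fields are minute, hour, day of month, month, and day of week. A sketch splitting the default schedule (globbing is disabled so `*` stays literal):

```shell
set -f                                  # keep '*' from expanding to filenames
BACKUP_SCHEDULE="0 2 * * *"
set -- $BACKUP_SCHEDULE                 # split on whitespace into $1..$5
CRON_FIELDS="minute=$1 hour=$2 day-of-month=$3 month=$4 day-of-week=$5"
set +f
echo "${CRON_FIELDS}"
```

So `0 2 * * *` fires at minute 0 of hour 2, every day.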
|
||||

**Performance Tuning** (`conf.d/postgresql.conf`):
- Optimized for 2GB memory allocation
- Connection pooling (max 100 connections)
- Write-ahead logging for reliability
- Query logging for slow queries (>1s)
- Parallel query execution enabled

## Deployment

### Initial Setup

```bash
# Create environment file
cp .env.example .env

# Generate and set password
openssl rand -base64 32
# Update POSTGRES_PASSWORD in .env

# Ensure app-internal network exists
docker network inspect app-internal || docker network create app-internal

# Start services
docker compose up -d

# Check logs
docker compose logs -f

# Verify health
docker compose ps
```

### Verify Deployment

```bash
# Check PostgreSQL is running
docker exec postgres pg_isready -U postgres -d michaelschiemer
# Expected: postgres:5432 - accepting connections

# Check backup service
docker compose logs postgres-backup
# Expected: Initial backup completed successfully
```

## Usage

### Database Access

#### From Host Machine

```bash
# Connect to database
docker exec -it postgres psql -U postgres -d michaelschiemer

# Run SQL query
docker exec postgres psql -U postgres -d michaelschiemer -c "SELECT version();"
```

#### From Application Container

```bash
# Connection string format
postgresql://postgres:password@postgres:5432/michaelschiemer

# Example with environment variables (Stack 4 - Application)
DB_HOST=postgres
DB_PORT=5432
DB_NAME=michaelschiemer
DB_USER=postgres
DB_PASS=<same-as-POSTGRES_PASSWORD>
```
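The discrete `DB_*` variables and the connection-string form carry the same information; a minimal sketch of assembling one from the other (values are placeholders, not real credentials):

```shell
# Compose a PostgreSQL connection string from the discrete variables
# (placeholder values; use your own .env in practice).
DB_HOST=postgres
DB_PORT=5432
DB_NAME=michaelschiemer
DB_USER=postgres
DB_PASS=secret

DATABASE_URL="postgresql://${DB_USER}:${DB_PASS}@${DB_HOST}:${DB_PORT}/${DB_NAME}"
echo "${DATABASE_URL}"
```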

### Backup Management

#### Manual Backup

```bash
# Trigger manual backup
docker exec postgres-backup /scripts/backup.sh

# List backups
ls -lh backups/

# Example output:
# postgres_michaelschiemer_20250130_020000.sql.gz
# postgres_michaelschiemer_20250131_020000.sql.gz
```

#### Restore from Backup

```bash
# List available backups
docker exec postgres-backup ls -lh /backups

# Restore specific backup
docker exec -it postgres-backup /scripts/restore.sh /backups/postgres_michaelschiemer_20250130_020000.sql.gz

# ⚠️ WARNING: This will DROP and RECREATE the database!
# The script waits 10 seconds before proceeding; press Ctrl+C to cancel.
```

#### Download Backup

```bash
# Copy backup to host
docker cp postgres-backup:/backups/postgres_michaelschiemer_20250130_020000.sql.gz ./local-backup.sql.gz

# Extract and inspect
gunzip -c local-backup.sql.gz | less
```

### Database Maintenance

#### Vacuum and Analyze

```bash
# Full vacuum (reclaims space but takes exclusive locks; run during low traffic)
docker exec postgres psql -U postgres -d michaelschiemer -c "VACUUM FULL ANALYZE;"

# Quick vacuum (runs automatically via autovacuum, but can be run manually)
docker exec postgres psql -U postgres -d michaelschiemer -c "VACUUM ANALYZE;"
```

#### Check Database Size

```bash
# Note: 'users' is an example table; substitute one from your schema
docker exec postgres psql -U postgres -d michaelschiemer -c "
SELECT
  pg_size_pretty(pg_database_size('michaelschiemer')) AS db_size,
  pg_size_pretty(pg_total_relation_size('users')) AS users_table_size;
"
```

#### Connection Statistics

```bash
docker exec postgres psql -U postgres -d michaelschiemer -c "
SELECT
  datname,
  numbackends AS connections,
  xact_commit AS commits,
  xact_rollback AS rollbacks
FROM pg_stat_database
WHERE datname = 'michaelschiemer';
"
```

### Performance Monitoring

#### Active Queries

```bash
docker exec postgres psql -U postgres -d michaelschiemer -c "
SELECT
  pid,
  usename,
  application_name,
  state,
  query_start,
  query
FROM pg_stat_activity
WHERE state != 'idle'
ORDER BY query_start;
"
```

#### Slow Queries

```bash
# Check PostgreSQL logs for slow queries (>1s, per log_min_duration_statement)
docker exec postgres tail -f /var/lib/postgresql/data/pgdata/log/postgresql-*.log
```

#### Index Usage

```bash
docker exec postgres psql -U postgres -d michaelschiemer -c "
SELECT
  schemaname,
  relname AS tablename,
  indexrelname AS indexname,
  idx_scan AS index_scans,
  idx_tup_read AS tuples_read
FROM pg_stat_user_indexes
ORDER BY idx_scan DESC;
"
```

## Integration with Other Stacks

### Stack 4: Application

Update `deployment/stacks/application/.env`:

```env
# Database Configuration
DB_HOST=postgres
DB_PORT=5432
DB_NAME=michaelschiemer
DB_USER=postgres
DB_PASS=<same-as-postgres-stack-password>
```

**Connection Test** from Application:
```bash
# From app container
docker exec app php -r "
\$dsn = 'pgsql:host=postgres;port=5432;dbname=michaelschiemer';
\$pdo = new PDO(\$dsn, 'postgres', getenv('DB_PASS'));
echo 'Connection successful: ' . \$pdo->query('SELECT version()')->fetchColumn();
"
```

### Stack 2: Gitea (Optional PostgreSQL Backend)

If migrating Gitea from MySQL to PostgreSQL:

```env
# In deployment/stacks/gitea/.env
DB_TYPE=postgres
DB_HOST=postgres
DB_NAME=gitea
DB_USER=postgres
DB_PASS=<same-password>
```

**Note**: Requires creating a separate `gitea` database first:
```bash
docker exec postgres psql -U postgres -c "CREATE DATABASE gitea;"
```

## Backup & Recovery

### Automated Backup Strategy

**Schedule**: Daily at 2:00 AM (configurable via `BACKUP_SCHEDULE`)

**Retention**: 7 days (configurable via `BACKUP_RETENTION_DAYS`)

**Location**: `./backups/` directory on the host

**Format**: Compressed SQL dumps (`postgres_<dbname>_<timestamp>.sql.gz`)
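Because the filename embeds the timestamp, the newest dump can be selected mechanically, which is handy when scripting restores; a sketch (filenames are examples):

```shell
# Pick the newest of several dumps by lexicographic sort (the
# YYYYMMDD_HHMMSS timestamp sorts chronologically), then extract it.
newest=$(printf '%s\n' \
  postgres_michaelschiemer_20250130_020000.sql.gz \
  postgres_michaelschiemer_20250131_020000.sql.gz \
  | sort | tail -n 1)

stamp=${newest#postgres_michaelschiemer_}   # strip prefix
stamp=${stamp%.sql.gz}                      # strip suffix
echo "$stamp"                               # 20250131_020000
```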
### Manual Backup Workflow

```bash
# 1. Create manual backup
docker exec postgres-backup /scripts/backup.sh

# 2. Verify backup exists
ls -lh backups/

# 3. Test backup integrity (optional)
gunzip -t backups/postgres_michaelschiemer_20250130_020000.sql.gz
```

### Disaster Recovery

#### Scenario: Complete Database Loss

```bash
# 1. Stop application to prevent writes
cd ../application
docker compose stop

# 2. Remove corrupted database
cd ../postgresql
docker compose down
docker volume rm postgres-data

# 3. Recreate database
docker compose up -d

# 4. Wait for PostgreSQL to initialize
docker compose logs -f postgres

# 5. Restore from latest backup
docker exec -it postgres-backup /scripts/restore.sh /backups/postgres_michaelschiemer_<latest>.sql.gz

# 6. Verify restoration
docker exec postgres psql -U postgres -d michaelschiemer -c "\dt"

# 7. Restart application
cd ../application
docker compose start
```

#### Scenario: Point-in-Time Recovery

```bash
# 1. List available backups
docker exec postgres-backup ls -lh /backups

# 2. Choose a backup timestamp, e.g.:
# postgres_michaelschiemer_20250130_143000.sql.gz

# 3. Restore to that point
docker exec -it postgres-backup /scripts/restore.sh /backups/postgres_michaelschiemer_20250130_143000.sql.gz
```

**Note**: This restores to the nearest backup snapshot. True point-in-time recovery to an arbitrary moment requires WAL archiving, which this stack does not configure.

### Off-site Backup

**Recommended**: Copy backups to external storage.

```bash
#!/bin/bash
# backup-offsite.sh - Run daily after local backup

BACKUP_DIR="./backups"
REMOTE_HOST="backup-server.example.com"
REMOTE_PATH="/backups/michaelschiemer/postgresql"

# Sync backups to remote server
rsync -avz --delete \
  "${BACKUP_DIR}/" \
  "${REMOTE_HOST}:${REMOTE_PATH}/"

echo "✅ Off-site backup completed"
```

**Alternative: S3 Upload**
```bash
# Using AWS CLI
aws s3 sync ./backups/ s3://my-backup-bucket/postgresql/ --delete
```

## Security

### Connection Security

**Network Isolation**:
- PostgreSQL is only reachable via the `app-internal` network
- No external ports exposed
- Service-to-service communication only

**Authentication**:
- Strong password required (generated with `openssl rand -base64 32`)
- No default passwords
- Password stored in environment variables only

### Backup Security

**Encryption** (recommended for production):
```bash
# Encrypt backup before off-site storage
gpg --symmetric --cipher-algo AES256 backups/postgres_michaelschiemer_*.sql.gz

# Decrypt and restore when needed (no port is exposed, so pipe into the container)
gpg --decrypt backups/postgres_michaelschiemer_20250130_020000.sql.gz.gpg \
  | gunzip \
  | docker exec -i postgres psql -U postgres -d postgres
```

**Access Control**:
- Backup directory mounted read-only in other containers
- Only the backup service has write access
- Host filesystem permissions: `chmod 700 backups/`

### Update Security

```bash
# Update PostgreSQL image
docker compose pull

# Recreate containers with new image
docker compose up -d

# Verify version
docker exec postgres psql -U postgres -c "SELECT version();"
```

## Monitoring

### Health Checks

```bash
# Check service health
docker compose ps
# Expected: Both services "healthy"

# Manual health check
docker exec postgres pg_isready -U postgres -d michaelschiemer

# Check backup service
docker compose logs postgres-backup | grep "✅ Backup completed"
```

### Resource Usage

```bash
# Database container stats
docker stats postgres --no-stream
# Expected:
# - Memory: ~200-800MB (under 2GB limit)
# - CPU: <50% sustained

# Disk usage
docker exec postgres du -sh /var/lib/postgresql/data
```

### Logs

```bash
# PostgreSQL logs
docker compose logs postgres

# Backup logs
docker compose logs postgres-backup

# Real-time monitoring
docker compose logs -f

# PostgreSQL server logs (inside container)
docker exec postgres tail -f /var/lib/postgresql/data/pgdata/log/postgresql-*.log
```

### Alerts

**Recommended Monitoring**:
- Backup success/failure notifications
- Disk space warnings (>80% full)
- Connection count monitoring
- Slow query alerts
- Replication lag (if using replication)

## Troubleshooting

### Database Won't Start

```bash
# Check logs
docker compose logs postgres

# Common issues:
# 1. Invalid configuration - the startup error appears in the logs;
#    once running, verify with:
docker exec postgres psql -U postgres -c "SELECT name, error FROM pg_file_settings WHERE error IS NOT NULL;"

# 2. Corrupted data directory (⚠️ destroys all data - restore from backup afterwards)
docker compose down
docker volume rm postgres-data
docker compose up -d

# 3. Permission issues
docker exec postgres ls -la /var/lib/postgresql/data
```

### Backup Failures

```bash
# Check backup service logs
docker compose logs postgres-backup

# Common issues:
# 1. Disk full
df -h

# 2. Connection to PostgreSQL failed
docker exec postgres-backup pg_isready -h postgres -U postgres

# 3. Manual backup test
docker exec postgres-backup /scripts/backup.sh
```

### Connection Refused from Application

```bash
# 1. Check PostgreSQL is running
docker compose ps postgres

# 2. Verify network
docker network inspect app-internal | grep postgres

# 3. Test connection
docker exec app nc -zv postgres 5432

# 4. Check credentials
docker exec app printenv | grep DB_
```

### Slow Queries

```bash
# Lower the slow-query threshold to 500ms
docker exec postgres psql -U postgres -c "ALTER SYSTEM SET log_min_duration_statement = 500;"
docker compose restart postgres

# Check column statistics for candidate indexes
docker exec postgres psql -U postgres -d michaelschiemer -c "
SELECT
  schemaname,
  tablename,
  attname,
  n_distinct,
  correlation
FROM pg_stats
WHERE schemaname = 'public'
ORDER BY correlation;
"
```

### Out of Disk Space

```bash
# Check disk usage
df -h

# Check database size
docker exec postgres psql -U postgres -d michaelschiemer -c "
SELECT pg_size_pretty(pg_database_size('michaelschiemer'));
"

# Vacuum to reclaim space (takes exclusive locks)
docker exec postgres psql -U postgres -d michaelschiemer -c "VACUUM FULL;"

# Clean old backups manually
find ./backups -name "*.sql.gz" -mtime +30 -delete
```

## Performance Tuning

### Current Configuration (2GB Memory)

**`conf.d/postgresql.conf`** is tuned for:
- **Memory**: 2GB allocated (512MB shared buffers, 1.5GB effective cache size)
- **Connections**: 100 max connections
- **Workers**: 4 parallel workers
- **Checkpoints**: 2GB max WAL size

### Scaling Up (4GB+ Memory)

```ini
# conf.d/postgresql.conf
shared_buffers = 1GB              # 25% of RAM
effective_cache_size = 3GB        # 75% of RAM
maintenance_work_mem = 256MB
work_mem = 10MB
max_connections = 200
max_parallel_workers = 8
```
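The 25%/75% figures above are rules of thumb and can be computed from total RAM rather than hand-edited; a sketch (the ratios follow common PostgreSQL tuning guidance, not an official formula):

```shell
# Derive rough memory settings from total RAM in MB
# (25% shared_buffers, 75% effective_cache_size - rule of thumb only).
RAM_MB=4096
SHARED_BUFFERS_MB=$(( RAM_MB / 4 ))
EFFECTIVE_CACHE_MB=$(( RAM_MB * 3 / 4 ))
echo "shared_buffers = ${SHARED_BUFFERS_MB}MB"
echo "effective_cache_size = ${EFFECTIVE_CACHE_MB}MB"
```

For 4096MB of RAM this reproduces the 1GB/3GB values in the ini snippet above.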

### Query Optimization

```bash
# Analyze query performance
docker exec postgres psql -U postgres -d michaelschiemer -c "
EXPLAIN ANALYZE
SELECT * FROM users WHERE email = 'test@example.com';
"

# Create an index for frequently queried columns
docker exec postgres psql -U postgres -d michaelschiemer -c "
CREATE INDEX idx_users_email ON users(email);
"
```

### Connection Pooling

**Recommended**: Use PgBouncer for connection pooling in high-traffic scenarios.

```yaml
# Add to docker-compose.yml
pgbouncer:
  image: pgbouncer/pgbouncer:latest
  environment:
    - DATABASES_HOST=postgres
    - DATABASES_PORT=5432
    - DATABASES_DBNAME=michaelschiemer
    - PGBOUNCER_POOL_MODE=transaction
    - PGBOUNCER_MAX_CLIENT_CONN=1000
    - PGBOUNCER_DEFAULT_POOL_SIZE=25
```

## Upgrading PostgreSQL

### Minor Version Upgrade (e.g., 16.0 → 16.1)

```bash
# Pull latest 16.x image
docker compose pull

# Recreate container
docker compose up -d

# Verify version
docker exec postgres psql -U postgres -c "SELECT version();"
```

### Major Version Upgrade (e.g., 16 → 17)

```bash
# 1. Create full backup
docker exec postgres-backup /scripts/backup.sh

# 2. Stop services
docker compose down

# 3. Update docker-compose.yml
# Change: postgres:16-alpine → postgres:17-alpine

# 4. Remove old data volume (⚠️ the dump from step 1 is now the only copy)
docker volume rm postgres-data

# 5. Start new version
docker compose up -d

# 6. Restore data
docker exec -it postgres-backup /scripts/restore.sh /backups/postgres_michaelschiemer_<latest>.sql.gz
```

## Additional Resources

- **PostgreSQL Documentation**: https://www.postgresql.org/docs/16/
- **Performance Tuning**: https://wiki.postgresql.org/wiki/Performance_Optimization
- **Backup Best Practices**: https://www.postgresql.org/docs/16/backup.html
- **Security Hardening**: https://www.postgresql.org/docs/16/security.html
@@ -0,0 +1,70 @@
# PostgreSQL Configuration for Production
# Optimized for 2GB memory allocation

# Connection Settings
listen_addresses = '*'
max_connections = 100
superuser_reserved_connections = 3

# Memory Settings (for 2GB allocation)
shared_buffers = 512MB
effective_cache_size = 1536MB
maintenance_work_mem = 128MB
work_mem = 5MB

# Checkpoint Settings
checkpoint_completion_target = 0.9
wal_buffers = 16MB
default_statistics_target = 100

# Query Planner
random_page_cost = 1.1
effective_io_concurrency = 200

# Write-Ahead Logging
wal_level = replica
max_wal_size = 2GB
min_wal_size = 1GB

# Logging
log_destination = 'stderr'
logging_collector = on
log_directory = 'log'
log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'
log_rotation_age = 1d
log_rotation_size = 100MB
log_line_prefix = '%m [%p] %u@%d '
log_timezone = 'Europe/Berlin'

# What to Log
log_min_duration_statement = 1000
log_checkpoints = on
log_connections = on
log_disconnections = on
log_lock_waits = on
log_statement = 'ddl'

# Autovacuum
autovacuum = on
autovacuum_max_workers = 3
autovacuum_naptime = 1min

# Client Connection Defaults
datestyle = 'iso, mdy'
timezone = 'Europe/Berlin'
lc_messages = 'en_US.utf8'
lc_monetary = 'en_US.utf8'
lc_numeric = 'en_US.utf8'
lc_time = 'en_US.utf8'
default_text_search_config = 'pg_catalog.english'

# Lock Management
deadlock_timeout = 1s

# Statement Timeout (prevent long-running queries)
statement_timeout = 30000  # 30 seconds

# Parallel Query Execution
max_parallel_workers_per_gather = 2
max_parallel_workers = 4
max_worker_processes = 4
@@ -0,0 +1,69 @@
services:
  # PostgreSQL Database
  postgres:
    image: postgres:16-alpine
    container_name: postgres
    restart: unless-stopped
    networks:
      - app-internal
    environment:
      - TZ=Europe/Berlin
      - POSTGRES_DB=${POSTGRES_DB:-michaelschiemer}
      - POSTGRES_USER=${POSTGRES_USER:-postgres}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - PGDATA=/var/lib/postgresql/data/pgdata
    volumes:
      - postgres-data:/var/lib/postgresql/data
      - ./conf.d:/etc/postgresql/conf.d:ro
      - ./backups:/backups
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    command: >
      postgres
      -c config_file=/etc/postgresql/conf.d/postgresql.conf
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER:-postgres} -d ${POSTGRES_DB:-michaelschiemer}"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s
    shm_size: 256mb
    deploy:
      resources:
        limits:
          memory: 2G
        reservations:
          memory: 512M

  # Automated Backup Service
  postgres-backup:
    image: postgres:16-alpine
    container_name: postgres-backup
    restart: unless-stopped
    networks:
      - app-internal
    environment:
      - TZ=Europe/Berlin
      - POSTGRES_HOST=postgres
      - POSTGRES_DB=${POSTGRES_DB:-michaelschiemer}
      - POSTGRES_USER=${POSTGRES_USER:-postgres}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - BACKUP_RETENTION_DAYS=${BACKUP_RETENTION_DAYS:-7}
      - BACKUP_SCHEDULE=${BACKUP_SCHEDULE:-0 2 * * *}
    volumes:
      - ./backups:/backups
      - ./scripts:/scripts:ro
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    entrypoint: ["/scripts/backup-entrypoint.sh"]
    depends_on:
      postgres:
        condition: service_healthy

volumes:
  postgres-data:
    name: postgres-data

networks:
  app-internal:
    external: true
23
deployment/legacy/stacks/stacks/postgresql/scripts/backup-entrypoint.sh
Executable file
@@ -0,0 +1,23 @@
#!/bin/sh
set -e

echo "🔄 PostgreSQL Backup Service Starting..."
echo "📅 Backup Schedule: ${BACKUP_SCHEDULE}"
echo "🗑️  Retention: ${BACKUP_RETENTION_DAYS} days"

# Install dcron for scheduled backups
apk add --no-cache dcron

# Create cron job
echo "${BACKUP_SCHEDULE} /scripts/backup.sh >> /var/log/backup.log 2>&1" > /etc/crontabs/root

# Ensure backup directory exists
mkdir -p /backups

# Run initial backup
echo "🚀 Running initial backup..."
/scripts/backup.sh

# Start cron in foreground
echo "✅ Backup service ready - cron daemon starting"
crond -f -l 2
55
deployment/legacy/stacks/stacks/postgresql/scripts/backup.sh
Executable file
@@ -0,0 +1,55 @@
#!/bin/sh
set -e

# Configuration
BACKUP_DIR="/backups"
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
BACKUP_FILE="${BACKUP_DIR}/postgres_${POSTGRES_DB}_${TIMESTAMP}.sql.gz"
RETENTION_DAYS=${BACKUP_RETENTION_DAYS:-7}

echo "======================================"
echo "PostgreSQL Backup - $(date)"
echo "======================================"

# Wait for PostgreSQL to be ready
until PGPASSWORD="${POSTGRES_PASSWORD}" pg_isready -h "${POSTGRES_HOST}" -U "${POSTGRES_USER}" -d "${POSTGRES_DB}"; do
  echo "⏳ Waiting for PostgreSQL..."
  sleep 5
done

echo "✅ PostgreSQL is ready"

# Create backup
echo "📦 Creating backup: ${BACKUP_FILE}"
PGPASSWORD="${POSTGRES_PASSWORD}" pg_dump \
  -h "${POSTGRES_HOST}" \
  -U "${POSTGRES_USER}" \
  -d "${POSTGRES_DB}" \
  --clean \
  --if-exists \
  --create \
  --no-owner \
  --no-privileges \
  | gzip > "${BACKUP_FILE}"

# Verify backup was created and is non-empty
if [ -s "${BACKUP_FILE}" ]; then
  BACKUP_SIZE=$(du -h "${BACKUP_FILE}" | cut -f1)
  echo "✅ Backup created successfully: ${BACKUP_SIZE}"
else
  echo "❌ Backup failed!"
  exit 1
fi

# Cleanup old backups
echo "🗑️  Cleaning up backups older than ${RETENTION_DAYS} days..."
find "${BACKUP_DIR}" -name "postgres_*.sql.gz" -type f -mtime +${RETENTION_DAYS} -delete

# List current backups
echo ""
echo "📊 Current backups:"
ls -lh "${BACKUP_DIR}"/postgres_*.sql.gz 2>/dev/null || echo "No backups found"

echo ""
echo "✅ Backup completed successfully"
echo "======================================"
55
deployment/legacy/stacks/stacks/postgresql/scripts/restore.sh
Executable file
@@ -0,0 +1,55 @@
#!/bin/sh
set -e

# Configuration
BACKUP_DIR="/backups"

if [ -z "$1" ]; then
  echo "Usage: ./restore.sh <backup-file>"
  echo ""
  echo "Available backups:"
  ls -lh "${BACKUP_DIR}"/postgres_*.sql.gz 2>/dev/null || echo "No backups found"
  exit 1
fi

BACKUP_FILE="$1"

if [ ! -f "${BACKUP_FILE}" ]; then
  echo "❌ Backup file not found: ${BACKUP_FILE}"
  exit 1
fi

echo "======================================"
echo "PostgreSQL Restore - $(date)"
echo "======================================"
echo "📦 Backup file: ${BACKUP_FILE}"
echo ""

# Wait for PostgreSQL to be ready (connect to the maintenance DB,
# since the target database may not exist yet)
until PGPASSWORD="${POSTGRES_PASSWORD}" pg_isready -h "${POSTGRES_HOST}" -U "${POSTGRES_USER}" -d postgres; do
  echo "⏳ Waiting for PostgreSQL..."
  sleep 5
done

echo "✅ PostgreSQL is ready"
echo ""

# Warning
echo "⚠️  WARNING: This will DROP and RECREATE the database!"
echo "⚠️  Database: ${POSTGRES_DB}"
echo ""
echo "Press Ctrl+C to cancel, or wait 10 seconds to continue..."
sleep 10

echo ""
echo "🔄 Restoring database..."

# Restore backup (the dump was taken with --create, so connect to the
# maintenance DB and let the dump recreate the target database)
gunzip -c "${BACKUP_FILE}" | PGPASSWORD="${POSTGRES_PASSWORD}" psql \
  -h "${POSTGRES_HOST}" \
  -U "${POSTGRES_USER}" \
  -d postgres

echo ""
echo "✅ Database restored successfully"
echo "======================================"
48
deployment/legacy/stacks/stacks/production/.env.example
Normal file
@@ -0,0 +1,48 @@
# Application Stack Configuration
# Copy this file to .env and adjust values

# Timezone
TZ=Europe/Berlin

# Application Domain
APP_DOMAIN=michaelschiemer.de

# Application Settings
APP_ENV=production
APP_DEBUG=false
APP_URL=https://michaelschiemer.de

# Database Configuration
# Note: Using PostgreSQL from the postgres stack
DB_HOST=postgres
DB_PORT=5432
DB_NAME=michaelschiemer
DB_USER=appuser
DB_PASS=<generate-with-openssl-rand-base64-32>

# Redis Configuration
# Generate password with: openssl rand -base64 32
REDIS_PASSWORD=<generate-with-openssl-rand-base64-32>

# Cache Configuration
CACHE_DRIVER=redis
CACHE_PREFIX=app

# Session Configuration
SESSION_DRIVER=redis
SESSION_LIFETIME=120

# Queue Worker Configuration
QUEUE_DRIVER=redis
QUEUE_CONNECTION=default
QUEUE_WORKER_SLEEP=3
QUEUE_WORKER_TRIES=3
QUEUE_WORKER_TIMEOUT=60

# Git Repository Configuration (optional - if set, the container will clone/pull code on start)
# Uncomment to enable Git-based deployment:
# GIT_REPOSITORY_URL=https://git.michaelschiemer.de/michael/michaelschiemer.git
# GIT_BRANCH=main
# GIT_TOKEN=
# GIT_USERNAME=
# GIT_PASSWORD=
921
deployment/legacy/stacks/stacks/production/README.md
Normal file
@@ -0,0 +1,921 @@
# Application Stack - PHP Application with Nginx, Redis, Queue & Scheduler

## Overview

Production-ready PHP application stack with a multi-service architecture for high-performance web applications.

**Features**:
- PHP-FPM 8.3+ application runtime
- Nginx web server with optimized configuration
- Redis for caching, sessions, and queue backend
- Dedicated queue worker for background job processing
- Scheduler for cron job execution
- SSL via Traefik with automatic Let's Encrypt certificates
- Private Docker Registry integration
- Health checks and automatic restart policies

## Services

- **app** (PHP-FPM) - Application runtime handling PHP code execution
- **nginx** (Nginx 1.25) - Web server proxying requests to PHP-FPM
- **redis** (Redis 7) - Cache, session, and queue backend
- **queue-worker** - Background job processor
- **scheduler** - Cron job executor

## Prerequisites

### 1. Traefik Stack Running
```bash
cd ../traefik
docker compose up -d
```

### 2. DNS Configuration
Point `michaelschiemer.de` to your server IP (94.16.110.151).

### 3. Docker Registry Access
```bash
# Login to private registry
docker login registry.michaelschiemer.de

# Verify access
docker pull registry.michaelschiemer.de/michaelschiemer-app:latest
```

### 4. Application Image Built
```bash
# Build and push application image
docker build -t registry.michaelschiemer.de/michaelschiemer-app:latest .
docker push registry.michaelschiemer.de/michaelschiemer-app:latest
```

### 5. PostgreSQL Production Stack Available
The separate `postgresql-production` stack must be running. See `deployment/stacks/postgresql-production/README.md` for setup.

## Configuration

### 1. Create Environment File

```bash
cp .env.example .env
```

### 2. Generate Passwords

```bash
# Database password
openssl rand -base64 32

# Redis password
openssl rand -base64 32
```

Update `.env`:
```env
DB_PASS=<generated-database-password>
REDIS_PASSWORD=<generated-redis-password>
```

### 3. Configure Database Connection

**If using Stack 5 (PostgreSQL)**:
```env
DB_HOST=postgres
DB_PORT=5432
DB_NAME=michaelschiemer
DB_USER=appuser
```

**If using Stack 2 Gitea MySQL**:
```env
DB_HOST=mysql
DB_PORT=3306
DB_NAME=michaelschiemer
DB_USER=appuser
```

**If using an external database**:
```env
DB_HOST=<external-host>
DB_PORT=<port>
DB_NAME=<database>
DB_USER=<username>
```

### 4. Adjust Queue Worker Settings (Optional)

```env
# Queue worker configuration
QUEUE_WORKER_SLEEP=3     # Sleep between job checks (seconds)
QUEUE_WORKER_TRIES=3     # Max attempts per job
QUEUE_WORKER_TIMEOUT=60  # Max execution time per job (seconds)
```
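`QUEUE_WORKER_TRIES` caps attempts per job; the control flow behind it is roughly the following (a generic shell sketch, not the application's actual worker implementation):

```shell
# Generic retry loop illustrating the TRIES semantics (sketch only).
TRIES=3
attempt=1
status=failed

while [ "$attempt" -le "$TRIES" ]; do
  # Fake job: fails on attempts 1-2, succeeds on attempt 3.
  if [ "$attempt" -ge 3 ]; then
    status=ok
    break
  fi
  attempt=$(( attempt + 1 ))
done

echo "finished with status=${status} after attempt ${attempt}"
```

A job that keeps failing past the cap is marked failed and can later be re-queued (see the queue management commands below).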
|
||||
|
||||
### 5. Configure Application Settings
|
||||
|
||||
```env
|
||||
APP_ENV=production # production, staging, or development
|
||||
APP_DEBUG=false # Enable debug mode (false for production)
|
||||
APP_URL=https://michaelschiemer.de
|
||||
```
|
||||
|
||||
## Deployment
|
||||
|
||||
### Initial Setup
|
||||
|
||||
```bash
|
||||
# Ensure Traefik is running
|
||||
docker network inspect traefik-public
|
||||
|
||||
# Create .env file
|
||||
cp .env.example .env
|
||||
# Edit .env with generated passwords
|
||||
|
||||
# Start application stack
|
||||
docker compose up -d
|
||||
|
||||
# Check logs
|
||||
docker compose logs -f
|
||||
|
||||
# Verify health
|
||||
docker compose ps
|
||||
```
|
||||
|
||||
### Verify Deployment
|
||||
|
||||
```bash
|
||||
# Test application endpoint
|
||||
curl https://michaelschiemer.de/health
|
||||
# Expected: HTTP 200 "healthy"
|
||||
|
||||
# Check service status
|
||||
docker compose ps
|
||||
# All services should show "healthy" status
|
||||
|
||||
# View logs
|
||||
docker compose logs app # Application logs
|
||||
docker compose logs nginx # Web server logs
|
||||
docker compose logs redis # Redis logs
|
||||
docker compose logs queue-worker # Queue worker logs
|
||||
docker compose logs scheduler # Scheduler logs
|
||||
```

## Usage

### Accessing the Application

**Main Application**:
- URL: https://michaelschiemer.de
- SSL: Automatic via Traefik + Let's Encrypt
- Auth: Configured in application

### Managing Services

```bash
# Start stack
docker compose up -d

# Stop stack
docker compose down

# Restart specific service
docker compose restart app
docker compose restart nginx

# View logs (follow mode)
docker compose logs -f app
docker compose logs -f queue-worker

# Execute commands in app container
docker compose exec app php console.php db:migrate
docker compose exec app php console.php cache:clear

# Access Redis CLI
docker compose exec redis redis-cli -a <REDIS_PASSWORD>
```

### Queue Management

```bash
# Monitor queue worker
docker compose logs -f queue-worker

# Restart queue worker (e.g., after code changes)
docker compose restart queue-worker

# Check queue status
docker compose exec app php console.php queue:status

# Process specific queue
docker compose exec app php console.php queue:work --queue=emails

# Retry failed jobs
docker compose exec app php console.php queue:retry-failed
```

### Scheduler Management

```bash
# View scheduler logs
docker compose logs -f scheduler

# List scheduled tasks
docker compose exec app php console.php scheduler:list

# Run scheduler manually (for testing)
docker compose exec app php console.php scheduler:run

# Restart scheduler
docker compose restart scheduler
```

### Cache Management

```bash
# Clear application cache
docker compose exec app php console.php cache:clear

# Forget a specific cache key
docker compose exec app php console.php cache:forget user:123

# View cache statistics
docker compose exec redis redis-cli -a <REDIS_PASSWORD> INFO stats

# Monitor cache in real-time
docker compose exec redis redis-cli -a <REDIS_PASSWORD> MONITOR
```

## Integration with Other Stacks

### Stack 1: Traefik (Reverse Proxy)

**Automatic SSL & Routing**:
- Traefik labels in docker-compose.yml configure routing
- SSL certificates automatically obtained via Let's Encrypt
- Middleware chain applies security headers, rate limiting, etc.

**Verify Integration**:
```bash
# Check Traefik router
docker exec traefik traefik healthcheck

# View Traefik logs
docker logs traefik | grep michaelschiemer.de
```

### Stack 3: Docker Registry

**Image Pulling**:
- Stack pulls application image from private registry
- Credentials configured via docker login

**Update Application**:
```bash
# Pull latest image
docker compose pull app queue-worker scheduler

# Recreate containers with new image
docker compose up -d --force-recreate app queue-worker scheduler
```

### Database Stack: PostgreSQL Production (Separate Stack)

**Connection**:
- Production Application Stack connects to the separate `postgresql-production` stack
- Database container: `postgres-production`
- Network: `postgres-production-internal` (the application service must join this Docker network)
- Connection configured via DB_HOST, DB_PORT, DB_NAME, DB_USER, DB_PASS

**Run Migrations**:
```bash
# Apply database migrations
docker compose exec app php console.php db:migrate

# Check migration status
docker compose exec app php console.php db:status

# Rollback migration
docker compose exec app php console.php db:rollback
```

### Stack 2: Gitea (Optional - Shared MySQL)

**If using Gitea's MySQL**:
```env
DB_HOST=mysql
DB_PORT=3306
DB_NAME=michaelschiemer  # Create separate database
DB_USER=appuser          # Create dedicated user
```

**Create Database**:
```bash
docker exec -it mysql mysql -u root -p
CREATE DATABASE michaelschiemer CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
CREATE USER 'appuser'@'%' IDENTIFIED BY '<password>';
GRANT ALL PRIVILEGES ON michaelschiemer.* TO 'appuser'@'%';
FLUSH PRIVILEGES;
```

## Backup & Recovery

### Manual Backup

```bash
#!/bin/bash
# backup-application.sh

BACKUP_DIR="/backups/application"
DATE=$(date +%Y%m%d_%H%M%S)

mkdir -p "$BACKUP_DIR"

# Backup application storage volume
docker run --rm \
  -v app-storage:/data \
  -v "$BACKUP_DIR":/backup \
  alpine tar czf /backup/app-storage-$DATE.tar.gz -C /data .

# Backup application logs
docker run --rm \
  -v app-logs:/data \
  -v "$BACKUP_DIR":/backup \
  alpine tar czf /backup/app-logs-$DATE.tar.gz -C /data .

# Backup Redis data (if persistence enabled)
docker run --rm \
  -v redis-data:/data \
  -v "$BACKUP_DIR":/backup \
  alpine tar czf /backup/redis-data-$DATE.tar.gz -C /data .

# Backup .env file
cp .env "$BACKUP_DIR/env-$DATE"

echo "Backup completed: $BACKUP_DIR/*-$DATE.*"
```

### Restore from Backup

```bash
# Stop stack
docker compose down

# Restore storage volume
docker run --rm \
  -v app-storage:/data \
  -v /backups/application:/backup \
  alpine tar xzf /backup/app-storage-YYYYMMDD_HHMMSS.tar.gz -C /data

# Restore logs
docker run --rm \
  -v app-logs:/data \
  -v /backups/application:/backup \
  alpine tar xzf /backup/app-logs-YYYYMMDD_HHMMSS.tar.gz -C /data

# Restore Redis data
docker run --rm \
  -v redis-data:/data \
  -v /backups/application:/backup \
  alpine tar xzf /backup/redis-data-YYYYMMDD_HHMMSS.tar.gz -C /data

# Restore .env
cp /backups/application/env-YYYYMMDD_HHMMSS .env

# Start stack
docker compose up -d
```

### Automated Backups

Add to crontab:
```bash
# Daily backup at 2 AM
0 2 * * * /path/to/backup-application.sh

# Keep only last 14 days
0 3 * * * find /backups/application -type f -mtime +14 -delete
```

### Redis Persistence

**Automatic Persistence** (configured in docker-compose.yml):
- RDB snapshots: `save 900 1`, `save 300 10`, `save 60 10000`
- AOF (Append-Only File): `appendonly yes`, `appendfsync everysec`

**Manual Redis Backup**:
```bash
# Trigger manual save
docker compose exec redis redis-cli -a <REDIS_PASSWORD> SAVE

# Export RDB file
docker compose exec redis redis-cli -a <REDIS_PASSWORD> BGSAVE
docker cp redis:/data/dump.rdb ./redis-backup.rdb
```

## Monitoring

### Health Checks

```bash
# Check all service health
docker compose ps

# Application health
curl https://michaelschiemer.de/health

# Redis health
docker compose exec redis redis-cli -a <REDIS_PASSWORD> PING
# Expected: PONG

# PHP-FPM health
docker compose exec app php-fpm-healthcheck
```

### Log Management

```bash
# View all logs
docker compose logs

# Follow logs for specific service
docker compose logs -f app
docker compose logs -f nginx
docker compose logs -f queue-worker
docker compose logs -f scheduler

# View last 100 lines
docker compose logs --tail=100 app

# View logs since specific time
docker compose logs --since 2024-01-01T00:00:00 app

# Search logs
docker compose logs app | grep ERROR
docker compose logs nginx | grep 404
```

### Performance Metrics

```bash
# Container resource usage
docker stats app nginx redis queue-worker scheduler

# Redis statistics
docker compose exec redis redis-cli -a <REDIS_PASSWORD> INFO stats
docker compose exec redis redis-cli -a <REDIS_PASSWORD> INFO memory

# Redis slow log
docker compose exec redis redis-cli -a <REDIS_PASSWORD> SLOWLOG GET 10

# Nginx status (if enabled)
docker compose exec nginx curl http://localhost/nginx_status
```

### Application Monitoring

```bash
# Application metrics endpoint (if implemented)
curl https://michaelschiemer.de/metrics

# Queue statistics
docker compose exec app php console.php queue:stats

# Cache hit rate
docker compose exec app php console.php cache:stats
```
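The hit rate can also be derived from Redis's own counters. A small sketch that parses the output of `redis-cli INFO stats` (the `keyspace_hits` and `keyspace_misses` fields are standard Redis counters):

```python
def cache_hit_ratio(info_stats: str) -> float:
    """Compute the cache hit ratio from `redis-cli INFO stats` output."""
    fields = {}
    for line in info_stats.splitlines():
        # INFO output is "key:value" lines; "#" lines are section headers
        if ":" in line and not line.startswith("#"):
            key, _, value = line.partition(":")
            fields[key] = value.strip()
    hits = int(fields.get("keyspace_hits", 0))
    misses = int(fields.get("keyspace_misses", 0))
    total = hits + misses
    return hits / total if total else 0.0
```

Feed it the raw output of the `INFO stats` command shown under Performance Metrics above.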

## Troubleshooting

### Application Not Accessible

```bash
# Check service status
docker compose ps

# Check Traefik routing
docker exec traefik traefik healthcheck

# View Nginx logs
docker compose logs nginx

# Test internal connectivity (nginx serves /health directly;
# app:9000 speaks FastCGI, not HTTP, so curl it via nginx)
docker compose exec nginx curl http://localhost/health
```

### PHP-FPM Errors

```bash
# View PHP-FPM logs
docker compose logs app

# Check PHP-FPM pool status
docker compose exec app php-fpm-healthcheck

# Restart PHP-FPM
docker compose restart app

# Check PHP configuration
docker compose exec app php -i | grep error
```

### Redis Connection Issues

```bash
# Test Redis connection
docker compose exec redis redis-cli -a <REDIS_PASSWORD> PING

# Check Redis logs
docker compose logs redis

# Verify Redis password
grep REDIS_PASSWORD .env

# Test connection from app
docker compose exec app php console.php redis:test
```

### Queue Worker Not Processing Jobs

```bash
# Check queue worker status
docker compose ps queue-worker

# View queue worker logs
docker compose logs queue-worker

# Check if queue worker process is running
docker compose exec queue-worker pgrep -f queue:work

# Restart queue worker
docker compose restart queue-worker

# Check queue size
docker compose exec redis redis-cli -a <REDIS_PASSWORD> LLEN queues:default
```

### Scheduler Not Running

```bash
# Check scheduler status
docker compose ps scheduler

# View scheduler logs
docker compose logs scheduler

# Verify scheduler process
docker compose exec scheduler pgrep -f scheduler:run

# Restart scheduler
docker compose restart scheduler

# Test scheduler manually
docker compose exec scheduler php console.php scheduler:run --once
```

### High Memory Usage

```bash
# Check container memory usage
docker stats --no-stream

# Redis memory usage
docker compose exec redis redis-cli -a <REDIS_PASSWORD> INFO memory

# Adjust Redis max memory (in .env or docker-compose.yml)
# --maxmemory 512mb (default)

# Clear Redis cache if needed
docker compose exec redis redis-cli -a <REDIS_PASSWORD> FLUSHDB
```

### Slow Response Times

```bash
# Check Nginx access logs
docker compose logs nginx | grep "request_time"

# Enable PHP slow log (in php.ini or PHP-FPM pool config)
# slowlog = /var/log/php-fpm-slow.log
# request_slowlog_timeout = 5s

# Check database query performance
docker compose exec app php console.php db:explain

# Check cache hit ratio
docker compose exec redis redis-cli -a <REDIS_PASSWORD> INFO stats
```

### SSL Certificate Issues

```bash
# Check Traefik certificate
docker exec traefik cat /acme.json | grep michaelschiemer.de

# Force certificate renewal (via Traefik)
docker restart traefik

# Test SSL
openssl s_client -connect michaelschiemer.de:443 -servername michaelschiemer.de < /dev/null
```

### Database Connection Errors

```bash
# Test database connection from app
docker compose exec app php console.php db:test

# Verify database is accessible
# If Stack 5 (PostgreSQL):
docker exec postgres pg_isready

# If Stack 2 (MySQL):
docker exec mysql mysqladmin ping

# Check database credentials in .env
grep DB_ .env

# Test connection manually
docker compose exec app php -r "new PDO('mysql:host=mysql;dbname=michaelschiemer', 'appuser', 'password');"
```

## Security

### Security Best Practices

1. **Environment Variables**: Never commit .env to version control
2. **Strong Passwords**: Use `openssl rand -base64 32` for all passwords
3. **Redis Password**: Always set REDIS_PASSWORD in production
4. **Database Access**: Use a dedicated database user with minimal privileges
5. **File Permissions**: Ensure storage directories have correct ownership
6. **Updates**: Regularly update Docker images and dependencies
7. **Network Isolation**: The `app-internal` network isolates services from external access
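
Points 1-3 can be combined into a small bootstrap step. A sketch (the variable names follow this stack's `.env.example`; the `.env.generated` filename is illustrative):

```shell
# Generate strong credentials and write them to a fresh env file.
# Review and merge into .env by hand; never commit the result.
set -eu
MYSQL_PASSWORD=$(openssl rand -base64 32)
REDIS_PASSWORD=$(openssl rand -base64 32)
{
  printf 'MYSQL_PASSWORD=%s\n' "$MYSQL_PASSWORD"
  printf 'REDIS_PASSWORD=%s\n' "$REDIS_PASSWORD"
} > .env.generated
chmod 600 .env.generated   # readable only by the deploying user
```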

### Security Headers

**Nginx Configuration** (`nginx/conf.d/default.conf`):
- X-Frame-Options: SAMEORIGIN
- X-Content-Type-Options: nosniff
- X-XSS-Protection: 1; mode=block
- Referrer-Policy: strict-origin-when-cross-origin

**Traefik Middleware** (via default-chain@file):
- HSTS
- Additional security headers
- Rate limiting

### Rate Limiting

**Nginx Rate Limits** (configured in nginx/conf.d/default.conf):
- API endpoints: 10 requests/second (burst 20)
- General requests: 30 requests/second (burst 50)

**Adjust Rate Limits**:
```nginx
# Edit nginx/conf.d/default.conf
limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;
limit_req_zone $binary_remote_addr zone=general_limit:10m rate=30r/s;
```
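
The `rate`/`burst` pair behaves roughly like a token bucket: each request drains a token, the rate refills them, and the burst absorbs short spikes. A simplified model (nginx's real `limit_req` uses a leaky bucket with millisecond accounting, so this is an approximation, not nginx's implementation):

```python
class TokenBucket:
    """Approximate model of nginx `rate` + `burst nodelay` behaviour."""

    def __init__(self, rate: float, burst: int):
        self.rate = rate                  # tokens refilled per second
        self.capacity = burst + 1         # one in-rate slot plus the burst queue
        self.tokens = float(self.capacity)
        self.last = 0.0

    def allow(self, now: float) -> bool:
        elapsed = now - self.last
        self.tokens = min(self.capacity, self.tokens + elapsed * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False                      # nginx would answer with limit_req_status (429)
```

With `rate=10, burst=20`, twenty-one back-to-back requests pass and the next is rejected until tokens refill.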

### File Upload Security

```nginx
# Client body size limit (nginx/conf.d/default.conf)
client_max_body_size 100M;  # Adjust based on requirements
```

### Sensitive File Protection

**Automatically Protected** (nginx/conf.d/default.conf):
- .env, .git, .gitignore, .gitattributes
- composer.json, composer.lock
- package.json, package-lock.json
- /storage (except /storage/public)

## Performance Tuning

### Redis Optimization

**Memory Management**:
```yaml
# docker-compose.yml
--maxmemory 512mb               # Adjust based on available RAM
--maxmemory-policy allkeys-lru  # Eviction policy
```

**Persistence**:
```yaml
# Adjust RDB snapshot frequency
--save 900 1     # After 900 sec if 1 key changed
--save 300 10    # After 300 sec if 10 keys changed
--save 60 10000  # After 60 sec if 10000 keys changed

# AOF settings
--appendonly yes
--appendfsync everysec  # or: always, no
```

**Connection Pooling**:
```php
// In application code
$redis = new Redis();
$redis->pconnect('redis', 6379); // Persistent connection
```

### Nginx Optimization

**Worker Processes** (add to nginx.conf if needed; `worker_connections` belongs in the `events` block):
```nginx
worker_processes auto;

events {
    worker_connections 1024;
}
```

**Gzip Compression** (configured):
- Level: 6 (balance between compression ratio and CPU usage)
- Types: text/plain, text/css, text/javascript, application/json, etc.
- Min length: 1024 bytes

**Static File Caching** (configured):
- Expires: 1 year for immutable assets
- Cache-Control: public, immutable
- Access log: disabled for static files

**Buffer Tuning** (configured):
```nginx
fastcgi_buffer_size 128k;
fastcgi_buffers 256 16k;
fastcgi_busy_buffers_size 256k;
fastcgi_temp_file_write_size 256k;
```

### PHP-FPM Optimization

**Pool Configuration** (adjust in Dockerfile or php-fpm.conf):
```ini
pm = dynamic
pm.max_children = 50
pm.start_servers = 5
pm.min_spare_servers = 5
pm.max_spare_servers = 35
pm.max_requests = 500
```

**OPcache** (enable in php.ini):
```ini
opcache.enable=1
opcache.memory_consumption=128
opcache.interned_strings_buffer=8
opcache.max_accelerated_files=10000
opcache.revalidate_freq=2
```

**Timeout Settings** (configured in nginx):
```nginx
fastcgi_connect_timeout 60s;
fastcgi_send_timeout 180s;
fastcgi_read_timeout 180s;
```

### Queue Worker Optimization

**Worker Count**:
```bash
# Run multiple queue workers for parallel processing
docker compose up -d --scale queue-worker=3
```

**Queue Configuration** (.env):
```env
QUEUE_WORKER_SLEEP=3     # Lower = more responsive, higher = less CPU
QUEUE_WORKER_TRIES=3     # Retry failed jobs
QUEUE_WORKER_TIMEOUT=60  # Increase for long-running jobs
```

**Separate Queues**:
```bash
# Start workers for specific queues
docker compose exec app php console.php queue:work --queue=high-priority
docker compose exec app php console.php queue:work --queue=emails
docker compose exec app php console.php queue:work --queue=default
```

### Database Query Optimization

```bash
# Analyze slow queries
docker compose exec app php console.php db:explain

# Enable query logging
docker compose exec app php console.php db:log enable

# Cache query results
docker compose exec app php console.php cache:queries
```

## Scheduler Configuration

### Cron Jobs

**Scheduler runs** `php console.php scheduler:run` continuously.

**Define Scheduled Tasks** (in application code):
```php
// Example: app/Console/Kernel.php
protected function schedule(Schedule $schedule): void
{
    // Run every minute
    $schedule->command('cache:clear')->everyMinute();

    // Run hourly
    $schedule->command('reports:generate')->hourly();

    // Run daily at 2 AM
    $schedule->command('cleanup:old-logs')->dailyAt('02:00');

    // Run weekly on Sundays
    $schedule->command('backup:database')->weekly();
}
```

### Scheduler Monitoring

```bash
# View scheduler logs
docker compose logs -f scheduler

# List scheduled tasks
docker compose exec app php console.php scheduler:list

# Run scheduler manually (for testing)
docker compose exec app php console.php scheduler:run --once
```

## Update Stack

### Update Application Code

```bash
# Build new image
docker build -t registry.michaelschiemer.de/michaelschiemer-app:latest .

# Push to registry
docker push registry.michaelschiemer.de/michaelschiemer-app:latest

# Pull and recreate containers
docker compose pull
docker compose up -d --force-recreate app queue-worker scheduler

# Run migrations if needed
docker compose exec app php console.php db:migrate
```

### Update Nginx Configuration

```bash
# Edit nginx configuration
nano nginx/conf.d/default.conf

# Test configuration
docker compose exec nginx nginx -t

# Reload Nginx
docker compose exec nginx nginx -s reload

# Or restart Nginx
docker compose restart nginx
```

### Update Stack Configuration

```bash
# Pull latest images
docker compose pull

# Recreate containers
docker compose up -d

# Verify
docker compose ps
```

## Additional Resources

- **Docker Compose Documentation**: https://docs.docker.com/compose/
- **Nginx Documentation**: https://nginx.org/en/docs/
- **Redis Documentation**: https://redis.io/documentation
- **PHP-FPM Documentation**: https://www.php.net/manual/en/install.fpm.php
- **Traefik v3 Documentation**: https://doc.traefik.io/traefik/

## Stack Integration Summary

**Depends On**:
- Stack 1 (Traefik) - SSL and reverse proxy
- Stack 3 (Docker Registry) - Application image storage
- Stack 5 (Database) - Data persistence

**Provides**:
- PHP application runtime
- Web server with SSL
- Background job processing
- Scheduled task execution
- Caching infrastructure
@@ -0,0 +1,129 @@
# Nginx Configuration for PHP-FPM Application
# Optimized for production with security headers and performance tuning

# Upstream PHP-FPM
upstream php-upstream {
    server app:9000;
}

# Rate limiting zones
limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;
limit_req_zone $binary_remote_addr zone=general_limit:10m rate=30r/s;

server {
    listen 80;
    server_name _;

    root /var/www/html/public;
    index index.php index.html;

    # Security Headers (additional to Traefik middleware)
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header X-XSS-Protection "1; mode=block" always;
    add_header Referrer-Policy "strict-origin-when-cross-origin" always;

    # Client body size limit (adjust for file uploads)
    client_max_body_size 100M;

    # Gzip compression
    gzip on;
    gzip_vary on;
    gzip_min_length 1024;
    gzip_comp_level 6;
    gzip_types
        text/plain
        text/css
        text/javascript
        application/json
        application/javascript
        application/x-javascript
        text/xml
        application/xml
        application/xml+rss
        image/svg+xml;

    # Logging
    access_log /var/log/nginx/access.log combined;
    error_log /var/log/nginx/error.log warn;

    # Health check endpoint (for Docker healthcheck)
    location /health {
        access_log off;
        return 200 "healthy\n";
        add_header Content-Type text/plain;
    }

    # Deny access to sensitive files
    location ~ /\. {
        deny all;
        access_log off;
        log_not_found off;
    }

    location ~ ^/(\.env|\.git|\.gitignore|\.gitattributes|composer\.(json|lock)|package(-lock)?\.json) {
        deny all;
        access_log off;
        log_not_found off;
    }

    # Static files - serve directly for performance
    location ~* \.(jpg|jpeg|png|gif|ico|css|js|svg|woff|woff2|ttf|eot|webp)$ {
        expires 1y;
        add_header Cache-Control "public, immutable";
        access_log off;
        try_files $uri =404;
    }

    # API endpoints - rate limited
    location ^~ /api/ {
        limit_req zone=api_limit burst=20 nodelay;
        limit_req_status 429;

        try_files $uri $uri/ /index.php?$query_string;
    }

    # PHP-FPM processing
    location ~ \.php$ {
        limit_req zone=general_limit burst=50 nodelay;

        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;

        fastcgi_pass php-upstream;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;

        # FastCGI parameters
        include fastcgi_params;

        # PHP-FPM timeouts
        fastcgi_connect_timeout 60s;
        fastcgi_send_timeout 180s;
        fastcgi_read_timeout 180s;

        # Buffer settings
        fastcgi_buffer_size 128k;
        fastcgi_buffers 256 16k;
        fastcgi_busy_buffers_size 256k;
        fastcgi_temp_file_write_size 256k;

        # Hide PHP version
        fastcgi_hide_header X-Powered-By;
    }

    # Fallback to index.php for non-existent files (framework routing)
    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    # Deny access to storage directory (except public subdirectory)
    location ^~ /storage {
        deny all;
    }

    location ^~ /storage/public {
        allow all;
    }
}
8	deployment/legacy/stacks/stacks/redis/.env.example	Normal file
@@ -0,0 +1,8 @@
# Redis Stack Environment Configuration
# Copy to .env and adjust for your environment

# Redis Password (REQUIRED in production)
REDIS_PASSWORD=your-secure-redis-password-here

# Timezone
TZ=Europe/Berlin

291	deployment/legacy/stacks/stacks/redis/README.md	Normal file
@@ -0,0 +1,291 @@
# Redis Stack

External Redis Stack for the Custom PHP Framework application.

## Overview

This Redis Stack provides:
- **Cache Backend**: Application-level caching (sessions, query results, objects)
- **Queue Backend**: Background job queue storage
- **Session Storage**: User session persistence
- **Rate Limiting**: Request rate limiting storage

**Architecture Pattern**: Following the same external stack pattern as PostgreSQL for consistency and separation of concerns.

## Quick Start

```bash
# 1. Create environment configuration
cp .env.example .env

# 2. Set Redis password
echo "REDIS_PASSWORD=$(openssl rand -base64 32)" >> .env

# 3. Create app-internal network (if not exists)
docker network create app-internal

# 4. Start Redis Stack
docker compose up -d

# 5. Verify health
docker compose ps
docker compose exec redis-stack redis-cli ping
# Expected: PONG
```

## Configuration

### Environment Variables

```env
REDIS_PASSWORD=your-secure-password  # Required for production
TZ=Europe/Berlin                     # Timezone for logs
```

### Redis Configuration

Redis configuration is in `redis.conf` with production-optimized settings:

**Persistence:**
- RDB Snapshots: 15min/5min/1min intervals
- AOF (Append Only File): `everysec` fsync policy
- Combined persistence for data durability

**Memory Management:**
- Max Memory: 512MB (configurable)
- Eviction Policy: `allkeys-lru` (Least Recently Used)
- Lazy freeing for async deletion

**Performance:**
- TCP keepalive: 300s
- Active rehashing enabled
- Optimized client output buffers

### Security

**Password Protection:**
```bash
# Set via Docker Secrets (recommended)
echo "your-redis-password" | docker secret create redis_password -

# Or via environment variable
REDIS_PASSWORD=your-password docker compose up -d
```

**Network Isolation:**
- Only accessible via `app-internal` Docker network
- Not exposed to public internet
- Protected mode enabled

## Health Monitoring

```bash
# Check Redis health
docker compose exec redis-stack redis-cli ping

# Get Redis info
docker compose exec redis-stack redis-cli info

# Monitor real-time activity
docker compose exec redis-stack redis-cli monitor

# Check memory usage
docker compose exec redis-stack redis-cli info memory

# View slow queries
docker compose exec redis-stack redis-cli slowlog get 10
```

## Integration with Application Stack

### Production Setup

```bash
# In application root directory
docker compose \
  -f docker-compose.base.yml \
  -f docker-compose.production.yml \
  -f docker-compose.postgres-override.yml \
  -f docker-compose.redis-override.yml \
  up -d
```

### Application Environment Variables

```env
REDIS_HOST=redis-stack        # Container name from this stack
REDIS_PORT=6379
REDIS_PASSWORD=your-password  # Same as Redis Stack password
```

## Maintenance

### Backup

Redis persistence is handled automatically via:
- **RDB**: Snapshot backups in `/data/dump.rdb`
- **AOF**: Append-only log in `/data/appendonly.aof`

**Manual Backup:**
```bash
# Trigger immediate RDB snapshot
docker compose exec redis-stack redis-cli BGSAVE

# Copy backup files
docker cp redis-stack:/data/dump.rdb ./backups/redis-$(date +%Y%m%d).rdb
```

### Restore

```bash
# Stop Redis
docker compose stop redis-stack

# Restore backup
docker cp ./backups/redis-backup.rdb redis-stack:/data/dump.rdb

# Start Redis
docker compose start redis-stack
```

### Cleanup

```bash
# Clear specific database
docker compose exec redis-stack redis-cli -n 0 FLUSHDB

# Clear all databases (DANGEROUS!)
docker compose exec redis-stack redis-cli FLUSHALL

# Remove old AOF/RDB files
docker compose exec redis-stack sh -c "rm -f /data/*.aof.old /data/*.rdb.old"
```

## Performance Tuning

### Memory Optimization

```bash
# Check memory fragmentation
docker compose exec redis-stack redis-cli info memory | grep fragmentation

# Defragment if needed (Redis 4.0+)
docker compose exec redis-stack redis-cli MEMORY PURGE
```

### Connection Limits

Adjust `maxclients` in `redis.conf` based on your application needs:
```conf
maxclients 10000  # Default, increase if needed
```
|
||||
|
||||
### Persistence Trade-offs
|
||||
|
||||
**For Cache-Only Usage:**
|
||||
```conf
|
||||
# Disable persistence for maximum performance
|
||||
save ""
|
||||
appendonly no
|
||||
```
|
||||
|
||||
**For Critical Data:**
|
||||
```conf
|
||||
# More frequent snapshots
|
||||
save 300 1
|
||||
save 60 10
|
||||
appendfsync always # Slower but safest
|
||||
```
|
||||
|
||||
## Troubleshooting

### Connection Issues

```bash
# Test connection from the application container
docker exec php sh -c 'redis-cli -h redis-stack -a "$REDIS_PASSWORD" ping'

# Check network connectivity
docker network inspect app-internal
```

### Memory Issues

```bash
# Check current memory usage
docker compose exec redis-stack redis-cli info memory

# View evicted keys
docker compose exec redis-stack redis-cli info stats | grep evicted

# Increase memory limit (edit docker-compose.yml)
# deploy.resources.limits.memory: 1G
```

### Performance Issues

```bash
# Check slow queries
docker compose exec redis-stack redis-cli slowlog get 10

# Monitor commands in real-time
docker compose exec redis-stack redis-cli monitor

# Analyze key patterns
docker compose exec redis-stack redis-cli --bigkeys
```

## Architecture

### Network Topology

```
┌─────────────────────────────────────────┐
│          app-internal Network           │
│                                         │
│  ┌──────────┐        ┌──────────────┐   │
│  │   PHP    │───────▶│ redis-stack  │   │
│  │   App    │        │   (Cache)    │   │
│  └──────────┘        └──────────────┘   │
│                                         │
│  ┌──────────┐                           │
│  │  Queue   │───────▶ (Shared Redis)    │
│  │  Worker  │                           │
│  └──────────┘                           │
└─────────────────────────────────────────┘
```

### Data Separation

**Database Indexes:**
- DB 0: Application cache (default)
- DB 1: Session storage
- DB 2: Queue backend
- DB 3: Rate limiting
- DB 4-15: Available for other uses

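A small helper makes the mapping above concrete (a sketch only; the service name matches this stack, and authentication flags are omitted for brevity):

```shell
#!/bin/sh
# Map a logical store name to its database index (mirrors the list above).
redis_cmd() {
  case "$1" in
    cache)     db=0 ;;
    session)   db=1 ;;
    queue)     db=2 ;;
    ratelimit) db=3 ;;
    *) echo "unknown store: $1" >&2; return 1 ;;
  esac
  echo "docker compose exec redis-stack redis-cli -n $db"
}

# Example: print the command that inspects the session DB.
# Run it with: eval "$(redis_cmd session)" DBSIZE
redis_cmd session
```

This keeps the DB numbering in one place instead of hard-coding `-n` flags across scripts.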
## Best Practices

1. **Always use passwords** in production
2. **Monitor memory usage** - set an appropriate `maxmemory` limit
3. **Enable persistence** for critical data (sessions, queue)
4. **Regular backups** if using Redis as a primary data store
5. **Network isolation** - never expose the Redis port publicly
6. **Health checks** - monitor Redis availability
7. **Resource limits** - set Docker memory limits
8. **Logging** - check Redis logs for issues

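Best practice 1 pairs with the note in `redis.conf` that `requirepass` is applied at runtime rather than baked into the file. A minimal sketch, assuming `REDIS_PASSWORD` comes from your environment:

```shell
#!/bin/sh
# Build the command that applies the password after startup (sketch;
# how REDIS_PASSWORD reaches the shell is an assumption of this stack).
apply_password_cmd() {
  printf 'docker compose exec redis-stack redis-cli CONFIG SET requirepass "%s"\n' "$1"
}

# Example: eval "$(apply_password_cmd "$REDIS_PASSWORD")"
apply_password_cmd 'example-secret'
```

Keeping the password out of `redis.conf` avoids committing secrets with the config file.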
## Related Documentation

- [Application Docker Compose](../../docker-compose.production.yml)
- [Redis Override Configuration](../../docker-compose.redis-override.yml)
- [PostgreSQL Stack](../postgresql/README.md) (similar pattern)
- [Redis Official Documentation](https://redis.io/documentation)

## Support

For issues or questions:
- Check Redis logs: `docker compose logs redis-stack`
- Test connectivity: `docker compose exec redis-stack redis-cli ping`
- Review configuration: `docker compose exec redis-stack redis-cli config get '*'`

37
deployment/legacy/stacks/stacks/redis/docker-compose.yml
Normal file
@@ -0,0 +1,37 @@
services:
  # Redis Cache & Queue Backend
  redis-stack:
    image: redis:7-alpine
    container_name: redis-stack
    restart: unless-stopped
    networks:
      - app-internal
    environment:
      - TZ=Europe/Berlin
    volumes:
      - redis-data:/data
      - ./redis.conf:/usr/local/etc/redis/redis.conf:ro
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    command: >
      redis-server /usr/local/etc/redis/redis.conf
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 10s
    deploy:
      resources:
        limits:
          memory: 512M
        reservations:
          memory: 128M

volumes:
  redis-data:
    name: redis-data

networks:
  app-internal:
    external: true

81
deployment/legacy/stacks/stacks/redis/redis.conf
Normal file
@@ -0,0 +1,81 @@
# Redis Configuration for Production Stack
# Based on Redis 7.x best practices

# Network Configuration
bind 0.0.0.0
port 6379
tcp-backlog 511
timeout 0
tcp-keepalive 300

# General Settings
daemonize no
supervised no
pidfile /var/run/redis.pid
loglevel notice
databases 16

# Persistence Configuration
# RDB Snapshots
# After 900 sec (15 min) if at least 1 key changed
save 900 1
# After 300 sec (5 min) if at least 10 keys changed
save 300 10
# After 60 sec if at least 10000 keys changed
save 60 10000

stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
dir /data

# AOF Persistence (Append Only File)
appendonly yes
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb

# Memory Management
maxmemory 512mb
maxmemory-policy allkeys-lru
maxmemory-samples 5

# Lazy Freeing (async deletion)
lazyfree-lazy-eviction yes
lazyfree-lazy-expire yes
lazyfree-lazy-server-del yes
replica-lazy-flush yes

# Security
# NOTE: Password should be set via REDIS_PASSWORD environment variable
# requirepass will be set via redis-cli CONFIG SET after startup
protected-mode yes

# Limits
maxclients 10000

# Slow Log
slowlog-log-slower-than 10000
slowlog-max-len 128

# Advanced Config
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64

# Performance Tuning
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit replica 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60

# Disable dangerous commands (optional)
# rename-command FLUSHDB ""
# rename-command FLUSHALL ""
# rename-command CONFIG ""

14
deployment/legacy/stacks/stacks/registry/.env.example
Normal file
@@ -0,0 +1,14 @@
# Docker Registry Configuration
# Copy this file to .env and adjust values

# Timezone
TZ=Europe/Berlin

# Registry Domain
REGISTRY_DOMAIN=registry.michaelschiemer.de

# Registry HTTP Secret (generate with: openssl rand -hex 32)
REGISTRY_HTTP_SECRET=<generate-random-secret>

# BasicAuth Users (managed via auth/htpasswd file)
# Generate with: htpasswd -Bn username

26
deployment/legacy/stacks/stacks/registry/CREATE_AUTH.sh
Executable file
@@ -0,0 +1,26 @@
#!/bin/bash
# Script to create htpasswd file for Docker Registry
# Usage: ./CREATE_AUTH.sh <username> <password>

if [ -z "$1" ] || [ -z "$2" ]; then
    echo "Usage: $0 <username> <password>"
    echo "Example: $0 admin mypassword"
    exit 1
fi

USERNAME=$1
PASSWORD=$2
AUTH_DIR="$(dirname "$0")/auth"

# Create auth directory if it doesn't exist
mkdir -p "$AUTH_DIR"

# Create or update htpasswd file
docker run --rm --entrypoint htpasswd httpd:2 -Bbn "$USERNAME" "$PASSWORD" > "$AUTH_DIR/htpasswd"

# Set proper permissions
chmod 644 "$AUTH_DIR/htpasswd"

echo "✅ htpasswd file created successfully!"
echo "Username: $USERNAME"
echo "Location: $AUTH_DIR/htpasswd"

636
deployment/legacy/stacks/stacks/registry/README.md
Normal file
@@ -0,0 +1,636 @@
# Docker Registry Stack - Private Container Registry

## Overview

Private Docker Registry with BasicAuth for secure container image storage.

**Features**:
- Private Docker Registry v2.8
- BasicAuth authentication
- SSL via Traefik
- Automatic garbage collection
- Image deletion support
- Persistent storage

## Services

- **registry.michaelschiemer.de** - Docker Registry (BasicAuth protected)

## Prerequisites

1. **Traefik Stack Running**
   ```bash
   cd ../traefik
   docker compose up -d
   ```

2. **DNS Configuration**
   Point `registry.michaelschiemer.de` to your server IP (94.16.110.151)

3. **htpasswd Utility**
   ```bash
   # Install if not available
   sudo apt-get install apache2-utils
   ```

## Configuration

### 1. Create Environment File

```bash
cp .env.example .env
```

### 2. Generate Registry HTTP Secret

```bash
openssl rand -hex 32
```

Update `.env`:
```env
REGISTRY_HTTP_SECRET=<generated-secret>
```

### 3. Create Registry Users

```bash
# Create htpasswd file with first user
htpasswd -Bc auth/htpasswd admin

# Add additional users
htpasswd -B auth/htpasswd developer

# Verify users
cat auth/htpasswd
```

**Important**: Use `-B` (bcrypt) for best security. `-c` creates a new file (only for the first user).

## Deployment

### Initial Setup

```bash
# Ensure Traefik is running
docker network inspect traefik-public

# Create auth directory and users
mkdir -p auth
htpasswd -Bc auth/htpasswd admin

# Start registry
docker compose up -d

# Check logs
docker compose logs -f

# Verify health
docker compose ps
```

### Verify Deployment

```bash
# Test registry endpoint
curl https://registry.michaelschiemer.de/v2/

# Expected: Authentication required (401)

# Test with credentials
curl -u admin:yourpassword https://registry.michaelschiemer.de/v2/_catalog

# Expected: {"repositories":[]}
```

## Usage

### Docker Login

```bash
# Login to registry
docker login registry.michaelschiemer.de

# Enter username and password when prompted
```

### Push Images

```bash
# Tag a local image for the registry
docker tag myapp:latest registry.michaelschiemer.de/myapp:latest

# Push to registry
docker push registry.michaelschiemer.de/myapp:latest
```

### Pull Images

```bash
# Pull from registry
docker pull registry.michaelschiemer.de/myapp:latest
```

### List Images

```bash
# List all repositories
curl -u admin:password https://registry.michaelschiemer.de/v2/_catalog

# List tags for a repository
curl -u admin:password https://registry.michaelschiemer.de/v2/myapp/tags/list
```

### Delete Images

```bash
# Get image digest
curl -I -H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
  -u admin:password \
  https://registry.michaelschiemer.de/v2/myapp/manifests/latest

# Delete by digest
curl -X DELETE -u admin:password \
  https://registry.michaelschiemer.de/v2/myapp/manifests/sha256:...

# Run garbage collection
docker exec registry bin/registry garbage-collect /etc/docker/registry/config.yml
```

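The two-step delete above can be scripted. This is a sketch that builds the manifest URLs; the credentials and the `myapp` repository are placeholders:

```shell
#!/bin/sh
# Build manifest URLs for the delete-by-digest flow (sketch).
REGISTRY="registry.michaelschiemer.de"

manifest_url() {
  # $1 = repository, $2 = tag or digest
  echo "https://$REGISTRY/v2/$1/manifests/$2"
}

# Resolve the digest from the Docker-Content-Digest response header,
# then delete (commented out so this sketch runs without a live registry):
#   digest=$(curl -sI -u admin:password \
#     -H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
#     "$(manifest_url myapp latest)" \
#     | awk 'tolower($1) == "docker-content-digest:" {print $2}' | tr -d '\r')
#   curl -X DELETE -u admin:password "$(manifest_url myapp "$digest")"
manifest_url myapp latest
```

Remember that deleting manifests only unlinks them; disk space is reclaimed by the garbage-collect step.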
## Integration with Other Stacks

### Gitea Actions (Stack 2)

Push built images from Gitea Actions:

```yaml
# .gitea/workflows/build.yml
name: Build and Push

on:
  push:
    branches: [main]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Login to Registry
        run: |
          echo "${{ secrets.REGISTRY_PASSWORD }}" | \
            docker login registry.michaelschiemer.de \
              -u "${{ secrets.REGISTRY_USER }}" \
              --password-stdin

      - name: Build and Push
        run: |
          docker build -t registry.michaelschiemer.de/myapp:${{ github.sha }} .
          docker push registry.michaelschiemer.de/myapp:${{ github.sha }}
```

### Application Stack (Stack 4)

Pull images in application deployment:

```yaml
# In application docker-compose.yml
services:
  app:
    image: registry.michaelschiemer.de/myapp:latest
    # ... rest of configuration
```

**Note**: Ensure the Docker daemon has registry credentials configured.

## User Management

### Add User

```bash
# Add a new user
htpasswd -B auth/htpasswd newuser

# Restart registry to apply
docker compose restart
```

### Remove User

```bash
# Edit the htpasswd file and remove the user's line
nano auth/htpasswd

# Restart registry
docker compose restart
```

### Change Password

```bash
# Update the password (replaces the old entry)
htpasswd -B auth/htpasswd username

# Restart registry
docker compose restart
```

## Backup & Recovery

### Manual Backup

```bash
#!/bin/bash
# backup-registry.sh

BACKUP_DIR="/backups/registry"
DATE=$(date +%Y%m%d_%H%M%S)

mkdir -p "$BACKUP_DIR"

# Backup registry data
docker run --rm \
  -v registry-data:/data \
  -v "$BACKUP_DIR":/backup \
  alpine tar czf /backup/registry-data-$DATE.tar.gz -C /data .

# Backup auth configuration
tar czf "$BACKUP_DIR/registry-auth-$DATE.tar.gz" auth/

echo "Backup completed: $BACKUP_DIR/*-$DATE.tar.gz"
```

### Restore from Backup

```bash
# Stop registry
docker compose down

# Restore registry data
docker run --rm \
  -v registry-data:/data \
  -v /backups/registry:/backup \
  alpine tar xzf /backup/registry-data-YYYYMMDD_HHMMSS.tar.gz -C /data

# Restore auth
tar xzf /backups/registry/registry-auth-YYYYMMDD_HHMMSS.tar.gz

# Start registry
docker compose up -d
```

### Automated Backups

Add to crontab:

```bash
# Daily backup at 3 AM
0 3 * * * /path/to/backup-registry.sh

# Keep only the last 14 days
0 4 * * * find /backups/registry -type f -mtime +14 -delete
```

## Monitoring

### Health Checks

```bash
# Check registry health
docker compose ps

# Registry health endpoint
curl -f https://registry.michaelschiemer.de/v2/

# Check storage usage
docker exec registry du -sh /var/lib/registry
```

### Logs

```bash
# View logs
docker compose logs -f

# Check for errors
docker compose logs registry | grep -i error

# Monitor access logs
docker compose logs -f registry | grep "GET /v2"
```

### Storage Statistics

```bash
# Check volume size
docker volume inspect registry-data

# Check disk usage
docker system df -v | grep registry

# List images in the registry
curl -u admin:password https://registry.michaelschiemer.de/v2/_catalog | jq
```

## Garbage Collection

### Manual Garbage Collection

```bash
# Run garbage collection
docker exec registry bin/registry garbage-collect \
  /etc/docker/registry/config.yml

# With dry-run
docker exec registry bin/registry garbage-collect \
  --dry-run \
  /etc/docker/registry/config.yml
```

### Scheduled Garbage Collection

Add to crontab (on the production server):

```bash
# Weekly garbage collection (Sunday 2 AM)
0 2 * * 0 docker exec registry bin/registry garbage-collect /etc/docker/registry/config.yml
```

**Note**: Automatic upload purging is enabled (uploads older than 168h are cleaned every 24h).

## Troubleshooting

### Authentication Failed

```bash
# Check that the htpasswd file exists
ls -la auth/htpasswd

# Verify htpasswd format (should be bcrypt)
cat auth/htpasswd
# Format: username:$2y$...

# Test authentication
curl -u username:password https://registry.michaelschiemer.de/v2/

# Check registry logs
docker compose logs registry | grep auth
```

### Cannot Push Images

```bash
# Verify Docker login
cat ~/.docker/config.json | grep registry.michaelschiemer.de

# Re-login
docker logout registry.michaelschiemer.de
docker login registry.michaelschiemer.de

# Check storage space
df -h /var/lib/docker

# Check registry logs
docker compose logs -f registry
```

### SSL Certificate Issues

```bash
# Verify Traefik certificate
docker exec traefik cat /acme.json | grep registry.michaelschiemer.de

# Force certificate renewal (via Traefik)
# Remove acme.json and restart Traefik

# Test SSL
openssl s_client -connect registry.michaelschiemer.de:443 -servername registry.michaelschiemer.de < /dev/null
```

### Registry Not Accessible

```bash
# Check the service is running
docker compose ps

# Check Traefik routing
docker exec traefik cat /etc/traefik/traefik.yml

# Check network
docker network inspect traefik-public | grep registry

# Test from the server (the container serves plain HTTP; TLS is terminated by Traefik)
curl http://localhost:5000/v2/
```

### Storage Issues

```bash
# Check volume mount
docker exec registry df -h /var/lib/registry

# Check for corrupted layers
docker exec registry find /var/lib/registry -type f -name "data" | wc -l

# Run garbage collection
docker exec registry bin/registry garbage-collect /etc/docker/registry/config.yml

# Remove unused volumes (careful: affects all dangling volumes)
docker volume prune
```

## Security

### Security Best Practices

1. **Strong Passwords**: Use bcrypt (`htpasswd -B`) with strong passwords
2. **SSL Only**: Always use HTTPS (enforced via Traefik)
3. **User Management**: Regularly review and rotate credentials
4. **Access Logging**: Monitor access logs for suspicious activity
5. **Firewall**: Only expose port 443 (handled by Traefik)
6. **Backup Encryption**: Encrypt backups containing sensitive data
7. **Minimal Permissions**: Limit registry access to necessary users

### Update Stack

```bash
# Pull latest images
docker compose pull

# Recreate containers
docker compose up -d

# Verify
docker compose ps
```

### Security Headers

Security headers are applied via Traefik's `default-chain@file` middleware:
- HSTS
- Content-Type Nosniff
- XSS Protection
- Frame Deny

## Docker Daemon Configuration

### Registry Access Methods

The registry is accessible via two methods:

1. **HTTPS via Traefik**: `registry.michaelschiemer.de` (recommended)
   - Uses SSL/TLS encryption
   - No insecure registry configuration needed
   - Preferred for production use

2. **HTTP Direct**: `registry.michaelschiemer.de:5000`
   - Direct access to the published registry port (5000, plain HTTP)
   - Requires insecure registry configuration
   - Useful for internal/local access

### Option 1: Using HTTPS Endpoint (Recommended)

No Docker daemon configuration needed. Just login and push:

```bash
# Login via HTTPS endpoint
docker login registry.michaelschiemer.de

# Push images
docker push registry.michaelschiemer.de/framework:latest
```

### Option 2: Using HTTP Endpoint (Direct Access)

If you need to use `registry.michaelschiemer.de:5000`, configure Docker to allow insecure registries:

```bash
# Edit daemon.json
sudo nano /etc/docker/daemon.json
```

Add `registry.michaelschiemer.de:5000` to `insecure-registries`:
```json
{
  "insecure-registries": ["registry.michaelschiemer.de:5000"],
  "registry-mirrors": [],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
```

**Note**: Only add to `insecure-registries` when using the HTTP endpoint. Use the HTTPS endpoint (`registry.michaelschiemer.de`) to avoid insecure registry configuration altogether.

```bash
# Restart Docker
sudo systemctl restart docker

# Verify
docker info | grep Registry
```

### Configure Credentials

```bash
# Login once per machine
docker login registry.michaelschiemer.de

# Credentials are stored in ~/.docker/config.json
```

## Performance Tuning

### Registry Configuration

For high-traffic registries, edit `docker-compose.yml`:

```yaml
environment:
  # Increase concurrent operations
  - REGISTRY_STORAGE_MAXCONCURRENCY=50

  # Cache settings
  - REGISTRY_STORAGE_CACHE_BLOBDESCRIPTOR=inmemory

  # Rate limiting
  - REGISTRY_HTTP_RATELIMIT_REQUESTS_PER_SECOND=100
```

### Storage Optimization

```bash
# Layer compression (reduces storage) is already enabled in v2.8

# Monitor storage growth
du -sh /var/lib/docker/volumes/registry-data/

# Schedule regular garbage collection
# See "Scheduled Garbage Collection" section
```

## Migration from Docker Hub

### Pull and Re-push Images

```bash
#!/bin/bash
# migrate-images.sh

IMAGES=(
  "nginx:latest"
  "node:18-alpine"
  "postgres:14"
)

for image in "${IMAGES[@]}"; do
  echo "Migrating $image..."

  # Pull from Docker Hub
  docker pull "$image"

  # Tag for the private registry
  docker tag "$image" "registry.michaelschiemer.de/$image"

  # Push to the private registry
  docker push "registry.michaelschiemer.de/$image"

  echo "✅ Migrated $image"
done
```

## API Reference

### Registry API v2

```bash
# List catalog
GET /v2/_catalog

# List tags
GET /v2/<name>/tags/list

# Get manifest
GET /v2/<name>/manifests/<reference>

# Delete manifest
DELETE /v2/<name>/manifests/<digest>

# Check blob exists
HEAD /v2/<name>/blobs/<digest>
```

**Authentication**: All endpoints require BasicAuth.

**Documentation**: https://docs.docker.com/registry/spec/api/

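`GET /v2/_catalog` is paginated: `n` caps the page size and `last` resumes after a given repository name. A sketch for building the paged URLs (credentials in the usage comments are placeholders):

```shell
#!/bin/sh
# Build paginated catalog URLs for the Registry API v2 (sketch).
catalog_url() {
  # $1 = page size, $2 = last repository of the previous page (optional)
  url="https://registry.michaelschiemer.de/v2/_catalog?n=$1"
  [ -n "$2" ] && url="$url&last=$2"
  echo "$url"
}

# First page:  curl -u admin:password "$(catalog_url 100)"
# Next page:   curl -u admin:password "$(catalog_url 100 myapp)"
catalog_url 100 myapp
```

Without `n`, the server applies its own default page size, so large registries should always page explicitly.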
## Additional Resources

- **Docker Registry Documentation**: https://docs.docker.com/registry/
- **Registry Configuration**: https://docs.docker.com/registry/configuration/
- **Storage Drivers**: https://docs.docker.com/registry/storage-drivers/
- **Token Authentication**: https://docs.docker.com/registry/spec/auth/token/

2
deployment/legacy/stacks/stacks/registry/auth/htpasswd
Normal file
@@ -0,0 +1,2 @@
admin:$2y$05$frHsEHd3NmM.LUMZ86dJe.aAZK5D5Qoh1w0mnXY0pfbbFmtU8aD5y

52
deployment/legacy/stacks/stacks/registry/docker-compose.yml
Normal file
@@ -0,0 +1,52 @@
services:
  registry:
    image: registry:2.8
    container_name: registry
    restart: unless-stopped
    networks:
      - traefik-public
    ports:
      - "0.0.0.0:5000:5000"
    environment:
      - TZ=Europe/Berlin
      - REGISTRY_STORAGE_DELETE_ENABLED=true
      - REGISTRY_AUTH=htpasswd
      - REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm
      - REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd
      - REGISTRY_HTTP_SECRET=${REGISTRY_HTTP_SECRET}
      # Storage configuration
      - REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY=/var/lib/registry
      # Garbage collection configuration (moved to config.yml if needed)
    volumes:
      - registry-data:/var/lib/registry
      - ./auth:/auth:ro
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    labels:
      - "traefik.enable=true"

      # HTTP Router
      - "traefik.http.routers.registry.rule=Host(`${REGISTRY_DOMAIN:-registry.michaelschiemer.de}`)"
      - "traefik.http.routers.registry.entrypoints=websecure"
      - "traefik.http.routers.registry.tls=true"
      - "traefik.http.routers.registry.tls.certresolver=letsencrypt"

      # Service
      - "traefik.http.services.registry.loadbalancer.server.port=5000"

      # Middleware
      - "traefik.http.routers.registry.middlewares=default-chain@file"
    healthcheck:
      test: ["CMD-SHELL", "wget --spider -q --header='Authorization: Basic YWRtaW46cmVnaXN0cnktc2VjdXJlLXBhc3N3b3JkLTIwMjU=' http://localhost:5000/v2/ || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 10s

volumes:
  registry-data:
    name: registry-data

networks:
  traefik-public:
    external: true

48
deployment/legacy/stacks/stacks/semaphore/.env.example
Normal file
@@ -0,0 +1,48 @@
# Semaphore CI Stack - Environment Configuration
# Copy this file to .env and adjust values as needed

# ============================================
# MySQL Database Configuration
# ============================================
MYSQL_ROOT_PASSWORD=semaphore_root
MYSQL_DATABASE=semaphore
MYSQL_USER=semaphore
MYSQL_PASSWORD=semaphore

# ============================================
# Semaphore Configuration
# ============================================

# Port binding (default: 3000)
# Only accessible via localhost (127.0.0.1)
SEMAPHORE_PORT=3000

# Admin User Configuration
SEMAPHORE_ADMIN=admin
SEMAPHORE_ADMIN_NAME=Administrator
SEMAPHORE_ADMIN_EMAIL=admin@localhost
SEMAPHORE_ADMIN_PASSWORD=admin

# Playbook Storage Path (inside container)
SEMAPHORE_PLAYBOOK_PATH=/tmp/semaphore

# Access Key Encryption
# Generate with: head -c32 /dev/urandom | base64
# IMPORTANT: Change this in production!
SEMAPHORE_ACCESS_KEY_ENCRYPTION=change-me-in-production

# ============================================
# Optional: LDAP Configuration
# ============================================
# SEMAPHORE_LDAP_ENABLED=false
# SEMAPHORE_LDAP_HOST=ldap.example.com
# SEMAPHORE_LDAP_PORT=389
# SEMAPHORE_LDAP_DN=cn=admin,dc=example,dc=com
# SEMAPHORE_LDAP_PASSWORD=ldap_password
# SEMAPHORE_LDAP_BASE_DN=dc=example,dc=com
# SEMAPHORE_LDAP_USER_FILTER=(uid=%s)

# ============================================
# Optional: Webhook Configuration
# ============================================
# SEMAPHORE_WEBHOOK_URL=http://localhost:8080/webhook

201
deployment/legacy/stacks/stacks/semaphore/QUICKSTART.md
Normal file
@@ -0,0 +1,201 @@
# ?? Quick Start: Semaphore mit Repository verbinden
|
||||
|
||||
Kurze Schritt-f?r-Schritt-Anleitung, um Semaphore mit deinem Git-Repository zu verbinden.
|
||||
|
||||
## Schritt 1: Semaphore starten
|
||||
|
||||
```bash
|
||||
cd deployment/stacks/semaphore
|
||||
cp env.example .env
|
||||
```
|
||||
|
||||
**WICHTIG**: Generiere einen Encryption Key:
|
||||
```bash
|
||||
head -c32 /dev/urandom | base64
|
||||
```
|
||||
Kopiere den Key und setze ihn in `.env`:
|
||||
```env
|
||||
SEMAPHORE_ACCESS_KEY_ENCRYPTION=<dein-generierter-key>
|
||||
```
|
||||
|
||||
```bash
|
||||
docker compose up -d
|
||||
```
|
||||
|
||||
## Step 2: Open Semaphore

Open **http://localhost:9300** in your browser.

**Login**:
- Username: `admin`
- Password: `admin`

## Step 3: Create a Project

1. Click **"New Project"** or **"Create Project"**
2. Name: **"michaelschiemer"**
3. Click **"Create"**

## Step 4: Create an Inventory (Define Hosts)

### 4.1 Create the inventory

1. Go to **Inventories** → **New Inventory**
2. Name: **"Production Hosts"**
3. Click **"Create"**

### 4.2 Add a host

1. Click your inventory → **"Add Host"**
2. Fill in:

```
Name: production
Address: 94.16.110.151
SSH Username: deploy
SSH Port: 22
```

3. Click **"Save"**

### 4.3 Add an SSH key

1. Go to **Keys** (in the sidebar)
2. Click **"New Key"**
3. Name: **"Deployment Key"**
4. Paste your SSH private key (from `~/.ssh/production`):
```bash
cat ~/.ssh/production
```
5. Click **"Save"**
6. Go back to your inventory and select the key for your host

## Step 5: Create Templates (Use Playbooks)

### 5.1 Template for CI tests

1. Go to **Templates** → **New Template**
2. **Name**: "Run CI Tests"
3. **Inventory**: Select "Production Hosts"
4. **Playbook Path**: `/tmp/semaphore/playbooks/ci-tests.yml`
5. **Variables** (click "Variables"):
```yaml
repo_url: https://git.michaelschiemer.de/michael/michaelschiemer.git
repo_branch: main
```
6. Click **"Save"**

### 5.2 Template for Docker build

1. **Templates** → **New Template**
2. **Name**: "Build Docker Image"
3. **Inventory**: "Production Hosts"
4. **Playbook Path**: `/tmp/semaphore/playbooks/docker-build.yml`
5. **Variables**:
```yaml
repo_url: https://git.michaelschiemer.de/michael/michaelschiemer.git
repo_branch: main
registry_url: registry.michaelschiemer.de
registry_user: admin
registry_password: <your-registry-password>
image_name: framework
image_tag: latest
```
6. **Save**

### 5.3 Template for staging deployment

1. **Templates** → **New Template**
2. **Name**: "Deploy to Staging"
3. **Inventory**: "Production Hosts"
4. **Playbook Path**: `/tmp/semaphore/playbooks/deploy-staging.yml`
5. **Variables**:
```yaml
registry_url: registry.michaelschiemer.de
registry_user: admin
registry_password: <your-registry-password>
image_name: framework
image_tag: latest
```
6. **Save**

### 5.4 Template for production deployment

1. **Templates** → **New Template**
2. **Name**: "Deploy to Production"
3. **Inventory**: "Production Hosts"
4. **Playbook Path**: `/tmp/semaphore/playbooks/deploy-production.yml`
5. **Variables**:
```yaml
registry_url: registry.michaelschiemer.de
registry_user: admin
registry_password: <your-registry-password>
image_name: framework
image_tag: latest
```
6. **Save**

## Step 6: Run a Task

1. Go to **Templates**
2. Click a template (e.g. "Run CI Tests")
3. Click **"Run"**
4. Watch the execution in real time
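Template runs can also be triggered over HTTP instead of the web UI, which is handy for scripting. A minimal sketch that builds the requests; the endpoint paths and payload fields are assumptions based on the Semaphore REST API documentation (verify against your version), and the requests must be sent with a cookie jar after logging in:

```python
import json

# Hypothetical request builders for Semaphore's REST API.
# Log in first (POST /api/auth/login), keep the session cookie,
# then start a task (POST /api/project/<id>/tasks).
def login_request(base_url, user, password):
    return (f"{base_url}/api/auth/login",
            json.dumps({"auth": user, "password": password}))

def run_task_request(base_url, project_id, template_id):
    return (f"{base_url}/api/project/{project_id}/tasks",
            json.dumps({"template_id": template_id}))

url, body = run_task_request("http://localhost:9300", 1, 2)
print(url)   # http://localhost:9300/api/project/1/tasks
print(body)  # {"template_id": 2}
```

Sending these with `curl -c cookies.txt` / `curl -b cookies.txt` works the same way from the shell.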
## ✅ Done!

Semaphore is now connected to your repository! You can:

- ✅ Start CI/CD tasks manually
- ✅ Use playbooks from the repository
- ✅ Run deployments against your servers

## Available Playbooks

The following playbooks are already in the repository and can be used in Semaphore:

1. **`/tmp/semaphore/playbooks/ci-tests.yml`** - PHP tests & quality checks
2. **`/tmp/semaphore/playbooks/docker-build.yml`** - Docker image build & push
3. **`/tmp/semaphore/playbooks/deploy-staging.yml`** - Staging deployment
4. **`/tmp/semaphore/playbooks/deploy-production.yml`** - Production deployment

## Existing Ansible Playbooks

The existing Ansible playbooks from `deployment/ansible/playbooks/` are also available under:
- `/tmp/semaphore/repo-playbooks/`

For example:
- `/tmp/semaphore/repo-playbooks/deploy-update.yml`
- `/tmp/semaphore/repo-playbooks/rollback.yml`

## Tips

### Docker socket for build tasks

If Docker build tasks should run, add the Docker socket.

In `docker-compose.yml`, under the Semaphore service:
```yaml
volumes:
  - /var/run/docker.sock:/var/run/docker.sock:ro
```

Then restart Semaphore:
```bash
docker compose restart semaphore
```

### Git integration

The playbooks can access the Git repository directly (HTTPS). For private repositories or SSH access:

1. Create an SSH key for Git in Semaphore (Keys → New Key)
2. Add the public key to your Git repository
3. Use the SSH URL in the playbook variables:
```yaml
repo_url: git@git.michaelschiemer.de:michael/michaelschiemer.git
```

---

**Help**: See `SETUP_REPOSITORY.md` and `README.md` for further details
576
deployment/legacy/stacks/stacks/semaphore/README.md
Normal file
@@ -0,0 +1,576 @@
# Semaphore CI Stack - Local Development

## Overview

Self-hosted Semaphore CI/CD platform for local development. It lets you test and run CI/CD pipelines and Ansible playbooks locally, without depending on external CI services.

**Features**:
- **Self-hosted**: Runs entirely locally on the development machine
- **Locally isolated**: Reachable ONLY via localhost (127.0.0.1) - NO external access, for security reasons!
- **MySQL backend**: Persistent database for projects, tasks, and templates
- **Web UI**: Intuitive interface for pipeline management (local only)
- **Ansible integration**: Native support for Ansible playbooks
- **Docker-based**: Simple setup and maintenance

**Use cases**:
- Testing CI/CD pipelines locally
- Developing and testing Ansible playbooks
- Experimenting with deployment workflows
- No dependency on external CI services

## Services

- **mysql** - MySQL 8.0 database for Semaphore data
- **semaphore** - Semaphore CI/CD web UI and API

## Prerequisites

- Docker and Docker Compose installed
- Port 9300 available on localhost (3000 is used by Gitea)
- Sufficient disk space for Docker volumes (~500 MB initially)

## Directory Structure

```
semaphore/
├── docker-compose.yml    # Service definitions
├── env.example           # Environment variable template
├── .env                  # Environment configuration (create from env.example)
└── README.md             # This file
```

## Quick Start

### 1. Create the environment file

```bash
cd deployment/stacks/semaphore
cp env.example .env
```

### 2. Adjust the configuration (optional)

Edit `.env` and adjust the values:

```bash
nano .env
```

**Important**: Generate a secure encryption key:
```bash
# Linux/WSL
head -c32 /dev/urandom | base64
```
```powershell
# Windows PowerShell
$bytes = [byte[]]::new(32)
[Security.Cryptography.RandomNumberGenerator]::Create().GetBytes($bytes)
[Convert]::ToBase64String($bytes)
```

Update `SEMAPHORE_ACCESS_KEY_ENCRYPTION` in the `.env` file.
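If neither shell variant is convenient, the same 32-byte key can be produced with Python's standard library:

```python
import base64
import secrets

# Equivalent of `head -c32 /dev/urandom | base64`: 32 cryptographically
# secure random bytes, base64-encoded, suitable for
# SEMAPHORE_ACCESS_KEY_ENCRYPTION.
key = base64.b64encode(secrets.token_bytes(32)).decode("ascii")
print(key)
```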

### 3. Start the stack

```bash
docker compose up -d
```

### 4. Open the Semaphore web UI

Open **http://localhost:9300** in your browser.

⚠️ **IMPORTANT**: Semaphore is ONLY reachable locally (127.0.0.1). There is NO external access, for security reasons.

**Default login**:
- **Username**: `admin` (or the value of `SEMAPHORE_ADMIN`)
- **Password**: `admin` (or the value of `SEMAPHORE_ADMIN_PASSWORD`)

### 5. First steps in Semaphore

1. **Create a project**: Click "New Project" and create a new project
2. **Create an inventory**: Create an inventory with local hosts or Docker containers
3. **Create a template**: Create a template with an Ansible playbook
4. **Run a task**: Start a task and watch the execution

## Configuration

### Environment variables (.env)

#### MySQL database

```env
MYSQL_ROOT_PASSWORD=semaphore_root
MYSQL_DATABASE=semaphore
MYSQL_USER=semaphore
MYSQL_PASSWORD=semaphore
```

#### Semaphore configuration

```env
# Port binding (default: 9300)
SEMAPHORE_PORT=9300

# Admin user
SEMAPHORE_ADMIN=admin
SEMAPHORE_ADMIN_NAME=Administrator
SEMAPHORE_ADMIN_EMAIL=admin@localhost
SEMAPHORE_ADMIN_PASSWORD=admin

# Playbook path (inside the container)
SEMAPHORE_PLAYBOOK_PATH=/tmp/semaphore

# Encryption key (IMPORTANT: change for production!)
SEMAPHORE_ACCESS_KEY_ENCRYPTION=change-me-in-production
```

#### Optional configuration

**LDAP integration** (default: disabled):
```env
SEMAPHORE_LDAP_ENABLED=true
SEMAPHORE_LDAP_HOST=ldap.example.com
SEMAPHORE_LDAP_PORT=389
SEMAPHORE_LDAP_DN=cn=admin,dc=example,dc=com
SEMAPHORE_LDAP_PASSWORD=ldap_password
SEMAPHORE_LDAP_BASE_DN=dc=example,dc=com
SEMAPHORE_LDAP_USER_FILTER=(uid=%s)
```

**Webhook integration**:
```env
SEMAPHORE_WEBHOOK_URL=http://localhost:8080/webhook
```

## Usage

### Start the stack

```bash
# Start services in the background
docker compose up -d

# Show logs
docker compose logs -f

# Semaphore logs only
docker compose logs -f semaphore

# MySQL logs only
docker compose logs -f mysql
```

### Stop the stack

```bash
docker compose down
```

### Restart the stack

```bash
docker compose restart
```

### Check the status

```bash
# Show container status
docker compose ps

# Health check status
docker compose ps --format "table {{.Name}}\t{{.Status}}"

# Run the Semaphore health check manually
docker compose exec semaphore wget --no-verbose --spider http://localhost:3000/api/health
```

### Database backup

```bash
# Create a MySQL dump
docker compose exec mysql mysqldump -u semaphore -psemaphore semaphore > semaphore-backup-$(date +%Y%m%d).sql

# Restore a backup
docker compose exec -T mysql mysql -u semaphore -psemaphore semaphore < semaphore-backup-YYYYMMDD.sql
```

### Wipe the data and start fresh

```bash
# ⚠️ WARNING: Deletes all data!
docker compose down -v
docker compose up -d
```

## First Steps with Semaphore

### 1. Create a project

1. Open http://localhost:9300 in your browser
2. Log in with the admin credentials
3. Click "New Project"
4. Enter a project name (e.g. "My Project")
5. Click "Create"

### 2. Create an inventory

An inventory defines the hosts that playbooks run against.
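What Semaphore stores here corresponds to a plain Ansible inventory. As a point of reference (a hypothetical YAML inventory, using the example host from this guide), the same host could be described as:

```yaml
all:
  children:
    production:
      hosts:
        94.16.110.151:
          ansible_user: deploy
          ansible_port: 22
```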

**Option A: Local host**
1. Go to Project → Inventories → New Inventory
2. Name: "Local Hosts"
3. Add a host:
   - Name: `localhost`
   - Address: `127.0.0.1`
   - SSH Username: `your-username`
   - SSH Key: Add your private SSH key

**Option B: Docker containers**
1. Create an inventory with Docker hosts
2. Docker-in-Docker support requires additional configuration

### 3. Create a template

Templates define which playbooks are run.

1. Go to Project → Templates → New Template
2. Template name: "Hello World"
3. Inventory: Select your inventory
4. Playbook: Create a simple playbook:

```yaml
---
- hosts: all
  gather_facts: no
  tasks:
    - name: Print hello world
      debug:
        msg: "Hello from Semaphore CI!"
```

5. Save the template

### 4. Run a task

1. Go to Templates
2. Click your template
3. Click "Run"
4. Watch the execution in real time

## Integration with Existing Stacks

### Using the local Docker registry

Semaphore can use Docker images from the local registry:

**In Ansible playbooks**:
```yaml
- name: Pull image from local registry
  docker_image:
    name: registry.michaelschiemer.de/framework:latest
    source: pull
  register: image_result
```

**Prerequisite**: The Semaphore container must have access to the Docker socket or the registry.

### Using existing Ansible playbooks

1. Mount your playbooks as a volume:
```yaml
volumes:
  - /path/to/your/playbooks:/tmp/semaphore/playbooks:ro
```

2. Or copy playbooks into the container:
```bash
docker compose exec semaphore mkdir -p /tmp/semaphore/my-playbook
docker cp my-playbook.yml semaphore:/tmp/semaphore/my-playbook/playbook.yml
```

3. Point the template at the playbook path

## Troubleshooting

### Port conflict (port 3000 vs 9300)

**Problem**: Port 3000 is taken by Gitea by default, so Semaphore uses port 9300.

**Solution**: If you want to use a different port, set `SEMAPHORE_PORT` in the `.env` file:
```env
SEMAPHORE_PORT=8080  # Or another free port
```

**Important**: The internal container port always stays 3000 - only the external host port changes.
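Before changing `SEMAPHORE_PORT`, you can check whether the candidate port is actually free. A small standard-library sketch (a bind probe, not part of the stack):

```python
import socket

# Try to bind the port on localhost: if the bind succeeds, nothing else
# is listening there and Docker can publish Semaphore on that port.
def port_is_free(port, host="127.0.0.1"):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            s.bind((host, port))
            return True
        except OSError:
            return False

print(port_is_free(9300))
```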

### Semaphore does not start

**Check the logs**:
```bash
docker compose logs semaphore
```

**Common causes**:
- MySQL is not ready yet (wait for the health check)
- Port 9300 is already in use: `netstat -tuln | grep 9300` (or on Windows: `netstat -ano | findstr :9300`)
- Wrong database credentials

**Solution**:
```bash
# Check the MySQL status
docker compose ps mysql

# Check the Semaphore logs for DB connection errors
docker compose logs semaphore | grep -i database

# Restart if needed
docker compose restart semaphore
```

### MySQL does not start

**Check the MySQL logs**:
```bash
docker compose logs mysql
```

**Common causes**:
- Volume permission problems
- Port conflicts (unlikely, since no port is mapped)

**Solution**:
```bash
# Inspect the volume
docker volume inspect semaphore-mysql-data

# Clean up and restart (⚠️ data loss!)
docker compose down -v
docker compose up -d
```

### Login does not work

**Default credentials**:
- Username: `admin` (or the `SEMAPHORE_ADMIN` value)
- Password: `admin` (or the `SEMAPHORE_ADMIN_PASSWORD` value)

**Reset the admin password**:
1. Stop Semaphore: `docker compose stop semaphore`
2. Set `SEMAPHORE_ADMIN_PASSWORD` in `.env` to the new password
3. Start Semaphore: `docker compose up -d`

### Playbooks are not found

**Check the playbook path**:
```bash
docker compose exec semaphore ls -la /tmp/semaphore
```

**Solution**:
- Make sure `SEMAPHORE_PLAYBOOK_PATH` is set correctly
- Check that the playbooks are in the right path
- Make sure the file permissions are correct

### Health check fails

**Run the health check**:
```bash
docker compose exec semaphore wget --no-verbose --spider http://localhost:3000/api/health
```

**Solution**:
- Wait for the full startup (can take 1-2 minutes)
- Check the logs: `docker compose logs semaphore`
- Restart if needed: `docker compose restart semaphore`

## Security

### 🔒 Local access (localhost only)

Semaphore is intentionally reachable ONLY locally, for security reasons:

- ✅ **Localhost access only**: Port 9300 is bound to 127.0.0.1 (not 0.0.0.0)
- ✅ **No external networks**: No external access possible
- ✅ **No Traefik integration**: No public route configured
- ✅ **Isolated network**: Internal Docker network only
- ⚠️ **Default passwords**: For local development only - change as needed

### ⚠️ NO production use over the internet!

**Semaphore should NOT be made publicly accessible!**

Reasons:
1. **Security risk**: Semaphore has access to SSH keys and deployment credentials
2. **Privileged access**: It can reach production servers
3. **No multi-factor authentication**: No 2FA by default
4. **Exploits**: Self-hosted software can have security vulnerabilities

### For local development (recommended)

If you want to use Semaphore locally:

1. **Strong passwords**: Change all passwords in `.env`:
   ```env
   SEMAPHORE_ADMIN_PASSWORD=<strong-password>
   MYSQL_PASSWORD=<strong-password>
   MYSQL_ROOT_PASSWORD=<strong-password>
   ```
2. **Encryption key**: Generate a secure key:
   ```bash
   head -c32 /dev/urandom | base64
   ```
   Set in `.env`: `SEMAPHORE_ACCESS_KEY_ENCRYPTION=<generated-key>`
3. **SSH keys**: Make sure SSH keys are stored securely
4. **Backup strategy**: Set up regular MySQL backups
5. **Resource limits**: Add memory/CPU limits (optional)

## Maintenance

### Regular tasks

**Weekly**:
- Check the logs for errors: `docker compose logs --tail=100`
- Check the disk space: `docker system df`
- Create a backup (if there is important data)

**Monthly**:
- Update the images: `docker compose pull && docker compose up -d`
- Clean up old tasks in Semaphore (via the web UI)

### Updates

```bash
# Pull the current images
docker compose pull

# Restart with the new images
docker compose up -d

# Check the logs
docker compose logs -f
```

### Clean up data

```bash
# Remove old Docker images
docker image prune -a

# Review old volumes
docker volume ls

# ⚠️ Caution: Deletes all Semaphore data!
docker compose down -v
```

## Backup and Restore

### Create a backup

```bash
# MySQL dump
docker compose exec mysql mysqldump \
  -u semaphore -psemaphore semaphore \
  > semaphore-backup-$(date +%Y%m%d-%H%M%S).sql

# Volume backup (full)
docker run --rm \
  -v semaphore-mysql-data:/data \
  -v $(pwd):/backup \
  alpine tar czf /backup/semaphore-mysql-backup-$(date +%Y%m%d).tar.gz /data
```
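Timestamped dumps accumulate quickly. A small retention sketch (a hypothetical helper, not part of the stack) that keeps only the newest N backups; name-sorting works because the timestamps in the filenames are zero-padded:

```python
import pathlib

# Delete all but the newest `keep` backup files matching the naming
# scheme used above; returns the names of the files that remain.
def prune_backups(directory, pattern="semaphore-backup-*.sql", keep=7):
    files = sorted(pathlib.Path(directory).glob(pattern))
    for stale in files[:-keep]:
        stale.unlink()
    return [f.name for f in files[-keep:]]
```

Run it from cron (or a compose sidecar) alongside the dump command.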

### Restore

```bash
# Restore a MySQL dump
docker compose exec -T mysql mysql \
  -u semaphore -psemaphore semaphore \
  < semaphore-backup-YYYYMMDD.sql

# Restore the volume (⚠️ stops the containers)
docker compose down
docker run --rm \
  -v semaphore-mysql-data:/data \
  -v $(pwd):/backup \
  alpine sh -c "cd /data && tar xzf /backup/semaphore-mysql-backup-YYYYMMDD.tar.gz"
docker compose up -d
```

## Performance Tuning

### MySQL tuning

For better performance you can adjust the MySQL configuration:

1. Create `mysql/conf.d/my.cnf`:
```ini
[mysqld]
innodb_buffer_pool_size = 256M
max_connections = 100
```

2. Mount it in `docker-compose.yml`:
```yaml
volumes:
  - ./mysql/conf.d:/etc/mysql/conf.d:ro
```

### Resource limits

Add limits in `docker-compose.yml`:

```yaml
deploy:
  resources:
    limits:
      memory: 1G
      cpus: '0.5'
```

## Support

### Documentation

- **Semaphore CI Docs**: https://docs.semaphoreui.com/
- **Semaphore GitHub**: https://github.com/semaphoreui/semaphore

### Logs

```bash
# All logs
docker compose logs -f

# Semaphore logs
docker compose logs -f semaphore

# MySQL logs
docker compose logs -f mysql

# Last 100 lines
docker compose logs --tail=100
```

### Health checks

```bash
# Container status
docker compose ps

# Semaphore health
curl http://localhost:9300/api/health

# MySQL health
docker compose exec mysql mysqladmin ping -h localhost -u root -psemaphore_root
```
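Scripts that run right after `docker compose up -d` should wait for the health endpoint instead of failing on the first refused connection. A generic retry sketch; the checker is passed in as a callable, so the same helper works for any of the probes above (e.g. a lambda wrapping `urllib.request.urlopen` on `http://localhost:9300/api/health`):

```python
import time

# Poll `check()` until it returns True or the attempts run out.
def wait_for(check, attempts=30, delay=2.0):
    for _ in range(attempts):
        if check():
            return True
        time.sleep(delay)
    return False
```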

---

**Setup status**: ✅ Ready for local development

**Next steps**:
1. Run `cp env.example .env`
2. Start the stack with `docker compose up -d`
3. Open http://localhost:9300
4. Log in with the admin credentials
5. Create your first project and template

355
deployment/legacy/stacks/stacks/semaphore/SETUP_REPOSITORY.md
Normal file
@@ -0,0 +1,355 @@
# Integrating a Git Repository into Semaphore Self-Hosted

This guide describes how to integrate your Git repository into Semaphore Self-Hosted (Ansible UI).

## Overview

Semaphore Self-Hosted is a web UI for Ansible that lets you:
- Run Ansible playbooks through a graphical interface
- Automate CI/CD workflows with Ansible
- Use Git repositories as playbook sources

## Step 1: Start Semaphore

### 1.1 Create the environment file

```bash
cd deployment/stacks/semaphore
cp env.example .env
```

### 1.2 Generate an encryption key (IMPORTANT!)

```bash
# Linux/WSL
head -c32 /dev/urandom | base64
```
```powershell
# Windows PowerShell
$bytes = [byte[]]::new(32)
[Security.Cryptography.RandomNumberGenerator]::Create().GetBytes($bytes)
[Convert]::ToBase64String($bytes)
```

Copy the generated key and set it in `.env`:
```env
SEMAPHORE_ACCESS_KEY_ENCRYPTION=<your-generated-key>
```

### 1.3 Start the stack

```bash
docker compose up -d
```

### 1.4 Open the web UI

Open **http://localhost:9300** in your browser.

⚠️ **IMPORTANT**: Semaphore is ONLY reachable locally (127.0.0.1). There is NO external access, for security reasons!

**Default login**:
- Username: `admin`
- Password: `admin`

## Step 2: Create a Project in Semaphore

### 2.1 Create a new project

1. Log in to Semaphore
2. Click **"New Project"** or **"Create Project"**
3. Enter a project name: **"michaelschiemer"**
4. Click **"Create"**

### 2.2 Add the Git repository as a playbook source

Semaphore Self-Hosted can load playbooks directly from Git repositories. You have two options:

**Option A: Use playbooks from a repository clone** (recommended)

1. Clone your repository locally:
```bash
git clone https://git.michaelschiemer.de/michael/michaelschiemer.git
cd michaelschiemer
```

2. Create a playbook directory for Semaphore:
```bash
mkdir -p deployment/stacks/semaphore/playbooks
cp deployment/ansible/playbooks/*.yml deployment/stacks/semaphore/playbooks/
```

**Option B: Add playbooks directly in Semaphore**

Semaphore can also create and edit playbooks directly in the web UI. For Git integration, option A is better.

## Step 3: Create an Inventory

An inventory defines the hosts that playbooks run against.

### 3.1 Create the inventory

1. Go to your project → **Inventories** → **New Inventory**
2. Name: **"Production Hosts"** or **"Local Hosts"**
3. Click **"Create"**

### 3.2 Add a host

1. Click your inventory → **Add Host**
2. Fill in the following fields:

**For production deployment:**
```
Name: production
Address: 94.16.110.151
SSH Username: deploy
SSH Port: 22
```

**For staging deployment:**
```
Name: staging
Address: 94.16.110.151  # or your staging host
SSH Username: deploy
SSH Port: 22
```

### 3.3 Add an SSH key

1. Go to **Keys** (sidebar)
2. Click **"New Key"**
3. Enter a name: **"Deployment Key"**
4. Paste your SSH private key (from `~/.ssh/production` or similar)
5. Select the key in the host inventory

## Step 4: Create Templates

Templates connect playbooks with inventories and define parameters.

### 4.1 Create a template for tests

1. Go to **Templates** → **New Template**
2. Template name: **"Run PHP Tests"**
3. Inventory: Select your inventory
4. Playbook: Create a playbook or use an existing one:

```yaml
---
- name: Run PHP Tests
  hosts: localhost
  gather_facts: no
  tasks:
    - name: Checkout repository
      git:
        repo: https://git.michaelschiemer.de/michael/michaelschiemer.git
        dest: /tmp/ci-build
        version: main

    - name: Install dependencies
      command: composer install --no-interaction --prefer-dist
      args:
        chdir: /tmp/ci-build

    - name: Run tests
      command: ./vendor/bin/pest
      args:
        chdir: /tmp/ci-build
      register: test_result

    - name: Show test results
      debug:
        var: test_result.stdout_lines
```

5. Save the template

### 4.2 Create a template for builds

Create a template for the Docker image build:

```yaml
---
- name: Build and Push Docker Image
  hosts: localhost
  gather_facts: no
  vars:
    registry_user: "admin"
    registry_url: "registry.michaelschiemer.de"
    image_tag: "latest"
  tasks:
    - name: Checkout repository
      git:
        repo: https://git.michaelschiemer.de/michael/michaelschiemer.git
        dest: /tmp/ci-build
        version: main

    - name: Login to Docker registry
      docker_login:
        username: "{{ registry_user }}"
        password: "{{ registry_password }}"
        registry_url: "{{ registry_url }}"

    - name: Build Docker image
      docker_image:
        name: "{{ registry_url }}/framework"
        tag: "{{ image_tag }}"
        source: build
        build:
          path: /tmp/ci-build
          dockerfile: Dockerfile.production
        push: yes
```

### 4.3 Create a template for deployment

Create a template for the production deployment (it runs the existing Ansible playbook after checking it out; note that `import_playbook` must sit at play level, not inside `tasks`):

```yaml
---
- name: Fetch deployment scripts
  hosts: production
  gather_facts: yes
  become: yes
  tasks:
    - name: Checkout deployment scripts
      git:
        repo: https://git.michaelschiemer.de/michael/michaelschiemer.git
        dest: /tmp/deployment
        version: main

- name: Run the deployment playbook
  import_playbook: /tmp/deployment/deployment/ansible/playbooks/deploy-update.yml
```

Or use the existing Ansible playbooks directly:

**Advantage**: The existing playbooks in `deployment/ansible/playbooks/` can be used as-is!

1. Create a template: **"Deploy to Production"**
2. Select the inventory: **"Production Hosts"**
3. Playbook path: **`deployment/ansible/playbooks/deploy-update.yml`**
4. Save

## Step 5: Set Up the CI/CD Workflow

### 5.1 Configure a Git webhook (optional)

Semaphore can receive webhooks from Git repositories to start tasks automatically.

1. Go to your Git repository (Gitea)
2. Settings → Webhooks → Add Webhook
3. Webhook URL: `http://localhost:9300/api/hook/git` (local only!)
4. Content-Type: `application/json`
5. Secret: Optional, but recommended
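If you set a secret, the receiving side should verify it. To the best of my reading of the Gitea webhook documentation, Gitea sends the hex-encoded HMAC-SHA256 of the raw request body in the `X-Gitea-Signature` header (verify against your Gitea version); a verification sketch:

```python
import hashlib
import hmac

# Recompute the HMAC-SHA256 of the raw body with the shared secret and
# compare it in constant time against the signature header.
def signature_valid(secret, body, signature_header):
    expected = hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_header)

body = b'{"ref": "refs/heads/main"}'
sig = hmac.new(b"s3cret", body, hashlib.sha256).hexdigest()
print(signature_valid("s3cret", body, sig))  # True
```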

### 5.2 Start a task manually

1. Go to **Templates**
2. Click your template (e.g. "Run PHP Tests")
3. Click **"Run"**
4. Watch the execution in real time

### 5.3 Start tasks automatically

Semaphore can start tasks based on Git events:

1. Go to the template → **Settings**
2. Enable **"Auto Run on Push"**
3. Select the branch: `main` or `staging`
4. Save

## Using the Existing Ansible Playbooks

### Use the existing playbooks

The existing Ansible playbooks in `deployment/ansible/playbooks/` can be used directly in Semaphore:

1. **Mount the playbooks as a volume** (in `docker-compose.yml`):
```yaml
volumes:
  - ./../../ansible/playbooks:/tmp/semaphore/playbooks:ro
```

2. **Create templates** that point to these playbooks:
   - Playbook path: `/tmp/semaphore/playbooks/deploy-update.yml`
   - Inventory: Select your production inventory

### Example templates

#### Template: Deploy Update
- Name: "Deploy Update"
- Inventory: "Production Hosts"
- Playbook: `/tmp/semaphore/playbooks/deploy-update.yml`
- Variables:
  ```yaml
  registry_url: registry.michaelschiemer.de
  image_name: framework
  image_tag: latest
  ```

#### Template: Rollback
- Name: "Rollback"
- Inventory: "Production Hosts"
- Playbook: `/tmp/semaphore/playbooks/rollback.yml`
- Variables:
  ```yaml
  registry_url: registry.michaelschiemer.de
  image_name: framework
  ```

## Advanced Configuration

### Docker socket for build tasks

For Docker build tasks, the Semaphore container needs access to the Docker socket:

**Add to `docker-compose.yml`:**
```yaml
volumes:
  - /var/run/docker.sock:/var/run/docker.sock:ro
```

### Environment variables for templates

You can use environment variables in templates:

1. Go to the template → **Variables**
2. Add variables:
   - `registry_url`: `registry.michaelschiemer.de`
   - `registry_user`: `admin`
   - `image_name`: `framework`

### Git integration with SSH keys

For private repositories:

1. Go to **Keys**
2. Create an SSH key for Git access
3. Add the public key to your Git repository (Deploy Keys)
4. Use the SSH URL in Git tasks:
```yaml
git:
  repo: git@git.michaelschiemer.de:michael/michaelschiemer.git
```

## Useful Links

- [Semaphore Self-Hosted documentation](https://docs.ansible-semaphore.com/)
- [Ansible Semaphore GitHub](https://github.com/ansible-semaphore/semaphore)
- [Ansible playbook documentation](https://docs.ansible.com/ansible/latest/playbook_guide/index.html)

## Checklist

- [ ] Semaphore started (http://localhost:9300)
- [ ] Project created
- [ ] Inventory with hosts created
- [ ] SSH keys for host access configured
- [ ] Template for tests created
- [ ] Template for builds created
- [ ] Template for deployment created
- [ ] First template run successfully
- [ ] Git webhook configured (optional)

---

**Next steps**: Start Semaphore and create your first template!
93
deployment/legacy/stacks/stacks/semaphore/docker-compose.yml
Normal file
@@ -0,0 +1,93 @@
services:
  # MySQL Database for Semaphore
  mysql:
    image: mysql:8.0
    container_name: semaphore-mysql
    restart: unless-stopped
    networks:
      - semaphore-internal
    environment:
      - TZ=Europe/Berlin
      - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD:-semaphore_root}
      - MYSQL_DATABASE=${MYSQL_DATABASE:-semaphore}
      - MYSQL_USER=${MYSQL_USER:-semaphore}
      - MYSQL_PASSWORD=${MYSQL_PASSWORD:-semaphore}
    volumes:
      - semaphore-mysql-data:/var/lib/mysql
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost", "-u", "root", "-p${MYSQL_ROOT_PASSWORD:-semaphore_root}"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s
    command: >
      --default-authentication-plugin=mysql_native_password
      --character-set-server=utf8mb4
      --collation-server=utf8mb4_unicode_ci

  # Semaphore CI/CD Platform
  semaphore:
    image: semaphoreui/semaphore:latest
    container_name: semaphore
    restart: unless-stopped
    depends_on:
      mysql:
        condition: service_healthy
    networks:
      - semaphore-internal
    ports:
      # ONLY bind to localhost (127.0.0.1) - NOT accessible externally!
      # Default port 9300 to avoid conflict with Gitea (port 3000)
      # SECURITY: This ensures Semaphore is only accessible locally
      - "127.0.0.1:${SEMAPHORE_PORT:-9300}:3000"
    # NO Traefik labels - Semaphore should only be accessible locally!
    # External access is disabled for security reasons.
    environment:
      - TZ=Europe/Berlin
      # Database Configuration
      - SEMAPHORE_DB_DIALECT=mysql
      - SEMAPHORE_DB_HOST=mysql
      - SEMAPHORE_DB_PORT=3306
      - SEMAPHORE_DB=${MYSQL_DATABASE:-semaphore}
      - SEMAPHORE_DB_USER=${MYSQL_USER:-semaphore}
      - SEMAPHORE_DB_PASS=${MYSQL_PASSWORD:-semaphore}
      # Admin Configuration
      - SEMAPHORE_ADMIN=${SEMAPHORE_ADMIN:-admin}
      - SEMAPHORE_ADMIN_NAME=${SEMAPHORE_ADMIN_NAME:-Administrator}
      - SEMAPHORE_ADMIN_EMAIL=${SEMAPHORE_ADMIN_EMAIL:-admin@localhost}
      - SEMAPHORE_ADMIN_PASSWORD=${SEMAPHORE_ADMIN_PASSWORD:-admin}
      # Playbook Path
      - SEMAPHORE_PLAYBOOK_PATH=${SEMAPHORE_PLAYBOOK_PATH:-/tmp/semaphore}
      # Encryption Key (generate with: head -c32 /dev/urandom | base64)
      - SEMAPHORE_ACCESS_KEY_ENCRYPTION=${SEMAPHORE_ACCESS_KEY_ENCRYPTION:-change-me-in-production}
      # Optional: LDAP Configuration (disabled by default)
      - SEMAPHORE_LDAP_ENABLED=${SEMAPHORE_LDAP_ENABLED:-false}
      # Optional: Webhook Configuration
      - SEMAPHORE_WEBHOOK_URL=${SEMAPHORE_WEBHOOK_URL:-}
    volumes:
      - semaphore-data:/etc/semaphore
      # Mount playbooks from repository so Semaphore can access them
      - ../../../deployment/stacks/semaphore/playbooks:/tmp/semaphore/playbooks:ro
      - ../../../deployment/ansible/playbooks:/tmp/semaphore/repo-playbooks:ro
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    healthcheck:
      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:3000/"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s

volumes:
  semaphore-mysql-data:
    name: semaphore-mysql-data
  semaphore-data:
    name: semaphore-data

networks:
  semaphore-internal:
    name: semaphore-internal
    driver: bridge
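With the file above saved as `docker-compose.yml`, a typical bring-up looks like this (a sketch; paths follow this repository's layout):

```shell
cd deployment/legacy/stacks/stacks/semaphore
cp env.example .env             # then edit passwords and the encryption key
docker compose up -d
docker compose ps               # wait for both services to report "healthy"
curl -I http://127.0.0.1:9300/  # the UI answers on localhost only
```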
51
deployment/legacy/stacks/stacks/semaphore/env.example
Normal file
@@ -0,0 +1,51 @@
# Semaphore CI Stack - Environment Configuration
# Copy this file to .env and adjust values as needed
# Note: Rename this file to .env.example if you prefer the standard naming

# ============================================
# MySQL Database Configuration
# ============================================
MYSQL_ROOT_PASSWORD=semaphore_root
MYSQL_DATABASE=semaphore
MYSQL_USER=semaphore
MYSQL_PASSWORD=semaphore

# ============================================
# Semaphore Configuration
# ============================================

# Port binding (default: 9300)
# Only accessible via localhost (127.0.0.1)
# Note: Changed from 3000 to avoid conflict with Gitea
SEMAPHORE_PORT=9300

# Admin User Configuration
SEMAPHORE_ADMIN=admin
SEMAPHORE_ADMIN_NAME=Administrator
SEMAPHORE_ADMIN_EMAIL=admin@localhost
SEMAPHORE_ADMIN_PASSWORD=admin

# Playbook Storage Path (inside container)
SEMAPHORE_PLAYBOOK_PATH=/tmp/semaphore

# Access Key Encryption
# Generate with: head -c32 /dev/urandom | base64
# IMPORTANT: Change this in production!
SEMAPHORE_ACCESS_KEY_ENCRYPTION=change-me-in-production

# ============================================
# Optional: LDAP Configuration
# ============================================
# SEMAPHORE_LDAP_ENABLED=false
# SEMAPHORE_LDAP_HOST=ldap.example.com
# SEMAPHORE_LDAP_PORT=389
# SEMAPHORE_LDAP_DN=cn=admin,dc=example,dc=com
# SEMAPHORE_LDAP_PASSWORD=ldap_password
# SEMAPHORE_LDAP_BASE_DN=dc=example,dc=com
# SEMAPHORE_LDAP_USER_FILTER=(uid=%s)

# ============================================
# Optional: Webhook Configuration
# ============================================
# SEMAPHORE_WEBHOOK_URL=http://localhost:8080/webhook
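The encryption key mentioned in the file can be generated and pasted into `.env`; a 32-byte random key base64-encodes to a 44-character string:

```shell
# Generate a value for SEMAPHORE_ACCESS_KEY_ENCRYPTION
KEY=$(head -c32 /dev/urandom | base64)
echo "SEMAPHORE_ACCESS_KEY_ENCRYPTION=$KEY"
```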
103
deployment/legacy/stacks/stacks/semaphore/playbooks/ci-tests.yml
Normal file
@@ -0,0 +1,103 @@
---
# CI tests playbook for Semaphore
# Runs PHP tests and quality checks

- name: Run CI Tests and Quality Checks
  hosts: localhost
  gather_facts: yes  # facts are required for the os_family checks below
  vars:
    # Defaults; override via Semaphore template variables (extra vars)
    repo_url: "https://git.michaelschiemer.de/michael/michaelschiemer.git"
    repo_branch: "main"
    build_dir: "/tmp/ci-build"

  tasks:
    - name: Clean up build directory
      file:
        path: "{{ build_dir }}"
        state: absent

    - name: Checkout repository
      git:
        repo: "{{ repo_url }}"
        dest: "{{ build_dir }}"
        version: "{{ repo_branch }}"
        force: yes
      register: git_result

    - name: Display checked out commit
      debug:
        msg: "Checked out commit: {{ git_result.after }}"

    - name: Download Composer installer if not present
      get_url:
        url: https://getcomposer.org/installer
        dest: /tmp/composer-installer.php
        mode: '0755'
      when: ansible_facts.os_family == "Debian"

    - name: Install Composer (Debian)
      shell: php /tmp/composer-installer.php && mv composer.phar /usr/local/bin/composer
      when: ansible_facts.os_family == "Debian"
      args:
        creates: /usr/local/bin/composer

    - name: Install PHP dependencies
      command: composer install --no-interaction --prefer-dist --optimize-autoloader --ignore-platform-req=php
      args:
        chdir: "{{ build_dir }}"
      register: composer_result

    - name: Display composer output
      debug:
        var: composer_result.stdout_lines
      when: composer_result.stdout_lines is defined

    - name: Run PHP tests
      command: ./vendor/bin/pest
      args:
        chdir: "{{ build_dir }}"
      register: test_result
      ignore_errors: yes

    - name: Display test results
      debug:
        msg: "{{ test_result.stdout_lines }}"
      when: test_result.stdout_lines is defined

    - name: Run PHPStan
      command: composer phpstan
      args:
        chdir: "{{ build_dir }}"
      register: phpstan_result
      ignore_errors: yes
      when: test_result.rc == 0

    - name: Display PHPStan results
      debug:
        msg: "{{ phpstan_result.stdout_lines }}"
      when: phpstan_result.stdout_lines is defined and phpstan_result.rc == 0

    - name: Run code style check
      command: composer cs
      args:
        chdir: "{{ build_dir }}"
      register: cs_result
      ignore_errors: yes
      when: test_result.rc == 0

    - name: Display code style results
      debug:
        msg: "{{ cs_result.stdout_lines }}"
      when: cs_result.stdout_lines is defined

    - name: Fail if tests failed
      fail:
        msg: "Tests failed! Check output above."
      when: test_result.rc != 0

    - name: Summary
      debug:
        msg:
          - "✅ CI tests completed successfully!"
          - "Commit: {{ git_result.after }}"
          - "Branch: {{ repo_branch }}"
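Outside Semaphore, the same playbook can be exercised directly with `ansible-playbook`, overriding the defaults via extra vars (a sketch; the path matches the container mount used by this stack):

```shell
ansible-playbook /tmp/semaphore/playbooks/ci-tests.yml \
  -e repo_branch=staging \
  -e build_dir=/tmp/ci-build
```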
@@ -0,0 +1,129 @@
---
# Production deployment playbook for Semaphore
# Deploys the application to the production server

- name: Deploy to Production
  hosts: production
  gather_facts: yes
  become: yes
  vars:
    # Defaults; override via Semaphore template variables (extra vars)
    registry_url: "registry.michaelschiemer.de"
    image_name: "framework"
    image_tag: "latest"
    registry_user: "admin"
    # registry_password must be supplied as an extra var (e.g. a Semaphore secret)
    deployment_path: "~/deployment/stacks/application"
    repo_url: "https://git.michaelschiemer.de/michael/michaelschiemer.git"

  tasks:
    - name: Ensure registry_password was provided
      assert:
        that: registry_password is defined
        fail_msg: "registry_password must be passed as an extra var"

    - name: Display deployment info
      debug:
        msg:
          - "Starting production deployment..."
          - "Host: {{ inventory_hostname }}"
          - "Registry: {{ registry_url }}"
          - "Image: {{ image_name }}:{{ image_tag }}"
          - "Path: {{ deployment_path }}"

    - name: Ensure deployment directory exists
      file:
        path: "{{ deployment_path }}"
        state: directory
        mode: '0755'

    - name: Login to Docker registry
      docker_login:
        username: "{{ registry_user }}"
        password: "{{ registry_password }}"
        registry_url: "{{ registry_url }}"

    - name: Pull Docker image
      docker_image:
        name: "{{ registry_url }}/{{ image_name }}:{{ image_tag }}"
        source: pull

    - name: Check if docker-compose.base.yml exists
      stat:
        path: "{{ deployment_path }}/docker-compose.base.yml"
      register: base_compose_exists

    - name: Check if docker-compose.production.yml exists
      stat:
        path: "{{ deployment_path }}/docker-compose.production.yml"
      register: production_compose_exists

    - name: Copy docker-compose files if missing
      copy:
        src: "{{ item.src }}"
        dest: "{{ deployment_path }}/{{ item.dest }}"
        mode: '0644'
      loop:
        - { src: "docker-compose.base.yml", dest: "docker-compose.base.yml" }
        - { src: "docker-compose.production.yml", dest: "docker-compose.production.yml" }
      when: (item.dest == "docker-compose.base.yml" and not base_compose_exists.stat.exists) or
            (item.dest == "docker-compose.production.yml" and not production_compose_exists.stat.exists)

    - name: Update docker-compose.production.yml with new image
      replace:
        path: "{{ deployment_path }}/docker-compose.production.yml"
        regexp: 'image:.*{{ image_name }}:.*'
        replace: 'image: {{ registry_url }}/{{ image_name }}:{{ image_tag }}'

    - name: Update docker-compose.production.yml with new image (digest format)
      replace:
        path: "{{ deployment_path }}/docker-compose.production.yml"
        regexp: 'image:.*{{ image_name }}@.*'
        replace: 'image: {{ registry_url }}/{{ image_name }}:{{ image_tag }}'

    - name: Ensure Docker networks exist
      docker_network:
        name: traefik-public
        state: present
      ignore_errors: yes

    - name: Deploy services
      docker_compose_v2:
        project_src: "{{ deployment_path }}"
        files:
          - "{{ deployment_path }}/docker-compose.base.yml"
          - "{{ deployment_path }}/docker-compose.production.yml"
        pull: missing
        state: present
        recreate: always

    - name: Wait for services to start
      pause:
        seconds: 15

    - name: Check container status
      command: docker compose -f {{ deployment_path }}/docker-compose.base.yml -f {{ deployment_path }}/docker-compose.production.yml ps
      register: container_status

    - name: Display container status
      debug:
        var: container_status.stdout_lines

    - name: Health check
      uri:
        url: https://michaelschiemer.de/health
        method: GET
        status_code: [200, 201, 204]
        validate_certs: no
      register: health_result
      retries: 10
      delay: 10
      until: health_result.status == 200
      ignore_errors: yes

    - name: Display health check result
      debug:
        msg: "{{ '✅ Health check passed!' if health_result.status | default(0) == 200 else '⚠️ Health check failed' }}"

    - name: Summary
      debug:
        msg:
          - "✅ Production deployment completed!"
          - "Image: {{ registry_url }}/{{ image_name }}:{{ image_tag }}"
          - "URL: https://michaelschiemer.de"
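Run outside Semaphore, the deployment needs the registry password passed explicitly. A sketch (the playbook filename and `inventory.ini`, which would define the `production` group, are shown for illustration):

```shell
ansible-playbook -i inventory.ini deploy-production.yml \
  -e registry_password="$REGISTRY_PASSWORD" \
  -e image_tag=latest
```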
@@ -0,0 +1,109 @@
---
# Staging deployment playbook for Semaphore
# Deploys the application to the staging server

- name: Deploy to Staging
  hosts: staging
  gather_facts: yes
  become: yes
  vars:
    # Defaults; override via Semaphore template variables (extra vars)
    registry_url: "registry.michaelschiemer.de"
    image_name: "framework"
    image_tag: "latest"
    registry_user: "admin"
    # registry_password must be supplied as an extra var (e.g. a Semaphore secret)
    deployment_path: "~/deployment/stacks/staging"
    repo_url: "https://git.michaelschiemer.de/michael/michaelschiemer.git"

  tasks:
    - name: Ensure registry_password was provided
      assert:
        that: registry_password is defined
        fail_msg: "registry_password must be passed as an extra var"

    - name: Display deployment info
      debug:
        msg:
          - "Starting staging deployment..."
          - "Host: {{ inventory_hostname }}"
          - "Registry: {{ registry_url }}"
          - "Image: {{ image_name }}:{{ image_tag }}"
          - "Path: {{ deployment_path }}"

    - name: Ensure deployment directory exists
      file:
        path: "{{ deployment_path }}"
        state: directory
        mode: '0755'

    - name: Login to Docker registry
      docker_login:
        username: "{{ registry_user }}"
        password: "{{ registry_password }}"
        registry_url: "{{ registry_url }}"

    - name: Pull Docker image
      docker_image:
        name: "{{ registry_url }}/{{ image_name }}:{{ image_tag }}"
        source: pull

    - name: Copy docker-compose files (always update)
      copy:
        src: "{{ item.src }}"
        dest: "{{ deployment_path }}/{{ item.dest }}"
        mode: '0644'
        force: yes
      loop:
        - { src: "docker-compose.base.yml", dest: "docker-compose.base.yml" }
        - { src: "docker-compose.staging.yml", dest: "docker-compose.staging.yml" }

    - name: Update docker-compose.staging.yml with new image
      replace:
        path: "{{ deployment_path }}/docker-compose.staging.yml"
        regexp: 'image:.*{{ image_name }}:.*'
        replace: 'image: {{ registry_url }}/{{ image_name }}:{{ image_tag }}'

    - name: Ensure Docker networks exist
      docker_network:
        name: "{{ item }}"
        state: present
      loop:
        - traefik-public
        - staging-internal
      ignore_errors: yes

    - name: Deploy services
      docker_compose_v2:
        project_src: "{{ deployment_path }}"
        files:
          - "{{ deployment_path }}/docker-compose.base.yml"
          - "{{ deployment_path }}/docker-compose.staging.yml"
        pull: missing
        state: present
        recreate: always

    - name: Wait for services to start
      pause:
        seconds: 15

    - name: Check container status
      command: docker compose -f {{ deployment_path }}/docker-compose.base.yml -f {{ deployment_path }}/docker-compose.staging.yml ps
      register: container_status

    - name: Display container status
      debug:
        var: container_status.stdout_lines

    - name: Summary
      debug:
        msg:
          - "✅ Staging deployment completed!"
          - "Image: {{ registry_url }}/{{ image_name }}:{{ image_tag }}"
          - "URL: https://staging.michaelschiemer.de"
@@ -0,0 +1,103 @@
---
# Docker build playbook for Semaphore
# Builds the Docker image and pushes it to the registry

- name: Build and Push Docker Image
  hosts: localhost
  gather_facts: no
  vars:
    # Defaults; override via Semaphore template variables (extra vars)
    repo_url: "https://git.michaelschiemer.de/michael/michaelschiemer.git"
    repo_branch: "main"
    build_dir: "/tmp/ci-build"
    registry_url: "registry.michaelschiemer.de"
    image_name: "framework"
    image_tag: "latest"
    registry_user: "admin"
    # registry_password must be supplied as an extra var (e.g. a Semaphore secret)

  tasks:
    - name: Ensure registry_password was provided
      assert:
        that: registry_password is defined
        fail_msg: "registry_password must be passed as an extra var"

    - name: Clean up build directory
      file:
        path: "{{ build_dir }}"
        state: absent

    - name: Checkout repository
      git:
        repo: "{{ repo_url }}"
        dest: "{{ build_dir }}"
        version: "{{ repo_branch }}"
        force: yes
      register: git_result

    - name: Get short commit SHA
      shell: echo "{{ git_result.after }}" | cut -c1-7
      register: short_sha

    - name: Generate image tags
      set_fact:
        tags:
          - "{{ registry_url }}/{{ image_name }}:latest"
          - "{{ registry_url }}/{{ image_name }}:{{ image_tag }}"
          - "{{ registry_url }}/{{ image_name }}:git-{{ short_sha.stdout }}"

    - name: Display image tags
      debug:
        msg:
          - "Building Docker image..."
          - "Registry: {{ registry_url }}"
          - "Image: {{ image_name }}"
          - "Tags: {{ tags | join(', ') }}"

    - name: Ensure Docker is available
      command: docker --version
      register: docker_version

    - name: Display Docker version
      debug:
        msg: "Docker version: {{ docker_version.stdout }}"

    - name: Login to Docker registry
      docker_login:
        username: "{{ registry_user }}"
        password: "{{ registry_password }}"
        registry_url: "{{ registry_url }}"
      register: login_result

    - name: Verify registry login
      debug:
        msg: "✅ Successfully logged in to {{ registry_url }}"
      when: login_result is succeeded

    - name: Build Docker image
      docker_image:
        name: "{{ registry_url }}/{{ image_name }}"
        tag: "{{ image_tag }}"
        source: build
        build:
          path: "{{ build_dir }}"
          dockerfile: Dockerfile.production
        push: yes
        state: present
      register: build_result

    - name: Tag image with additional tags
      docker_image:
        name: "{{ registry_url }}/{{ image_name }}:{{ image_tag }}"
        repository: "{{ registry_url }}/{{ image_name }}"
        tag: "{{ item }}"
        source: local
        push: yes
        state: present
      loop:
        - "latest"
        - "git-{{ short_sha.stdout }}"
      when: build_result.changed

    - name: Summary
      debug:
        msg:
          - "✅ Docker image built and pushed successfully!"
          - "Registry: {{ registry_url }}"
          - "Image: {{ image_name }}"
          - "Tags: {{ tags | join(', ') }}"
          - "Commit: {{ git_result.after }}"
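The tag derivation in the playbook is plain string slicing; the equivalent shell, with an illustrative SHA (in CI the value comes from the git checkout):

```shell
FULL_SHA="3f2a9c8d1e4b7a6f5c0d9e8b7a6f5c4d3e2b1a0f"   # illustrative commit SHA
SHORT_SHA=$(echo "$FULL_SHA" | cut -c1-7)
IMAGE="registry.michaelschiemer.de/framework:git-$SHORT_SHA"
echo "$IMAGE"   # → registry.michaelschiemer.de/framework:git-3f2a9c8
```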
178
deployment/legacy/stacks/stacks/staging/README.md
Normal file
@@ -0,0 +1,178 @@
# Staging Stack - Staging Environment

## Overview

Separate staging environment for testing features before production deployment.

**URL**: https://staging.michaelschiemer.de

**Features**:
- Separate containers and volumes from production
- Auto-deployment from `staging` branch
- Shared Traefik network (same SSL certificates)
- Separate Redis instance
- Can use a separate database or share with production

## Services

- **staging-app** (PHP-FPM) - Application runtime
- **staging-nginx** (Nginx) - Web server for staging.michaelschiemer.de
- **staging-redis** (Redis 7) - Separate cache/session/queue backend
- **staging-queue-worker** - Background job processor
- **staging-scheduler** - Cron job executor

## Prerequisites

1. **Traefik Stack Running** (shared with production)
2. **PostgreSQL Staging Stack Running** (separate from production)
   ```bash
   cd ../postgresql-staging
   docker compose up -d
   ```
3. **DNS Configuration**: `staging.michaelschiemer.de` points to the server IP
4. **Docker Networks**:
   ```bash
   docker network create traefik-public              # Shared with production
   docker network create staging-internal            # Staging-only
   docker network create postgres-staging-internal   # Staging database network
   ```

## Configuration

### 1. Create Environment File

```bash
cp .env.example .env
```

### 2. Configure Environment Variables

```env
# Staging-specific
APP_ENV=staging
APP_DEBUG=true
APP_URL=https://staging.michaelschiemer.de

# Database - using separate staging database stack
DB_DATABASE=michaelschiemer_staging
DB_HOST=postgres-staging
DB_PORT=5432
DB_USERNAME=postgres
DB_PASSWORD=<password>

# Redis (separate instance)
# Note: REDIS_PASSWORD is loaded from a Docker Secret via REDIS_PASSWORD_FILE
# See secrets/redis_password.txt
CACHE_PREFIX=staging

# Git
GIT_REPOSITORY_URL=https://git.michaelschiemer.de/michael/michaelschiemer.git
GIT_BRANCH=staging
# Note: GIT_TOKEN is loaded from a Docker Secret via GIT_TOKEN_FILE
# See secrets/git_token.txt
```

## Deployment

### Initial Setup

The staging environment uses `docker-compose.staging.yml` in the repository root, which acts as an override for `docker-compose.base.yml`.

```bash
# Create staging stack directory on the server
mkdir -p ~/deployment/stacks/staging

# Copy docker-compose files from the repository
cp docker-compose.base.yml ~/deployment/stacks/staging/
cp docker-compose.staging.yml ~/deployment/stacks/staging/

# Create secrets directory and files
mkdir -p ~/deployment/stacks/staging/secrets
# Create secret files (redis_password.txt, db_user_password.txt, app_key.txt, etc.)
# These files should contain the actual secret values

# Ensure networks exist
docker network create traefik-public 2>/dev/null || true
docker network create staging-internal 2>/dev/null || true

# Start staging stack
cd ~/deployment/stacks/staging
docker compose -f docker-compose.base.yml -f docker-compose.staging.yml up -d
```

### Auto-Deployment

**Automatic**: A push to the `staging` branch triggers a build, which then auto-deploys to staging.

**Manual**: Use the "Deploy to Staging" workflow in Gitea Actions.

### Verify Deployment

```bash
# Check container status
cd ~/deployment/stacks/staging
docker compose -f docker-compose.base.yml -f docker-compose.staging.yml ps

# Test staging URL
curl https://staging.michaelschiemer.de/health

# View logs
docker compose logs -f staging-app
docker compose logs -f staging-nginx
```

## Differences from Production

| Aspect | Production | Staging |
|--------|-----------|---------|
| **Domain** | michaelschiemer.de | staging.michaelschiemer.de |
| **Branch** | main | staging |
| **APP_ENV** | production | staging |
| **APP_DEBUG** | false | true |
| **Redis** | Separate instance | Separate instance |
| **Database** | michaelschiemer (postgres-production) | michaelschiemer_staging (postgres-staging) |
| **Database Stack** | postgresql-production (separate) | postgresql-staging (separate) |
| **Volumes** | app-* | staging-* |
| **Network** | app-internal | staging-internal |

## Workflow

1. **Development**: Feature branch → merge to `staging`
2. **Auto-Build**: Push to `staging` triggers `build-image.yml`
3. **Auto-Deploy**: A successful build auto-deploys to staging (via `deploy-staging.yml`)
4. **Testing**: Test on https://staging.michaelschiemer.de
5. **Production**: Merge `staging` → `main` → deploy to production

## Troubleshooting

### Staging not accessible

```bash
# Check Traefik routing
docker logs traefik | grep staging

# Verify container is running
docker ps | grep staging

# Check nginx logs
docker logs staging-nginx
```

### Code not updating

```bash
# Force code pull in staging-app
docker exec staging-app bash -c "cd /var/www/html && git pull origin staging"
cd ~/deployment/stacks/staging
docker compose -f docker-compose.base.yml -f docker-compose.staging.yml restart staging-app staging-nginx
```

## Cleanup

To remove the staging environment:

```bash
cd ~/deployment/stacks/staging
docker compose -f docker-compose.base.yml -f docker-compose.staging.yml down -v  # Removes volumes too
docker network rm staging-internal
```
@@ -0,0 +1,129 @@
# Nginx Configuration for PHP-FPM Application
# Optimized for production with security headers and performance tuning

# Upstream PHP-FPM
upstream php-upstream {
    server staging-app:9000;
}

# Rate limiting zones
limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;
limit_req_zone $binary_remote_addr zone=general_limit:10m rate=30r/s;

server {
    listen 80;
    server_name _;

    root /var/www/html/public;
    index index.php index.html;

    # Security Headers (additional to Traefik middleware)
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header X-XSS-Protection "1; mode=block" always;
    add_header Referrer-Policy "strict-origin-when-cross-origin" always;

    # Client body size limit (adjust for file uploads)
    client_max_body_size 100M;

    # Gzip compression
    gzip on;
    gzip_vary on;
    gzip_min_length 1024;
    gzip_comp_level 6;
    gzip_types
        text/plain
        text/css
        text/javascript
        application/json
        application/javascript
        application/x-javascript
        text/xml
        application/xml
        application/xml+rss
        image/svg+xml;

    # Logging
    access_log /var/log/nginx/access.log combined;
    error_log /var/log/nginx/error.log warn;

    # Health check endpoint (for Docker healthcheck)
    location /health {
        access_log off;
        return 200 "healthy\n";
        add_header Content-Type text/plain;
    }

    # Deny access to sensitive files
    location ~ /\. {
        deny all;
        access_log off;
        log_not_found off;
    }

    location ~ ^/(\.env|\.git|\.gitignore|\.gitattributes|composer\.(json|lock)|package(-lock)?\.json) {
        deny all;
        access_log off;
        log_not_found off;
    }

    # Static files - serve directly for performance
    location ~* \.(jpg|jpeg|png|gif|ico|css|js|svg|woff|woff2|ttf|eot|webp)$ {
        expires 1y;
        add_header Cache-Control "public, immutable";
        access_log off;
        try_files $uri =404;
    }

    # API endpoints - rate limited
    location ^~ /api/ {
        limit_req zone=api_limit burst=20 nodelay;
        limit_req_status 429;

        try_files $uri $uri/ /index.php?$query_string;
    }

    # PHP-FPM processing
    location ~ \.php$ {
        limit_req zone=general_limit burst=50 nodelay;

        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;

        fastcgi_pass php-upstream;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;

        # FastCGI parameters
        include fastcgi_params;

        # PHP-FPM timeouts
        fastcgi_connect_timeout 60s;
        fastcgi_send_timeout 180s;
        fastcgi_read_timeout 180s;

        # Buffer settings
        fastcgi_buffer_size 128k;
        fastcgi_buffers 256 16k;
        fastcgi_busy_buffers_size 256k;
        fastcgi_temp_file_write_size 256k;

        # Hide PHP version
        fastcgi_hide_header X-Powered-By;
    }

    # Fallback to index.php for non-existent files (framework routing)
    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    # Deny access to storage directory (except public subdirectory)
    location ^~ /storage {
        deny all;
    }

    location ^~ /storage/public {
        allow all;
    }
}
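The `api_limit` zone above allows 10 requests/second with a burst of 20; excess requests get 429. A rough way to observe this from outside (a sketch; the host and endpoint are taken from this staging stack and are illustrative):

```shell
# Fire 30 rapid requests; roughly the first 20 (burst) pass, the rest return 429
for i in $(seq 1 30); do
  curl -s -o /dev/null -w "%{http_code}\n" https://staging.michaelschiemer.de/api/health
done | sort | uniq -c
```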
20
deployment/legacy/stacks/stacks/traefik/.env.example
Normal file
@@ -0,0 +1,20 @@
# Traefik Configuration
# Copy this file to .env and adjust values

# Timezone
TZ=Europe/Berlin

# Let's Encrypt Email
ACME_EMAIL=kontakt@michaelschiemer.de

# Domain
DOMAIN=michaelschiemer.de

# Dashboard Authentication
# Generate password hash with: htpasswd -nb admin your_password
# Replace $ with $$ in docker-compose.yml
TRAEFIK_DASHBOARD_USER=admin
TRAEFIK_DASHBOARD_PASSWORD_HASH=

# Log Level (DEBUG, INFO, WARN, ERROR)
LOG_LEVEL=INFO
443
deployment/legacy/stacks/stacks/traefik/README.md
Normal file
@@ -0,0 +1,443 @@
# Traefik Stack - Reverse Proxy with SSL

## Overview

Traefik acts as the central reverse proxy for all services, handling:
- Automatic SSL certificate generation via Let's Encrypt
- HTTP to HTTPS redirection
- Service discovery via Docker labels
- Security headers and compression
- Rate limiting and access control

## Services

- **traefik.michaelschiemer.de** - Traefik Dashboard
  - Protected by BasicAuth authentication
  - Accessible via HTTPS with Let's Encrypt certificate

## Local Development

For local development, use the separate local configuration to avoid port conflicts and Let's Encrypt errors:

### Quick Start (Local Development)

```bash
# Ensure Docker network exists
docker network create traefik-public 2>/dev/null || true

# Start Traefik with local configuration
cd deployment/stacks/traefik
docker compose -f docker-compose.local.yml up -d

# Check logs
docker compose -f docker-compose.local.yml logs -f

# Access dashboard at http://localhost:8093/dashboard/
# Note: Dashboard is served on the API port (8093) when api.insecure=true
```

### Local Development Configuration

The local configuration (`docker-compose.local.yml` and `traefik.local.yml`) differs from production:

- **Bridge network** instead of `host` mode (avoids port conflicts)
- **Port mappings**: `8081:80` (HTTP) and `8093:8080` (API/Dashboard)
  - Note: HTTPS is not needed locally - this avoids port conflicts with the web container (8443:443)
- **HTTP-only** (no ACME/Let's Encrypt) for local development
- **Dashboard**: Accessible at `http://localhost:8093/dashboard/` (HTTP, no authentication)
  - Also available: `http://localhost:8093/api/rawdata` and `http://localhost:8093/api/http/routers`
  - Note: The dashboard is served on the API port (8093) when `api.insecure=true` in `traefik.local.yml`
- **No `acme.json`** required
- **Console logging** (human-readable) instead of JSON file logs

### Local Development vs Production

| Feature | Local (`docker-compose.local.yml`) | Production (`docker-compose.yml`) |
|---------|-----------------------------------|----------------------------------|
| Network Mode | Bridge | Bridge |
| Ports | 8081:80, 8093:8080 (HTTP only) | 80:80, 443:443, 2222:2222 |
| SSL/TLS | HTTP-only | HTTPS with Let's Encrypt |
| Dashboard | `http://localhost:8093/dashboard/` | `https://traefik.michaelschiemer.de` |
| Authentication | None (local dev) | BasicAuth |
| Logging | Console (human-readable) | JSON files |
| ACME | Disabled | Enabled |

### Troubleshooting Local Development

**Container restarts in loop:**
- Check if ports 8081 or 8093 are already in use: `netstat -tlnp | grep -E ':(8081|8093)' || ss -tlnp | grep -E ':(8081|8093)'`
- Verify the Docker network exists: `docker network ls | grep traefik-public`
- Check logs: `docker compose -f docker-compose.local.yml logs -f traefik`

**Services not accessible through Traefik:**
- Ensure services are on the `traefik-public` network
- Verify Traefik labels are correctly configured
- Check that services are running: `docker compose ps`
## Prerequisites

1. **Docker Network**
   ```bash
   docker network create traefik-public
   ```

2. **ACME Storage File**
   ```bash
   touch acme.json
   chmod 600 acme.json
   ```

3. **DNS Configuration**
   Point these domains to your server IP (94.16.110.151):
   - `michaelschiemer.de`
   - `*.michaelschiemer.de` (wildcard)

## Configuration

### 1. Create Environment File

```bash
cp .env.example .env
```

### 2. Generate Dashboard Password

```bash
# Generate password hash
htpasswd -nb admin your_secure_password

# Example output:
# admin:$apr1$8kj9d7lj$r.x5jhLVPLuCDLvJ6x0Hd0

# Important: In docker-compose.yml, replace $ with $$
# admin:$$apr1$$8kj9d7lj$$r.x5jhLVPLuCDLvJ6x0Hd0
```

Update the `traefik.http.middlewares.traefik-auth.basicauth.users` label in `docker-compose.yml`.
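The `$` to `$$` escaping can also be scripted instead of edited by hand; a small sketch using the example hash from above:

```shell
# Double each $ so docker-compose does not treat the hash as variable interpolation
HASH='admin:$apr1$8kj9d7lj$r.x5jhLVPLuCDLvJ6x0Hd0'
ESCAPED=$(printf '%s' "$HASH" | sed 's/\$/$$/g')
echo "$ESCAPED"
# admin:$$apr1$$8kj9d7lj$$r.x5jhLVPLuCDLvJ6x0Hd0
```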
### 3. Adjust Configuration (Optional)

Edit `traefik.yml` for:
- Log levels
- Certificate resolvers
- Additional entry points
- Metrics configuration
## Deployment

### Initial Setup

```bash
# Create network
docker network create traefik-public

# Create acme.json
touch acme.json
chmod 600 acme.json

# Create log directories
mkdir -p logs

# Start Traefik
docker compose up -d
```

### Verify Deployment

```bash
# Check container status
docker compose ps

# Check logs
docker compose logs -f

# Test dashboard access
curl -I https://traefik.michaelschiemer.de

# Check certificate
openssl s_client -connect traefik.michaelschiemer.de:443 -servername traefik.michaelschiemer.de < /dev/null
```
## Middleware Configuration

Traefik provides several reusable middlewares in `dynamic/middlewares.yml`:

### Security Headers
```yaml
labels:
  - "traefik.http.routers.myapp.middlewares=security-headers-global@file"
```

### Rate Limiting
```yaml
labels:
  # Strict: 50 req/s
  - "traefik.http.routers.myapp.middlewares=rate-limit-strict@file"

  # Moderate: 100 req/s
  - "traefik.http.routers.myapp.middlewares=rate-limit-moderate@file"

  # Lenient: 200 req/s
  - "traefik.http.routers.myapp.middlewares=rate-limit-lenient@file"
```

### Compression
```yaml
labels:
  - "traefik.http.routers.myapp.middlewares=gzip-compression@file"
```

### VPN-Only Access (WireGuard Network)
```yaml
labels:
  # Restrict access to WireGuard VPN network only (10.8.0.0/24)
  - "traefik.http.routers.myapp.middlewares=vpn-only@file"

  # Combined: VPN-only + BasicAuth (order matters - VPN check first, then BasicAuth)
  - "traefik.http.routers.myapp.middlewares=vpn-only@file,traefik-auth"
```

### Middleware Chains
```yaml
labels:
  # Default chain: Security + Compression
  - "traefik.http.routers.myapp.middlewares=default-chain@file"

  # Admin chain: Security + Compression + Rate Limiting
  - "traefik.http.routers.myapp.middlewares=admin-chain@file"
```
## Service Integration

### Example Service Configuration

Add these labels to any Docker service to expose it through Traefik:

```yaml
services:
  myapp:
    image: myapp:latest
    networks:
      - traefik-public
    labels:
      # Enable Traefik
      - "traefik.enable=true"

      # Router configuration
      - "traefik.http.routers.myapp.rule=Host(`app.michaelschiemer.de`)"
      - "traefik.http.routers.myapp.entrypoints=websecure"
      - "traefik.http.routers.myapp.tls=true"
      - "traefik.http.routers.myapp.tls.certresolver=letsencrypt"

      # Service configuration
      - "traefik.http.services.myapp.loadbalancer.server.port=80"

      # Middleware (optional)
      - "traefik.http.routers.myapp.middlewares=default-chain@file"

networks:
  traefik-public:
    external: true
```
## Monitoring

### Dashboard Access

Access the Traefik dashboard at: https://traefik.michaelschiemer.de

Default credentials (change in production):
- Username: `admin`
- Password: (set via htpasswd)

### Logs

```bash
# Access logs (HTTP requests)
tail -f logs/access.log

# Traefik logs (errors, warnings)
tail -f logs/traefik.log

# Container logs
docker compose logs -f traefik
```

### Prometheus Metrics

Traefik exposes Prometheus metrics for monitoring:

```yaml
# Add to Prometheus scrape config
- job_name: 'traefik'
  static_configs:
    - targets: ['traefik:8082']
```
## Troubleshooting

### Certificate Issues

```bash
# Check acme.json permissions
ls -la acme.json
# Should be: -rw------- (600)

# View certificate status
docker compose logs traefik | grep -i "certificate"

# Force certificate renewal
rm acme.json
touch acme.json
chmod 600 acme.json
docker compose restart
```

### DNS Issues

```bash
# Verify DNS resolution
dig michaelschiemer.de
dig git.michaelschiemer.de

# Check from external
nslookup michaelschiemer.de 8.8.8.8
```

### Service Not Accessible

```bash
# Check Traefik can reach service
docker network inspect traefik-public

# Verify service labels
docker inspect <container_name> | grep -A 20 Labels

# Check Traefik logs for routing errors
docker compose logs traefik | grep -i error
```

### Port Conflicts

```bash
# Check if ports 80/443 are free
sudo netstat -tlnp | grep -E ':80|:443'

# Stop conflicting services
sudo systemctl stop nginx  # or apache2
```
## Security Hardening

### 1. IP Whitelisting

Uncomment and configure in `dynamic/middlewares.yml`:

```yaml
admin-whitelist:
  ipWhiteList:
    sourceRange:
      - "your.vpn.ip.range/32"
      - "10.0.0.0/8"
```

### 2. Strong Dashboard Password

```bash
# Generate strong password
openssl rand -base64 32

# Create hash
htpasswd -nb admin "your_strong_password"
```

### 3. Rate Limiting

Apply rate limiting to sensitive endpoints:

```yaml
labels:
  - "traefik.http.routers.admin.middlewares=rate-limit-strict@file"
```

### 4. DDoS Protection

```yaml
# In traefik.yml - add entry point middleware
entryPoints:
  websecure:
    address: ":443"
    http:
      middlewares:
        - rate-limit-moderate@file
```
## Backup

### Important Files

- `acme.json` - SSL certificates
- `traefik.yml` - Static configuration
- `dynamic/` - Dynamic configuration

```bash
# Backup certificates
cp acme.json acme.json.backup.$(date +%Y%m%d)

# Backup configuration
tar -czf traefik-config-backup.tar.gz traefik.yml dynamic/
```

## Updates

```bash
# Pull latest image
docker compose pull

# Restart with new image
docker compose up -d

# Verify
docker compose ps
docker compose logs -f
```

## Performance Tuning

### Connection Limits

In `traefik.yml`:

```yaml
entryPoints:
  websecure:
    transport:
      respondingTimeouts:
        readTimeout: 60s
        writeTimeout: 60s
      lifeCycle:
        requestAcceptGraceTimeout: 0s
        graceTimeOut: 10s
```

### Resource Limits

In `docker-compose.yml`:

```yaml
deploy:
  resources:
    limits:
      memory: 512M
      cpus: '0.5'
    reservations:
      memory: 256M
      cpus: '0.25'
```

## Support

For issues with Traefik configuration:
1. Check the official Traefik documentation: https://doc.traefik.io/traefik/
2. Review logs: `docker compose logs -f`
3. Verify network connectivity: `docker network inspect traefik-public`
@@ -0,0 +1,60 @@
# Local Development Configuration for Traefik
# Usage: docker compose -f docker-compose.local.yml up -d
#
# This configuration is optimized for local development:
# - Bridge network instead of host mode
# - Port mappings: 8081:80 (HTTP) and 8093:8080 (API/Dashboard)
#   Note: 8443:443 is used by the web container, and we don't need HTTPS for Traefik locally
# - No ACME/Let's Encrypt (HTTP-only)
# - Simplified healthcheck

services:
  traefik:
    image: traefik:v3.0
    container_name: traefik-local
    restart: unless-stopped
    security_opt:
      - no-new-privileges:true
    # Use bridge network for local development (avoids port conflicts)
    # Ports 80/443 might be in use by other services
    # For local development, we only use HTTP (no HTTPS needed)
    # Note: 8443:443 is used by the web container
    ports:
      - "8081:80"    # HTTP on port 80 (mapped to host port 8081)
      - "8093:8080"  # Traefik API entrypoint (for api.insecure=true dashboard)
    environment:
      - TZ=Europe/Berlin
    command:
      # Load static configuration file
      - "--configFile=/traefik.yml"
    volumes:
      # Docker socket for service discovery
      - /var/run/docker.sock:/var/run/docker.sock:ro
      # Static configuration for local development
      - ./traefik.local.yml:/traefik.yml:ro
      # Dynamic configuration (shared with production config)
      # Note: These configs reference the letsencrypt resolver, which is not configured locally.
      # This causes harmless errors in the logs but doesn't break functionality.
      - ./dynamic:/dynamic:ro
    networks:
      - traefik-public
    labels:
      # Note: With api.insecure=true, Traefik serves the dashboard at /dashboard/
      # and /api/ directly on the API entrypoint, so no router labels are needed.
      - "traefik.enable=true"
    healthcheck:
      # Use Traefik's built-in healthcheck command (works in minimal image)
      test: ["CMD", "traefik", "healthcheck", "--ping"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 15s

networks:
  traefik-public:
    external: true
    # Create this network if it doesn't exist:
    # docker network create traefik-public
73
deployment/legacy/stacks/stacks/traefik/docker-compose.yml
Normal file
@@ -0,0 +1,73 @@
services:
  traefik:
    image: traefik:v3.0
    container_name: traefik
    restart: unless-stopped
    security_opt:
      - no-new-privileges:true
    # Use bridge network mode for reliable service discovery
    # Service discovery works correctly with Docker labels in bridge mode
    ports:
      - "80:80"
      - "443:443"
      - "2222:2222"  # Gitea SSH
    networks:
      - traefik-public
    environment:
      - TZ=Europe/Berlin
    command:
      # Load static configuration file
      - "--configFile=/traefik.yml"
      # Increase timeouts for slow backends like Gitea
      - "--entrypoints.websecure.transport.respondingTimeouts.readTimeout=300s"
      - "--entrypoints.websecure.transport.respondingTimeouts.writeTimeout=300s"
      - "--entrypoints.websecure.transport.respondingTimeouts.idleTimeout=360s"
    volumes:
      # Docker socket for service discovery
      - /var/run/docker.sock:/var/run/docker.sock:ro
      # Static configuration
      - ./traefik.yml:/traefik.yml:ro
      # Dynamic configuration
      - ./dynamic:/dynamic:ro
      # SSL certificates
      - ./acme.json:/acme.json
      # Logs
      - ./logs:/logs
    labels:
      # Enable Traefik for itself
      - "traefik.enable=true"

      # Dashboard - BasicAuth protected
      - "traefik.http.routers.traefik-dashboard.rule=Host(`traefik.michaelschiemer.de`)"
      - "traefik.http.routers.traefik-dashboard.entrypoints=websecure"
      - "traefik.http.routers.traefik-dashboard.tls=true"
      - "traefik.http.routers.traefik-dashboard.tls.certresolver=letsencrypt"
      - "traefik.http.routers.traefik-dashboard.service=api@internal"
      - "traefik.http.routers.traefik-dashboard.middlewares=traefik-auth"

      # BasicAuth for dashboard
      - "traefik.http.middlewares.traefik-auth.basicauth.users=admin:$$apr1$$Of2wG3O5$$y8X1vEoIp9vpvx64mIalk/"

      # Note: ACME challenges are handled automatically by Traefik
      # when httpChallenge.entryPoint: web is set in traefik.yml
      # No explicit router needed - Traefik handles /.well-known/acme-challenge automatically

      # Global redirect to HTTPS (lower priority, matches everything else)
      # ACME challenges are excluded from the redirect automatically by Traefik;
      # the challenge path is also excluded explicitly to be safe
      - "traefik.http.routers.http-catchall.rule=HostRegexp(`{host:.+}`) && !PathPrefix(`/.well-known/acme-challenge`)"
      - "traefik.http.routers.http-catchall.entrypoints=web"
      - "traefik.http.routers.http-catchall.middlewares=redirect-to-https"
      - "traefik.http.routers.http-catchall.priority=1"
      - "traefik.http.middlewares.redirect-to-https.redirectscheme.scheme=https"
      - "traefik.http.middlewares.redirect-to-https.redirectscheme.permanent=true"
    healthcheck:
      test: ["CMD", "traefik", "healthcheck", "--ping"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 10s

networks:
  traefik-public:
    external: true
@@ -0,0 +1,17 @@
# Gitea Router Configuration
# Router definition for Gitea using File Provider

http:
  routers:
    gitea:
      rule: Host(`git.michaelschiemer.de`)
      entryPoints:
        - websecure
      service: gitea
      tls:
        certResolver: letsencrypt
      middlewares:
        - security-headers-global
        - gzip-compression
      priority: 100
@@ -0,0 +1,20 @@
# Gitea Service Configuration
# Service definition with proper timeouts for slow responses

http:
  services:
    gitea:
      loadBalancer:
        servers:
          - url: http://gitea:3000
        # Health check configuration
        healthCheck:
          path: /api/healthz
          interval: 30s
          timeout: 10s
          scheme: http
        # Response forwarding - flush immediately to stream responses
        responseForwarding:
          flushInterval: 0s
        # Pass host header to backend
        passHostHeader: true
@@ -0,0 +1,19 @@
# TCP Router Configuration for Gitea SSH
# Routes SSH traffic on port 2222 to the Gitea container
# Note: Gitea SSH is handled via port mapping in docker-compose.yml
# This file is kept for reference but may not be needed

tcp:
  routers:
    gitea-ssh:
      entryPoints:
        - gitea-ssh
      rule: "HostSNI(`*`)"
      service: gitea-ssh-service

  services:
    gitea-ssh-service:
      loadBalancer:
        servers:
          # Use container name in bridge network mode
          - address: "gitea:22"
@@ -0,0 +1,59 @@
# Dynamic Middleware Configuration

http:
  middlewares:
    # Security headers for all services
    security-headers-global:
      headers:
        frameDeny: true
        contentTypeNosniff: true
        browserXssFilter: true
        stsSeconds: 31536000
        stsIncludeSubdomains: true
        stsPreload: true
        forceSTSHeader: true
        customFrameOptionsValue: "SAMEORIGIN"
        contentSecurityPolicy: "default-src 'self'; script-src 'self' 'unsafe-inline' 'unsafe-eval'; style-src 'self' 'unsafe-inline';"
        referrerPolicy: "strict-origin-when-cross-origin"
        permissionsPolicy: "geolocation=(), microphone=(), camera=()"

    # Compression for better performance
    gzip-compression:
      compress:
        excludedContentTypes:
          - text/event-stream

    # Rate limiting - strict
    rate-limit-strict:
      rateLimit:
        average: 50
        burst: 25
        period: 1s

    # Rate limiting - moderate
    rate-limit-moderate:
      rateLimit:
        average: 100
        burst: 50
        period: 1s

    # Rate limiting - lenient
    rate-limit-lenient:
      rateLimit:
        average: 200
        burst: 100
        period: 1s

    # Chain multiple middlewares
    default-chain:
      chain:
        middlewares:
          - security-headers-global
          - gzip-compression

    admin-chain:
      chain:
        middlewares:
          - security-headers-global
          - gzip-compression
          - rate-limit-strict
@@ -0,0 +1,17 @@
http:
  routers:
    semaphore:
      rule: Host(`semaphore.michaelschiemer.de`)
      entryPoints:
        - websecure
      service: semaphore
      tls:
        certResolver: letsencrypt
      priority: 100
  services:
    semaphore:
      loadBalancer:
        # Use localhost port binding since Semaphore binds to 127.0.0.1
        # Port is configured via SEMAPHORE_PORT in .env (currently 9300)
        servers:
          - url: http://127.0.0.1:9300
98
deployment/legacy/stacks/stacks/traefik/traefik.local.yml
Normal file
@@ -0,0 +1,98 @@
# Local Development Configuration for Traefik
# This configuration is optimized for local development without Let's Encrypt/ACME

# Global Configuration
global:
  checkNewVersion: true
  sendAnonymousUsage: false

# API and Dashboard
# For local development, we enable insecure access on the API entrypoint (container port 8080,
# mapped to host port 8093 in docker-compose.local.yml). The dashboard is then automatically
# accessible at:
# - http://localhost:8093/dashboard/ (with trailing slash)
# - http://localhost:8093/api/rawdata
# - http://localhost:8093/api/http/routers
api:
  dashboard: true
  insecure: true  # HTTP-only for local development - enables direct dashboard access
  # Insecure mode serves the dashboard directly on the API entrypoint,
  # without needing router labels

# Entry Points
entryPoints:
  web:
    address: ":80"
    # No redirects for local development - HTTP is acceptable

  websecure:
    address: ":443"
    # Note: Even though we don't use HTTPS locally, we need this entrypoint
    # because dynamic configurations (gitea.yml, semaphore.yml) reference it.
    # We use HTTP only, but the entrypoint must exist to avoid errors.

  traefik:
    address: ":8080"
    # This entrypoint is used by api.insecure=true for dashboard access
    # (container port 8080, mapped to host port 8093 in docker-compose.local.yml)

# Certificate Resolvers
# Note: For local development, we don't configure ACME/Let's Encrypt.
# Dynamic configs (gitea.yml, semaphore.yml) that reference letsencrypt will show errors
# but won't break Traefik functionality. We can ignore these errors for local dev.
# If you need to test with real certificates locally, configure ACME manually.
# certificatesResolvers:
#   letsencrypt:
#     acme:
#       email: your-email@example.com
#       storage: /tmp/acme.json
#       caServer: "https://acme-staging-v02.api.letsencrypt.org/directory"

# Providers
providers:
  docker:
    endpoint: "unix:///var/run/docker.sock"
    exposedByDefault: false
    # Use Docker bridge network for local development
    network: traefik-public
    watch: true

  file:
    directory: /dynamic
    watch: true
    # Note: Dynamic configs (gitea.yml, semaphore.yml) will show errors
    # because they reference the letsencrypt resolver, which is not configured locally.
    # These errors are harmless and won't affect local development.

# Forwarded Headers Configuration
# Simplified for local development
forwardedHeaders:
  trustedIPs:
    - "127.0.0.1/32"   # Localhost
    - "172.17.0.0/16"  # Docker bridge network
    - "172.18.0.0/16"  # Docker user-defined networks
  insecure: true  # Allow insecure forwarded headers for local dev

# Logging - Console output for local development (easier to debug)
log:
  level: INFO
  format: common  # Human-readable format for local development

# Access Logs - Console output for local development
accessLog:
  format: common  # Human-readable format for local development

# Metrics (optional for local development)
# Can be enabled if needed for monitoring
# metrics:
#   prometheus:
#     addEntryPointsLabels: true
#     addRoutersLabels: true
#     addServicesLabels: true

# Ping endpoint for health checks
ping:
  entryPoint: web
98
deployment/legacy/stacks/stacks/traefik/traefik.yml
Normal file
@@ -0,0 +1,98 @@
# Static Configuration for Traefik v3.0
# Minimal configuration - only static settings

# Global Configuration
global:
  checkNewVersion: true
  sendAnonymousUsage: false

# API and Dashboard
api:
  dashboard: true
  insecure: false
  # Dashboard accessible via HTTPS router

# Entry Points
entryPoints:
  web:
    address: ":80"
    # No global redirect - ACME challenges need HTTP access
    # Redirects are handled per-router via middleware

  websecure:
    address: ":443"
    http:
      tls:
        certResolver: letsencrypt
        domains:
          - main: michaelschiemer.de
            sans:
              - "*.michaelschiemer.de"

  # TCP entrypoint for Gitea SSH
  gitea-ssh:
    address: ":2222"

# Certificate Resolvers
certificatesResolvers:
  letsencrypt:
    acme:
      email: kontakt@michaelschiemer.de
      storage: /acme.json
      caServer: https://acme-v02.api.letsencrypt.org/directory
      # Use HTTP-01 challenge (requires port 80 accessible)
      # Traefik automatically handles /.well-known/acme-challenge requests
      httpChallenge:
        entryPoint: web
      preferredChain: ""

# Providers
providers:
  docker:
    endpoint: "unix:///var/run/docker.sock"
    exposedByDefault: false
    # Bridge network mode - Traefik uses Docker service discovery via labels
    # Services must be on the same network (traefik-public) for discovery
    watch: true

  file:
    directory: /dynamic
    watch: true

# Forwarded Headers Configuration
forwardedHeaders:
  trustedIPs:
    - "127.0.0.1/32"
    - "172.17.0.0/16"
    - "172.18.0.0/16"
    - "10.8.0.0/24"
  insecure: false

# Logging
log:
  level: INFO
  filePath: /logs/traefik.log
  format: json

# Access Logs
accessLog:
  filePath: /logs/access.log
  format: json
  bufferingSize: 100
  filters:
    statusCodes:
      - "400-499"
      - "500-599"

# Metrics
metrics:
  prometheus:
    addEntryPointsLabels: true
    addRoutersLabels: true
    addServicesLabels: true

# Ping
ping:
  entryPoint: web
22
deployment/legacy/stacks/stacks/wireguard/.env.example
Normal file
@@ -0,0 +1,22 @@
# WireGuard VPN Configuration

# Server endpoint (auto-detected or set manually)
SERVERURL=auto

# WireGuard port
SERVERPORT=51820

# VPN network subnet
INTERNAL_SUBNET=10.8.0.0/24

# Allowed IPs (VPN network only - no split tunneling)
ALLOWEDIPS=10.8.0.0/24

# DNS configuration (use host DNS)
PEERDNS=auto

# Timezone
TZ=Europe/Berlin

# Peers (managed manually)
PEERS=0
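With `PEERS=0`, peer configs are maintained by hand under the container's `/config` directory; a minimal peer sketch illustrating the `ALLOWEDIPS=10.8.0.0/24` setting above (keys, the endpoint, and the 10.8.0.2 address are placeholders, not values from this repo):

```ini
[Interface]
PrivateKey = <peer-private-key>
Address = 10.8.0.2/24

[Peer]
PublicKey = <server-public-key>
Endpoint = <server-ip>:51820
AllowedIPs = 10.8.0.0/24   ; VPN network only - no split tunneling
```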
49
deployment/legacy/stacks/stacks/wireguard/docker-compose.yml
Normal file
@@ -0,0 +1,49 @@
services:
  wireguard:
    image: linuxserver/wireguard:1.0.20210914
    container_name: wireguard
    restart: unless-stopped

    cap_add:
      - NET_ADMIN
      - SYS_MODULE

    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/Berlin
      - SERVERURL=auto
      - SERVERPORT=51820
      - PEERS=0                 # Managed manually via config files
      - PEERDNS=auto            # Use host DNS
      - INTERNAL_SUBNET=10.8.0.0/24
      - ALLOWEDIPS=10.8.0.0/24  # VPN network only
      - LOG_CONFS=true

    volumes:
      - ./config:/config
      - /lib/modules:/lib/modules:ro

    ports:
      - "51820:51820/udp"

    sysctls:
      - net.ipv4.conf.all.src_valid_mark=1

    healthcheck:
      test: ["CMD", "bash", "-c", "wg show wg0 | grep -q interface"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 10s

    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"

networks:
  default:
    name: wireguard-net
    driver: bridge