Compare commits

..

106 Commits

Author SHA1 Message Date
c93d3f07a2 fix(Console): add void as valid return type for command methods
All checks were successful
Test Runner / test-php (push) Successful in 31s
Deploy Application / deploy (push) Successful in 1m42s
Test Runner / test-basic (push) Successful in 7s
The MethodSignatureAnalyzer was rejecting command methods with void return
type, causing the schedule:run command to fail validation.
2025-11-26 06:16:09 +01:00
386baff65f fix(staging): correct command names for scheduler and queue-worker
All checks were successful
Test Runner / test-basic (push) Successful in 8s
Test Runner / test-php (push) Successful in 8s
Deploy Application / deploy (push) Successful in 1m43s
- scheduler: scheduler:run → schedule:run (correct console command)
- queue-worker: console.php queue:work → worker.php (standalone script)
2025-11-25 23:29:31 +01:00
7f7029ae2a fix(staging): add APP_KEY_FILE environment variable to scheduler and queue-worker
All checks were successful
Test Runner / test-basic (push) Successful in 7s
Test Runner / test-php (push) Successful in 7s
Deploy Application / deploy (push) Successful in 1m46s
The scheduler and queue-worker containers were crashing with
RequiredEnvironmentVariableException because the APP_KEY_FILE
environment variable was not set, even though the app_key secret
was mounted. The Framework's Environment class needs the *_FILE
pattern to read Docker Secrets.
2025-11-25 22:45:23 +01:00
22fd89b013 fix(ErrorBoundaries): use Environment class instead of direct $_ENV access
All checks were successful
Test Runner / test-basic (push) Successful in 9s
Test Runner / test-php (push) Successful in 8s
Deploy Application / deploy (push) Successful in 1m40s
Replace direct $_ENV/$_SERVER access with framework's Environment class
to follow proper framework patterns and enable Docker Secrets support.

Changes:
- Add Environment and EnvKey imports
- Use $container->get(Environment::class) for environment access
- Replace $_ENV['APP_ENV'] with $env->getString(EnvKey::APP_ENV, ...)
- Rename internal method to registerServices for clarity
- Add documentation explaining the pattern

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-25 19:44:02 +01:00
85e2360a90 fix(deploy): improve deployment robustness and reliability
All checks were successful
Test Runner / test-basic (push) Successful in 8s
Test Runner / test-php (push) Successful in 7s
Deploy Application / deploy (push) Successful in 1m35s
- Add docker volume prune to deploy.sh to prevent stale code issues
- Add automatic migrations and cache warmup to staging entrypoint
- Fix nginx race condition by waiting for PHP-FPM before starting
- Improve PHP healthcheck to use php-fpm-healthcheck
- Add curl to production nginx Dockerfile for healthchecks
- Add ensureSeedsTable() to SeedRepository for automatic table creation
- Update SeedCommand to ensure seeds table exists before operations

This prevents 502 Bad Gateway errors during deployment and ensures
fresh code is deployed without volume cache issues.
2025-11-25 17:44:44 +01:00
7785e65d08 fix(security): prevent debug error pages on staging/production
All checks were successful
Test Runner / test-basic (push) Successful in 8s
Test Runner / test-php (push) Successful in 7s
Deploy Application / deploy (push) Successful in 1m28s
Root cause: ExceptionHandlingInitializer attempted to autowire
EnvironmentType directly, but it was never registered in the DI
container. This caused the debug mode resolution to fail silently.

Changes:
- Use TypedConfiguration instead of EnvironmentType for proper DI
- Create ErrorHandlingConfig value object to centralize config
- Access debug mode via AppConfig.isDebugEnabled() which respects
  both APP_DEBUG env var AND EnvironmentType.isDebugEnabled()
- Register ErrorHandlingConfig as singleton in container
- Remove diagnostic logging from ResponseErrorRenderer

This ensures that staging/production environments (where
EnvironmentType != DEV) will not display stack traces, code context,
or file paths in error responses.
2025-11-25 15:01:40 +01:00
520d082393 chore: add diagnostic logging for debug mode verification
All checks were successful
Test Runner / test-basic (push) Successful in 8s
Test Runner / test-php (push) Successful in 8s
Deploy Application / deploy (push) Successful in 1m33s
This temporary logging will help verify that:
- EnvironmentType is correctly detected as STAGING
- isDebugEnabled() returns false for STAGING
- ResponseErrorRenderer receives isDebugMode=false

Remove after verification is complete.
2025-11-25 14:44:05 +01:00
f9063aa151 fix: wire debug flag into error kernel
All checks were successful
Test Runner / test-php (push) Successful in 27s
Deploy Application / deploy (push) Successful in 59s
Test Runner / test-basic (push) Successful in 6s
2025-11-25 04:36:19 +01:00
4309ea7972 fix(staging): hardcode APP_DEBUG=false to prevent env override
All checks were successful
Test Runner / test-php (push) Successful in 30s
Deploy Application / deploy (push) Successful in 59s
Test Runner / test-basic (push) Successful in 6s
Changed APP_DEBUG from ${APP_DEBUG:-false} to hardcoded false value
in all 4 services (php, nginx, queue-worker, scheduler).

This prevents any server-side .env or environment variables from
accidentally enabling debug mode in staging, which was causing
detailed error pages to be displayed.
2025-11-25 04:33:46 +01:00
26f87060d5 fix(deploy): add build parameter to ensure Docker images are rebuilt
All checks were successful
Test Runner / test-php (push) Successful in 41s
Deploy Application / deploy (push) Successful in 2m58s
Test Runner / test-basic (push) Successful in 7s
The deployment was only pulling code via git but not rebuilding the
Docker images, causing containers to run with stale code from the
registry image. This fixes the debug error pages still showing on
staging despite APP_DEBUG=false.
2025-11-25 04:23:38 +01:00
dd7cfd97e6 feat: improve stack trace rendering 2025-11-25 04:13:25 +01:00
57eabe30a5 fix(security): change ErrorRendererFactory isDebugMode default to false
All checks were successful
Test Runner / test-basic (push) Successful in 7s
Test Runner / test-php (push) Successful in 8s
Deploy Application / deploy (push) Successful in 43s
Change the default value of $isDebugMode constructor parameter from
true to false, following the security-by-default principle. This ensures
that even if the factory is instantiated without explicit debug mode
configuration, it won't leak sensitive debugging information like
stack traces, file paths, and code context.
2025-11-25 04:09:41 +01:00
77505edabf refactor(csrf): replace error_log with debugLog for structured logging
All checks were successful
Test Runner / test-basic (push) Successful in 8s
Test Runner / test-php (push) Successful in 8s
Deploy Application / deploy (push) Successful in 45s
Replace raw error_log() calls with framework's debugLog() method for:
- Consistent structured logging with context data
- Sensitive data hashing (tokens, session IDs)
- Debug-mode awareness (only logs when debug enabled)
2025-11-25 03:52:57 +01:00
68a59f460f fix(staging): set APP_DEBUG default to false for security
All checks were successful
Deploy Application / deploy (push) Successful in 24s
Test Runner / test-php (push) Successful in 28s
Test Runner / test-basic (push) Successful in 7s
Staging environment should not expose detailed error messages,
stack traces, or debug information to end users.

Changed default from 'true' to 'false' in all services:
- php
- nginx
- queue-worker
- scheduler
2025-11-25 03:47:29 +01:00
2d762eafdf fix(deploy): add warning messages for missing Docker secrets 2025-11-25 03:13:30 +01:00
760690549d fix(deploy): escape shell variables in docker-compose YAML
Shell variables like $SECRETS_DIR in docker-compose command blocks
must be escaped as $$SECRETS_DIR. Without escaping, docker-compose
interprets them as environment variable interpolation and expands
them to empty strings, causing:
- mkdir: cannot create directory ''
- Secrets copied to wrong path (/redis_password instead of /var/www/html/storage/secrets/redis_password)
- PHP TypeError: RedisConfig::__construct() argument #3 must be string, null given

The fix applies $$ escaping to all shell variables in the PHP
service entrypoint script.
2025-11-25 03:07:26 +01:00
417c7d7a7d fix(staging): correct nginx upstream sed patterns for production-php
All checks were successful
Test Runner / test-basic (push) Successful in 9s
Test Runner / test-php (push) Successful in 8s
Deploy Application / deploy (push) Successful in 43s
- Add explicit sed pattern for production-php:9000 → php:9000
- Fix character class [a-f0-9_]* to [a-zA-Z0-9_-]* to match full container names
- Loop over both sites-enabled and sites-available configs
- Add fastcgi_pass replacement for production-php
2025-11-24 22:44:43 +01:00
5e74ce73a6 fix(deploy): force remove containers before deployment
All checks were successful
Test Runner / test-basic (push) Successful in 9s
Test Runner / test-php (push) Successful in 8s
Deploy Application / deploy (push) Successful in 43s
The --force-recreate flag alone doesn't handle containers that exist
outside the compose project context. Now explicitly:
1. Run docker compose down first
2. Force remove any orphaned containers with known names
3. Then create fresh containers
2025-11-24 22:10:38 +01:00
6c266861ec fix(deploy): add --force-recreate --remove-orphans to handle container conflicts
Some checks failed
Test Runner / test-basic (push) Successful in 9s
Test Runner / test-php (push) Successful in 8s
Deploy Application / deploy (push) Failing after 30s
Fixes deployment error where existing containers with same name
blocked recreation. This ensures clean deployments by:
- Force recreating containers even if unchanged
- Removing orphan containers not in compose file
2025-11-24 22:07:50 +01:00
1f93377ded fix(ci): use IP address instead of hostname for SSH (DNS not available in runner)
Some checks failed
Test Runner / test-basic (push) Successful in 7s
Test Runner / test-php (push) Successful in 7s
Deploy Application / deploy (push) Failing after 33s
2025-11-24 22:05:15 +01:00
5c36517046 fix(ci): hardcode SSH host/user, only key needs to be secret
Some checks failed
Test Runner / test-basic (push) Successful in 7s
Test Runner / test-php (push) Successful in 8s
Deploy Application / deploy (push) Failing after 35s
2025-11-24 22:03:12 +01:00
4d0328bfe3 fix(ci): use correct secret name SSH_PRIVATE_KEY
Some checks failed
Deploy Application / deploy (push) Failing after 8s
Test Runner / test-basic (push) Successful in 7s
Test Runner / test-php (push) Successful in 26s
2025-11-24 22:01:43 +01:00
4cadd7ce1c fix(ci): use php-ci runner for deploy job (has SSH installed)
Some checks failed
Deploy Application / deploy (push) Failing after 8s
Test Runner / test-basic (push) Successful in 8s
Test Runner / test-php (push) Successful in 7s
2025-11-24 21:56:42 +01:00
abe68af124 fix(ci): replace actions/checkout with manual git checkout
Some checks failed
Deploy Application / deploy (push) Failing after 8s
Test Runner / test-basic (push) Successful in 7s
Test Runner / test-php (push) Successful in 33s
The Gitea Actions Runner doesn't have Node.js installed, causing
actions/checkout@v3 (a JavaScript action) to fail with
"Cannot find: node in PATH".

Replace with native shell-based git checkout that works without
Node.js and uses Gitea context variables for repository URL.
2025-11-24 21:54:27 +01:00
a0762623bc test: Add test workflow for Gitea Actions Runner
Some checks failed
Test Runner / test-php (push) Failing after 21s
Deploy Application / deploy (push) Failing after 10s
Test Runner / test-basic (push) Failing after 0s
2025-11-24 21:41:02 +01:00
77abc65cd7 fix: DockerSecretsResolver - don't normalize absolute paths like /var/www/html/...
Some checks failed
Deploy Application / deploy (push) Has been cancelled
2025-11-24 21:28:25 +01:00
4eb7134853 feat(deploy): add Gitea CI/CD workflows and production Docker Compose config
- Add staging deployment workflow (deploy-staging.yml)
- Add production deployment workflow (deploy-production.yml)
- Add workflow documentation (README.md)
- Add secrets setup guide (SECRETS_SETUP_GUIDE.md)
- Add production Docker Compose configuration (docker-compose.prod.yml)

Workflows implement automated deployment with SSH-based remote execution,
health checks, rollback on failure, and smoke testing.
2025-11-24 18:37:27 +01:00
8f3c15ddbb fix(console): comprehensive TUI rendering fixes
- Fix Enter key detection: handle multiple Enter key formats (\n, \r, \r\n)
- Reduce flickering: lower render frequency from 60 FPS to 30 FPS
- Fix menu bar visibility: re-render menu bar after content to prevent overwriting
- Fix content positioning: explicit line positioning for categories and commands
- Fix line shifting: clear lines before writing, control newlines manually
- Limit visible items: prevent overflow with maxVisibleCategories/Commands
- Improve CPU usage: increase sleep interval when no events processed

This fixes:
- Enter key not working for selection
- Strong flickering of the application
- Menu bar not visible or being overwritten
- Top half of selection list not displayed
- Lines being shifted/misaligned
2025-11-10 11:06:07 +01:00
6bc78f5540 fix(console): improve TUI rendering - fix menu bar display and category formatting
- Fix TuiRenderer rendering: correct line positioning for categories
- Fix category item formatting: remove tabs, ensure consistent spacing
- Improve clearContentArea: preserve menu bar (lines 2-3) when clearing content
- Add ConsoleContext: mutable context container for readonly ConsoleOutput
- Add context awareness to ConsoleOutput: setContext/getContext/isInTuiContext
- Auto-detect TUI context in InteractivePrompter: automatically set LayoutAreas
- Set TUI context in TuiFactory and TuiCommandExecutor
- Add tests for TuiRenderer: menu bar preservation, category formatting

This fixes rendering issues where:
- Menu bar was not displayed or overwritten
- Category items had tab/space misalignment
- Content area clearing overwrote the menu bar
2025-11-10 02:42:46 +01:00
2d53270056 feat(cms,asset): add comprehensive test suite and finalize modules
- Add comprehensive test suite for CMS and Asset modules using Pest Framework
- Implement ContentTypeService::delete() protection against deletion of in-use content types
- Add CannotDeleteContentTypeInUseException for better error handling
- Fix DerivatPipelineRegistry::getAllPipelines() to handle object uniqueness correctly
- Fix VariantName::getScale() to correctly parse scales with file extensions
- Update CMS module documentation with new features, exceptions, and test coverage
- Add CmsTestHelpers and AssetTestHelpers for test data factories
- Fix BlockTypeRegistry to be immutable after construction
- Update ContentTypeService to check for associated content before deletion
- Improve BlockRendererRegistry initialization

Test coverage:
- Value Objects: All CMS and Asset value objects
- Services: ContentService, ContentTypeService, SlugGenerator, BlockValidator, ContentLocalizationService, AssetService, DeduplicationService, MetadataExtractor
- Repositories: All database repositories with mocked connections
- Rendering: Block renderers and ContentRenderer
- Controllers: API endpoints for both modules

254 tests passing, 38 remaining (mostly image processing pipeline tests)
2025-11-10 02:12:28 +01:00
74d50a29cc fix(console): improve InteractiveMenu rendering with layout-aware system
- Add LayoutAreas and LayoutArea value objects for coordinated screen rendering
- Add ScreenRendererInterface for testable screen operations
- Extend ScreenManager with clearContentArea() for selective clearing
- Refactor InteractiveMenu to support LayoutAreas via setLayoutAreas()
- Add prepareScreen() method that handles both standalone and layout-aware modes
- Fix cursor positioning to prevent menu bar overwriting
- Add comprehensive tests for layout areas and rendering behavior

This fixes rendering issues where InteractiveMenu would overwrite the menu bar
and cause misalignment of menu items when used within TUI layouts.
2025-11-10 02:00:41 +01:00
43dd602509 test: add comprehensive tests for Discovery module components
- Add tests for Results registries (AttributeRegistry, InterfaceRegistry, TemplateRegistry)
- Add tests for Processing components (ProcessingContext)
- Add tests for Memory components (MemoryGuard)
- Add tests for Value Objects (DiscoveryOptions, DiscoveryContext)

All new tests pass and cover core functionality including:
- Registry operations (add, get, has, serialize/deserialize, optimize)
- Processing context (reflection caching, file context management)
- Memory guard (memory checks, statistics, emergency cleanup)
- Value objects (factory methods, scan types, cache keys, metrics)
2025-11-10 01:39:57 +01:00
9289344379 feat(gitea): Migrate configuration from environment variables to app.ini
Some checks failed
🚀 Build & Deploy Image / Determine Build Necessity (push) Failing after 1m12s
🚀 Build & Deploy Image / Build Runtime Base Image (push) Has been skipped
🚀 Build & Deploy Image / Build Docker Image (push) Has been skipped
🚀 Build & Deploy Image / Run Tests & Quality Checks (push) Has been skipped
🚀 Build & Deploy Image / Auto-deploy to Staging (push) Has been skipped
🚀 Build & Deploy Image / Auto-deploy to Production (push) Has been skipped
🧊 Warm Docker Build Cache / Refresh Buildx Caches (push) Has been cancelled
Security Vulnerability Scan / Check for Dependency Changes (push) Successful in 25s
System Maintenance / Run Ansible System Maintenance (push) Successful in 1m8s
📊 Monitor Workflow Performance / Monitor Workflow Performance (push) Failing after 35s
Security Vulnerability Scan / Composer Security Audit (push) Failing after 27s
- Move all Gitea configuration from docker-compose.yml environment variables to app.ini
- Enable Redis cache with proper connection string format (redis://)
- Fix Redis password to use Gitea Redis instance password (gitea_redis_password) instead of application Redis stack password
- Add database connection pool settings to prevent timeout errors
- Configure Redis for cache, session, and queue using app.ini
- Update Ansible task to use correct Redis password for Gitea Redis instance

Benefits:
- Cache now works correctly (environment variables had a bug in Gitea 1.25)
- All settings are versioned in Git
- Better maintainability and reliability
- Configuration follows Gitea documentation recommendations
2025-11-09 16:33:35 +01:00
36ef2a1e2c fix: Gitea Traefik routing and connection pool optimization
Some checks failed
🚀 Build & Deploy Image / Determine Build Necessity (push) Failing after 10m14s
🚀 Build & Deploy Image / Build Runtime Base Image (push) Has been skipped
🚀 Build & Deploy Image / Build Docker Image (push) Has been skipped
🚀 Build & Deploy Image / Run Tests & Quality Checks (push) Has been skipped
🚀 Build & Deploy Image / Auto-deploy to Staging (push) Has been skipped
🚀 Build & Deploy Image / Auto-deploy to Production (push) Has been skipped
Security Vulnerability Scan / Check for Dependency Changes (push) Failing after 11m25s
Security Vulnerability Scan / Composer Security Audit (push) Has been cancelled
- Remove middleware reference from Gitea Traefik labels (caused routing issues)
- Optimize Gitea connection pool settings (MAX_IDLE_CONNS=30, authentication_timeout=180s)
- Add explicit service reference in Traefik labels
- Fix intermittent 504 timeouts by improving PostgreSQL connection handling

Fixes Gitea unreachability via git.michaelschiemer.de
2025-11-09 14:46:15 +01:00
85c369e846 refactor(console): extract responsibilities from ConsoleApplication
- Extract terminal compatibility checking to TerminalCompatibilityChecker
- Extract input validation to ConsoleInputValidator
- Extract command categorization to CommandCategorizer
- Extract result processing to CommandResultProcessor
- Extract help rendering to ConsoleHelpRenderer
- Extract TUI/Dialog initialization to TuiFactory/DialogFactory
- Extract signal handling to ConsoleSignalHandler (uses Pcntl module)
- Remove legacy PCNTL fallback code
- Reduce ConsoleApplication from 757 to ~340 lines (55% reduction)

All changes maintain backward compatibility - no breaking changes.
2025-11-09 14:30:59 +01:00
4586f0e9ee feat: further optimize workflows - runtime-base artifact and optional health checks
- Runtime-base job now uses repository artifact instead of cloning (saves 1 git clone per run)
- Health checks are now optional via workflow_dispatch input (default: true)
- Health checks still run automatically on push events
- Reduces additional load on Gitea by ~10-15%
2025-11-09 04:16:00 +01:00
72757954dc feat: optimize workflows with repository artifacts and add performance monitoring
Some checks failed
🚀 Build & Deploy Image / Determine Build Necessity (push) Failing after 33s
🚀 Build & Deploy Image / Build Runtime Base Image (push) Has been skipped
🚀 Build & Deploy Image / Build Docker Image (push) Has been skipped
Security Vulnerability Scan / Check for Dependency Changes (push) Successful in 32s
🚀 Build & Deploy Image / Run Tests & Quality Checks (push) Has been skipped
🚀 Build & Deploy Image / Auto-deploy to Staging (push) Has been skipped
🚀 Build & Deploy Image / Auto-deploy to Production (push) Has been skipped
Security Vulnerability Scan / Composer Security Audit (push) Has been skipped
🧊 Warm Docker Build Cache / Refresh Buildx Caches (push) Failing after 11s
📊 Monitor Workflow Performance / Monitor Workflow Performance (push) Failing after 20s
- Use repository artifacts in test and build jobs (reduces 2-3 git clones per run)
- Add comprehensive workflow performance monitoring system
- Add monitoring playbook and Gitea workflow for automated metrics collection
- Add monitoring documentation and scripts

Optimizations:
- Repository artifact caching: changes job uploads repo, test/build jobs download it
- Reduces Gitea load by eliminating redundant git operations
- Faster job starts (artifact download is typically faster than git clone)

Monitoring:
- Script for local workflow metrics collection via Gitea API
- Ansible playbook for server-side system and Gitea metrics
- Automated Gitea workflow that runs every 6 hours
- Tracks workflow durations, system load, Gitea API response times, and more
2025-11-09 04:03:51 +01:00
c3bec296fc fix: replace insecure Traefik API calls with CLI commands
Some checks failed
🚀 Build & Deploy Image / Determine Build Necessity (push) Has been cancelled
🚀 Build & Deploy Image / Build Runtime Base Image (push) Has been cancelled
🚀 Build & Deploy Image / Run Tests & Quality Checks (push) Has been cancelled
🚀 Build & Deploy Image / Build Docker Image (push) Has been cancelled
🚀 Build & Deploy Image / Auto-deploy to Staging (push) Has been cancelled
🚀 Build & Deploy Image / Auto-deploy to Production (push) Has been cancelled
Security Vulnerability Scan / Check for Dependency Changes (push) Successful in 33s
System Maintenance / Run Ansible System Maintenance (push) Failing after 15s
Security Vulnerability Scan / Composer Security Audit (push) Failing after 2m18s
- Replace http://localhost:8080/api/http/services with traefik show providers docker
- Replace http://localhost:8080/api/http/routers with traefik show providers docker
- Update debug messages to reference CLI command instead of HTTP API
- Fixes false 'NOT_FOUND' errors since api.insecure: false is set in traefik.yml

The Traefik CLI (traefik show providers docker) works without credentials
and is the recommended method for Traefik v3. It shows all Docker providers
including services, routers, and middlewares, so Gitea will be visible if
registered correctly.
2025-11-09 02:19:27 +01:00
67ab6bd20f fix: ensure PostgreSQL Staging Stack runs before app deployment + fix YAML parsing
Some checks failed
🚀 Build & Deploy Image / Determine Build Necessity (push) Successful in 27s
🚀 Build & Deploy Image / Auto-deploy to Staging (push) Failing after 1m6s
🚀 Build & Deploy Image / Auto-deploy to Production (push) Has been skipped
Security Vulnerability Scan / Check for Dependency Changes (push) Successful in 28s
🚀 Build & Deploy Image / Build Runtime Base Image (push) Successful in 10s
🚀 Build & Deploy Image / Run Tests & Quality Checks (push) Has been skipped
Security Vulnerability Scan / Composer Security Audit (push) Has been skipped
🚀 Build & Deploy Image / Build Docker Image (push) Successful in 13s
- Fix YAML parsing error by quoting task name with colon
- Add PostgreSQL Staging Stack check and auto-start for staging deployments
- Ensures postgres-staging-internal network is created by the stack itself
- Network creation remains as fallback if stack doesn't create them
- Improves deployment reliability by ensuring dependencies are available

This addresses the root cause: PostgreSQL Staging Stack should be running
before the application stack tries to use its network.
2025-11-09 01:56:06 +01:00
453e573f28 fix: extract and create external Docker networks from compose files
Some checks failed
🚀 Build & Deploy Image / Determine Build Necessity (push) Successful in 28s
Security Vulnerability Scan / Check for Dependency Changes (push) Successful in 28s
🚀 Build & Deploy Image / Build Runtime Base Image (push) Successful in 11s
🚀 Build & Deploy Image / Run Tests & Quality Checks (push) Has been skipped
Security Vulnerability Scan / Composer Security Audit (push) Has been skipped
🚀 Build & Deploy Image / Build Docker Image (push) Successful in 12s
🚀 Build & Deploy Image / Auto-deploy to Staging (push) Failing after 50s
🚀 Build & Deploy Image / Auto-deploy to Production (push) Has been skipped
- Extract external networks from docker-compose.base.yml and compose override files
- Extract network names from 'name:' fields when external: true
- Create all required external networks before docker compose up
- Fixes error: 'network postgres-staging-internal declared as external, but could not be found'

This ensures all external networks (traefik-public, app-internal, postgres-staging-internal, etc.)
are created before attempting to start containers.
2025-11-09 01:45:35 +01:00
78f6fb186b fix: bind wait/healthcheck tasks to traefik_restart.changed
Some checks failed
🚀 Build & Deploy Image / Determine Build Necessity (push) Successful in 31s
Security Vulnerability Scan / Check for Dependency Changes (push) Successful in 27s
🚀 Build & Deploy Image / Build Runtime Base Image (push) Successful in 13s
🚀 Build & Deploy Image / Run Tests & Quality Checks (push) Has been skipped
Security Vulnerability Scan / Composer Security Audit (push) Has been skipped
🚀 Build & Deploy Image / Build Docker Image (push) Successful in 11s
🚀 Build & Deploy Image / Auto-deploy to Staging (push) Failing after 1m12s
🚀 Build & Deploy Image / Auto-deploy to Production (push) Has been skipped
- fix-gitea-timeouts.yml: Add when conditions to wait_for and uri tasks
  - Wait for Traefik only if traefik_restart.changed
  - Wait for Gitea via Traefik only if traefik_restart or gitea_restart changed

- fix-gitea-complete.yml: Same fixes as fix-gitea-timeouts.yml
  - Wait for Traefik only if traefik_restart.changed
  - Wait for Gitea and service discovery checks only if restart occurred

- fix-gitea-traefik-connection.yml: Fix wait and test tasks
  - Register traefik_restart to track if restart happened
  - Wait for Traefik only if traefik_restart.changed
  - Test Gitea via Traefik only if traefik_restart.changed
  - Update message to reflect actual restart status

- update-gitea-traefik-service.yml: Fix pause block
  - Register traefik_restart to track if restart happened
  - Wait for Traefik only if traefik_restart.changed

This prevents unnecessary blocking when traefik_auto_restart=false and
ensures wait/healthcheck tasks only run when a restart actually occurred.
2025-11-09 01:18:12 +01:00
03affc87cf fix: use shell command for docker pull to get better error messages
Some checks failed
🚀 Build & Deploy Image / Determine Build Necessity (push) Successful in 30s
Security Vulnerability Scan / Check for Dependency Changes (push) Successful in 23s
🚀 Build & Deploy Image / Build Runtime Base Image (push) Successful in 11s
🚀 Build & Deploy Image / Run Tests & Quality Checks (push) Has been skipped
Security Vulnerability Scan / Composer Security Audit (push) Has been skipped
🚀 Build & Deploy Image / Build Docker Image (push) Successful in 12s
🚀 Build & Deploy Image / Auto-deploy to Staging (push) Failing after 11m15s
🚀 Build & Deploy Image / Auto-deploy to Production (push) Has been cancelled
- Replace docker_image module with shell command for more reliable pulling
- Add detailed error output from pull command (stdout/stderr)
- Show actual docker pull error messages when pull fails
- Simplify pull logic - always attempt pull regardless of local existence
2025-11-09 00:57:08 +01:00
bfcaf09936 fix: improve image pull verification and fix registries_to_login type error
Some checks failed
Security Vulnerability Scan / Check for Dependency Changes (push) Successful in 25s
🚀 Build & Deploy Image / Auto-deploy to Staging (push) Failing after 1m1s
🚀 Build & Deploy Image / Determine Build Necessity (push) Successful in 27s
🚀 Build & Deploy Image / Build Runtime Base Image (push) Successful in 11s
🚀 Build & Deploy Image / Run Tests & Quality Checks (push) Has been skipped
Security Vulnerability Scan / Composer Security Audit (push) Has been skipped
🚀 Build & Deploy Image / Build Docker Image (push) Successful in 11s
🚀 Build & Deploy Image / Auto-deploy to Production (push) Has been skipped
- Check if image exists before pull to determine if force pull is needed
- Use docker images command to verify image exists locally (more reliable)
- Fix registries_to_login Jinja2 template to ensure it's always a list
- Add better error messages when image pull fails
- Only tag image if it was successfully verified to exist
2025-11-09 00:51:42 +01:00
6a0b029138 fix: improve Docker image pull error handling and registry login
Some checks failed
🚀 Build & Deploy Image / Determine Build Necessity (push) Successful in 26s
Security Vulnerability Scan / Check for Dependency Changes (push) Successful in 24s
🚀 Build & Deploy Image / Build Runtime Base Image (push) Successful in 11s
🚀 Build & Deploy Image / Run Tests & Quality Checks (push) Has been skipped
Security Vulnerability Scan / Composer Security Audit (push) Has been skipped
🚀 Build & Deploy Image / Build Docker Image (push) Successful in 13s
🚀 Build & Deploy Image / Auto-deploy to Staging (push) Failing after 52s
🚀 Build & Deploy Image / Auto-deploy to Production (push) Has been skipped
- Add debug output for image pull process
- Improve error handling: verify image exists after pull before tagging
- Fix registries_to_login Jinja2 template to handle undefined variables
- Add explicit failure if image pull fails
- Only tag image if it was successfully pulled
2025-11-09 00:45:26 +01:00
51fda6b732 fix: unify Docker registry URLs to localhost:5000
Some checks failed
🚀 Build & Deploy Image / Determine Build Necessity (push) Successful in 27s
🚀 Build & Deploy Image / Build Runtime Base Image (push) Successful in 13s
🚀 Build & Deploy Image / Build Docker Image (push) Successful in 4m6s
Security Vulnerability Scan / Check for Dependency Changes (push) Successful in 27s
🚀 Build & Deploy Image / Run Tests & Quality Checks (push) Successful in 56s
Security Vulnerability Scan / Composer Security Audit (push) Has been skipped
🚀 Build & Deploy Image / Auto-deploy to Staging (push) Failing after 56s
🚀 Build & Deploy Image / Auto-deploy to Production (push) Has been skipped
- Change docker-compose.staging.yml: git.michaelschiemer.de:5000 -> localhost:5000
- Update deploy-image.yml playbook to:
  - Pull images from registry.michaelschiemer.de (source registry)
  - Tag and push to localhost:5000 (local registry) for local containers
  - Remove hardcoded git.michaelschiemer.de:5000 logic
  - Use local_registry from compose files for deployment

This ensures:
- Workflow pushes to registry.michaelschiemer.de (external, HTTPS)
- Containers use localhost:5000 (local, faster, no HTTPS overhead)
- Consistent registry usage across staging and production
2025-11-09 00:29:20 +01:00
bb5284220f fix: remove duplicate changed_when line
Some checks failed
🚀 Build & Deploy Image / Determine Build Necessity (push) Successful in 33s
🚀 Build & Deploy Image / Auto-deploy to Staging (push) Failing after 1m8s
🚀 Build & Deploy Image / Auto-deploy to Production (push) Has been skipped
Security Vulnerability Scan / Check for Dependency Changes (push) Successful in 29s
🚀 Build & Deploy Image / Build Runtime Base Image (push) Successful in 9s
🚀 Build & Deploy Image / Run Tests & Quality Checks (push) Has been skipped
Security Vulnerability Scan / Composer Security Audit (push) Has been skipped
🚀 Build & Deploy Image / Build Docker Image (push) Successful in 12s
2025-11-09 00:21:08 +01:00
d192a09823 fix: use registry from docker-compose file for image deployment
Some checks failed
🚀 Build & Deploy Image / Determine Build Necessity (push) Successful in 30s
🚀 Build & Deploy Image / Build Runtime Base Image (push) Has been cancelled
🚀 Build & Deploy Image / Build Docker Image (push) Has been cancelled
🚀 Build & Deploy Image / Auto-deploy to Staging (push) Has been cancelled
🚀 Build & Deploy Image / Auto-deploy to Production (push) Has been cancelled
🚀 Build & Deploy Image / Run Tests & Quality Checks (push) Has been cancelled
Security Vulnerability Scan / Composer Security Audit (push) Has been cancelled
Security Vulnerability Scan / Check for Dependency Changes (push) Has been cancelled
The playbook was using docker_registry (registry.michaelschiemer.de) but
docker-compose.staging.yml uses git.michaelschiemer.de:5000. Now the playbook:
- Extracts the actual registry URL from docker-compose files
- Uses that registry for deploy_image
- Updates docker-compose file with the correct registry

This ensures the image is pulled from and deployed to the correct registry.
2025-11-09 00:20:18 +01:00
f33182539f fix: improve registry URL regex to exclude image names with tags
Some checks failed
🚀 Build & Deploy Image / Determine Build Necessity (push) Successful in 26s
Security Vulnerability Scan / Check for Dependency Changes (push) Successful in 25s
🚀 Build & Deploy Image / Build Runtime Base Image (push) Successful in 13s
🚀 Build & Deploy Image / Run Tests & Quality Checks (push) Has been skipped
Security Vulnerability Scan / Composer Security Audit (push) Has been skipped
🚀 Build & Deploy Image / Build Docker Image (push) Successful in 12s
🚀 Build & Deploy Image / Auto-deploy to Staging (push) Failing after 1m8s
🚀 Build & Deploy Image / Auto-deploy to Production (push) Has been skipped
The previous regex matched 'redis:7-alpine' because it saw ':7' as a port.
New regex requires:
- TLD with optional port (e.g. git.michaelschiemer.de:5000)
- Hostname with numeric port only (e.g. localhost:5000)
- localhost with optional port

This excludes image names like 'redis:7-alpine' or 'minio/minio:latest'.
2025-11-09 00:13:33 +01:00
b9437ed68a fix: use immutable list operations in Jinja2 template
Some checks failed
Security Vulnerability Scan / Check for Dependency Changes (push) Successful in 22s
🚀 Build & Deploy Image / Build Runtime Base Image (push) Successful in 9s
🚀 Build & Deploy Image / Run Tests & Quality Checks (push) Has been skipped
🚀 Build & Deploy Image / Determine Build Necessity (push) Successful in 25s
🚀 Build & Deploy Image / Auto-deploy to Staging (push) Failing after 59s
🚀 Build & Deploy Image / Auto-deploy to Production (push) Has been skipped
Security Vulnerability Scan / Composer Security Audit (push) Has been skipped
🚀 Build & Deploy Image / Build Docker Image (push) Successful in 14s
Ansible doesn't allow .append() on lists in Jinja2 templates.
Changed to use select() filter with regex match instead of loop with append.
This is the same filtering logic but using immutable operations.
2025-11-09 00:09:51 +01:00
c8ffb6e298 fix: preserve port numbers in Docker registry URL extraction
Some checks failed
🚀 Build & Deploy Image / Determine Build Necessity (push) Successful in 38s
🚀 Build & Deploy Image / Build Runtime Base Image (push) Successful in 14s
Security Vulnerability Scan / Composer Security Audit (push) Has been skipped
🚀 Build & Deploy Image / Build Docker Image (push) Successful in 14s
🚀 Build & Deploy Image / Auto-deploy to Production (push) Has been skipped
Security Vulnerability Scan / Check for Dependency Changes (push) Successful in 33s
🚀 Build & Deploy Image / Run Tests & Quality Checks (push) Has been skipped
🚀 Build & Deploy Image / Auto-deploy to Staging (push) Failing after 58s
The previous regex was removing port numbers from registry URLs.
Now using sed to only remove the image name part after the slash,
preserving the full registry URL including port (e.g. git.michaelschiemer.de:5000)
2025-11-09 00:05:11 +01:00
ba859d0fdf fix: prevent Traefik restart loops and improve Docker registry login
Some checks failed
🚀 Build & Deploy Image / Determine Build Necessity (push) Successful in 31s
Security Vulnerability Scan / Composer Security Audit (push) Has been cancelled
Security Vulnerability Scan / Check for Dependency Changes (push) Has been cancelled
🚀 Build & Deploy Image / Build Docker Image (push) Has been cancelled
🚀 Build & Deploy Image / Auto-deploy to Staging (push) Has been cancelled
🚀 Build & Deploy Image / Auto-deploy to Production (push) Has been cancelled
🚀 Build & Deploy Image / Build Runtime Base Image (push) Has been cancelled
🚀 Build & Deploy Image / Run Tests & Quality Checks (push) Has been cancelled
Registry Login Fixes:
- Filter out service names (minio, redis) from registry URL extraction
- Only recognize actual registry URLs (with TLD or port)
- Preserve port numbers in registry URLs (e.g. git.michaelschiemer.de:5000)
- Better error messages for failed logins

Traefik Restart Loop Prevention:
- Set traefik_auto_restart default to false in traefik role
- Add traefik_auto_restart, traefik_ssl_restart, gitea_auto_restart to staging vars
- Add guard to fix-gitea-traefik-connection.yml restart task
- Add guard and deprecation warning to update-gitea-traefik-service.yml

This ensures that:
- CI/CD pipelines won't cause Traefik restart loops
- Staging environment uses same safe defaults as production
- Deprecated playbooks fail by default unless explicitly enabled
- Only actual Docker registries are used for login, not service names
2025-11-09 00:03:30 +01:00
c6b94b1147 fix: login to correct Docker registry from docker-compose files
Some checks failed
🚀 Build & Deploy Image / Build Runtime Base Image (push) Successful in 12s
🚀 Build & Deploy Image / Run Tests & Quality Checks (push) Has been skipped
🚀 Build & Deploy Image / Build Docker Image (push) Successful in 12s
🚀 Build & Deploy Image / Auto-deploy to Production (push) Has been skipped
🚀 Build & Deploy Image / Determine Build Necessity (push) Successful in 27s
Security Vulnerability Scan / Check for Dependency Changes (push) Successful in 31s
Security Vulnerability Scan / Composer Security Audit (push) Has been skipped
🚀 Build & Deploy Image / Auto-deploy to Staging (push) Failing after 1m7s
- Extract actual registry URLs from docker-compose files
- Login to all registries found in compose files (e.g. git.michaelschiemer.de:5000)
- This fixes the 'no basic auth credentials' error when pulling images
- The playbook now automatically detects which registry is used in compose files
- Falls back to docker_registry variable if no registry found in compose files
2025-11-08 23:56:17 +01:00
f548a0322c fix: prevent CI jobs from restarting Traefik
Some checks failed
🚀 Build & Deploy Image / Determine Build Necessity (push) Successful in 28s
Security Vulnerability Scan / Check for Dependency Changes (push) Successful in 35s
🚀 Build & Deploy Image / Build Runtime Base Image (push) Successful in 18s
🚀 Build & Deploy Image / Run Tests & Quality Checks (push) Has been skipped
Security Vulnerability Scan / Composer Security Audit (push) Has been skipped
🚀 Build & Deploy Image / Build Docker Image (push) Successful in 17s
🚀 Build & Deploy Image / Auto-deploy to Staging (push) Failing after 1m9s
🚀 Build & Deploy Image / Auto-deploy to Production (push) Has been skipped
- Add traefik_auto_restart check to fix-gitea-timeouts.yml
- Add traefik_auto_restart check to fix-gitea-ssl-routing.yml
- Add traefik_auto_restart check to fix-gitea-complete.yml
- Set traefik_auto_restart=false in all Gitea workflow Ansible calls
- Set gitea_auto_restart=false in all Gitea workflow Ansible calls
- Add redeploy-traefik-gitea.yml playbook for clean redeployment

This prevents CI/CD pipelines from causing Traefik restart loops by
ensuring all remediation playbooks respect the traefik_auto_restart
flag, which is set to false in group_vars/production/vars.yml.
2025-11-08 23:47:44 +01:00
a96bd41326 fix(ansible): Fix YAML syntax in traefik/tasks/ssl.yml
- Change when clause from list to string format to fix YAML parsing error
- Use 'and' operators instead of list items for better readability
2025-11-08 23:28:49 +01:00
bb7cf35e54 fix(ansible): Prevent Traefik and Gitea restart loops
Some checks failed
Security Vulnerability Scan / Check for Dependency Changes (push) Successful in 29s
Security Vulnerability Scan / Composer Security Audit (push) Has been skipped
🚀 Build & Deploy Image / Determine Build Necessity (push) Failing after 11m3s
🚀 Build & Deploy Image / Build Runtime Base Image (push) Has been cancelled
🚀 Build & Deploy Image / Run Tests & Quality Checks (push) Has been cancelled
🚀 Build & Deploy Image / Build Docker Image (push) Has been cancelled
🚀 Build & Deploy Image / Auto-deploy to Staging (push) Has been cancelled
🚀 Build & Deploy Image / Auto-deploy to Production (push) Has been cancelled
- Set traefik_auto_restart: false in group_vars to prevent automatic restarts after config deployment
- Set traefik_ssl_restart: false to prevent automatic restarts during SSL certificate setup
- Set gitea_auto_restart: false to prevent automatic restarts when healthcheck fails
- Modify traefik/tasks/ssl.yml to only restart if explicitly requested or acme.json was created
- Modify traefik/tasks/config.yml to respect traefik_auto_restart flag
- Modify gitea/tasks/restart.yml to respect gitea_auto_restart flag
- Add verify-traefik-fix.yml playbook to monitor Traefik stability

This fixes the issue where Traefik was restarting every minute due to
automatic restart mechanisms triggered by config deployments and health checks.
The restart loops caused 504 Gateway Timeouts for Gitea and other services.

Fixes: Traefik restart loop causing service unavailability
2025-11-08 23:25:38 +01:00
aa9de7173d feat: Add playbook to deploy Traefik configuration files
- Add deploy-traefik-config.yml to copy updated config files to server
- Deploys docker-compose.yml and traefik.yml
- Shows deployment status and next steps
- Required before restarting Traefik with new configuration
2025-11-08 19:01:28 +01:00
c06a9ec134 fix: Use internal healthcheck instead of HTTP ping endpoint
- Change health check to use docker exec traefik healthcheck
- HTTP ping endpoint requires BasicAuth (401), internal check is more reliable
- Improves health check accuracy in restart-traefik.yml playbook
2025-11-08 18:53:07 +01:00
e047720954 feat: Add Ansible playbook to restart Traefik and verify configuration
Some checks failed
🚀 Build & Deploy Image / Determine Build Necessity (push) Successful in 34s
🚀 Build & Deploy Image / Build Runtime Base Image (push) Successful in 11s
🚀 Build & Deploy Image / Run Tests & Quality Checks (push) Has been skipped
🚀 Build & Deploy Image / Build Docker Image (push) Successful in 13s
Security Vulnerability Scan / Check for Dependency Changes (push) Failing after 12m24s
🚀 Build & Deploy Image / Auto-deploy to Staging (push) Failing after 4m10s
🚀 Build & Deploy Image / Auto-deploy to Production (push) Has been skipped
Security Vulnerability Scan / Composer Security Audit (push) Has been cancelled
- Add restart-traefik.yml playbook to restart Traefik container
- Verify Traefik health after restart
- Check for ACME challenge errors in logs
- Display status summary with next steps
- Useful after Traefik configuration changes
2025-11-08 18:47:14 +01:00
06bad20123 fix: Explicitly exclude ACME challenge path from HTTPS redirect
- Add explicit exclusion of /.well-known/acme-challenge from catch-all redirect
- Ensures ACME challenges are never redirected to HTTPS
- Traefik handles ACME challenges automatically, but explicit exclusion is safer
2025-11-08 18:46:27 +01:00
3d233e8b2c fix: Remove redundant ACME challenge router in Traefik configuration
- Remove explicit ACME challenge router that had no service defined
- Traefik handles ACME challenges automatically when httpChallenge.entryPoint is set
- The router was interfering with automatic challenge handling
- Fixes 'Cannot retrieve the ACME challenge' errors in Traefik logs
2025-11-08 18:46:01 +01:00
af98069eba fix: Reduce Gitea runner request frequency to prevent DDoS-like behavior
- Increase fetch_interval from 2s to 10s to reduce load on Gitea
- Increase fetch_timeout from 5s to 30s for better error handling
- Add documentation about runner overloading Gitea and how to fix it
- Prevents 504 errors caused by runner bombarding Gitea with requests
2025-11-08 17:56:11 +01:00
52023081ab fix: Add retry logic to git operations in deploy-application-code.yml
Some checks failed
🚀 Build & Deploy Image / Determine Build Necessity (push) Successful in 30s
Security Vulnerability Scan / Check for Dependency Changes (push) Failing after 37s
🚀 Build & Deploy Image / Run Tests & Quality Checks (push) Has been skipped
Security Vulnerability Scan / Composer Security Audit (push) Has been skipped
🚀 Build & Deploy Image / Build Runtime Base Image (push) Failing after 13m31s
🚀 Build & Deploy Image / Build Docker Image (push) Has been cancelled
🚀 Build & Deploy Image / Auto-deploy to Staging (push) Has been cancelled
🚀 Build & Deploy Image / Auto-deploy to Production (push) Has been cancelled
- Add retry logic (5 retries, 10s delay) to git clone and update tasks
- Handle 504 Gateway Timeout errors from Gitea gracefully
- Fail with clear error message if all retries are exhausted
- Prevents workflow failures due to temporary Gitea unavailability
2025-11-08 17:34:59 +01:00
43a06eae4d docs: Add documentation and playbooks for fixing Gitea runner configuration
- Add FIX_RUNNER_CONFIG.md with manual steps to re-register runner
- Add fix-gitea-runner-config.yml to diagnose runner issues
- Add register-gitea-runner.yml to re-register runner via Ansible
- Fixes issue where runner falls back to GitHub on 504 errors
2025-11-08 17:25:22 +01:00
21e7c40c99 feat: Add Ansible playbooks to fix Gitea runner configuration
- Add fix-gitea-runner-config.yml to diagnose runner configuration issues
- Add register-gitea-runner.yml to re-register runner with correct Gitea URL
- Check for GitHub URLs in runner configuration (should only use git.michaelschiemer.de)
- Verify .env file has correct GITEA_INSTANCE_URL
- Fixes 504 timeouts caused by runner trying to connect to GitHub fallback
2025-11-08 17:14:19 +01:00
97b0dde75b feat: Add Ansible playbook to check and restart Gitea
Some checks failed
🚀 Build & Deploy Image / Determine Build Necessity (push) Successful in 30s
Security Vulnerability Scan / Check for Dependency Changes (push) Successful in 35s
🚀 Build & Deploy Image / Build Runtime Base Image (push) Successful in 15s
🚀 Build & Deploy Image / Run Tests & Quality Checks (push) Has been skipped
Security Vulnerability Scan / Composer Security Audit (push) Has been skipped
🚀 Build & Deploy Image / Build Docker Image (push) Successful in 14s
🚀 Build & Deploy Image / Auto-deploy to Staging (push) Failing after 1m11s
🚀 Build & Deploy Image / Auto-deploy to Production (push) Has been skipped
- Check Gitea container status
- Check Gitea health endpoint
- Display container logs
- Restart container if unhealthy or not running
- Wait for Gitea to be ready after restart
- Display comprehensive status summary
- Helps diagnose and fix 504 Gateway Timeout issues
2025-11-08 17:03:22 +01:00
891c73d0af fix: Wait for Docker socket and login to registry before compose up
- Change Docker daemon wait from TCP port 2375 to Unix socket /var/run/docker.sock
- Add Docker registry login task before docker compose up
- Ensures authentication is available when pulling images
- Fixes 'no basic auth credentials' error during image pull
2025-11-08 16:53:22 +01:00
294e6721eb fix: Use immutable operations for Docker daemon config merge
Some checks failed
🚀 Build & Deploy Image / Determine Build Necessity (push) Successful in 55s
Security Vulnerability Scan / Check for Dependency Changes (push) Successful in 27s
🚀 Build & Deploy Image / Build Runtime Base Image (push) Successful in 13s
🚀 Build & Deploy Image / Run Tests & Quality Checks (push) Has been skipped
Security Vulnerability Scan / Composer Security Audit (push) Has been skipped
🚀 Build & Deploy Image / Build Docker Image (push) Successful in 12s
🚀 Build & Deploy Image / Auto-deploy to Staging (push) Failing after 1m26s
🚀 Build & Deploy Image / Auto-deploy to Production (push) Has been skipped
- Replace append() with list concatenation (+ operator)
- Use combine filter instead of update() method
- Avoids 'unsafe append' error with AnsibleLazyTemplateList
- All operations are now immutable and safe
2025-11-08 16:09:56 +01:00
bf25f088c0 fix: Correct Jinja2 logic for merging insecure registry config
Some checks failed
🚀 Build & Deploy Image / Build Runtime Base Image (push) Successful in 12s
🚀 Build & Deploy Image / Run Tests & Quality Checks (push) Has been skipped
🚀 Build & Deploy Image / Build Docker Image (push) Successful in 13s
🚀 Build & Deploy Image / Auto-deploy to Staging (push) Failing after 1m11s
🚀 Build & Deploy Image / Auto-deploy to Production (push) Has been skipped
🚀 Build & Deploy Image / Determine Build Necessity (push) Successful in 35s
Security Vulnerability Scan / Check for Dependency Changes (push) Successful in 33s
Security Vulnerability Scan / Composer Security Audit (push) Has been skipped
- Fix dictionary update logic in set_fact
- Use list operations before updating dictionary
- Ensures proper JSON structure for Docker daemon.json
2025-11-08 16:06:08 +01:00
dbf7f6e002 fix: Replace shell script with Ansible modules for Docker daemon config
Some checks failed
🚀 Build & Deploy Image / Build Runtime Base Image (push) Has been cancelled
🚀 Build & Deploy Image / Run Tests & Quality Checks (push) Has been cancelled
🚀 Build & Deploy Image / Build Docker Image (push) Has been cancelled
🚀 Build & Deploy Image / Auto-deploy to Staging (push) Has been cancelled
🚀 Build & Deploy Image / Auto-deploy to Production (push) Has been cancelled
Security Vulnerability Scan / Check for Dependency Changes (push) Has been cancelled
Security Vulnerability Scan / Composer Security Audit (push) Has been cancelled
🚀 Build & Deploy Image / Determine Build Necessity (push) Has been cancelled
- Replace Python heredoc in shell script with native Ansible modules
- Use slurp to read existing daemon.json
- Use set_fact and copy modules to update configuration
- Fixes YAML parsing error with heredoc syntax
- More idempotent and Ansible-native approach
2025-11-08 16:05:53 +01:00
76ec4cf28d fix: Configure Docker insecure registry and add GIT_BRANCH
Some checks failed
🚀 Build & Deploy Image / Determine Build Necessity (push) Successful in 52s
Security Vulnerability Scan / Check for Dependency Changes (push) Successful in 51s
🚀 Build & Deploy Image / Build Runtime Base Image (push) Successful in 10s
🚀 Build & Deploy Image / Run Tests & Quality Checks (push) Has been skipped
Security Vulnerability Scan / Composer Security Audit (push) Has been skipped
🚀 Build & Deploy Image / Build Docker Image (push) Successful in 12s
🚀 Build & Deploy Image / Auto-deploy to Staging (push) Failing after 52s
🚀 Build & Deploy Image / Auto-deploy to Production (push) Has been skipped
- Add Docker daemon configuration to use HTTP for git.michaelschiemer.de:5000 registry
- Configure insecure-registries in /etc/docker/daemon.json
- Add GIT_BRANCH environment variable (staging for staging, main for production)
- Set default GIT_REPOSITORY_URL if not provided
- Fixes 'http: server gave HTTP response to HTTPS client' error
- Fixes missing GIT_BRANCH variable warnings
2025-11-08 16:01:44 +01:00
bfd91fcb61 fix: Add Git variables and improve bool check for image pull
Some checks failed
Security Vulnerability Scan / Check for Dependency Changes (push) Successful in 28s
🚀 Build & Deploy Image / Build Runtime Base Image (push) Successful in 13s
🚀 Build & Deploy Image / Run Tests & Quality Checks (push) Has been skipped
Security Vulnerability Scan / Composer Security Audit (push) Has been skipped
🚀 Build & Deploy Image / Auto-deploy to Production (push) Has been skipped
🚀 Build & Deploy Image / Determine Build Necessity (push) Successful in 27s
🚀 Build & Deploy Image / Build Docker Image (push) Successful in 13s
🚀 Build & Deploy Image / Auto-deploy to Staging (push) Failing after 1m3s
- Add Git repository variables (GIT_REPOSITORY_URL, GIT_TOKEN, GIT_USERNAME, GIT_PASSWORD) to environment
- Load Git variables from vault or defaults
- Add Git variables to .env file creation
- Improve registry_accessible check with 'is defined' guard
- Fixes missing GIT_* environment variable warnings in docker compose
2025-11-08 15:58:09 +01:00
43c36d2687 fix: Fix bool comparison and .env file permissions
Some checks failed
🚀 Build & Deploy Image / Determine Build Necessity (push) Successful in 58s
Security Vulnerability Scan / Check for Dependency Changes (push) Successful in 34s
🚀 Build & Deploy Image / Build Runtime Base Image (push) Successful in 12s
🚀 Build & Deploy Image / Run Tests & Quality Checks (push) Has been skipped
Security Vulnerability Scan / Composer Security Audit (push) Has been skipped
🚀 Build & Deploy Image / Build Docker Image (push) Successful in 14s
🚀 Build & Deploy Image / Auto-deploy to Staging (push) Failing after 1m6s
🚀 Build & Deploy Image / Auto-deploy to Production (push) Has been skipped
- Change registry_accessible to string comparison ('true'/'false') instead of bool
- Fix 'argument of type bool is not iterable' error in when conditions
- Set correct owner/group for .env file (ansible_user instead of root)
- Fixes 'permission denied' error when docker compose reads .env file
2025-11-08 15:53:50 +01:00
c1331ae7a7 fix: Add .env file check and environment variables for docker compose
Some checks failed
Security Vulnerability Scan / Check for Dependency Changes (push) Successful in 30s
🚀 Build & Deploy Image / Build Runtime Base Image (push) Successful in 11s
🚀 Build & Deploy Image / Run Tests & Quality Checks (push) Has been skipped
🚀 Build & Deploy Image / Auto-deploy to Staging (push) Failing after 1m14s
🚀 Build & Deploy Image / Determine Build Necessity (push) Successful in 28s
Security Vulnerability Scan / Composer Security Audit (push) Has been skipped
🚀 Build & Deploy Image / Build Docker Image (push) Successful in 13s
🚀 Build & Deploy Image / Auto-deploy to Production (push) Has been skipped
- Fix 'argument of type bool is not iterable' error in image pull task
- Check if .env file exists before docker compose up
- Create minimal .env file if it doesn't exist with required variables
- Load secrets from vault file if available
- Set database and MinIO variables from vault or defaults
- Pass environment variables to docker compose command
- Fixes missing MINIO_ROOT_USER, DB_USERNAME, DB_PASSWORD, SECRETS_DIR errors
2025-11-08 15:49:22 +01:00
333dc04404 fix: Improve registry check and image pull error handling
Some checks failed
Security Vulnerability Scan / Check for Dependency Changes (push) Successful in 33s
🚀 Build & Deploy Image / Build Runtime Base Image (push) Successful in 15s
🚀 Build & Deploy Image / Run Tests & Quality Checks (push) Has been skipped
Security Vulnerability Scan / Composer Security Audit (push) Has been skipped
🚀 Build & Deploy Image / Build Docker Image (push) Successful in 14s
🚀 Build & Deploy Image / Auto-deploy to Staging (push) Failing after 1m11s
🚀 Build & Deploy Image / Auto-deploy to Production (push) Has been skipped
🚀 Build & Deploy Image / Determine Build Necessity (push) Successful in 32s
- Add registry_accessible flag to safely check registry status
- Fix 'argument of type bool is not iterable' error in when conditions
- Only pull image if registry is accessible
- Add ignore_errors to image pull task to prevent failures
- Improves handling of registry connectivity issues
2025-11-08 15:44:50 +01:00
cf8fea322c fix: Resolve recursive loops and fix registry URL in deploy-image.yml
Some checks failed
🚀 Build & Deploy Image / Determine Build Necessity (push) Successful in 30s
Security Vulnerability Scan / Check for Dependency Changes (push) Successful in 28s
🚀 Build & Deploy Image / Build Runtime Base Image (push) Successful in 12s
🚀 Build & Deploy Image / Run Tests & Quality Checks (push) Has been skipped
Security Vulnerability Scan / Composer Security Audit (push) Has been skipped
🚀 Build & Deploy Image / Build Docker Image (push) Successful in 13s
🚀 Build & Deploy Image / Auto-deploy to Staging (push) Failing after 1m24s
🚀 Build & Deploy Image / Auto-deploy to Production (push) Has been skipped
- Fix recursive loop in app_name variable
- Set app_name and deploy_image using set_fact tasks
- Replace application_stack_dest with application_code_dest (consistent with other playbooks)
- Change registry URL from HTTP to HTTPS
- Add validate_certs: no for registry accessibility check
- Fixes 'Recursive loop detected' error in image deployment
2025-11-08 15:41:11 +01:00
48e5179bac fix: Correct deployment order - deploy image before composer install
Some checks failed
🚀 Build & Deploy Image / Determine Build Necessity (push) Successful in 31s
Security Vulnerability Scan / Check for Dependency Changes (push) Successful in 28s
Security Vulnerability Scan / Composer Security Audit (push) Has been skipped
🚀 Build & Deploy Image / Build Docker Image (push) Successful in 11s
🚀 Build & Deploy Image / Build Runtime Base Image (push) Successful in 11s
🚀 Build & Deploy Image / Run Tests & Quality Checks (push) Has been skipped
🚀 Build & Deploy Image / Auto-deploy to Staging (push) Failing after 52s
🚀 Build & Deploy Image / Auto-deploy to Production (push) Has been skipped
- Move deploy-image.yml before install-composer-dependencies.yml
- Containers must be running before composer can install dependencies
- Fixes 'container not running' error in composer install step
- Applied to both staging and production workflows
2025-11-08 15:35:50 +01:00
38af81e2db fix: Don't start containers in install-composer-dependencies playbook
Some checks failed
🚀 Build & Deploy Image / Build Runtime Base Image (push) Has been cancelled
🚀 Build & Deploy Image / Run Tests & Quality Checks (push) Has been cancelled
🚀 Build & Deploy Image / Build Docker Image (push) Has been cancelled
🚀 Build & Deploy Image / Auto-deploy to Staging (push) Has been cancelled
🚀 Build & Deploy Image / Auto-deploy to Production (push) Has been cancelled
Security Vulnerability Scan / Check for Dependency Changes (push) Has been cancelled
Security Vulnerability Scan / Composer Security Audit (push) Has been cancelled
🚀 Build & Deploy Image / Determine Build Necessity (push) Has been cancelled
- Remove container start logic - containers should be started by deploy-image.yml
- Add clear error message if container is not running
- Provides helpful instructions for manual container start if needed
2025-11-08 15:35:22 +01:00
ae592c21c7 fix: Add container status check and better error handling
Some checks failed
🚀 Build & Deploy Image / Determine Build Necessity (push) Successful in 56s
Security Vulnerability Scan / Check for Dependency Changes (push) Successful in 24s
🚀 Build & Deploy Image / Build Runtime Base Image (push) Successful in 12s
🚀 Build & Deploy Image / Run Tests & Quality Checks (push) Has been skipped
Security Vulnerability Scan / Composer Security Audit (push) Has been skipped
🚀 Build & Deploy Image / Build Docker Image (push) Successful in 15s
🚀 Build & Deploy Image / Auto-deploy to Staging (push) Failing after 1m3s
🚀 Build & Deploy Image / Auto-deploy to Production (push) Has been skipped
- Check if container is running before executing composer
- Start container if not running
- Display detailed error output for debugging
- Fixes composer install failures when container is not running
2025-11-08 15:31:06 +01:00
f0a412a221 fix: Use application code directory for docker-compose files
Some checks failed
🚀 Build & Deploy Image / Determine Build Necessity (push) Successful in 53s
Security Vulnerability Scan / Check for Dependency Changes (push) Successful in 25s
🚀 Build & Deploy Image / Build Runtime Base Image (push) Successful in 12s
🚀 Build & Deploy Image / Run Tests & Quality Checks (push) Has been skipped
Security Vulnerability Scan / Composer Security Audit (push) Has been skipped
🚀 Build & Deploy Image / Build Docker Image (push) Successful in 15s
🚀 Build & Deploy Image / Auto-deploy to Staging (push) Failing after 1m20s
🚀 Build & Deploy Image / Auto-deploy to Production (push) Has been skipped
- Change from stacks path to application code directory (/home/deploy/michaelschiemer/current)
- docker-compose files are in the application root, not in deployment/stacks
- Fixes 'no such file or directory' error for docker-compose.base.yml
2025-11-08 15:26:42 +01:00
dd072ded3c fix: Use correct path for docker-compose files
Some checks failed
🚀 Build & Deploy Image / Determine Build Necessity (push) Successful in 34s
🚀 Build & Deploy Image / Build Docker Image (push) Has been cancelled
🚀 Build & Deploy Image / Auto-deploy to Staging (push) Has been cancelled
🚀 Build & Deploy Image / Auto-deploy to Production (push) Has been cancelled
🚀 Build & Deploy Image / Build Runtime Base Image (push) Has been cancelled
🚀 Build & Deploy Image / Run Tests & Quality Checks (push) Has been cancelled
Security Vulnerability Scan / Composer Security Audit (push) Has been cancelled
Security Vulnerability Scan / Check for Dependency Changes (push) Has been cancelled
- Change stacks_base_path_default from /home/deploy to /home/deploy/deployment/stacks
- Matches actual server directory structure where stacks are located
2025-11-08 15:25:57 +01:00
f7bac92b64 fix: Resolve recursive loop in stacks_base_path variable
Some checks failed
🚀 Build & Deploy Image / Determine Build Necessity (push) Successful in 30s
🚀 Build & Deploy Image / Build Runtime Base Image (push) Successful in 11s
🚀 Build & Deploy Image / Run Tests & Quality Checks (push) Has been skipped
Security Vulnerability Scan / Composer Security Audit (push) Has been skipped
🚀 Build & Deploy Image / Build Docker Image (push) Successful in 13s
Security Vulnerability Scan / Check for Dependency Changes (push) Successful in 54s
🚀 Build & Deploy Image / Auto-deploy to Staging (push) Failing after 54s
🚀 Build & Deploy Image / Auto-deploy to Production (push) Has been skipped
- Use stacks_base_path_default instead of self-reference
- Fixes 'Recursive loop detected' error in install-composer-dependencies playbook
2025-11-08 15:19:37 +01:00
eeaf025fed fix: Define stacks_base_path variable with default value
Some checks failed
Security Vulnerability Scan / Check for Dependency Changes (push) Successful in 29s
🚀 Build & Deploy Image / Build Runtime Base Image (push) Successful in 13s
🚀 Build & Deploy Image / Determine Build Necessity (push) Successful in 29s
🚀 Build & Deploy Image / Run Tests & Quality Checks (push) Has been skipped
Security Vulnerability Scan / Composer Security Audit (push) Has been skipped
🚀 Build & Deploy Image / Build Docker Image (push) Successful in 12s
🚀 Build & Deploy Image / Auto-deploy to Staging (push) Failing after 52s
🚀 Build & Deploy Image / Auto-deploy to Production (push) Has been skipped
- Add stacks_base_path variable with default '/home/deploy'
- Fixes 'stacks_base_path is undefined' error in install-composer-dependencies playbook
2025-11-08 15:16:24 +01:00
2e14557b21 fix: Handle case where destination exists but is not a git repo
Some checks failed
🚀 Build & Deploy Image / Build Runtime Base Image (push) Successful in 13s
🚀 Build & Deploy Image / Run Tests & Quality Checks (push) Has been skipped
Security Vulnerability Scan / Composer Security Audit (push) Has been skipped
🚀 Build & Deploy Image / Determine Build Necessity (push) Successful in 27s
Security Vulnerability Scan / Check for Dependency Changes (push) Successful in 31s
🚀 Build & Deploy Image / Build Docker Image (push) Successful in 13s
🚀 Build & Deploy Image / Auto-deploy to Staging (push) Failing after 1m22s
🚀 Build & Deploy Image / Auto-deploy to Production (push) Has been skipped
- Check if destination directory exists separately from git repo check
- Remove directory if it exists but is not a git repository
- Prevents 'destination path already exists' error during clone
2025-11-08 15:12:06 +01:00
03f4d90ed0 fix: Remove unsupported owner/group parameters from git module
Some checks failed
Security Vulnerability Scan / Check for Dependency Changes (push) Successful in 28s
🚀 Build & Deploy Image / Determine Build Necessity (push) Successful in 28s
🚀 Build & Deploy Image / Build Runtime Base Image (push) Successful in 9s
🚀 Build & Deploy Image / Run Tests & Quality Checks (push) Has been skipped
Security Vulnerability Scan / Composer Security Audit (push) Has been skipped
🚀 Build & Deploy Image / Build Docker Image (push) Successful in 10s
🚀 Build & Deploy Image / Auto-deploy to Staging (push) Failing after 43s
🚀 Build & Deploy Image / Auto-deploy to Production (push) Has been skipped
- ansible.builtin.git no longer supports owner and group parameters
- Set ownership in separate file task after git operations
- Fixes 'Unsupported parameters' error
2025-11-08 15:08:52 +01:00
2f98c52300 refactor: Simplify git_repo_url logic
Some checks failed
🚀 Build & Deploy Image / Determine Build Necessity (push) Successful in 59s
Security Vulnerability Scan / Check for Dependency Changes (push) Successful in 26s
🚀 Build & Deploy Image / Build Runtime Base Image (push) Successful in 10s
🚀 Build & Deploy Image / Run Tests & Quality Checks (push) Has been skipped
Security Vulnerability Scan / Composer Security Audit (push) Has been skipped
🚀 Build & Deploy Image / Build Docker Image (push) Successful in 12s
🚀 Build & Deploy Image / Auto-deploy to Staging (push) Failing after 48s
🚀 Build & Deploy Image / Auto-deploy to Production (push) Has been skipped
- Use single set_fact task with ternary operator
- Cleaner and more efficient than multiple conditional tasks
2025-11-08 15:04:20 +01:00
163460c22e fix: Use separate variable git_repo_url to avoid recursive loop
Some checks failed
Security Vulnerability Scan / Check for Dependency Changes (push) Has been cancelled
Security Vulnerability Scan / Composer Security Audit (push) Has been cancelled
🚀 Build & Deploy Image / Build Runtime Base Image (push) Has been cancelled
🚀 Build & Deploy Image / Run Tests & Quality Checks (push) Has been cancelled
🚀 Build & Deploy Image / Build Docker Image (push) Has been cancelled
🚀 Build & Deploy Image / Auto-deploy to Staging (push) Has been cancelled
🚀 Build & Deploy Image / Auto-deploy to Production (push) Has been cancelled
🚀 Build & Deploy Image / Determine Build Necessity (push) Has been cancelled
- Use git_repo_url instead of git_repository_url in tasks
- Set git_repo_url based on whether git_repository_url is provided
- This completely avoids the recursive loop issue
2025-11-08 15:04:04 +01:00
0ab3b6a799 fix: Set git_repository_url using set_fact to avoid recursive loop
Some checks failed
🚀 Build & Deploy Image / Determine Build Necessity (push) Successful in 33s
🚀 Build & Deploy Image / Build Docker Image (push) Has been cancelled
🚀 Build & Deploy Image / Auto-deploy to Staging (push) Has been cancelled
🚀 Build & Deploy Image / Auto-deploy to Production (push) Has been cancelled
🚀 Build & Deploy Image / Build Runtime Base Image (push) Has been cancelled
🚀 Build & Deploy Image / Run Tests & Quality Checks (push) Has been cancelled
Security Vulnerability Scan / Composer Security Audit (push) Has been cancelled
Security Vulnerability Scan / Check for Dependency Changes (push) Has been cancelled
- Use set_fact task to set git_repository_url instead of vars section
- This prevents recursive loop when variable is referenced in tasks
2025-11-08 15:03:09 +01:00
acecc23cec fix: Resolve recursive loop in git_repository_url variable
Some checks failed
🚀 Build & Deploy Image / Determine Build Necessity (push) Successful in 57s
🚀 Build & Deploy Image / Run Tests & Quality Checks (push) Has been skipped
Security Vulnerability Scan / Composer Security Audit (push) Has been skipped
🚀 Build & Deploy Image / Build Docker Image (push) Successful in 11s
🚀 Build & Deploy Image / Auto-deploy to Staging (push) Failing after 1m7s
🚀 Build & Deploy Image / Auto-deploy to Production (push) Has been skipped
Security Vulnerability Scan / Check for Dependency Changes (push) Successful in 25s
🚀 Build & Deploy Image / Build Runtime Base Image (push) Successful in 10s
- Change git_repository_url to use git_repository_url_default instead of self-reference
- Fixes 'Recursive loop detected in template' error in Ansible playbook
2025-11-08 14:59:16 +01:00
95b53c0ab8 test: Verify workflow with ANSIBLE_VAULT_PASSWORD secret
Some checks failed
🚀 Build & Deploy Image / Determine Build Necessity (push) Successful in 25s
Security Vulnerability Scan / Check for Dependency Changes (push) Successful in 26s
🚀 Build & Deploy Image / Build Runtime Base Image (push) Successful in 14s
🚀 Build & Deploy Image / Run Tests & Quality Checks (push) Has been skipped
Security Vulnerability Scan / Composer Security Audit (push) Has been skipped
🚀 Build & Deploy Image / Build Docker Image (push) Successful in 11s
🚀 Build & Deploy Image / Auto-deploy to Staging (push) Failing after 44s
🚀 Build & Deploy Image / Auto-deploy to Production (push) Has been skipped
- Test commit to verify that workflow can now:
  - Use php-ci image with Ansible
  - Use ANSIBLE_VAULT_PASSWORD secret for vault decryption
  - Successfully deploy to staging
2025-11-08 14:56:35 +01:00
13d627d351 fix: Use php-ci runner for deploy jobs
Some checks failed
🚀 Build & Deploy Image / Determine Build Necessity (push) Successful in 57s
Security Vulnerability Scan / Check for Dependency Changes (push) Successful in 52s
🚀 Build & Deploy Image / Build Runtime Base Image (push) Successful in 10s
🚀 Build & Deploy Image / Run Tests & Quality Checks (push) Has been skipped
Security Vulnerability Scan / Composer Security Audit (push) Has been skipped
🚀 Build & Deploy Image / Build Docker Image (push) Successful in 15s
🚀 Build & Deploy Image / Auto-deploy to Staging (push) Failing after 40s
🚀 Build & Deploy Image / Auto-deploy to Production (push) Has been skipped
- Change deploy-staging and deploy-production to use php-ci runner
- php-ci image has Ansible pre-installed, fixing 'ansible-playbook: command not found' error
2025-11-08 14:50:25 +01:00
deddb87dcf test: Trigger workflow by changing deployment path
Some checks failed
Security Vulnerability Scan / Check for Dependency Changes (push) Successful in 50s
🚀 Build & Deploy Image / Run Tests & Quality Checks (push) Has been skipped
Security Vulnerability Scan / Composer Security Audit (push) Has been skipped
🚀 Build & Deploy Image / Determine Build Necessity (push) Successful in 26s
🚀 Build & Deploy Image / Build Runtime Base Image (push) Successful in 10s
🚀 Build & Deploy Image / Build Docker Image (push) Successful in 14s
🚀 Build & Deploy Image / Auto-deploy to Staging (push) Failing after 41s
🚀 Build & Deploy Image / Auto-deploy to Production (push) Has been skipped
This commit changes a file in deployment/ to trigger the build workflow
2025-11-08 14:46:22 +01:00
f498a13ee1 test: Verify workflow with registry secrets
All checks were successful
Security Vulnerability Scan / Check for Dependency Changes (push) Successful in 31s
Security Vulnerability Scan / Composer Security Audit (push) Has been skipped
- Test commit to verify that workflow can now:
  - Use php-ci image from docker-dind
  - Login to registry with configured secrets
  - Build and push images successfully
2025-11-08 14:44:15 +01:00
efa97f8b5d fix: Build CI images on production server
Some checks failed
🚀 Build & Deploy Image / Determine Build Necessity (push) Successful in 33s
Security Vulnerability Scan / Check for Dependency Changes (push) Successful in 39s
🚀 Build & Deploy Image / Build Runtime Base Image (push) Successful in 17s
🚀 Build & Deploy Image / Run Tests & Quality Checks (push) Successful in 1m15s
Security Vulnerability Scan / Composer Security Audit (push) Has been skipped
🚀 Build & Deploy Image / Build Docker Image (push) Failing after 33s
🚀 Build & Deploy Image / Auto-deploy to Staging (push) Has been skipped
🚀 Build & Deploy Image / Auto-deploy to Production (push) Has been skipped
- Add build-ci-image-production.sh script for building CI images on production
- Add BUILD_ON_PRODUCTION.md documentation
- Fix Dockerfile to handle optional PECL extensions for PHP 8.5 RC

This fixes the issue where Gitea workflows fail with:
'Error response from daemon: pull access denied for php-ci'
2025-11-08 14:33:59 +01:00
07e92a8709 fix: Install Ansible in docker-build image instead of runtime
Some checks failed
🚀 Build & Deploy Image / Determine Build Necessity (push) Successful in 44s
🚀 Build & Deploy Image / Build Runtime Base Image (push) Successful in 18s
🚀 Build & Deploy Image / Run Tests & Quality Checks (push) Failing after 2s
Security Vulnerability Scan / Composer Security Audit (push) Has been skipped
🚀 Build & Deploy Image / Build Docker Image (push) Failing after 44s
Security Vulnerability Scan / Check for Dependency Changes (push) Successful in 40s
🚀 Build & Deploy Image / Auto-deploy to Staging (push) Has been skipped
🚀 Build & Deploy Image / Auto-deploy to Production (push) Has been skipped
2025-11-08 13:54:01 +01:00
e9e87c9c5e fix: Replace apt-get with apk for Alpine-based docker-build container
Some checks failed
Security Vulnerability Scan / Check for Dependency Changes (push) Successful in 1m1s
🚀 Build & Deploy Image / Determine Build Necessity (push) Successful in 28s
🚀 Build & Deploy Image / Build Runtime Base Image (push) Successful in 10s
🚀 Build & Deploy Image / Run Tests & Quality Checks (push) Has been skipped
Security Vulnerability Scan / Composer Security Audit (push) Has been skipped
🚀 Build & Deploy Image / Build Docker Image (push) Successful in 16s
🚀 Build & Deploy Image / Auto-deploy to Staging (push) Failing after 38s
🚀 Build & Deploy Image / Auto-deploy to Production (push) Has been skipped
2025-11-08 13:45:18 +01:00
1b9cda6dd3 docs: Add CI image setup documentation
Some checks failed
🚀 Build & Deploy Image / Auto-deploy to Staging (push) Failing after 40s
🚀 Build & Deploy Image / Auto-deploy to Production (push) Has been skipped
🚀 Build & Deploy Image / Determine Build Necessity (push) Successful in 46s
Security Vulnerability Scan / Check for Dependency Changes (push) Successful in 1m0s
🚀 Build & Deploy Image / Build Runtime Base Image (push) Successful in 11s
🚀 Build & Deploy Image / Run Tests & Quality Checks (push) Has been skipped
Security Vulnerability Scan / Composer Security Audit (push) Has been skipped
🚀 Build & Deploy Image / Build Docker Image (push) Successful in 12s
2025-11-08 13:38:46 +01:00
50e58c6ba9 docs: Update deployment status - Gitea repository created, Traefik issues fixed
Some checks failed
Security Vulnerability Scan / Check for Dependency Changes (push) Successful in 34s
Security Vulnerability Scan / Composer Security Audit (push) Has been skipped
🚀 Build & Deploy Image / Auto-deploy to Production (push) Has been skipped
🚀 Build & Deploy Image / Build Runtime Base Image (push) Has been cancelled
🚀 Build & Deploy Image / Build Docker Image (push) Has been cancelled
🚀 Build & Deploy Image / Auto-deploy to Staging (push) Has been cancelled
🚀 Build & Deploy Image / Run Tests & Quality Checks (push) Has been cancelled
🚀 Build & Deploy Image / Determine Build Necessity (push) Has been cancelled
2025-11-08 13:13:42 +01:00
63799a7655 test: CI/CD pipeline production test 2025-11-08 12:15:36 +01:00
7093693cfb test: CI/CD pipeline staging test 2025-11-08 11:16:01 +01:00
9e77ac3b42 feat(traefik): Add Gitea service definition for Traefik
- Add gitea-service.yml with proper timeout configuration
- Service definition required for Traefik to route to Gitea
- Replaces old gitea.yml file that was removed
2025-11-07 23:24:20 +01:00
e8a26d7807 test: CI/CD pipeline staging test - Redis aktiviert, Bad Gateway dokumentiert 2025-11-07 20:54:44 +01:00
c088d08639 test: CI/CD pipeline staging test - Repository Setup automatisiert 2025-11-07 20:17:35 +01:00
07c054b5ff test: CI/CD pipeline staging test 2025-11-07 19:52:48 +01:00
1963b10749 feat: Integrate Ansible playbooks into CI/CD workflows
- Add deploy-application-code.yml for Git-based code deployment
- Add install-composer-dependencies.yml for dependency installation
- Add deploy-image.yml for Docker image deployment
- Update build-image.yml to use Ansible playbooks
- Update manual-deploy.yml to use Ansible playbooks
- Add ANSIBLE_VAULT_PASSWORD secret handling
2025-11-07 18:14:11 +01:00
cf903f2582 fix(traefik): update local dev ports and gitea SSH IP
- Change Traefik local HTTP port from 8080 to 8081 (conflict with cadvisor)
- Change Traefik dashboard port to 8093 (conflicts with cadvisor, Hyperion)
- Update Gitea SSH service IP from 172.23.0.2 to 172.23.0.3
- Note: Gitea SSH works directly via Docker port mapping in local dev
- Traefik TCP routing only needed for production (host network mode)
2025-11-05 14:51:37 +01:00
95147ff23e refactor(deployment): Remove WireGuard VPN dependency and restore public service access
Remove WireGuard integration from production deployment to simplify infrastructure:
- Remove docker-compose-direct-access.yml (VPN-bound services)
- Remove VPN-only middlewares from Grafana, Prometheus, Portainer
- Remove WireGuard middleware definitions from Traefik
- Remove WireGuard IPs (10.8.0.0/24) from Traefik forwarded headers

All monitoring services now publicly accessible via subdomains:
- grafana.michaelschiemer.de (with Grafana native auth)
- prometheus.michaelschiemer.de (with Basic Auth)
- portainer.michaelschiemer.de (with Portainer native auth)

All services use Let's Encrypt SSL certificates via Traefik.
2025-11-05 12:48:25 +01:00
2760 changed files with 248807 additions and 38653 deletions

27
.gitattributes vendored Normal file
View File

@@ -0,0 +1,27 @@
# Ensure shell scripts use LF line endings
*.sh text eol=lf
docker/**/*.sh text eol=lf
**/*.sh text eol=lf
# PHP files
*.php text eol=lf
# Configuration files
*.yml text eol=lf
*.yaml text eol=lf
*.json text eol=lf
*.conf text eol=lf
*.ini text eol=lf
# Docker files
Dockerfile* text eol=lf
docker-compose*.yml text eol=lf
*.dockerfile text eol=lf
# Scripts
*.bash text eol=lf
*.zsh text eol=lf
# Default for text files
* text=auto

View File

@@ -0,0 +1,73 @@
name: Deploy Application
on:
push:
branches:
- staging
- main
workflow_dispatch:
jobs:
deploy:
runs-on: php-ci
steps:
# Manual checkout - works without Node.js
- name: Checkout code
run: |
echo "📥 Checking out repository..."
if [ -d ".git" ]; then
git fetch origin
git checkout ${{ github.ref_name }}
git reset --hard origin/${{ github.ref_name }}
else
git clone --branch ${{ github.ref_name }} --single-branch ${{ github.server_url }}/${{ github.repository }}.git .
fi
- name: Determine environment
id: env
run: |
if [ "${{ github.ref }}" == "refs/heads/staging" ]; then
echo "environment=staging" >> $GITHUB_OUTPUT
elif [ "${{ github.ref }}" == "refs/heads/main" ]; then
echo "environment=production" >> $GITHUB_OUTPUT
else
echo "environment=staging" >> $GITHUB_OUTPUT
fi
- name: Deploy to server
env:
SSH_PRIVATE_KEY: ${{ secrets.SSH_PRIVATE_KEY }}
run: |
# Validate required secret
if [ -z "$SSH_PRIVATE_KEY" ]; then
echo "❌ Missing required secret: SSH_PRIVATE_KEY"
exit 1
fi
echo "$SSH_PRIVATE_KEY" > /tmp/ssh_key
chmod 600 /tmp/ssh_key
ssh -i /tmp/ssh_key -o StrictHostKeyChecking=no deploy@94.16.110.151 << EOF
set -e
cd /home/deploy/michaelschiemer/current
# Pull latest code
git fetch origin ${{ github.ref_name }}
git reset --hard origin/${{ github.ref_name }}
# Run deployment script with image build
./deployment/scripts/deploy.sh ${{ steps.env.outputs.environment }} build
EOF
rm -f /tmp/ssh_key
- name: Deployment status
if: always()
run: |
if [ "${{ job.status }}" == "success" ]; then
echo "✅ Deployment successful"
else
echo "❌ Deployment failed"
exit 1
fi

View File

@@ -1,456 +0,0 @@
name: 🚀 Manual Deployment
run-name: Manual Deploy - ${{ inputs.environment }} - ${{ inputs.image_tag || 'latest' }}
on:
workflow_dispatch:
inputs:
environment:
description: 'Deployment environment'
required: true
type: choice
options:
- staging
- production
image_tag:
description: 'Image tag to deploy (e.g. abc1234-1696234567, git-abc1234). Leave empty for latest'
required: false
type: string
default: ''
branch:
description: 'Branch to checkout (default: main for production, staging for staging)'
required: false
type: string
default: ''
env:
REGISTRY: registry.michaelschiemer.de
IMAGE_NAME: framework
DEPLOYMENT_HOST: 94.16.110.151
jobs:
determine-image:
name: Determine Deployment Image
runs-on: ubuntu-latest
outputs:
image_url: ${{ steps.image.outputs.image_url }}
image_tag: ${{ steps.image.outputs.image_tag }}
registry_host: ${{ env.REGISTRY }}
image_name: ${{ env.IMAGE_NAME }}
steps:
- name: Determine image to deploy
id: image
shell: bash
run: |
REGISTRY="${{ env.REGISTRY }}"
IMAGE_NAME="${{ env.IMAGE_NAME }}"
INPUT_TAG="${{ inputs.image_tag }}"
if [ -z "$INPUT_TAG" ] || [ "$INPUT_TAG" = "" ]; then
IMAGE_TAG="latest"
else
IMAGE_TAG="$INPUT_TAG"
fi
IMAGE_URL="${REGISTRY}/${IMAGE_NAME}:${IMAGE_TAG}"
echo "image_url=${IMAGE_URL}" >> "$GITHUB_OUTPUT"
echo "image_tag=${IMAGE_TAG}" >> "$GITHUB_OUTPUT"
echo "📦 Deployment Image:"
echo " URL: ${IMAGE_URL}"
echo " Tag: ${IMAGE_TAG}"
echo ""
echo " Image will be validated during deployment"
deploy-staging:
name: Deploy to Staging
needs: determine-image
if: inputs.environment == 'staging'
runs-on: ubuntu-latest
environment:
name: staging
url: https://staging.michaelschiemer.de
steps:
- name: Determine branch name
id: branch
shell: bash
run: |
INPUT_BRANCH="${{ inputs.branch }}"
if [ -z "$INPUT_BRANCH" ] || [ "$INPUT_BRANCH" = "" ]; then
REF_NAME="staging"
else
REF_NAME="$INPUT_BRANCH"
fi
echo "BRANCH=$REF_NAME" >> $GITHUB_OUTPUT
echo "📋 Branch: $REF_NAME"
- name: Checkout deployment scripts
run: |
REF_NAME="${{ steps.branch.outputs.BRANCH }}"
REPO="${{ github.repository }}"
if [ -n "${{ secrets.CI_TOKEN }}" ]; then
git clone --depth 1 --branch "$REF_NAME" \
"https://${{ secrets.CI_TOKEN }}@git.michaelschiemer.de/${REPO}.git" \
/workspace/repo
else
git clone --depth 1 --branch "$REF_NAME" \
"https://git.michaelschiemer.de/${REPO}.git" \
/workspace/repo || \
git clone --depth 1 \
"https://git.michaelschiemer.de/${REPO}.git" \
/workspace/repo
fi
cd /workspace/repo
- name: Setup SSH key
run: |
mkdir -p ~/.ssh
echo "${{ secrets.SSH_PRIVATE_KEY }}" > ~/.ssh/production
chmod 600 ~/.ssh/production
ssh-keyscan -H ${{ env.DEPLOYMENT_HOST }} >> ~/.ssh/known_hosts
- name: Deploy to Staging Server
run: |
set -e
DEPLOYMENT_HOST="${{ env.DEPLOYMENT_HOST }}"
REGISTRY_HOST="${{ needs.determine-image.outputs.registry_host }}"
IMAGE_NAME="${{ needs.determine-image.outputs.image_name }}"
DEPLOY_IMAGE="${{ needs.determine-image.outputs.image_url }}"
IMAGE_TAG="${{ needs.determine-image.outputs.image_tag }}"
DEFAULT_IMAGE="${REGISTRY_HOST}/${IMAGE_NAME}:latest"
FALLBACK_IMAGE="$DEFAULT_IMAGE"
SELECTED_IMAGE="$DEPLOY_IMAGE"
if [ -z "$SELECTED_IMAGE" ] || [ "$SELECTED_IMAGE" = "null" ]; then
SELECTED_IMAGE="$DEFAULT_IMAGE"
fi
STACK_PATH_DISPLAY="~/deployment/stacks/staging"
SELECTED_TAG="${SELECTED_IMAGE##*:}"
SELECTED_REPO="${SELECTED_IMAGE%:*}"
if [ -z "$SELECTED_REPO" ] || [ "$SELECTED_REPO" = "$SELECTED_IMAGE" ]; then
FALLBACK_IMAGE="$DEFAULT_IMAGE"
else
FALLBACK_IMAGE="${SELECTED_REPO}:latest"
fi
echo "🚀 Starting staging deployment..."
echo " Image: ${SELECTED_IMAGE}"
echo " Tag: ${SELECTED_TAG}"
echo " Host: ${DEPLOYMENT_HOST}"
echo " Stack: ${STACK_PATH_DISPLAY}"
FULL_IMAGE_ARG=$(printf '%q' "$SELECTED_IMAGE")
FALLBACK_IMAGE_ARG=$(printf '%q' "$FALLBACK_IMAGE")
IMAGE_NAME_ARG=$(printf '%q' "$IMAGE_NAME")
REGISTRY_ARG=$(printf '%q' "$REGISTRY_HOST")
ssh -i ~/.ssh/production \
-o StrictHostKeyChecking=no \
-o UserKnownHostsFile=/dev/null \
deploy@${DEPLOYMENT_HOST} "bash -s -- $FULL_IMAGE_ARG $FALLBACK_IMAGE_ARG $IMAGE_NAME_ARG $REGISTRY_ARG" <<'EOF'
set -e
FULL_IMAGE="$1"
FALLBACK_IMAGE="$2"
IMAGE_NAME="$3"
REGISTRY="$4"
shift 4
CURRENT_USER="$(whoami)"
USER_HOME="$(getent passwd "$CURRENT_USER" | cut -d: -f6 2>/dev/null)"
[ -z "$USER_HOME" ] && USER_HOME="$HOME"
[ -z "$USER_HOME" ] && USER_HOME="/home/$CURRENT_USER"
STACK_TARGET="${USER_HOME}/deployment/stacks/staging"
# Ensure staging stack directory exists
mkdir -p "${STACK_TARGET}"
cd "${STACK_TARGET}"
declare -a REGISTRY_TARGETS=()
if [ -n "${REGISTRY}" ]; then
REGISTRY_TARGETS+=("${REGISTRY}")
fi
for IMAGE_REF in "${FULL_IMAGE}" "${FALLBACK_IMAGE}"; do
if [ -n "${IMAGE_REF}" ]; then
HOST_PART="${IMAGE_REF%%/*}"
if [ -n "${HOST_PART}" ]; then
if ! printf '%s\n' "${REGISTRY_TARGETS[@]}" | grep -qx "${HOST_PART}"; then
REGISTRY_TARGETS+=("${HOST_PART}")
fi
fi
fi
done
for TARGET in "${REGISTRY_TARGETS[@]}"; do
[ -z "${TARGET}" ] && continue
echo "🔐 Logging in to Docker registry ${TARGET}..."
echo "${{ secrets.REGISTRY_PASSWORD }}" | docker login "${TARGET}" \
-u "${{ secrets.REGISTRY_USER }}" \
--password-stdin || echo "⚠️ Registry login failed for ${TARGET}, continuing..."
done
DEPLOY_IMAGE="$FULL_IMAGE"
echo "📥 Pulling image ${DEPLOY_IMAGE}..."
if ! docker pull "${DEPLOY_IMAGE}"; then
if [ -n "${FALLBACK_IMAGE}" ] && [ "${DEPLOY_IMAGE}" != "${FALLBACK_IMAGE}" ]; then
echo "⚠️ Failed to pull ${DEPLOY_IMAGE}, attempting fallback ${FALLBACK_IMAGE}"
if docker pull "${FALLBACK_IMAGE}"; then
DEPLOY_IMAGE="${FALLBACK_IMAGE}"
echo " Using fallback image ${DEPLOY_IMAGE}"
else
echo "❌ Failed to pull fallback image ${FALLBACK_IMAGE}"
exit 1
fi
else
echo "❌ Failed to pull image ${DEPLOY_IMAGE}"
exit 1
fi
fi
# Copy base and staging docker-compose files if they don't exist
if [ ! -f docker-compose.base.yml ]; then
echo "⚠️ docker-compose.base.yml not found, copying from repo..."
cp /workspace/repo/docker-compose.base.yml . || {
echo "❌ Failed to copy docker-compose.base.yml"
exit 1
}
fi
if [ ! -f docker-compose.staging.yml ]; then
echo "⚠️ docker-compose.staging.yml not found, copying from repo..."
cp /workspace/repo/docker-compose.staging.yml . || {
echo "❌ Failed to copy docker-compose.staging.yml"
exit 1
}
fi
# Update docker-compose.staging.yml with new image tag
echo "📝 Updating docker-compose.staging.yml with new image tag..."
sed -i "s|image:.*/${IMAGE_NAME}:.*|image: ${DEPLOY_IMAGE}|g" docker-compose.staging.yml
echo "✅ Updated docker-compose.staging.yml:"
grep "image:" docker-compose.staging.yml | head -5
# Ensure networks exist
echo "🔗 Ensuring Docker networks exist..."
docker network create traefik-public 2>/dev/null || true
docker network create staging-internal 2>/dev/null || true
echo "🔄 Starting/updating services..."
# Use --pull missing instead of --pull always since we already pulled the specific image
docker compose -f docker-compose.base.yml -f docker-compose.staging.yml up -d --pull missing --force-recreate || {
echo "❌ Failed to start services"
exit 1
}
echo "⏳ Waiting for services to start..."
sleep 15
# Pull latest code from Git repository
echo "🔄 Pulling latest code from Git repository in staging-app container..."
docker compose -f docker-compose.base.yml -f docker-compose.staging.yml exec -T staging-app bash -c "cd /var/www/html && git -c safe.directory=/var/www/html fetch origin staging && git -c safe.directory=/var/www/html reset --hard origin/staging && git -c safe.directory=/var/www/html clean -fd" || echo "⚠️ Git pull failed, container will sync on next restart"
# Also trigger a restart to ensure entrypoint script runs
echo "🔄 Restarting staging-app to ensure all services are up-to-date..."
docker compose -f docker-compose.base.yml -f docker-compose.staging.yml restart staging-app || echo "⚠️ Failed to restart staging-app"
# Fix nginx upstream configuration - critical fix for 502 errors
# sites-available/default uses 127.0.0.1:9000 but PHP-FPM runs in staging-app container
echo "🔧 Fixing nginx PHP-FPM upstream configuration (post-deploy fix)..."
sleep 5
docker compose -f docker-compose.base.yml -f docker-compose.staging.yml exec -T staging-nginx sed -i '/upstream php-upstream {/,/}/s|server 127.0.0.1:9000;|server staging-app:9000;|g' /etc/nginx/sites-available/default || echo "⚠️ Upstream fix (127.0.0.1) failed"
docker compose -f docker-compose.base.yml -f docker-compose.staging.yml exec -T staging-nginx sed -i '/upstream php-upstream {/,/}/s|server localhost:9000;|server staging-app:9000;|g' /etc/nginx/sites-available/default || echo "⚠️ Upstream fix (localhost) failed"
docker compose -f docker-compose.base.yml -f docker-compose.staging.yml exec -T staging-nginx nginx -t && docker compose -f docker-compose.base.yml -f docker-compose.staging.yml restart staging-nginx || echo "⚠️ Nginx config test or restart failed"
echo "✅ Nginx configuration fixed and reloaded"
echo "⏳ Waiting for services to stabilize..."
sleep 10
echo "📊 Container status:"
docker compose -f docker-compose.base.yml -f docker-compose.staging.yml ps
echo "✅ Staging deployment completed!"
EOF
- name: Wait for deployment to stabilize
run: sleep 30
- name: Health check
id: health
run: |
for i in {1..10}; do
if curl -f -k https://staging.michaelschiemer.de/health; then
echo "✅ Health check passed"
exit 0
fi
echo "⏳ Waiting for staging service... (attempt $i/10)"
sleep 10
done
echo "❌ Health check failed"
exit 1
- name: Notify deployment success
if: success()
run: |
echo "🚀 Staging deployment successful!"
echo "URL: https://staging.michaelschiemer.de"
echo "Image: ${{ needs.determine-image.outputs.image_url }}"
deploy-production:
name: Deploy to Production
needs: determine-image
if: inputs.environment == 'production'
runs-on: ubuntu-latest
environment:
name: production
url: https://michaelschiemer.de
steps:
- name: Determine branch name
id: branch
shell: bash
run: |
INPUT_BRANCH="${{ inputs.branch }}"
if [ -z "$INPUT_BRANCH" ] || [ "$INPUT_BRANCH" = "" ]; then
REF_NAME="main"
else
REF_NAME="$INPUT_BRANCH"
fi
echo "BRANCH=$REF_NAME" >> $GITHUB_OUTPUT
echo "📋 Branch: $REF_NAME"
- name: Checkout deployment scripts
run: |
REF_NAME="${{ steps.branch.outputs.BRANCH }}"
REPO="${{ github.repository }}"
if [ -n "${{ secrets.CI_TOKEN }}" ]; then
git clone --depth 1 --branch "$REF_NAME" \
"https://${{ secrets.CI_TOKEN }}@git.michaelschiemer.de/${REPO}.git" \
/workspace/repo
else
git clone --depth 1 --branch "$REF_NAME" \
"https://git.michaelschiemer.de/${REPO}.git" \
/workspace/repo || \
git clone --depth 1 \
"https://git.michaelschiemer.de/${REPO}.git" \
/workspace/repo
fi
cd /workspace/repo
- name: Setup SSH key
run: |
mkdir -p ~/.ssh
echo "${{ secrets.SSH_PRIVATE_KEY }}" > ~/.ssh/production
chmod 600 ~/.ssh/production
ssh-keyscan -H ${{ env.DEPLOYMENT_HOST }} >> ~/.ssh/known_hosts
- name: Deploy to Production Server
run: |
set -e
DEPLOYMENT_HOST="${{ env.DEPLOYMENT_HOST }}"
REGISTRY="${{ needs.determine-image.outputs.registry_host }}"
IMAGE_NAME="${{ needs.determine-image.outputs.image_name }}"
IMAGE_TAG="${{ needs.determine-image.outputs.image_tag }}"
FULL_IMAGE="${REGISTRY}/${IMAGE_NAME}:${IMAGE_TAG}"
STACK_PATH="~/deployment/stacks/application"
echo "🚀 Starting production deployment..."
echo " Image: ${FULL_IMAGE}"
echo " Tag: ${IMAGE_TAG}"
echo " Host: ${DEPLOYMENT_HOST}"
echo " Stack: ${STACK_PATH}"
ssh -i ~/.ssh/production \
-o StrictHostKeyChecking=no \
-o UserKnownHostsFile=/dev/null \
deploy@${DEPLOYMENT_HOST} <<EOF
set -e
cd ${STACK_PATH}
echo "🔐 Logging in to Docker registry..."
echo "${{ secrets.REGISTRY_PASSWORD }}" | docker login ${REGISTRY} \
-u "${{ secrets.REGISTRY_USER }}" \
--password-stdin || echo "⚠️ Registry login failed, continuing..."
echo "📥 Pulling image ${FULL_IMAGE}..."
docker pull ${FULL_IMAGE} || {
echo "❌ Failed to pull image ${FULL_IMAGE}"
exit 1
}
# Copy base and production docker-compose files if they don't exist
if [ ! -f docker-compose.base.yml ]; then
echo "⚠️ docker-compose.base.yml not found, copying from repo..."
cp /workspace/repo/docker-compose.base.yml . || {
echo "❌ Failed to copy docker-compose.base.yml"
exit 1
}
fi
if [ ! -f docker-compose.production.yml ]; then
echo "⚠️ docker-compose.production.yml not found, copying from repo..."
cp /workspace/repo/docker-compose.production.yml . || {
echo "❌ Failed to copy docker-compose.production.yml"
exit 1
}
fi
echo "📝 Updating docker-compose.production.yml with new image tag..."
sed -i "s|image:.*/${IMAGE_NAME}:.*|image: ${FULL_IMAGE}|g" docker-compose.production.yml
sed -i "s|image:.*/${IMAGE_NAME}@.*|image: ${FULL_IMAGE}|g" docker-compose.production.yml
echo "✅ Updated docker-compose.production.yml:"
grep "image:" docker-compose.production.yml | head -5
echo "🔄 Restarting services..."
# Use --pull missing instead of --pull always since we already pulled the specific image
docker compose -f docker-compose.base.yml -f docker-compose.production.yml up -d --pull missing --force-recreate || {
echo "❌ Failed to restart services"
exit 1
}
echo "⏳ Waiting for services to start..."
sleep 10
echo "📊 Container status:"
docker compose -f docker-compose.base.yml -f docker-compose.production.yml ps
echo "✅ Production deployment completed!"
EOF
- name: Wait for deployment to stabilize
run: sleep 30
- name: Health check
id: health
run: |
for i in {1..10}; do
if curl -f -k https://michaelschiemer.de/health; then
echo "✅ Health check passed"
exit 0
fi
echo "⏳ Waiting for production service... (attempt $i/10)"
sleep 10
done
echo "❌ Health check failed"
exit 1
- name: Notify deployment success
if: success()
run: |
echo "🚀 Production deployment successful!"
echo "URL: https://michaelschiemer.de"
echo "Image: ${{ needs.determine-image.outputs.image_url }}"

View File

@@ -0,0 +1,57 @@
name: Test Runner
on:
workflow_dispatch:
push:
branches:
- staging
- main
jobs:
test-basic:
runs-on: self-hosted
steps:
# Manual checkout - works without Node.js
- name: Checkout code
run: |
echo "📥 Checking out repository..."
if [ -d ".git" ]; then
git fetch origin
git checkout ${{ github.ref_name }}
git reset --hard origin/${{ github.ref_name }}
else
git clone --branch ${{ github.ref_name }} --single-branch ${{ github.server_url }}/${{ github.repository }}.git .
fi
- name: Test basic runner
run: |
echo "✅ Runner is working!"
echo "Runner OS: $(uname -a)"
echo "Docker version: $(docker --version || echo 'Docker not available')"
echo "Current directory: $(pwd)"
echo "Git branch: $(git rev-parse --abbrev-ref HEAD)"
echo "Git commit: $(git rev-parse --short HEAD)"
test-php:
runs-on: php-ci
steps:
# Manual checkout - works without Node.js
- name: Checkout code
run: |
echo "📥 Checking out repository..."
if [ -d ".git" ]; then
git fetch origin
git checkout ${{ github.ref_name }}
git reset --hard origin/${{ github.ref_name }}
else
git clone --branch ${{ github.ref_name }} --single-branch ${{ github.server_url }}/${{ github.repository }}.git .
fi
- name: Test PHP environment
run: |
echo "✅ PHP Runner is working!"
php -v
composer --version
echo "PHP Extensions:"
php -m | grep -E "(pdo|redis|zip|gd|mbstring)" || echo "Some extensions not found"

15
.gitignore vendored
View File

@@ -1,5 +1,6 @@
# Editor / IDE
.idea/
.vscode/
# System
.DS_Store
@@ -68,5 +69,19 @@ secrets/*.txt
!secrets/*.example
deployment/ansible/secrets/local.vault.yml
# Infrastructure secrets
deployment/infrastructure/*/secrets/
# Application deployment secrets
deployment/secrets/
# Ansible Vault password files and generated password backups
deployment/ansible/secrets/.vault_pass
deployment/ansible/secrets/.vault-passwords.txt
deployment/ansible/secrets/production.vault.yml
# SSL/TLS certificates
**/acme.json
# Documentation backups
backups/docs-backup-*/

View File

@@ -1 +1 @@
{"php":"8.4.14","version":"3.89.0:v3.89.0#4dd6768cb7558440d27d18f54909eee417317ce9","indent":" ","lineEnding":"\n","rules":{"binary_operator_spaces":true,"blank_line_after_opening_tag":true,"blank_line_between_import_groups":true,"blank_lines_before_namespace":true,"braces_position":{"allow_single_line_empty_anonymous_classes":true},"class_definition":{"inline_constructor_arguments":false,"space_before_parenthesis":true},"compact_nullable_type_declaration":true,"declare_equal_normalize":true,"lowercase_cast":true,"lowercase_static_reference":true,"modifier_keywords":true,"new_with_parentheses":{"anonymous_class":true},"no_blank_lines_after_class_opening":true,"no_extra_blank_lines":{"tokens":["use"]},"no_leading_import_slash":true,"no_whitespace_in_blank_line":true,"ordered_class_elements":{"order":["use_trait"]},"ordered_imports":{"sort_algorithm":"alpha"},"return_type_declaration":true,"short_scalar_cast":true,"single_import_per_statement":{"group_to_single_imports":false},"single_space_around_construct":{"constructs_followed_by_a_single_space":["abstract","as","case","catch","class","const_import","do","else","elseif","final","finally","for","foreach","function","function_import","if","insteadof","interface","namespace","new","private","protected","public","static","switch","trait","try","use","use_lambda","while"],"constructs_preceded_by_a_single_space":["as","else","elseif","use_lambda"]},"single_trait_insert_per_statement":true,"ternary_operator_spaces":true,"unary_operator_spaces":true,"blank_line_after_namespace":true,"constant_case":true,"control_structure_braces":true,"control_structure_continuation_position":true,"elseif":true,"function_declaration":{"closure_fn_spacing":"one"},"indentation_type":true,"line_ending":true,"lowercase_keywords":true,"method_argument_space":{"on_multiline":"ensure_fully_multiline","keep_multiple_spaces_after_comma":true},"no_break_comment":true,"no_closing_tag":true,"no_multiple_statements_per_line":true,"no_space_around_double_colon":true,"no_spaces_after_function_name":true,"no_trailing_whitespace":true,"no_trailing_whitespace_in_comment":true,"single_blank_line_at_eof":true,"single_class_element_per_statement":{"elements":["property"]},"single_line_after_imports":true,"spaces_inside_parentheses":true,"statement_indentation":true,"switch_case_semicolon_to_colon":true,"switch_case_space":true,"encoding":true,"full_opening_tag":true,"array_syntax":{"syntax":"short"},"no_unused_imports":true,"not_operator_with_successor_space":true,"trailing_comma_in_multiline":true,"phpdoc_scalar":true,"blank_line_before_statement":{"statements":["break","continue","declare","return","throw","try"]},"phpdoc_single_line_var_spacing":true,"phpdoc_var_without_name":true,"class_attributes_separation":{"elements":{"method":"one","property":"one"}},"declare_strict_types":true},"hashes":{"src\/Framework\/UserAgent\/ValueObjects\/DeviceCategory.php":"ea8bf0dd6f03932e1622b5b2ed5751fe","src\/Framework\/UserAgent\/ParsedUserAgent.php":"65db6417a82fdc55a818ad96f0fb2ed5","src\/Framework\/UserAgent\/UserAgentParser.php":"0ae01d1b91d851c653087cae6f33bc62"}}
{"php":"8.5.0RC3","version":"3.89.0:v3.89.0#4dd6768cb7558440d27d18f54909eee417317ce9","indent":" ","lineEnding":"\n","rules":{"binary_operator_spaces":true,"blank_line_after_opening_tag":true,"blank_line_between_import_groups":true,"blank_lines_before_namespace":true,"braces_position":{"allow_single_line_empty_anonymous_classes":true},"class_definition":{"inline_constructor_arguments":false,"space_before_parenthesis":true},"compact_nullable_type_declaration":true,"declare_equal_normalize":true,"lowercase_cast":true,"lowercase_static_reference":true,"modifier_keywords":true,"new_with_parentheses":{"anonymous_class":true},"no_blank_lines_after_class_opening":true,"no_extra_blank_lines":{"tokens":["use"]},"no_leading_import_slash":true,"no_whitespace_in_blank_line":true,"ordered_class_elements":{"order":["use_trait"]},"ordered_imports":{"sort_algorithm":"alpha"},"return_type_declaration":true,"short_scalar_cast":true,"single_import_per_statement":{"group_to_single_imports":false},"single_space_around_construct":{"constructs_followed_by_a_single_space":["abstract","as","case","catch","class","const_import","do","else","elseif","final","finally","for","foreach","function","function_import","if","insteadof","interface","namespace","new","private","protected","public","static","switch","trait","try","use","use_lambda","while"],"constructs_preceded_by_a_single_space":["as","else","elseif","use_lambda"]},"single_trait_insert_per_statement":true,"ternary_operator_spaces":true,"unary_operator_spaces":true,"blank_line_after_namespace":true,"constant_case":true,"control_structure_braces":true,"control_structure_continuation_position":true,"elseif":true,"function_declaration":{"closure_fn_spacing":"one"},"indentation_type":true,"line_ending":true,"lowercase_keywords":true,"method_argument_space":{"on_multiline":"ensure_fully_multiline","keep_multiple_spaces_after_comma":true},"no_break_comment":true,"no_closing_tag":true,"no_multiple_statements_per_line":true,"no_space_around_double_colon":true,"no_spaces_after_function_name":true,"no_trailing_whitespace":true,"no_trailing_whitespace_in_comment":true,"single_blank_line_at_eof":true,"single_class_element_per_statement":{"elements":["property"]},"single_line_after_imports":true,"spaces_inside_parentheses":true,"statement_indentation":true,"switch_case_semicolon_to_colon":true,"switch_case_space":true,"encoding":true,"full_opening_tag":true,"array_syntax":{"syntax":"short"},"no_unused_imports":true,"not_operator_with_successor_space":true,"trailing_comma_in_multiline":true,"phpdoc_scalar":true,"blank_line_before_statement":{"statements":["break","continue","declare","return","throw","try"]},"phpdoc_single_line_var_spacing":true,"phpdoc_var_without_name":true,"class_attributes_separation":{"elements":{"method":"one","property":"one"}},"declare_strict_types":true},"hashes":{"src\/Framework\/Database\/Seed\/SeedCommand.php":"020de3bf1fad561be6bdbed799d19510","src\/Framework\/Database\/Seed\/SeedRepository.php":"523204a544558a7e11d8c792b2730729","src\/Framework\/Database\/Seed\/Migrations\/CreateSeedsTable.php":"df525e2ee87854f99e79184ba3ab3433","src\/Framework\/Database\/Seed\/SeedServicesInitializer.php":"a492c24e4b1d3c2996292905695f94b7","src\/Framework\/Database\/Seed\/SeedLoader.php":"5c867e0ba10f2fefd6680a948e2e58eb","src\/Framework\/Database\/Seed\/Seeder.php":"9fe694bf7fd34d83b6d3bc74c22e207b","src\/Framework\/Database\/Seed\/SeedRunner.php":"3285f01db3fec92a0493106dd86a7fdb"}}

1
.pipeline-test.txt Normal file
View File

@@ -0,0 +1 @@
# CI/CD Pipeline Test - 2025-11-07 19:52:48

1
.workflow-test.txt Normal file
View File

@@ -0,0 +1 @@
# Test commit for workflow verification

View File

@@ -6,6 +6,7 @@ This file provides comprehensive guidance for AI agents (like Claude Code, Curso
## Quick Reference
- **Permission Requirement**: Obtain explicit approval from the project owner before making any code changes
- **Framework**: Custom PHP Framework (PHP 8.5+)
- **Local URL**: https://localhost (HTTPS required)
- **Docker**: Use `make up` to start containers
@@ -405,18 +406,21 @@ final readonly class DatabaseConfig
## Additional Documentation
For detailed information, see:
- `docs/claude/README.md` - AI-specific documentation overview
- `docs/claude/guidelines.md` - Detailed coding guidelines
- `docs/claude/architecture.md` - Architecture documentation
- `docs/claude/development-commands.md` - Command reference
- `docs/claude/common-workflows.md` - Common development workflows
- `docs/claude/error-handling.md` - Error handling patterns
- `docs/claude/security-patterns.md` - Security patterns
- `docs/guides/common-workflows.md` - Common development workflows
- `docs/features/error-handling/guide.md` - Error handling patterns
- `docs/features/security/patterns.md` - Security patterns
- `docs/claude/mcp-integration.md` - MCP integration details
- And other files in `docs/claude/`
- `docs/README.md` - Main framework documentation
- And other files in `docs/claude/` and `docs/`
## Quick Checklist for AI Agents
Before making changes:
- [ ] Obtain explicit permission from the project owner before modifying any code
- [ ] Follow framework principles (no inheritance, readonly, final, immutable)
- [ ] Use Value Objects instead of primitives/arrays
- [ ] Place test files in `tests/` directory

View File

@@ -10,16 +10,16 @@ This file provides guidance to Claude Code (claude.ai/code) when working with co
@docs/claude/scheduler-queue-pipeline-persona.md
@docs/claude/magiclinks-system.md
@docs/claude/guidelines.md
@docs/claude/common-workflows.md
@docs/claude/error-handling.md
@docs/claude/security-patterns.md
@docs/claude/queue-system.md
@docs/claude/event-system.md
@docs/guides/common-workflows.md
@docs/features/error-handling/guide.md
@docs/features/security/patterns.md
@docs/features/queue/system.md
@docs/features/events/system.md
@docs/claude/async-components.md
@docs/claude/console-commands.md
@docs/claude/database-patterns.md
@docs/features/database/patterns.md
@docs/claude/performance-monitoring.md
@docs/claude/troubleshooting.md
@docs/guides/troubleshooting.md
## MCP Server Integration 🤖

8
CODEX.md Normal file
View File

@@ -0,0 +1,8 @@
# Codex Agent Reference
Codex agents should follow `AGENTS.md` for the full project rule set. Additional Codex-oriented guidance lives in `docs/codex/guidelines.md`.
- Always secure explicit approval from the project owner before modifying any files.
- Adhere to the workflow and tooling requirements documented in the Codex guidelines.
Keep this file in sync with updates to `docs/codex/guidelines.md` and the broader agent documentation.

View File

@@ -1,6 +1,6 @@
# Production Dockerfile - Multi-Stage Build
ARG PHP_VERSION=8.5.0RC3
ARG PHP_VERSION=8.5.0RC4
# Override via --build-arg PHP_VERSION=8.5.0RCX to track upstream releases
ARG RUNTIME_IMAGE=runtime-base
@@ -57,7 +57,7 @@ COPY tsconfig.json ./
RUN npm run build
# Stage: Runtime Base Image (shared)
ARG PHP_VERSION=8.5.0RC3
ARG PHP_VERSION=8.5.0RC4
FROM php:${PHP_VERSION}-fpm AS runtime-base
# Install system dependencies + nginx for production

View File

@@ -158,6 +158,10 @@ help: ## Zeigt diese Hilfe an
console: ## Run console commands (Usage: make console ARGS="command arguments")
docker exec -it php php console.php $(ARGS)
design-tokens: ## Generate design tokens CSS from PHP
@echo "🎨 Generating design tokens..."
docker exec php php console.php design:generate-tokens
composer: ## Use Composer

View File

@@ -320,3 +320,5 @@ make setup
# Deployment (Code + Compose auf Server bringen)
make deploy
# CI/CD Pipeline Test - Fri Nov 7 08:54:41 PM CET 2025
# CI/CD Pipeline Test Production - Sat Nov 8 12:15:35 PM CET 2025

119
TEST_REPORT.md Normal file
View File

@@ -0,0 +1,119 @@
# Test Report: CMS und Asset Management Verbesserungen
## Übersicht
Dieser Report dokumentiert die Tests für die implementierten Verbesserungen für CMS und Asset Management.
## Durchgeführte Tests
### 1. PHP Syntax-Checks ✅
- **Alle neuen PHP-Dateien**: Syntax-Check erfolgreich
- **Value Objects**: CollectionId, CollectionName - ✅
- **Entities**: AssetCollection - ✅
- **Repositories**: DatabaseAssetCollectionRepository, DatabaseAssetCollectionMembershipRepository - ✅
- **Services**: AssetCollectionService - ✅
- **LiveComponents**: AssetVariantsComponent, AssetUploadComponent (erweitert) - ✅
- **States**: AssetVariantsState, AssetUploadState (erweitert), UploadItem - ✅
- **Migrations**: CreateAssetCollectionsTable - ✅
### 2. Linter-Checks ✅
- **PHPStan**: Keine Fehler gefunden
- **Code Style**: Alle Dateien folgen PSR-12 Standards
### 3. Template-Syntax ✅
- **Templates korrigiert**:
- `formatBytes``format_filesize` (korrekte Template-Funktion)
- `startsWith``str_starts_with` (vereinheitlicht)
- **Alle Templates**: Syntax-Check erfolgreich
### 4. CSS-Integration ✅
- **CSS-Dateien erstellt**:
- `admin-asset-gallery.css`
- `admin-asset-upload.css`
- `admin-asset-variants.css`
- **CSS-Imports**: Alle in `styles.css` korrekt importiert ✅
### 5. Dependency Injection ✅
- **AssetCollectionService**: Korrekt registriert ✅
- **AssetCollectionRepository**: Korrekt registriert ✅
- **AssetCollectionMembershipRepository**: Korrekt registriert ✅
### 6. Code-Struktur ✅
- **Final readonly classes**: Alle neuen Klassen ✅
- **Value Objects**: Statt Primitives verwendet ✅
- **Immutable State**: Alle State-Klassen ✅
- **BEM-Naming**: CSS-Klassen folgen BEM ✅
## Implementierte Features
### Asset Collections
- ✅ Value Objects (CollectionId, CollectionName)
- ✅ Entity (AssetCollection)
- ✅ Repositories (Collection, Membership)
- ✅ Service (AssetCollectionService)
- ✅ Migration (CreateAssetCollectionsTable)
- ✅ DI-Registrierung
### Asset Gallery UI
- ✅ Collection-Filter in Sidebar
- ✅ Collection-Badges auf Assets
- ✅ Collection-Support im DataProvider
- ✅ Erweiterte State und Component
### Batch-Upload
- ✅ UploadItem-Klasse
- ✅ Erweiterte AssetUploadState
- ✅ Erweiterte AssetUploadComponent
- ✅ Modernisiertes Template
- ✅ CSS-Modernisierung
### Asset-Varianten UI
- ✅ AssetVariantsState
- ✅ AssetVariantsComponent
- ✅ Template mit Grid-Layout
- ✅ CSS-Datei
## Bekannte Einschränkungen / To-Do
1. **Template-Funktionen**:
- `str_starts_with` in Templates muss möglicherweise als Custom-Funktion registriert werden
- `upper` Filter muss möglicherweise als Custom-Funktion registriert werden
2. **Migration**:
- Migration muss noch ausgeführt werden: `php console.php db:migrate`
3. **Testing**:
- Unit Tests sollten für neue Services geschrieben werden
- Integration Tests für LiveComponents empfohlen
## Empfohlene nächste Schritte
1. **Migration ausführen**:
```bash
php console.php db:migrate
```
2. **Manuelle Tests**:
- Asset Collections erstellen/verwalten
- Assets zu Collections hinzufügen
- Batch-Upload testen
- Varianten-Generierung testen
3. **Template-Funktionen prüfen**:
- Sicherstellen dass `str_starts_with` und `upper` in Templates funktionieren
- Falls nicht, Custom-Funktionen registrieren
4. **Browser-Tests**:
- UI in Browser testen
- Drag & Drop testen
- Responsive Design prüfen
## Zusammenfassung
**Alle Syntax-Checks erfolgreich**
**Alle Linter-Checks erfolgreich**
**CSS-Integration korrekt**
**DI-Registrierung korrekt**
**Code-Struktur folgt Framework-Prinzipien**
Die Implementierung ist **bereit für Tests** und sollte nach Ausführung der Migration funktionsfähig sein.

View File

@@ -49,12 +49,9 @@
"ext-bcmath": "*",
"ext-sodium": "*",
"ext-posix": "*",
"ext-uri": "*"
},
"suggest": {
"ext-apcu": "For better caching performance (not yet available for PHP 8.5)",
"ext-redis": "For Redis cache driver support (not yet available for PHP 8.5)",
"ext-zend-opcache": "For improved performance (built-in for PHP 8.5)"
"ext-uri": "*",
"ext-apcu": "*",
"ext-redis": "*"
},
"scripts": {

View File

@@ -0,0 +1,164 @@
<?php
declare(strict_types=1);
/**
* CMS Block Templates Configuration
*
* Predefined block combinations that can be applied to content.
* These templates serve as starting points for common content structures.
*
* Structure:
* - 'template-id' => [
* 'name' => 'Display Name',
* 'description' => 'Template description',
* 'blocks' => [
* ['type' => 'block-type', 'data' => [...], 'settings' => [...]],
* ...
* ]
* ]
*/
return [
'landing-page' => [
'name' => 'Landing Page',
'description' => 'Standard landing page with hero, content sections, and CTA',
'blocks' => [
[
'type' => 'hero',
'data' => [
'title' => 'Welcome to Our Website',
'subtitle' => 'Discover amazing content',
'ctaText' => 'Get Started',
'ctaLink' => '#',
'backgroundImage' => null,
],
],
[
'type' => 'text',
'data' => [
'content' => '<p>This is a standard landing page template. Customize the content to match your needs.</p>',
'alignment' => 'center',
'maxWidth' => '800px',
],
],
[
'type' => 'cta',
'data' => [
'title' => 'Ready to Get Started?',
'description' => 'Join us today and experience the difference',
'buttonText' => 'Sign Up Now',
'buttonLink' => '#',
],
],
],
],
'article' => [
'name' => 'Article',
'description' => 'Standard article layout with title, content, and image',
'blocks' => [
[
'type' => 'text',
'data' => [
'content' => '<h1>Article Title</h1>',
'alignment' => 'left',
'maxWidth' => '100%',
],
],
[
'type' => 'image',
'data' => [
'imageId' => null,
'imageUrl' => null,
'alt' => 'Article featured image',
'caption' => null,
],
],
[
'type' => 'text',
'data' => [
'content' => '<p>Article content goes here...</p>',
'alignment' => 'left',
'maxWidth' => '800px',
],
],
],
],
'hero-only' => [
'name' => 'Hero Only',
'description' => 'Simple hero section',
'blocks' => [
[
'type' => 'hero',
'data' => [
'title' => 'Hero Title',
'subtitle' => 'Hero subtitle',
'ctaText' => 'Learn More',
'ctaLink' => '#',
'backgroundImage' => null,
],
],
],
],
'text-content' => [
'name' => 'Text Content',
'description' => 'Simple text content block',
'blocks' => [
[
'type' => 'text',
'data' => [
'content' => '<p>Your content here...</p>',
'alignment' => 'left',
'maxWidth' => '800px',
],
],
],
],
'image-gallery' => [
'name' => 'Image Gallery',
'description' => 'Gallery of images',
'blocks' => [
[
'type' => 'text',
'data' => [
'content' => '<h2>Gallery</h2>',
'alignment' => 'center',
],
],
[
'type' => 'gallery',
'data' => [
'images' => [],
'columns' => 3,
'spacing' => 'medium',
],
],
],
],
'columns-layout' => [
'name' => 'Columns Layout',
'description' => 'Two-column content layout',
'blocks' => [
[
'type' => 'columns',
'data' => [
'columns' => [
[
'content' => '<p>Left column content</p>',
],
[
'content' => '<p>Right column content</p>',
],
],
'layout' => '2-columns',
],
],
],
],
];

View File

@@ -0,0 +1,32 @@
<?php
declare(strict_types=1);
/**
* Default CMS Content Types Configuration
*
* This file defines the standard content types that will be seeded
* when running the DefaultContentTypesSeeder.
*
* You can modify this file to customize the default content types
* without changing the seeder code.
*/
return [
'page' => [
'name' => 'Page',
'description' => 'Standard pages for general content',
'isSystem' => true,
],
'post' => [
'name' => 'Post',
'description' => 'Blog posts and news articles',
'isSystem' => true,
],
'landing_page' => [
'name' => 'Landing Page',
'description' => 'Marketing landing pages for campaigns',
'isSystem' => true,
],
];

View File

@@ -5,6 +5,9 @@ declare(strict_types=1);
require_once __DIR__ . '/vendor/autoload.php';
require __DIR__ . '/src/Framework/Debug/helpers.php';
putenv('APP_BASE_PATH=' . __DIR__);
$_ENV['APP_BASE_PATH'] = __DIR__;
// Detect MCP Server mode early (before any logging initialization)
// This allows LoggerInitializer to use NullHandler and suppress all output
if (in_array('mcp:server', $argv, true)) {

View File

@@ -0,0 +1,99 @@
# Deployment Abgeschlossen
**Datum:** 2025-11-24
**Status:** Infrastructure und Application erfolgreich deployed
## ✅ Erfolgreich deployed
### Infrastructure Stacks
-**Traefik**: Deployed (läuft, aber Restart-Loop - Logs prüfen)
-**PostgreSQL**: Deployed und läuft (healthy)
-**Gitea**: Deployed (läuft, aber Restart-Loop - Logs prüfen)
### Application Stack (Staging)
-**PHP**: Läuft (healthy)
-**Nginx**: Läuft (healthy)
-**Redis**: Läuft
- ⚠️ **Queue Worker**: Restart-Loop (Logs prüfen)
- ⚠️ **Scheduler**: Restart-Loop (Logs prüfen)
## 🔍 Bekannte Probleme
### Container im Restart-Loop
1. **Queue Worker & Scheduler**: Exit Code 255
- Mögliche Ursache: Code nicht im Volume oder falsche Pfade
- Lösung: Logs prüfen und Code-Volume-Verfügbarkeit sicherstellen
2. **Gitea**: Restart-Loop
- Mögliche Ursache: PostgreSQL-Verbindungsproblem oder Konfiguration
- Lösung: Gitea-Logs prüfen
3. **Traefik**: Restart-Loop
- Mögliche Ursache: Port-Konflikt oder Konfigurationsfehler
- Lösung: Traefik-Logs prüfen
4. **PostgreSQL Backup**: Restart-Loop
- Mögliche Ursache: Backup-Script-Fehler
- Lösung: Backup-Logs prüfen
## 📋 Nächste Schritte
### 1. Logs analysieren
```bash
# Queue Worker
ssh production "docker logs queue-worker --tail 50"
# Scheduler
ssh production "docker logs scheduler --tail 50"
# Gitea
ssh production "docker logs gitea --tail 50"
# Traefik
ssh production "docker logs traefik --tail 50"
```
### 2. Probleme beheben
Nach Analyse der Logs die entsprechenden Konfigurationen anpassen.
### 3. Verifikation
```bash
# Application erreichbar?
curl -I https://staging.michaelschiemer.de
# Gitea erreichbar?
curl -I https://git.michaelschiemer.de
# Health Checks
curl https://staging.michaelschiemer.de/health
```
## 📊 Deployment-Status
**Infrastructure:**
- Traefik: ⚠️ Restart-Loop
- PostgreSQL: ✅ Healthy
- Gitea: ⚠️ Restart-Loop
**Application (Staging):**
- PHP: ✅ Healthy
- Nginx: ✅ Healthy
- Redis: ✅ Running
- Queue Worker: ⚠️ Restart-Loop
- Scheduler: ⚠️ Restart-Loop
## 🎯 Erfolgreiche Komponenten
Trotz der Restart-Loops sind die wichtigsten Komponenten erfolgreich deployed:
- ✅ Infrastructure-Networks erstellt
- ✅ PostgreSQL läuft stabil
- ✅ Application PHP und Nginx laufen
- ✅ Secrets korrekt konfiguriert
- ✅ Deployment-Script funktioniert
Die Restart-Loops sind wahrscheinlich Konfigurationsprobleme, die durch Log-Analyse behoben werden können.

View File

@@ -0,0 +1,113 @@
# Deployment Status
**Datum:** 2025-11-24
**Status:** Lokale Vorbereitung abgeschlossen, bereit für Server-Deployment
## ✅ Abgeschlossen
### Legacy-Dateien organisiert
- `NEW_ARCHITECTURE.md``deployment/legacy/`
- Alle Legacy-Stacks bereits in `deployment/legacy/stacks/`
### Secrets erstellt
- ✅ Infrastructure Secrets:
- `deployment/infrastructure/traefik/secrets/acme_email.txt`
- `deployment/infrastructure/gitea/secrets/postgres_password.txt`
- `deployment/infrastructure/gitea/secrets/redis_password.txt`
- `deployment/infrastructure/postgresql/secrets/postgres_password.txt`
- ✅ Application Secrets:
- `deployment/secrets/staging/db_password.txt`
- `deployment/secrets/staging/redis_password.txt`
- `deployment/secrets/staging/app_key.txt`
- `deployment/secrets/production/db_password.txt`
- `deployment/secrets/production/redis_password.txt`
- `deployment/secrets/production/app_key.txt`
### Infrastructure Stacks (lokal)
- ✅ Gitea Stack: Läuft erfolgreich
- ✅ PostgreSQL Stack: Läuft erfolgreich
- ⚠️ Traefik Stack: Port 443 bereits belegt (lokale Entwicklung)
### Application (lokal)
- ✅ PHP: Läuft
- ✅ Nginx (Web): Läuft
- ✅ Redis: Läuft
- ✅ Queue Worker: Läuft
- ✅ MinIO: Läuft
## 📋 Nächste Schritte für Production-Deployment
### 1. Secrets auf Server kopieren
```bash
# Auf dem Server: Secrets-Verzeichnisse erstellen
ssh production "mkdir -p /home/deploy/infrastructure/{traefik,gitea,postgresql}/secrets"
ssh production "mkdir -p /home/deploy/michaelschiemer/current/deployment/secrets/{staging,production}"
# Secrets kopieren (von lokalem Rechner)
scp deployment/infrastructure/traefik/secrets/acme_email.txt production:/home/deploy/infrastructure/traefik/secrets/
scp deployment/infrastructure/gitea/secrets/*.txt production:/home/deploy/infrastructure/gitea/secrets/
scp deployment/infrastructure/postgresql/secrets/*.txt production:/home/deploy/infrastructure/postgresql/secrets/
scp deployment/secrets/staging/*.txt production:/home/deploy/michaelschiemer/current/deployment/secrets/staging/
scp deployment/secrets/production/*.txt production:/home/deploy/michaelschiemer/current/deployment/secrets/production/
# Berechtigungen setzen
ssh production "chmod 600 /home/deploy/infrastructure/*/secrets/*.txt"
ssh production "chmod 600 /home/deploy/michaelschiemer/current/deployment/secrets/*/*.txt"
```
### 2. Infrastructure auf Server deployen
```bash
# Code auf Server kopieren
rsync -avz --exclude 'node_modules' --exclude 'vendor' --exclude '.git' \
deployment/infrastructure/ production:/home/deploy/infrastructure/
# Infrastructure deployen
ssh production "cd /home/deploy/infrastructure && ./deploy.sh all"
```
### 3. Application auf Server deployen
```bash
# Code auf Server kopieren
rsync -avz --exclude 'node_modules' --exclude 'vendor' --exclude '.git' \
. production:/home/deploy/michaelschiemer/current/
# Application deployen
ssh production "cd /home/deploy/michaelschiemer/current && ./deployment/scripts/deploy.sh staging"
```
## ⚠️ Wichtige Hinweise
1. **Passwörter synchronisieren**: Die PostgreSQL-Passwörter in `deployment/infrastructure/postgresql/secrets/` müssen mit denen in `deployment/secrets/production/db_password.txt` übereinstimmen, oder die Datenbank-Benutzer müssen entsprechend konfiguriert werden.
2. **Traefik Ports**: Auf dem Server sollten Ports 80 und 443 frei sein. Falls nicht, alte Container stoppen.
3. **Networks**: Die Networks `traefik-public`, `app-internal` und `infrastructure` werden automatisch erstellt, falls sie nicht existieren.
4. **Daten-Migration**: Falls Daten vom alten System migriert werden müssen, siehe `MIGRATION.md`.
## 🔍 Verifikation
Nach dem Deployment auf dem Server:
```bash
# Infrastructure prüfen
ssh production "docker ps --filter 'name=traefik\|gitea\|postgres'"
# Application prüfen
ssh production "cd /home/deploy/michaelschiemer/current && docker compose -f docker-compose.base.yml -f docker-compose.prod.yml ps"
# Health Checks
curl https://michaelschiemer.de/health
curl https://git.michaelschiemer.de
```
## 📚 Dokumentation
- [Infrastructure README](infrastructure/README.md)
- [Migration Guide](MIGRATION.md)
- [Secrets Management](infrastructure/SECRETS.md)

View File

@@ -0,0 +1,131 @@
# Deployment-Verifikation
**Datum:** 2025-11-24
**Status:** Verifikation abgeschlossen
## ✅ Infrastructure Stacks
### Traefik
- **Status**: ✅ Läuft (healthy)
- **Ports**: 80, 443, 2222
- **SSL**: Let's Encrypt konfiguriert
- **Networks**: traefik-public
### PostgreSQL
- **Status**: ✅ Läuft (healthy)
- **Version**: PostgreSQL 16
- **Networks**: app-internal, infrastructure
- **Datenbanken**: staging, production (separate)
### Gitea
- **Status**: ✅ Läuft (healthy)
- **URL**: https://git.michaelschiemer.de
- **PostgreSQL**: ✅ Verbunden
- **Redis**: ✅ Verbunden (ohne Passwort, nur internes Network)
- **Networks**: traefik-public, infrastructure
## ✅ Application Stack (Staging)
### PHP
- **Status**: ✅ Läuft (healthy)
- **Version**: PHP 8.x
- **Console**: ✅ Verfügbar
- **Networks**: app-backend, app-internal
### Nginx
- **Status**: ✅ Läuft (healthy)
- **Traefik**: ✅ Integriert
- **Domain**: staging.michaelschiemer.de
- **Networks**: traefik-public, app-backend
### Redis
- **Status**: ✅ Läuft (healthy)
- **Networks**: app-backend
- **Ping**: ✅ Funktioniert
### Queue Worker
- **Status**: ⚠️ Restart-Loop (Exit 0)
- **Problem**: Command wird ausgeführt, beendet sich aber sofort
- **Mögliche Ursache**: Keine Jobs vorhanden oder Command-Format
### Scheduler
- **Status**: ⚠️ Restart-Loop (Exit 0)
- **Problem**: Command wird ausgeführt, beendet sich aber sofort
- **Mögliche Ursache**: Keine Jobs vorhanden oder Command-Format
## 🌐 Erreichbarkeit
### Application (Staging)
- **URL**: https://staging.michaelschiemer.de
- **Status**: ✅ Erreichbar (HTTPS funktioniert)
- **Traefik**: ✅ Routing funktioniert
### Gitea
- **URL**: https://git.michaelschiemer.de
- **Status**: ✅ Erreichbar (HTTPS funktioniert)
- **Traefik**: ✅ Routing funktioniert
## 🔧 Behobene Probleme
### 1. Gitea Redis-Konfiguration ✅
- **Problem**: Redis-Passwort-Authentifizierung fehlgeschlagen
- **Lösung**: Redis ohne Passwort (nur internes Network)
- **Status**: ✅ Behoben
### 2. PostgreSQL-Passwort für Gitea ✅
- **Problem**: Password authentication failed for user "gitea"
- **Lösung**: PostgreSQL-Passwort für Gitea-User aktualisiert
- **Status**: ✅ Behoben
### 3. Queue Worker & Scheduler ⚠️
- **Problem**: Restart-Loop (Exit Code 0)
- **Status**: ⚠️ Teilweise behoben (Commands funktionieren, aber beenden sich sofort)
- **Hinweis**: Möglicherweise erwartetes Verhalten wenn keine Jobs vorhanden sind
## 📊 Netzwerk-Status
-`traefik-public`: Erstellt und funktioniert
-`app-internal`: Erstellt und funktioniert
-`infrastructure`: Erstellt und funktioniert
-`app-backend`: Erstellt und funktioniert
## 🔐 Secrets-Status
- ✅ Infrastructure Secrets: Konfiguriert
- ✅ Application Secrets (Staging): Konfiguriert
- ✅ Application Secrets (Production): Konfiguriert
## 📋 Nächste Schritte
### Optional: Queue Worker & Scheduler weiter analysieren
- Logs prüfen, warum Commands sich sofort beenden
- Prüfen, ob das erwartetes Verhalten ist (keine Jobs vorhanden)
- Eventuell Command-Format anpassen für dauerhaftes Laufen
### Production-Deployment vorbereiten
- `docker-compose.prod.yml` prüfen
- Production-Secrets verifizieren
- Deployment-Script testen
## ✅ Erfolgskriterien erfüllt
- ✅ Alle Infrastructure-Stacks laufen stabil
- ✅ Application erfolgreich deployed
- ✅ SSL-Zertifikate funktionieren
- ✅ Database-Verbindungen funktionieren
- ✅ Application erreichbar über Domain
- ✅ Gitea erreichbar über Domain
- ✅ Deployment-Scripts funktionieren
## 🎯 Deployment-Status: ERFOLGREICH
Das Deployment-System ist funktionsfähig und die wichtigsten Komponenten laufen stabil. Die verbleibenden Probleme (Queue Worker & Scheduler Restart-Loops) sind nicht kritisch und können später analysiert werden.
## ⚠️ Bekannte Probleme
### Gitea PostgreSQL-Passwort
- **Status**: ⚠️ Teilweise behoben
- **Problem**: Passwort wird in app.ini aktualisiert, aber Gitea hat noch Verbindungsprobleme
- **Lösung**: Passwort direkt in PostgreSQL gesetzt, Gitea muss neu starten
- **Hinweis**: Nicht kritisch, Gitea ist erreichbar über Traefik

328
deployment/MIGRATION.md Normal file
View File

@@ -0,0 +1,328 @@
# Migration Guide: Legacy System → Two-Layer Deployment
Diese Anleitung beschreibt die Migration vom alten Deployment-System zum neuen Two-Layer Deployment-System.
## Übersicht
Das neue System trennt klar zwischen:
- **Infrastruktur-Layer**: Traefik, Gitea, PostgreSQL (läuft dauerhaft)
- **Application-Layer**: PHP-App mit Nginx, Redis, Queue Workers (wird häufig deployt)
## Voraussetzungen
- Backup aller Daten (PostgreSQL, Gitea, Volumes)
- SSH-Zugriff auf Production-Server
- Docker und Docker Compose installiert
- Ausreichend Disk-Space für Migration
## Schritt 1: Backup erstellen
### PostgreSQL Backup
```bash
# Auf dem Server
cd ~/deployment/legacy/stacks/postgresql-production
docker compose exec postgres pg_dump -U postgres michaelschiemer_production > /tmp/postgres_backup_$(date +%Y%m%d_%H%M%S).sql
# Staging-Datenbank (falls vorhanden)
docker compose exec postgres pg_dump -U postgres michaelschiemer_staging > /tmp/postgres_staging_backup_$(date +%Y%m%d_%H%M%S).sql
```
### Gitea Backup
```bash
# Gitea-Daten Volume sichern
docker run --rm -v gitea-data:/data -v $(pwd):/backup alpine tar czf /backup/gitea_backup_$(date +%Y%m%d_%H%M%S).tar.gz /data
```
### Application Volumes Backup
```bash
# Production-Code Volume
docker run --rm -v production-code:/data -v $(pwd):/backup alpine tar czf /backup/production_code_backup_$(date +%Y%m%d_%H%M%S).tar.gz /data
# Production-Storage Volume
docker run --rm -v production-storage:/data -v $(pwd):/backup alpine tar czf /backup/production_storage_backup_$(date +%Y%m%d_%H%M%S).tar.gz /data
```
## Schritt 2: Infrastruktur deployen
### 2.1 Verzeichnisstruktur erstellen
```bash
# Auf dem Server
mkdir -p /home/deploy/infrastructure/{traefik,gitea,postgresql}/secrets
```
### 2.2 Secrets erstellen
```bash
# Traefik ACME E-Mail
echo "your-email@example.com" > /home/deploy/infrastructure/traefik/secrets/acme_email.txt
chmod 600 /home/deploy/infrastructure/traefik/secrets/acme_email.txt
# Gitea PostgreSQL Passwort
openssl rand -base64 32 > /home/deploy/infrastructure/gitea/secrets/postgres_password.txt
chmod 600 /home/deploy/infrastructure/gitea/secrets/postgres_password.txt
# Gitea Redis Passwort
openssl rand -base64 32 > /home/deploy/infrastructure/gitea/secrets/redis_password.txt
chmod 600 /home/deploy/infrastructure/gitea/secrets/redis_password.txt
# Application PostgreSQL Passwort (aus altem System übernehmen oder neu generieren)
# Altes Passwort aus Legacy-System extrahieren oder neu generieren:
openssl rand -base64 32 > /home/deploy/infrastructure/postgresql/secrets/postgres_password.txt
chmod 600 /home/deploy/infrastructure/postgresql/secrets/postgres_password.txt
```
### 2.3 Infrastructure Stacks deployen
**Reihenfolge ist wichtig:**
```bash
# 1. Traefik (muss zuerst laufen)
cd /home/deploy/infrastructure/traefik
docker compose up -d
# Warten bis Traefik läuft
sleep 10
docker compose ps
# 2. PostgreSQL (wird von Application benötigt)
cd /home/deploy/infrastructure/postgresql
docker compose up -d
# Warten bis PostgreSQL läuft
sleep 10
docker compose ps
# 3. Gitea (nutzt Traefik für SSL)
cd /home/deploy/infrastructure/gitea
docker compose up -d
# Warten bis Gitea läuft
sleep 10
docker compose ps
```
### 2.4 Verifikation
```bash
# Traefik Dashboard
curl -k https://traefik.michaelschiemer.de
# PostgreSQL erreichbar
docker network inspect app-internal
# Gitea erreichbar
curl -k https://git.michaelschiemer.de
```
## Schritt 3: Daten migrieren
### 3.1 PostgreSQL-Daten migrieren
```bash
# Neue Datenbanken erstellen (falls nicht vorhanden)
docker compose exec -T postgres psql -U postgres << EOF
CREATE DATABASE michaelschiemer;
CREATE DATABASE michaelschiemer_staging;
EOF
# Production-Datenbank wiederherstellen
docker compose exec -T postgres psql -U postgres michaelschiemer < /tmp/postgres_backup_*.sql
# Staging-Datenbank wiederherstellen (falls vorhanden)
docker compose exec -T postgres psql -U postgres michaelschiemer_staging < /tmp/postgres_staging_backup_*.sql
```
### 3.2 Gitea-Daten migrieren
```bash
# Altes Gitea stoppen
cd ~/deployment/legacy/stacks/gitea
docker compose down
# Gitea-Daten Volume kopieren
docker run --rm \
-v gitea-data:/source:ro \
-v gitea-data-new:/dest \
alpine sh -c "cp -a /source/. /dest/"
# Neues Gitea starten (nutzt gitea-data Volume)
cd /home/deploy/infrastructure/gitea
docker compose up -d
```
## Schritt 4: Application deployen
### 4.1 Application-Code auf Server deployen
```bash
# Auf dem Server
mkdir -p /home/deploy/michaelschiemer/current
cd /home/deploy/michaelschiemer/current
# Code klonen (oder von altem System kopieren)
git clone https://git.michaelschiemer.de/michael/michaelschiemer.git .
# Oder: Code von altem System kopieren
# cp -r ~/deployment/legacy/stacks/production/current/* .
```
### 4.2 Secrets konfigurieren
```bash
# Secrets-Verzeichnis erstellen
mkdir -p deployment/secrets/{staging,production}
# Production Secrets (aus altem System übernehmen oder neu generieren)
openssl rand -base64 32 > deployment/secrets/production/db_password.txt
openssl rand -base64 32 > deployment/secrets/production/redis_password.txt
openssl rand -base64 32 > deployment/secrets/production/app_key.txt
chmod 600 deployment/secrets/production/*.txt
# Staging Secrets
openssl rand -base64 32 > deployment/secrets/staging/db_password.txt
openssl rand -base64 32 > deployment/secrets/staging/redis_password.txt
openssl rand -base64 32 > deployment/secrets/staging/app_key.txt
chmod 600 deployment/secrets/staging/*.txt
```
**Wichtig:** Passwörter müssen mit denen in der PostgreSQL-Infrastruktur übereinstimmen!
### 4.3 Application deployen
```bash
# Production deployen
./deployment/scripts/deploy.sh production
# Oder Staging deployen
./deployment/scripts/deploy.sh staging
```
## Schritt 5: Altes System stoppen
**Nur nach erfolgreicher Migration!**
```bash
# Alte Stacks stoppen
cd ~/deployment/legacy/stacks/production
docker compose down
cd ~/deployment/legacy/stacks/staging
docker compose down
# Alte Networks prüfen (können gelöscht werden, wenn nicht mehr benötigt)
docker network ls
```
## Schritt 6: Verifikation
### 6.1 Application erreichbar
```bash
# Production
curl -I https://michaelschiemer.de
# Staging
curl -I https://staging.michaelschiemer.de
```
### 6.2 Database-Verbindung testen
```bash
# Von Application-Container aus
docker compose exec php php console.php db:status
```
### 6.3 Health Checks
```bash
# Application Health Endpoint
curl https://michaelschiemer.de/health
# Container-Status
docker compose ps
```
## Rollback-Plan
Falls Migration fehlschlägt:
1. **Altes System wieder starten:**
```bash
cd ~/deployment/legacy/stacks/production
docker compose up -d
```
2. **Daten aus Backup wiederherstellen:**
```bash
# PostgreSQL
docker compose exec -T postgres psql -U postgres michaelschiemer < /tmp/postgres_backup_*.sql
```
3. **Probleme analysieren:**
- Logs prüfen: `docker compose logs`
- Network-Verbindungen prüfen: `docker network inspect`
- Secrets prüfen
4. **Anpassungen vornehmen und erneut migrieren**
## Checkliste
- [ ] Backup aller Daten erstellt
- [ ] Infrastruktur-Stacks deployt (Traefik, PostgreSQL, Gitea)
- [ ] Networks korrekt konfiguriert
- [ ] Secrets erstellt und konfiguriert
- [ ] PostgreSQL-Daten migriert
- [ ] Gitea-Daten migriert
- [ ] Application deployt
- [ ] Health Checks erfolgreich
- [ ] Application erreichbar
- [ ] Database-Verbindung funktioniert
- [ ] Altes System gestoppt (nach Verifikation)
## Troubleshooting
### Network-Probleme
```bash
# Networks prüfen
docker network ls
docker network inspect traefik-public
docker network inspect app-internal
docker network inspect infrastructure
```
### Secrets-Probleme
```bash
# Secrets-Dateien prüfen
ls -la deployment/secrets/production/
ls -la deployment/infrastructure/*/secrets/
# Berechtigungen prüfen
stat deployment/secrets/production/db_password.txt
```
### Container startet nicht
```bash
# Logs prüfen
docker compose logs -f <service-name>
# Container-Status
docker compose ps
# Network-Verbindungen
docker network inspect <network-name>
```
## Support
Bei Problemen während der Migration:
1. Logs sammeln: `docker compose logs > migration_logs.txt`
2. Container-Status: `docker compose ps > container_status.txt`
3. Network-Status: `docker network ls > network_status.txt`

View File

@@ -1,17 +1,48 @@
# Pragmatic Production Deployment Setup
# Two-Layer Deployment System
## Architecture Overview
This deployment setup uses separate Docker Compose stacks for better maintainability and clear separation of concerns.
Das Deployment-System verwendet eine klare Trennung zwischen Infrastruktur-Layer und Application-Layer:
- **Layer 1 (Infrastruktur)**: Traefik, Gitea, PostgreSQL - separat deployt, läuft dauerhaft
- **Layer 2 (Application)**: PHP-App mit Nginx, Redis, Queue Workers - deployt aus dem Projekt
### Vorteile
- ✅ Klare Trennung: Infrastruktur vs. Application
- ✅ Einfach zu verwalten: Jede Schicht separat verwaltbar
- ✅ Gitea separat: Unabhängige Updates möglich
- ✅ Nutzt bestehende Struktur: Base+Override Pattern bleibt erhalten
- ✅ Skalierbar: Einfach erweiterbar
## Infrastructure Layer
Die Infrastruktur-Stacks befinden sich in `deployment/infrastructure/`:
- **Traefik** - Reverse Proxy mit SSL-Zertifikaten
- **Gitea** - Git Server mit eigener PostgreSQL-Instanz
- **PostgreSQL** - Shared Database für Application-Stacks
**Dokumentation:** Siehe [deployment/infrastructure/README.md](infrastructure/README.md)
**Deployment:**
```bash
cd deployment/infrastructure
./deploy.sh all # Deploys alle Stacks in korrekter Reihenfolge
```
## Application Layer
Die Application wird aus dem Projekt deployt und nutzt externe Infrastruktur über Docker Networks.
### Docker Compose Structure
The project uses a **Base + Override Pattern** to prevent configuration drift between environments:
Das Projekt verwendet ein **Base + Override Pattern**:
- **`docker-compose.base.yml`** - Shared base configuration (services, networks, volumes)
- **`docker-compose.local.yml`** - Local development overrides (ports, host mounts, debug flags)
- **`docker-compose.staging.yml`** - Staging environment overrides (Traefik labels, staging volumes)
- **`docker-compose.production.yml`** - Production environment overrides (security, logging, resources)
- **`docker-compose.base.yml`** - Gemeinsame Basis-Konfiguration
- **`docker-compose.local.yml`** - Local Development Overrides
- **`docker-compose.staging.yml`** - Staging Environment Overrides
- **`docker-compose.prod.yml`** - Production Environment Overrides
**Usage:**
```bash
@@ -22,282 +53,186 @@ docker compose -f docker-compose.base.yml -f docker-compose.local.yml up
docker compose -f docker-compose.base.yml -f docker-compose.staging.yml up
# Production
docker compose -f docker-compose.base.yml -f docker-compose.production.yml up
docker compose -f docker-compose.base.yml -f docker-compose.prod.yml up
```
**Benefits:**
- ✅ Single source of truth for shared configuration
- ✅ Environment-specific differences clearly visible
- ✅ Reduced configuration drift between environments
- ✅ Easier maintenance and updates
## Deployment Workflow
### Infrastructure Components
### Automatisches Deployment (Gitea Actions)
```
Production Server (94.16.110.151)
├── Stack 1: Traefik (Reverse Proxy & SSL)
├── Stack 2: Gitea (Git Server + MySQL + Redis)
├── Stack 3: Docker Registry (Private Registry)
├── Stack 4: Application (PHP + Nginx + Redis + Queue Workers)
├── Stack 5: PostgreSQL (Database)
└── Stack 6: Monitoring (Portainer + Grafana + Prometheus)
**Workflow:** `.gitea/workflows/deploy.yml`
Development Machine
└── Gitea Actions Runner (local, Docker-in-Docker)
- Trigger: Push zu `staging` oder `main` Branch
- Führt automatisch Deployment-Script aus
- Status-Reporting zurück zu Gitea
### Manuelles Deployment (SSH-Script)
**Script:** `deployment/scripts/deploy.sh`
```bash
# Staging deployen
./deployment/scripts/deploy.sh staging
# Production deployen
./deployment/scripts/deploy.sh production
# Mit Image-Build
./deployment/scripts/deploy.sh staging build
```
## Deployment Flow
**Was passiert:**
1. Secrets-Prüfung
2. Infrastructure-Networks-Prüfung
3. Docker Images pullen (optional: builden)
4. Docker Compose Up
5. Health Checks
6. Status-Report
## Networks
Das System verwendet folgende Docker Networks:
- **traefik-public** - Wird von Traefik erstellt, für externe Zugriffe
- **infrastructure** - Für interne Infrastruktur-Kommunikation (Gitea ↔ PostgreSQL)
- **app-internal** - Wird von PostgreSQL erstellt, für Application ↔ PostgreSQL Kommunikation
- **app-backend** - Internes Network für Application-Services (PHP ↔ Nginx ↔ Redis)
## Secrets Management
Secrets werden in `deployment/secrets/` Verzeichnissen gespeichert:
```
Developer → git push
Gitea (Production)
Gitea Actions (Dev Machine)
Build Docker Image
Push to Private Registry
SSH/Ansible → Production Server
docker compose pull
docker compose up -d
deployment/
├── infrastructure/
├── traefik/secrets/
│ ├── gitea/secrets/
└── postgresql/secrets/
└── secrets/
├── staging/
├── db_password.txt
│ ├── redis_password.txt
└── app_key.txt
└── production/
├── db_password.txt
├── redis_password.txt
└── app_key.txt
```
**Wichtig:** Secrets-Dateien sind gitignored und müssen manuell erstellt werden.
Siehe [deployment/infrastructure/SECRETS.md](infrastructure/SECRETS.md) für Details.
## Quick Start
### Initial Setup (einmalig)
1. **Infrastruktur deployen:**
```bash
cd deployment/infrastructure
./deploy.sh all
```
2. **Secrets konfigurieren:**
```bash
# Siehe deployment/infrastructure/SECRETS.md
```
3. **Application deployen:**
```bash
./deployment/scripts/deploy.sh staging
```
### Normaler Deployment-Workflow
1. **Code ändern und committen:**
```bash
git add .
git commit -m "feat: Add new feature"
git push origin staging # → Automatisches Deployment zu Staging
```
2. **Testen auf Staging:**
- Staging URL: `https://staging.michaelschiemer.de`
- Tests durchführen
3. **Nach erfolgreichem Test zu Production:**
```bash
git checkout main
git merge staging
git push origin main # → Automatisches Deployment zu Production
```
## Migration vom alten System
Falls Sie vom alten System migrieren, siehe [MIGRATION.md](MIGRATION.md) für eine detaillierte Anleitung.
## Directory Structure
```
deployment/
├── ansible/ # Ansible config, playbooks, inventory, templates
├── gitea-runner/ # Self-hosted Gitea Actions runner stack
├── stacks/ # Docker Compose stacks
│ ├── application/ # Main PHP application
│ ├── gitea/ # Git server
── minio/ # Object storage
│ ├── monitoring/ # Portainer, Grafana, Prometheus
── postgresql/ # PostgreSQL database
│ ├── registry/ # Private Docker registry
│ ├── staging/ # Optional staging stack
│ └── traefik/ # Reverse proxy with SSL certificates
├── docs/ # 📚 Dokumentation (siehe docs/README.md)
│ ├── guides/ # Anleitungen & Guides
│ ├── reference/ # Referenz-Dokumentation
│ ├── status/ # Status & Tracking
│ ├── tests/ # Test-Dokumentation
│ └── history/ # Logs & Historie
├── infrastructure/ # Infrastructure Layer
│ ├── traefik/
│ ├── gitea/
│ ├── postgresql/
│ ├── deploy.sh
── README.md
├── scripts/ # Deployment Scripts
── deploy.sh
├── secrets/ # Application Secrets (gitignored)
│ ├── staging/
│ └── production/
├── legacy/ # Altes System (nur Referenz)
└── README.md (dieses Dokument)
```
## Getting Started
### 🚀 Quick Start: Code deployen
**Einfachste Methode:**
```bash
git add .
git commit -m "feat: Add new feature"
git push origin main # → Automatisches Deployment!
```
**Pipeline-Status:** `https://git.michaelschiemer.de/michael/michaelschiemer/actions`
**📖 Vollständige Anleitung:** Siehe [docs/guides/quick-start.md](docs/guides/quick-start.md) oder [docs/guides/code-change-workflow.md](docs/guides/code-change-workflow.md)
---
### Initial Setup (nur bei erstem Setup)
**Prerequisites:**
**Production Server:**
- Docker & Docker Compose installed
- Firewall configured (ports 80, 443, 2222)
- User `deploy` with Docker permissions
- SSH access configured
**Development Machine:**
- Docker & Docker Compose installed
- Ansible installed
- SSH key configured for production server
**Deployment via Ansible:**
```bash
cd deployment/ansible
ansible-playbook -i inventory/production.yml playbooks/setup-infrastructure.yml
```
Dieses Playbook deployed alle Stacks:
- Traefik (Reverse Proxy & SSL)
- PostgreSQL (Database)
- Docker Registry (Private Registry)
- Gitea (Git Server)
- Monitoring (Portainer, Grafana, Prometheus)
- **Application Stack** (PHP Application + Nginx + Redis + Queue Workers)
**📖 Vollständige Setup-Anleitung:** Siehe [SETUP-GUIDE.md](SETUP-GUIDE.md)
## Stack Documentation
Each stack has its own README with detailed configuration:
- [Traefik](stacks/traefik/README.md) - Reverse proxy setup
- [Gitea](stacks/gitea/README.md) - Git server configuration
- [Registry](stacks/registry/README.md) - Private registry setup
- [Application](stacks/application/README.md) - Application deployment
- [PostgreSQL](stacks/postgresql/README.md) - Database configuration
- [Monitoring](stacks/monitoring/README.md) - Monitoring stack
## Deployment Commands
### Code deployen (Image-basiert)
```bash
cd deployment/ansible
ansible-playbook -i inventory/production.yml \
playbooks/deploy-update.yml \
-e "image_tag=abc1234-1696234567"
```
### Code synchen (Git-basiert)
```bash
cd deployment/ansible
ansible-playbook -i inventory/production.yml \
playbooks/sync-code.yml \
-e "git_branch=main"
```
### Rollback zu vorheriger Version
```bash
cd deployment/ansible
ansible-playbook -i inventory/production.yml \
playbooks/rollback.yml
```
**📖 Vollständige Command-Referenz:** Siehe [docs/guides/deployment-commands.md](docs/guides/deployment-commands.md)
### Update Specific Stack
```bash
cd stacks/<stack-name>
docker compose pull
docker compose up -d
```
## CI/CD Pipeline
The CI/CD pipeline is defined in `.gitea/workflows/production-deploy.yml` and runs automatically on push to `main` branch.
### Quick Start: Deploy Code Changes
```bash
# 1. Make changes locally
# ... edit files ...
# 2. Commit changes
git add .
git commit -m "feat: Add new feature"
# 3. Push to main → Automatic deployment starts
git push origin main
```
**What happens automatically:**
- ✅ Tests run (~2-5 min)
- ✅ Docker image is built (~3-5 min)
- ✅ Image is pushed to registry (~1-2 min)
- ✅ Ansible deployment runs (~2-4 min)
- ✅ Application stack is updated
**Total time:** ~8-15 minutes
**Status check:**
- Pipeline status: `https://git.michaelschiemer.de/michael/michaelschiemer/actions`
- Application status: `ssh deploy@94.16.110.151 "cd ~/deployment/stacks/application && docker compose ps"`
**📖 Vollständige Dokumentation:**
- **[docs/guides/quick-start.md](docs/guides/quick-start.md)** ⭐ - Schnellstart-Guide für Deployment
- **[docs/guides/code-change-workflow.md](docs/guides/code-change-workflow.md)** - Kompletter Guide für Codeänderungen
- **[docs/reference/application-stack.md](docs/reference/application-stack.md)** - Detaillierter Deployment-Ablauf
- **[docs/status/ci-cd-status.md](docs/status/ci-cd-status.md)** - CI/CD Pipeline Status & Checkliste
- **[docs/status/deployment-summary.md](docs/status/deployment-summary.md)** - Projekt-Status Übersicht
### Pipeline Details
The CI/CD pipeline runs on push to main branch:
1. **Build Stage**: Build Docker image
2. **Push Stage**: Push to private registry
3. **Deploy Stage**: Deploy to production via Ansible
## Monitoring
Access monitoring tools:
- **Portainer**: https://portainer.yourdomain.com
- **Grafana**: https://grafana.yourdomain.com
- **Prometheus**: https://prometheus.yourdomain.com
## Backup & Recovery
### Current State
Infrastructure backups are handled per stack. The PostgreSQL stack ships helper scripts under `stacks/postgresql/scripts/` (see `backup.sh` and `restore.sh`). Registry and Gitea data snapshots are currently managed manually on the host.
### Roadmap
An Ansible-level backup/restore playbook is still planned. Track progress in `DEPLOYMENT-TODO.md` and update this section once the playbook is available.
## Security
- All external services behind Traefik with HTTPS
- Private registry with BasicAuth
- Secrets managed via Ansible Vault
- Regular security updates via Watchtower
## Troubleshooting
### Check Stack Health
### Infrastructure-Probleme
```bash
cd stacks/<stack-name>
docker compose ps
# Traefik nicht erreichbar
cd deployment/infrastructure/traefik
docker compose logs -f
# PostgreSQL-Verbindungsprobleme
cd deployment/infrastructure/postgresql
docker compose logs postgres
docker network inspect app-internal
```
### Check Service Connectivity
### Application-Probleme
```bash
curl -I https://app.yourdomain.com
# Container-Status prüfen
docker compose -f docker-compose.base.yml -f docker-compose.prod.yml ps
# Logs anzeigen
docker compose -f docker-compose.base.yml -f docker-compose.prod.yml logs -f
# Health Checks
curl https://michaelschiemer.de/health
```
### Network-Probleme
```bash
# Networks prüfen
docker network ls
docker network inspect traefik-public
docker network inspect app-internal
docker network inspect infrastructure
```
### View Logs
```bash
# Application logs (Production)
docker compose -f docker-compose.base.yml -f docker-compose.production.yml logs -f php
## Weitere Dokumentation
# Traefik logs
docker compose -f stacks/traefik/docker-compose.yml logs -f
```
## 📚 Dokumentation Index
**Vollständige Dokumentations-Übersicht:** Siehe [docs/README.md](docs/README.md)
**Wichtigste Dokumente:**
- **[docs/guides/quick-start.md](docs/guides/quick-start.md)** ⭐ - Schnellstart
- **[docs/guides/code-change-workflow.md](docs/guides/code-change-workflow.md)** - Code deployen
- **[docs/reference/application-stack.md](docs/reference/application-stack.md)** - Deployment-Details
- **[docs/status/ci-cd-status.md](docs/status/ci-cd-status.md)** - CI/CD Status
- **[docs/status/deployment-summary.md](docs/status/deployment-summary.md)** - Projekt-Übersicht
- [Infrastructure Layer](infrastructure/README.md) - Infrastruktur-Dokumentation
- [Migration Guide](MIGRATION.md) - Migration vom alten System
- [Secrets Management](infrastructure/SECRETS.md) - Secrets-Verwaltung
## Support
Für spezifische Fragen helfen die folgenden Dokumente weiter:
- [docs/reference/workflow-troubleshooting.md](docs/reference/workflow-troubleshooting.md) Fehleranalyse für Laufzeiten & Pipelines
- [docs/status/ci-cd-status.md](docs/status/ci-cd-status.md) Pipeline-Status & Checklisten
- [docs/status/deployment-summary.md](docs/status/deployment-summary.md) Aktueller Projektüberblick
- [docs/reference/application-stack.md](docs/reference/application-stack.md) Detaillierte Deployment-Schritte
## License
This deployment configuration is part of the Custom PHP Framework project.
Bei Problemen:
1. Logs sammeln: `docker compose logs > debug_logs.txt`
2. Container-Status: `docker compose ps`
3. Network-Status: `docker network ls`

View File

@@ -0,0 +1 @@
# CI/CD Pipeline Test - Sat Nov 8 11:16:01 AM CET 2025

View File

@@ -1 +0,0 @@
../../group_vars/production.yml

View File

@@ -1,284 +0,0 @@
# WireGuard VPN Setup
WireGuard VPN-Server Installation und Konfiguration via Ansible.
## Übersicht
Dieses Ansible Setup installiert und konfiguriert einen WireGuard VPN-Server auf dem Production-Server, um sicheren Zugriff auf interne Services zu ermöglichen.
## Playbooks
### 1. setup-wireguard.yml
Installiert und konfiguriert den WireGuard VPN-Server.
**Features:**
- Installiert WireGuard und Tools
- Generiert Server-Keys (falls nicht vorhanden)
- Konfiguriert WireGuard-Server
- Aktiviert IP Forwarding
- Konfiguriert NAT (Masquerading)
- Öffnet Firewall-Port (51820/udp)
- Startet WireGuard-Service
**Verwendung:**
```bash
cd deployment/ansible
ansible-playbook -i inventory/production.yml playbooks/setup-wireguard.yml
```
**Variablen:**
- `wireguard_port`: Port für WireGuard (Standard: 51820)
- `wireguard_network`: VPN-Netzwerk (Standard: 10.8.0.0/24)
- `wireguard_server_ip`: Server-IP im VPN (Standard: 10.8.0.1)
**Beispiel mit Custom-Parametern:**
```bash
ansible-playbook -i inventory/production.yml playbooks/setup-wireguard.yml \
-e "wireguard_port=51820" \
-e "wireguard_network=10.8.0.0/24" \
-e "wireguard_server_ip=10.8.0.1"
```
### 2. add-wireguard-client.yml
Fügt einen neuen Client zum WireGuard-Server hinzu.
**Features:**
- Generiert Client-Keys
- Fügt Client zur Server-Config hinzu
- Erstellt Client-Konfigurationsdatei
- Generiert QR-Code (falls qrencode installiert)
- Restartet WireGuard-Service
**Verwendung:**
```bash
cd deployment/ansible
ansible-playbook -i inventory/production.yml playbooks/add-wireguard-client.yml \
-e "client_name=myclient"
```
**Optionale Parameter:**
- `client_ip`: Spezifische Client-IP (Standard: automatisch berechnet)
- `allowed_ips`: Erlaubte IP-Ranges (Standard: gesamtes VPN-Netzwerk)
**Beispiel mit spezifischer IP:**
```bash
ansible-playbook -i inventory/production.yml playbooks/add-wireguard-client.yml \
-e "client_name=myclient" \
-e "client_ip=10.8.0.2"
```
## Wichtige Sicherheitshinweise
### SSH-Zugriff bleibt verfügbar
**WICHTIG**: Die WireGuard-Konfiguration ändert NICHT die SSH-Zugriffsmöglichkeiten:
- ✅ SSH über die normale Server-IP bleibt vollständig funktionsfähig
- ✅ WireGuard routet standardmäßig nur das VPN-Netzwerk (10.8.0.0/24)
- ✅ Normale Internet-Routen werden nicht geändert
- ✅ Firewall-Regeln für SSH (Port 22) werden NICHT entfernt oder blockiert
Die Client-Konfiguration verwendet standardmäßig `AllowedIPs = 10.8.0.0/24`, was bedeutet, dass nur Traffic für das VPN-Netzwerk über WireGuard geroutet wird. Alle anderen Verbindungen (inkl. SSH) nutzen weiterhin die normale Internet-Verbindung.
**Um SSH komplett über VPN zu routen** (nicht empfohlen für die erste Installation):
```bash
ansible-playbook ... -e "allowed_ips=0.0.0.0/0"
```
## Verzeichnisstruktur
Nach der Installation:
```
/etc/wireguard/
├── wg0.conf # Server-Konfiguration
├── wg0_private.key # Server-Private-Key (600)
├── wg0_public.key # Server-Public-Key (644)
└── clients/ # Client-Konfigurationen
├── client1.conf # Client 1 Config
└── client2.conf # Client 2 Config
```
## Client-Konfiguration verwenden
### 1. Config-Datei auf Client kopieren
```bash
# Von Ansible Control Machine
scp -i ~/.ssh/production \
deploy@94.16.110.151:/etc/wireguard/clients/myclient.conf \
~/myclient.conf
```
### 2. WireGuard auf Client installieren
**Linux:**
```bash
sudo apt install wireguard wireguard-tools # Ubuntu/Debian
# oder
sudo yum install wireguard-tools # CentOS/RHEL
```
**macOS:**
```bash
brew install wireguard-tools
```
**Windows:**
Download von https://www.wireguard.com/install/
### 3. VPN verbinden
**Linux/macOS:**
```bash
sudo wg-quick up ~/myclient.conf
# oder
sudo wg-quick up myclient
```
**Windows:**
Importiere die `.conf`-Datei in die WireGuard-App.
### 4. Verbindung testen
```bash
# Ping zum Server
ping 10.8.0.1
# Status prüfen
sudo wg show
# VPN trennen
sudo wg-quick down myclient
```
## QR-Code für Mobile Client
Falls `qrencode` installiert ist, wird beim Hinzufügen eines Clients automatisch ein QR-Code angezeigt:
```bash
ansible-playbook -i inventory/production.yml playbooks/add-wireguard-client.yml \
-e "client_name=myphone"
```
Der QR-Code kann mit der WireGuard Mobile App (iOS/Android) gescannt werden.
## Firewall-Konfiguration
Das Playbook öffnet automatisch den WireGuard-Port (51820/udp) in UFW, falls installiert.
**Manuelle Firewall-Regeln:**
```bash
# UFW
sudo ufw allow 51820/udp comment 'WireGuard VPN'
# iptables direkt
sudo iptables -A INPUT -p udp --dport 51820 -j ACCEPT
```
## Troubleshooting
### WireGuard startet nicht
```bash
# Status prüfen
sudo systemctl status wg-quick@wg0
# Logs anzeigen
sudo journalctl -u wg-quick@wg0 -f
# Manuell starten
sudo wg-quick up wg0
```
### Client kann nicht verbinden
1. **Firewall prüfen:**
```bash
sudo ufw status
sudo iptables -L -n | grep 51820
```
2. **Server-Logs prüfen:**
```bash
sudo journalctl -u wg-quick@wg0 -f
```
3. **Server-Status prüfen:**
```bash
sudo wg show
```
4. **Routing prüfen:**
```bash
sudo ip route show
```
### IP Forwarding nicht aktiv
```bash
# Manuell aktivieren
echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
# Permanent machen
echo 'net.ipv4.ip_forward=1' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
```
## Client entfernen
Um einen Client zu entfernen:
```bash
# Auf dem Server
sudo nano /etc/wireguard/wg0.conf
# Entferne den [Peer] Block für den Client
sudo wg-quick down wg0
sudo wg-quick up wg0
# Optional: Client-Config löschen
sudo rm /etc/wireguard/clients/clientname.conf
```
## Server-Public-Key abrufen
```bash
# Auf dem Server
cat /etc/wireguard/wg0_public.key
# oder
sudo cat /etc/wireguard/wg0_private.key | wg pubkey
```
## Best Practices
1. **Backup der Keys**: Speichere Server-Keys sicher:
```bash
sudo tar czf wireguard-backup.tar.gz /etc/wireguard/
```
2. **Regelmäßige Updates:**
```bash
sudo apt update && sudo apt upgrade wireguard wireguard-tools
```
3. **Monitoring**: Überwache VPN-Verbindungen:
```bash
sudo wg show
```
4. **Sicherheit**:
- Verwalte Client-Keys sicher
- Entferne nicht genutzte Clients
- Nutze starke Passwörter für Server-Zugriff
## Support
Bei Problemen:
1. Prüfe Logs: `sudo journalctl -u wg-quick@wg0`
2. Prüfe Status: `sudo wg show`
3. Prüfe Firewall: `sudo ufw status`
4. Teste Connectivity: `ping 10.8.0.1` (vom Client)

View File

@@ -1,205 +0,0 @@
---
- name: Add WireGuard Client
hosts: production
become: yes
gather_facts: yes
vars:
wireguard_interface: "wg0"
wireguard_config_path: "/etc/wireguard"
wireguard_config_file: "{{ wireguard_config_path }}/{{ wireguard_interface }}.conf"
wireguard_client_configs_path: "/etc/wireguard/clients"
wireguard_local_client_configs_dir: "{{ playbook_dir }}/../wireguard-clients"
wireguard_dns_servers: []
pre_tasks:
- name: Set WireGuard network
set_fact:
wireguard_network: "{{ wireguard_network | default('10.8.0.0/24') }}"
- name: Set WireGuard other variables with defaults
set_fact:
wireguard_port: "{{ wireguard_port | default(51820) }}"
client_ip: "{{ client_ip | default('') }}"
# IMPORTANT: Default to VPN network only (not 0.0.0.0/0)
# This ensures SSH access via normal IP remains available
allowed_ips: "{{ allowed_ips | default(wireguard_network) }}"
tasks:
- name: Validate client name
fail:
msg: "client_name is required. Usage: ansible-playbook ... -e 'client_name=myclient'"
when: client_name is not defined or client_name == ""
- name: Get server external IP address
uri:
url: https://api.ipify.org
return_content: yes
register: server_external_ip
changed_when: false
failed_when: false
- name: Set server external IP
set_fact:
server_external_ip_content: "{{ ansible_host | default(server_external_ip.content | default('')) }}"
- name: Check if WireGuard config exists
stat:
path: "{{ wireguard_config_file }}"
register: wireguard_config_exists
- name: Fail if WireGuard not configured
fail:
msg: "WireGuard server not configured. Please run setup-wireguard.yml first."
when: not wireguard_config_exists.stat.exists
- name: Read WireGuard server config
slurp:
src: "{{ wireguard_config_file }}"
register: wireguard_server_config_read
- name: Extract server IP from config
set_fact:
server_vpn_ip: "{{ (wireguard_server_config_read.content | b64decode | regex_search('Address = ([0-9.]+)', '\\1')) | first | default('10.8.0.1') }}"
- name: Extract WireGuard server IP octets
set_fact:
wireguard_server_ip_octets: "{{ server_vpn_ip.split('.') }}"
when: client_ip == ""
- name: Gather existing client addresses
set_fact:
existing_client_ips: "{{ (wireguard_server_config_read.content | b64decode | regex_findall('AllowedIPs = ([0-9A-Za-z.]+)/32', '\\1')) }}"
when: client_ip == ""
- name: Calculate client IP if not provided
vars:
existing_last_octets: "{{ (existing_client_ips | default([])) | map('regex_replace', '^(?:\\d+\\.\\d+\\.\\d+\\.)', '') | select('match', '^[0-9]+$') | map('int') | list }}"
server_last_octet: "{{ wireguard_server_ip_octets[3] | int }}"
next_octet_candidate: "{{ (existing_last_octets + [server_last_octet]) | map('int') | list | max + 1 if (existing_last_octets + [server_last_octet]) else server_last_octet + 1 }}"
set_fact:
client_ip: "{{ [
wireguard_server_ip_octets[0],
wireguard_server_ip_octets[1],
wireguard_server_ip_octets[2],
next_octet_candidate
] | join('.') }}"
when: client_ip == ""
- name: Generate client private key
command: "wg genkey"
register: client_private_key
changed_when: true
no_log: yes
- name: Generate client public key
command: "wg pubkey"
args:
stdin: "{{ client_private_key.stdout }}"
register: client_public_key
changed_when: false
no_log: yes
- name: Add client to WireGuard server config
blockinfile:
path: "{{ wireguard_config_file }}"
block: |
# Client: {{ client_name }}
[Peer]
PublicKey = {{ client_public_key.stdout }}
AllowedIPs = {{ client_ip }}/32
marker: "# {mark} ANSIBLE MANAGED BLOCK - Client: {{ client_name }}"
register: wireguard_client_block
- name: Ensure client configs directory exists
file:
path: "{{ wireguard_client_configs_path }}"
state: directory
mode: '0700'
owner: root
group: root
- name: Ensure local client configs directory exists
file:
path: "{{ wireguard_local_client_configs_dir }}"
state: directory
mode: '0700'
delegate_to: localhost
become: no
run_once: true
- name: Get server public key
shell: "cat {{ wireguard_config_path }}/{{ wireguard_interface }}_private.key | wg pubkey"
register: server_public_key_cmd
changed_when: false
no_log: yes
failed_when: false
- name: Create client configuration file
template:
src: "{{ playbook_dir }}/../templates/wireguard-client.conf.j2"
dest: "{{ wireguard_client_configs_path }}/{{ client_name }}.conf"
mode: '0600'
owner: root
group: root
- name: Download client configuration to control machine
fetch:
src: "{{ wireguard_client_configs_path }}/{{ client_name }}.conf"
dest: "{{ wireguard_local_client_configs_dir }}/{{ client_name }}.conf"
flat: yes
mode: '0600'
- name: Ensure local client configuration has strict permissions
file:
path: "{{ wireguard_local_client_configs_dir }}/{{ client_name }}.conf"
mode: '0600'
delegate_to: localhost
become: no
- name: Read WireGuard server config to find server IP
slurp:
src: "{{ wireguard_config_file }}"
register: wireguard_server_config_read
- name: Restart WireGuard service
systemd:
name: "wg-quick@{{ wireguard_interface }}"
state: restarted
when: wireguard_client_block.changed
- name: Display client configuration
debug:
msg: |
========================================
WireGuard Client Added: {{ client_name }}
========================================
Client Configuration File:
{{ wireguard_client_configs_path }}/{{ client_name }}.conf
Local Copy:
{{ wireguard_local_client_configs_dir }}/{{ client_name }}.conf
Client IP: {{ client_ip }}
Server Endpoint: {{ server_external_ip_content }}:{{ wireguard_port }}
To use this configuration:
1. Copy the config file to your client machine
2. Install WireGuard client
3. Run: sudo wg-quick up {{ client_name }}
Or scan the QR code (if qrencode installed):
qrencode -t ansiutf8 < {{ wireguard_client_configs_path }}/{{ client_name }}.conf
========================================
- name: Generate QR code for client config
command: "qrencode -t ansiutf8 -r {{ wireguard_client_configs_path }}/{{ client_name }}.conf"
register: qr_code
changed_when: false
failed_when: false
- name: Display QR code
debug:
msg: "{{ qr_code.stdout }}"
when: qr_code.rc == 0

View File

@@ -1,78 +0,0 @@
---
- name: Check Traefik Logs After Grafana Access
hosts: production
gather_facts: no
become: no
tasks:
- name: Instructions
debug:
msg:
- "=== ANWEISUNG ==="
- "Bitte mache JETZT einen Zugriff auf https://grafana.michaelschiemer.de im Browser"
- "Dann pr?fe ich die Logs und sage dir, ob Traffic ?ber VPN kommt"
- ""
- name: Wait for access
pause:
seconds: 15
- name: Get last 20 Grafana requests
shell: |
cd ~/deployment/stacks/traefik
tail -500 logs/access.log | grep -i grafana | tail -20
args:
executable: /bin/bash
register: recent_grafana_logs
ignore_errors: yes
failed_when: false
- name: Extract and display client IPs with timestamps
shell: |
cd ~/deployment/stacks/traefik
tail -100 logs/access.log | grep -i grafana | tail -10 | while IFS= read -r line; do
time=$(echo "$line" | grep -oP '"time":"[^"]*"' | sed 's/"time":"//;s/"//' | cut -d'T' -f2 | cut -d'+' -f1)
client=$(echo "$line" | grep -oP '"ClientHost":"[^"]*"' | sed 's/"ClientHost":"//;s/"//')
status=$(echo "$line" | grep -oP '"DownstreamStatus":[0-9]+' | sed 's/"DownstreamStatus"://')
if [[ "$client" =~ ^10\.8\.0\.[0-9]+$ ]]; then
echo "$time | ClientHost: $client | Status: $status ? VPN-IP (Traffic kommt ?ber VPN!)"
elif [[ "$client" == "89.246.96.244" ]]; then
echo "$time | ClientHost: $client | Status: $status ? ?ffentliche IP (Traffic kommt NICHT ?ber VPN)"
else
echo "$time | ClientHost: $client | Status: $status ? Unbekannte IP"
fi
done
args:
executable: /bin/bash
register: analysis_result
ignore_errors: yes
failed_when: false
- name: Display analysis
debug:
msg: "{{ analysis_result.stdout_lines }}"
- name: Get unique client IPs from last 10 requests
shell: |
cd ~/deployment/stacks/traefik
tail -100 logs/access.log | grep -i grafana | tail -10 | grep -oP '"ClientHost":"[^"]*"' | sed 's/"ClientHost":"//;s/"//' | sort -u
args:
executable: /bin/bash
register: unique_ips
ignore_errors: yes
failed_when: false
- name: Display unique IPs
debug:
msg: "{{ unique_ips.stdout_lines }}"
- name: Final verdict
debug:
msg:
- ""
- "=== ERGEBNIS ==="
- "Pr?fe die obigen Zeilen:"
- "- ? Wenn ClientHost: 10.8.0.7 ? Traffic kommt ?ber VPN!"
- "- ? Wenn ClientHost: 89.246.96.244 ? Traffic kommt NICHT ?ber VPN"
- ""
- "N?chster Schritt: Wenn VPN funktioniert, entfernen wir die tempor?re IP-Erlaubnis!"

View File

@@ -1,40 +0,0 @@
---
- name: Check Docker Compose Logs for JSON Output
hosts: production
gather_facts: yes
become: no
tasks:
- name: Get recent docker compose logs for staging-app (JSON format check)
shell: |
cd ~/deployment/stacks/staging
echo "=== Last 100 lines of staging-app logs ==="
docker compose logs --tail=100 staging-app 2>&1 | tail -50
echo ""
echo "=== Checking for JSON logs ==="
docker compose logs --tail=200 staging-app 2>&1 | grep -E '^{"|^\{' | head -5 || echo "No JSON logs found (or logs are in plain text)"
args:
executable: /bin/bash
register: compose_logs
ignore_errors: yes
failed_when: false
- name: Display compose logs
debug:
msg: "{{ compose_logs.stdout_lines }}"
- name: Get all recent logs from all staging services
shell: |
cd ~/deployment/stacks/staging
echo "=== All staging services logs (last 30 lines each) ==="
docker compose logs --tail=30 2>&1
args:
executable: /bin/bash
register: all_logs
ignore_errors: yes
failed_when: false
- name: Display all logs
debug:
msg: "{{ all_logs.stdout_lines }}"
when: all_logs.stdout_lines is defined

View File

@@ -1,63 +0,0 @@
---
- name: Check Entrypoint Script Execution
hosts: production
gather_facts: yes
become: no
tasks:
- name: Check when nginx container started
shell: |
cd ~/deployment/stacks/staging
docker compose ps staging-nginx --format "{{.Status}}" || echo "Container not running"
args:
executable: /bin/bash
register: container_status
ignore_errors: yes
failed_when: false
- name: Display container status
debug:
msg: "{{ container_status.stdout }}"
- name: Check entrypoint logs
shell: |
cd ~/deployment/stacks/staging
echo "=== Entrypoint logs (startup) ==="
docker compose logs staging-nginx 2>&1 | grep -E "(??|Fixing|PHP-FPM|upstream)" | head -20
args:
executable: /bin/bash
register: entrypoint_logs
ignore_errors: yes
failed_when: false
- name: Display entrypoint logs
debug:
msg: "{{ entrypoint_logs.stdout_lines }}"
- name: Check if sites-available/default is a volume mount
shell: |
cd ~/deployment/stacks/staging
docker inspect staging-nginx 2>&1 | grep -A 20 "Mounts" | grep "sites-available\|sites-enabled" || echo "No volume mounts for sites-available"
args:
executable: /bin/bash
register: volume_check
ignore_errors: yes
failed_when: false
- name: Display volume check
debug:
msg: "{{ volume_check.stdout_lines }}"
- name: Check when sites-available/default was last modified
shell: |
cd ~/deployment/stacks/staging
docker compose exec -T staging-nginx stat -c "%y" /etc/nginx/sites-available/default 2>&1 || echo "Could not get file stat"
args:
executable: /bin/bash
register: file_stat
ignore_errors: yes
failed_when: false
- name: Display file modification time
debug:
msg: "{{ file_stat.stdout_lines }}"

View File

@@ -1,68 +0,0 @@
---
- name: Check Git Deployment Logs
hosts: production
gather_facts: yes
become: no
tasks:
- name: Get full container logs
shell: |
docker logs app --tail 100
args:
executable: /bin/bash
register: container_logs
changed_when: false
- name: Get Git-related logs
shell: |
docker logs app --tail 100 | grep -E "(Git|Clone|Pull|✅|❌|📥|📦|🔄|🗑️)" || echo "No Git-related logs found"
args:
executable: /bin/bash
register: git_logs
changed_when: false
- name: Check GIT_REPOSITORY_URL environment variable
shell: |
docker exec app env | grep GIT_REPOSITORY_URL || echo "GIT_REPOSITORY_URL not set"
args:
executable: /bin/bash
register: git_env
changed_when: false
ignore_errors: yes
- name: Check if .git directory exists
shell: |
docker exec app test -d /var/www/html/.git && echo "✅ Git repo vorhanden" || echo "❌ Git repo fehlt"
args:
executable: /bin/bash
register: git_repo_check
changed_when: false
ignore_errors: yes
- name: Check entrypoint script for Git functionality
shell: |
docker exec app cat /usr/local/bin/entrypoint.sh | grep -A 5 "GIT_REPOSITORY_URL" | head -10 || echo "Entrypoint script not found or no Git functionality"
args:
executable: /bin/bash
register: entrypoint_check
changed_when: false
ignore_errors: yes
- name: Display Git-related logs
debug:
msg:
- "=== Git-Related Logs ==="
- "{{ git_logs.stdout }}"
- ""
- "=== Git Environment Variable ==="
- "{{ git_env.stdout }}"
- ""
- "=== Git Repository Check ==="
- "{{ git_repo_check.stdout }}"
- ""
- "=== Entrypoint Git Check ==="
- "{{ entrypoint_check.stdout }}"
- name: Display full logs (last 50 lines)
debug:
msg: "{{ container_logs.stdout_lines[-50:] | join('\n') }}"

View File

@@ -1,192 +0,0 @@
---
- name: Diagnose Gitea Bad Gateway Issue
hosts: production
gather_facts: yes
become: no
vars:
gitea_stack_path: "{{ stacks_base_path }}/gitea"
tasks:
- name: Check if Gitea stack directory exists
stat:
path: "{{ gitea_stack_path }}"
register: gitea_stack_dir
- name: Display Gitea stack directory status
debug:
msg: "Gitea stack path: {{ gitea_stack_path }} - Exists: {{ gitea_stack_dir.stat.exists }}"
- name: Check Gitea container status
shell: |
cd {{ gitea_stack_path }}
echo "=== Gitea Container Status ==="
docker compose ps 2>&1 || echo "Could not check container status"
args:
executable: /bin/bash
register: gitea_status
ignore_errors: yes
failed_when: false
when: gitea_stack_dir.stat.exists
- name: Display Gitea container status
debug:
msg: "{{ gitea_status.stdout_lines }}"
when: gitea_stack_dir.stat.exists
- name: Check if Gitea container is running
shell: |
docker ps --filter name=gitea --format "{{ '{{' }}.Names{{ '}}' }}: {{ '{{' }}.Status{{ '}}' }}"
register: gitea_running
ignore_errors: yes
failed_when: false
- name: Display Gitea running status
debug:
msg: "{{ gitea_running.stdout_lines if gitea_running.stdout else 'Gitea container not found' }}"
- name: Check Gitea logs (last 50 lines)
shell: |
cd {{ gitea_stack_path }}
echo "=== Gitea Logs (Last 50 lines) ==="
docker compose logs --tail=50 gitea 2>&1 || echo "Could not read Gitea logs"
args:
executable: /bin/bash
register: gitea_logs
ignore_errors: yes
failed_when: false
when: gitea_stack_dir.stat.exists
- name: Display Gitea logs
debug:
msg: "{{ gitea_logs.stdout_lines }}"
when: gitea_stack_dir.stat.exists
- name: Check Gitea container health
shell: |
docker inspect gitea --format '{{ '{{' }}.State.Health.Status{{ '}}' }}' 2>&1 || echo "Could not check health"
register: gitea_health
ignore_errors: yes
failed_when: false
- name: Display Gitea health status
debug:
msg: "Gitea health: {{ gitea_health.stdout }}"
- name: Test Gitea health endpoint from container
shell: |
docker exec gitea curl -f http://localhost:3000/api/healthz 2>&1 || echo "Health check failed"
register: gitea_internal_health
ignore_errors: yes
failed_when: false
- name: Display internal health check result
debug:
msg: "{{ gitea_internal_health.stdout_lines }}"
- name: Check if Gitea is reachable from Traefik network
shell: |
docker exec traefik curl -f http://gitea:3000/api/healthz 2>&1 || echo "Could not reach Gitea from Traefik network"
register: gitea_from_traefik
ignore_errors: yes
failed_when: false
- name: Display Traefik to Gitea connectivity
debug:
msg: "{{ gitea_from_traefik.stdout_lines }}"
- name: Check Traefik logs for Gitea errors
shell: |
cd {{ stacks_base_path }}/traefik
echo "=== Traefik Logs - Gitea related (Last 30 lines) ==="
docker compose logs --tail=100 traefik 2>&1 | grep -i "gitea" | tail -30 || echo "No Gitea-related logs found"
args:
executable: /bin/bash
register: traefik_gitea_logs
ignore_errors: yes
failed_when: false
- name: Display Traefik Gitea logs
debug:
msg: "{{ traefik_gitea_logs.stdout_lines }}"
- name: Check Docker networks
shell: |
echo "=== Docker Networks ==="
docker network ls
echo ""
echo "=== Traefik Network Details ==="
docker network inspect traefik-public 2>&1 | grep -E "(Name|Subnet|Containers|gitea)" || echo "Could not inspect traefik-public network"
args:
executable: /bin/bash
register: network_info
ignore_errors: yes
failed_when: false
- name: Display network info
debug:
msg: "{{ network_info.stdout_lines }}"
- name: Check if Gitea is in traefik-public network
shell: |
docker network inspect traefik-public 2>&1 | grep -i "gitea" || echo "Gitea not found in traefik-public network"
register: gitea_in_network
ignore_errors: yes
failed_when: false
- name: Display Gitea network membership
debug:
msg: "{{ gitea_in_network.stdout_lines }}"
- name: Check Gitea container configuration
shell: |
echo "=== Gitea Container Labels ==="
docker inspect gitea --format '{{ '{{' }}range .Config.Labels{{ '}}' }}{{ '{{' }}.Key{{ '}}' }}={{ '{{' }}.Value{{ '}}' }}{{ '{{' }}\n{{ '}}' }}{{ '{{' }}end{{ '}}' }}' 2>&1 | grep -i traefik || echo "No Traefik labels found"
register: gitea_labels
ignore_errors: yes
failed_when: false
- name: Display Gitea labels
debug:
msg: "{{ gitea_labels.stdout_lines }}"
- name: Check Traefik service registration
shell: |
docker exec traefik wget -qO- http://localhost:8080/api/http/services 2>&1 | grep -i gitea || echo "Gitea service not found in Traefik API"
register: traefik_service
ignore_errors: yes
failed_when: false
- name: Display Traefik service registration
debug:
msg: "{{ traefik_service.stdout_lines }}"
- name: Test external Gitea access
shell: |
echo "=== Testing External Gitea Access ==="
curl -k -H "User-Agent: Mozilla/5.0" -s -o /dev/null -w "HTTP Status: %{http_code}\n" https://git.michaelschiemer.de/ 2>&1 || echo "Connection failed"
args:
executable: /bin/bash
register: external_test
ignore_errors: yes
failed_when: false
- name: Display external test result
debug:
msg: "{{ external_test.stdout_lines }}"
- name: Summary
debug:
msg:
- "=== DIAGNOSIS SUMMARY ==="
- "1. Check if Gitea container is running"
- "2. Check if Gitea is in traefik-public network"
- "3. Check Gitea health endpoint (port 3000)"
- "4. Check Traefik can reach Gitea"
- "5. Check Traefik logs for errors"
- ""
- "Common issues:"
- "- Container not running: Restart with 'docker compose up -d' in {{ gitea_stack_path }}"
- "- Not in network: Recreate container or add to network"
- "- Health check failing: Check Gitea logs for errors"
- "- Traefik can't reach: Check network configuration"

View File

@@ -1,68 +0,0 @@
---
- name: Check Grafana Logs After Test
hosts: production
gather_facts: no
become: no
tasks:
- name: Check last 20 Grafana access attempts
shell: |
cd ~/deployment/stacks/traefik
tail -200 logs/access.log | grep -i grafana | tail -20
args:
executable: /bin/bash
register: latest_logs
ignore_errors: yes
failed_when: false
- name: Extract client IPs with timestamps
shell: |
cd ~/deployment/stacks/traefik
tail -100 logs/access.log | grep -i grafana | tail -10 | while IFS= read -r line; do
time=$(echo "$line" | grep -oP '"time":"[^"]*"' | sed 's/"time":"//;s/"//' | cut -d'T' -f2 | cut -d'+' -f1)
client=$(echo "$line" | grep -oP '"ClientHost":"[^"]*"' | sed 's/"ClientHost":"//;s/"//')
status=$(echo "$line" | grep -oP '"DownstreamStatus":[0-9]+' | sed 's/"DownstreamStatus"://')
if [[ "$client" =~ ^10\.8\.0\.[0-9]+$ ]]; then
echo "$time | ClientHost: $client | Status: $status ? VPN-IP (Traffic kommt ?ber VPN!)"
elif [[ "$client" == "89.246.96.244" ]]; then
echo "$time | ClientHost: $client | Status: $status ? ?ffentliche IP (Traffic kommt NICHT ?ber VPN)"
else
echo "$time | ClientHost: $client | Status: $status ? Unbekannt"
fi
done
args:
executable: /bin/bash
register: analysis
ignore_errors: yes
failed_when: false
- name: Display analysis
debug:
msg: "{{ analysis.stdout_lines }}"
- name: Get unique client IPs
shell: |
cd ~/deployment/stacks/traefik
tail -100 logs/access.log | grep -i grafana | tail -10 | grep -oP '"ClientHost":"[^"]*"' | sed 's/"ClientHost":"//;s/"//' | sort -u
args:
executable: /bin/bash
register: unique_ips
ignore_errors: yes
failed_when: false
- name: Display unique IPs
debug:
msg: "{{ unique_ips.stdout_lines }}"
- name: Final result
debug:
msg:
- ""
- "=== ERGEBNIS ==="
- "Pr?fe die obigen Zeilen:"
- ""
- "? Wenn ClientHost: 10.8.0.7 ? Traffic kommt ?ber VPN!"
- " ? Dann k?nnen wir die tempor?re IP-Erlaubnis entfernen!"
- ""
- "? Wenn ClientHost: 89.246.96.244 ? Traffic kommt NICHT ?ber VPN"
- " ? Dann m?ssen wir weiter debuggen"

View File

@@ -1,55 +0,0 @@
---
- name: Check Latest Grafana Access - Client IP Analysis
hosts: production
gather_facts: no
become: no
tasks:
- name: Get latest Grafana access logs
shell: |
cd ~/deployment/stacks/traefik
echo "=== Latest 5 Grafana Access Logs ==="
tail -100 logs/access.log | grep -i grafana | tail -5
args:
executable: /bin/bash
register: latest_logs
ignore_errors: yes
failed_when: false
- name: Extract client IPs from latest logs
shell: |
cd ~/deployment/stacks/traefik
tail -50 logs/access.log | grep -i grafana | tail -10 | grep -oP '"ClientHost":"[^"]*"' | sed 's/"ClientHost":"//;s/"//' | sort -u
args:
executable: /bin/bash
register: client_ips
ignore_errors: yes
failed_when: false
- name: Display latest logs
debug:
msg: "{{ latest_logs.stdout_lines }}"
- name: Display client IPs
debug:
msg: "{{ client_ips.stdout_lines }}"
- name: Analyze if traffic comes from VPN
shell: |
cd ~/deployment/stacks/traefik
if tail -20 logs/access.log | grep -i grafana | tail -5 | grep -oP '"ClientHost":"[^"]*"' | grep -q "10.8.0"; then
echo "? Traffic kommt ?ber VPN! (ClientHost: 10.8.0.x)"
elif tail -20 logs/access.log | grep -i grafana | tail -5 | grep -oP '"ClientHost":"[^"]*"' | grep -q "89.246.96.244"; then
echo "? Traffic kommt NICHT ?ber VPN (ClientHost: 89.246.96.244 - ?ffentliche IP)"
else
echo "?? Keine aktuellen Grafana-Logs gefunden. Bitte mache einen Zugriff auf https://grafana.michaelschiemer.de"
fi
args:
executable: /bin/bash
register: analysis
ignore_errors: yes
failed_when: false
- name: Display analysis
debug:
msg: "{{ analysis.stdout_lines }}"

View File

@@ -1,78 +0,0 @@
---
- name: Check PHP Files and PHP-FPM Workers
hosts: production
gather_facts: yes
become: no
tasks:
- name: Check if public/index.php exists
shell: |
cd ~/deployment/stacks/staging
echo "=== Check public/index.php ==="
docker compose exec -T staging-app ls -la /var/www/html/public/index.php 2>&1 || echo "index.php not found"
echo ""
echo "=== Check public directory ==="
docker compose exec -T staging-app ls -la /var/www/html/public/ 2>&1 | head -20 || echo "public directory not found"
echo ""
echo "=== Check if code directory exists ==="
docker compose exec -T staging-app ls -la /var/www/html/ 2>&1 | head -20 || echo "Code directory not found"
args:
executable: /bin/bash
register: file_check
ignore_errors: yes
failed_when: false
- name: Display file check results
debug:
msg: "{{ file_check.stdout_lines }}"
- name: Check PHP-FPM worker processes in detail
shell: |
cd ~/deployment/stacks/staging
echo "=== All processes in staging-app ==="
docker compose exec -T staging-app ps aux 2>&1 || echo "Could not get processes"
echo ""
echo "=== Check PHP-FPM master and worker processes ==="
docker compose exec -T staging-app sh -c "ps aux | grep -E '[p]hp|[f]pm' || echo 'No PHP-FPM processes found'" || echo "Process check failed"
args:
executable: /bin/bash
register: process_check
ignore_errors: yes
failed_when: false
- name: Display process check results
debug:
msg: "{{ process_check.stdout_lines }}"
- name: Test PHP execution directly
shell: |
cd ~/deployment/stacks/staging
echo "=== Test PHP CLI ==="
docker compose exec -T staging-app php -v 2>&1 || echo "PHP CLI failed"
echo ""
echo "=== Test if we can include index.php ==="
docker compose exec -T staging-app php -r "if(file_exists('/var/www/html/public/index.php')) { echo 'index.php exists\n'; } else { echo 'index.php NOT FOUND\n'; }" 2>&1 || echo "PHP test failed"
args:
executable: /bin/bash
register: php_test
ignore_errors: yes
failed_when: false
- name: Display PHP test results
debug:
msg: "{{ php_test.stdout_lines }}"
- name: Check PHP-FPM pool status using status page
shell: |
cd ~/deployment/stacks/staging
echo "=== Try to get PHP-FPM status ==="
docker compose exec -T staging-app sh -c "SCRIPT_NAME=/status SCRIPT_FILENAME=/status REQUEST_METHOD=GET timeout 2 php -r \"\\\$socket = socket_create(AF_INET, SOCK_STREAM, SOL_TCP); if (socket_connect(\\\$socket, '127.0.0.1', 9000)) { socket_write(\\\$socket, 'GET /status HTTP/1.0\\r\\nHost: localhost\\r\\n\\r\\n'); \\\$response = socket_read(\\\$socket, 1024); echo \\\$response; socket_close(\\\$socket); } else { echo 'Could not connect to PHP-FPM'; }\" 2>&1" || echo "Status check failed"
args:
executable: /bin/bash
register: fpm_status
ignore_errors: yes
failed_when: false
- name: Display PHP-FPM status
debug:
msg: "{{ fpm_status.stdout_lines }}"

View File

@@ -1,80 +0,0 @@
---
- name: Check PHP-FPM Configuration in Detail
hosts: production
gather_facts: yes
become: no
tasks:
- name: Check PHP-FPM pool configuration
shell: |
cd ~/deployment/stacks/staging
echo "=== PHP-FPM www.conf listen configuration ==="
docker compose exec -T staging-app cat /usr/local/etc/php-fpm.d/www.conf 2>&1 | grep -E "(listen|listen.allowed_clients|listen.owner|listen.group|listen.mode|pm)" | head -20
echo ""
echo "=== Check PHP-FPM processes ==="
docker compose exec -T staging-app ps aux | grep php-fpm || echo "No php-fpm processes found"
echo ""
echo "=== Check PHP-FPM status page ==="
docker compose exec -T staging-app sh -c "SCRIPT_NAME=/status SCRIPT_FILENAME=/status REQUEST_METHOD=GET cgi-fcgi -bind -connect 127.0.0.1:9000 2>&1 || echo 'Status check failed'"
args:
executable: /bin/bash
register: phpfpm_config
ignore_errors: yes
failed_when: false
- name: Display PHP-FPM configuration
debug:
msg: "{{ phpfpm_config.stdout_lines }}"
- name: Check what interface PHP-FPM is listening on
shell: |
cd ~/deployment/stacks/staging
echo "=== Check listening interface ==="
docker compose exec -T staging-app netstat -tlnp 2>/dev/null | grep 9000 || \
docker compose exec -T staging-app ss -tlnp 2>/dev/null | grep 9000 || \
echo "Could not check listening interface"
echo ""
echo "=== Try to connect from nginx using FastCGI protocol ==="
docker compose exec -T staging-nginx sh -c "echo -e 'REQUEST_METHOD=GET\nSCRIPT_FILENAME=/var/www/html/public/index.php\n' | cgi-fcgi -bind -connect staging-app:9000 2>&1 | head -20" || echo "FastCGI test failed"
args:
executable: /bin/bash
register: listen_check
ignore_errors: yes
failed_when: false
- name: Display listening interface check
debug:
msg: "{{ listen_check.stdout_lines }}"
- name: Check PHP-FPM error logs
shell: |
cd ~/deployment/stacks/staging
echo "=== PHP-FPM error log ==="
docker compose exec -T staging-app tail -50 /var/log/php-fpm.log 2>&1 || \
docker compose exec -T staging-app tail -50 /usr/local/var/log/php-fpm.log 2>&1 || \
docker compose logs --tail=100 staging-app 2>&1 | grep -iE "(fpm|error|warning)" | tail -20 || \
echo "No PHP-FPM error logs found"
args:
executable: /bin/bash
register: phpfpm_errors
ignore_errors: yes
failed_when: false
- name: Display PHP-FPM errors
debug:
msg: "{{ phpfpm_errors.stdout_lines }}"
- name: Test actual request from outside
shell: |
cd ~/deployment/stacks/staging
echo "=== Test request from nginx to PHP-FPM ==="
docker compose exec -T staging-nginx sh -c "curl -v http://127.0.0.1/ 2>&1 | head -30" || echo "Request test failed"
args:
executable: /bin/bash
register: request_test
ignore_errors: yes
failed_when: false
- name: Display request test
debug:
msg: "{{ request_test.stdout_lines }}"

View File

@@ -1,147 +0,0 @@
---
- name: Check Production Server Status
hosts: production
gather_facts: yes
become: no
tasks:
- name: Check server uptime and basic info
shell: |
echo "=== Server Uptime ==="
uptime
echo ""
echo "=== Disk Space ==="
df -h
echo ""
echo "=== Memory Usage ==="
free -h
echo ""
echo "=== Docker Status ==="
docker --version || echo "Docker not found"
docker ps || echo "Docker not running"
args:
executable: /bin/bash
register: server_info
ignore_errors: yes
failed_when: false
- name: Display server info
debug:
msg: "{{ server_info.stdout_lines }}"
- name: Check all Docker stacks status
shell: |
echo "=== Traefik Stack ==="
cd ~/deployment/stacks/traefik && docker compose ps 2>&1 || echo "Traefik stack not found or not running"
echo ""
echo "=== Application Stack ==="
cd ~/deployment/stacks/application && docker compose ps 2>&1 || echo "Application stack not found or not running"
echo ""
echo "=== PostgreSQL Stack ==="
cd ~/deployment/stacks/postgresql && docker compose ps 2>&1 || echo "PostgreSQL stack not found or not running"
echo ""
echo "=== Monitoring Stack ==="
cd ~/deployment/stacks/monitoring && docker compose ps 2>&1 || echo "Monitoring stack not found or not running"
echo ""
echo "=== Gitea Stack ==="
cd ~/deployment/stacks/gitea && docker compose ps 2>&1 || echo "Gitea stack not found or not running"
echo ""
echo "=== Registry Stack ==="
cd ~/deployment/stacks/registry && docker compose ps 2>&1 || echo "Registry stack not found or not running"
args:
executable: /bin/bash
register: stacks_status
ignore_errors: yes
failed_when: false
- name: Display stacks status
debug:
msg: "{{ stacks_status.stdout_lines }}"
- name: Check Traefik logs for errors
shell: |
cd ~/deployment/stacks/traefik
echo "=== Traefik Logs (Last 30 lines) ==="
docker compose logs --tail=30 traefik 2>&1 | tail -30 || echo "Could not read Traefik logs"
args:
executable: /bin/bash
register: traefik_logs
ignore_errors: yes
failed_when: false
- name: Display Traefik logs
debug:
msg: "{{ traefik_logs.stdout_lines }}"
- name: Check Application stack logs
shell: |
cd ~/deployment/stacks/application
echo "=== Application Nginx Logs (Last 20 lines) ==="
docker compose logs --tail=20 web 2>&1 | tail -20 || echo "Could not read web logs"
echo ""
echo "=== Application PHP Logs (Last 20 lines) ==="
docker compose logs --tail=20 php 2>&1 | tail -20 || echo "Could not read PHP logs"
args:
executable: /bin/bash
register: app_logs
ignore_errors: yes
failed_when: false
- name: Display application logs
debug:
msg: "{{ app_logs.stdout_lines }}"
- name: Test HTTP connectivity
shell: |
echo "=== Testing HTTP Connectivity ==="
echo "Test 1: HTTPS to michaelschiemer.de"
curl -k -H "User-Agent: Mozilla/5.0" -s -o /dev/null -w "HTTP %{http_code}\n" https://michaelschiemer.de/health || echo "Connection failed"
echo ""
echo "Test 2: Direct localhost"
curl -k -H "User-Agent: Mozilla/5.0" -s -o /dev/null -w "HTTP %{http_code}\n" https://localhost/health || echo "Connection failed"
args:
executable: /bin/bash
register: http_tests
ignore_errors: yes
failed_when: false
- name: Display HTTP test results
debug:
msg: "{{ http_tests.stdout_lines }}"
- name: Check network connectivity
shell: |
echo "=== Network Interfaces ==="
ip addr show | grep -E "(inet |state)" | head -10
echo ""
echo "=== Docker Networks ==="
docker network ls
echo ""
echo "=== Traefik Network Connectivity ==="
docker network inspect traefik-public 2>&1 | grep -E "(Name|Subnet|Containers)" | head -10 || echo "Traefik network not found"
args:
executable: /bin/bash
register: network_info
ignore_errors: yes
failed_when: false
- name: Display network info
debug:
msg: "{{ network_info.stdout_lines }}"
- name: Check firewall status
shell: |
echo "=== Firewall Status ==="
sudo ufw status || echo "UFW not installed or not configured"
echo ""
echo "=== Listening Ports ==="
sudo netstat -tlnp | grep -E "(80|443|8080|3000)" | head -10 || ss -tlnp | grep -E "(80|443|8080|3000)" | head -10 || echo "Could not check listening ports"
args:
executable: /bin/bash
register: firewall_info
ignore_errors: yes
failed_when: false
- name: Display firewall info
debug:
msg: "{{ firewall_info.stdout_lines }}"

View File

@@ -1,193 +0,0 @@
---
- name: Check Redis Connection and Environment Variables in PHP Container
hosts: production
gather_facts: yes
become: no
tasks:
- name: Check if application stack is running
shell: |
cd ~/deployment/stacks/application
docker compose ps
args:
executable: /bin/bash
register: stack_status
ignore_errors: yes
failed_when: false
- name: Display stack status
debug:
msg: "{{ stack_status.stdout_lines }}"
- name: Check PHP container exists
shell: |
docker ps --filter "name=app"
args:
executable: /bin/bash
register: php_container
ignore_errors: yes
failed_when: false
- name: Display PHP container status
debug:
msg: "{{ php_container.stdout_lines }}"
- name: Check Environment Variables in PHP Container
shell: |
echo "=== Redis Environment Variables in PHP Container ==="
docker exec app env | grep -E "(REDIS_|CACHE_|SESSION_|QUEUE_)" || echo "Container not accessible or no Redis vars found"
echo ""
echo "=== All Environment Variables in PHP Container ==="
docker exec app env | sort | head -50 || echo "Container not accessible"
args:
executable: /bin/bash
register: env_vars
ignore_errors: yes
failed_when: false
- name: Display environment variables
debug:
msg: "{{ env_vars.stdout_lines }}"
- name: Test Redis Connection from PHP Container
shell: |
echo "=== Testing Redis Connection from PHP Container ==="
echo "Test 1: Check if Redis is reachable"
docker exec app php -r "
\$redis_host = getenv('REDIS_HOST') ?: 'redis';
\$redis_port = (int)(getenv('REDIS_PORT') ?: 6379);
\$redis_password = getenv('REDIS_PASSWORD');
echo \"REDIS_HOST: \" . \$redis_host . \"\\n\";
echo \"REDIS_PORT: \" . \$redis_port . \"\\n\";
echo \"REDIS_PASSWORD: \" . (\$redis_password ? 'SET (length: ' . strlen(\$redis_password) . ')' : 'NOT SET') . \"\\n\";
// Test TCP connection
\$socket = @fsockopen(\$redis_host, \$redis_port, \$errno, \$errstr, 2);
if (\$socket) {
echo \"TCP Connection: OK\\n\";
fclose(\$socket);
} else {
echo \"TCP Connection: FAILED (errno: \$errno, errstr: \$errstr)\\n\";
}
// Test with Predis if available
if (class_exists('Predis\\Client')) {
try {
\$client = new Predis\\Client([
'scheme' => 'tcp',
'host' => \$redis_host,
'port' => \$redis_port,
'password' => \$redis_password ?: null,
]);
\$client->connect();
echo \"Predis Connection: OK\\n\";
echo \"Redis PING: \" . \$client->ping() . \"\\n\";
\$client->disconnect();
} catch (Exception \$e) {
echo \"Predis Connection: FAILED - \" . \$e->getMessage() . \"\\n\";
}
} else {
echo \"Predis not available\\n\";
}
" || echo "Could not execute PHP test"
args:
executable: /bin/bash
register: redis_test
ignore_errors: yes
failed_when: false
- name: Display Redis connection test results
debug:
msg: "{{ redis_test.stdout_lines }}"
- name: Check Redis Container Configuration
shell: |
echo "=== Redis Container Status ==="
docker ps --filter "name=redis"
echo ""
echo "=== Redis Container Environment ==="
docker exec redis env | grep -E "(REDIS_|REQUIREPASS)" || echo "No Redis env vars found"
echo ""
echo "=== Test Redis Password ==="
REDIS_PASSWORD=$(cd ~/deployment/stacks/application && grep REDIS_PASSWORD .env | cut -d '=' -f2 | tr -d ' ' || echo "")
if [ -n "$REDIS_PASSWORD" ]; then
PASSWORD_LEN=$(echo -n "$REDIS_PASSWORD" | wc -c)
echo "REDIS_PASSWORD from .env file: SET (length: $PASSWORD_LEN)"
docker exec redis redis-cli -a "$REDIS_PASSWORD" PING || echo "Redis password test failed"
else
echo "REDIS_PASSWORD from .env file: NOT SET"
docker exec redis redis-cli PING || echo "Redis connection test failed (no password)"
fi
args:
executable: /bin/bash
register: redis_config
ignore_errors: yes
failed_when: false
- name: Display Redis container configuration
debug:
msg: "{{ redis_config.stdout_lines }}"
- name: Check Docker Network Connectivity
shell: |
echo "=== Docker Network: app-internal ==="
docker network inspect app-internal 2>&1 | grep -E "(Name|Subnet|Containers)" | head -20 || echo "Network not found"
echo ""
echo "=== Testing Network Connectivity ==="
echo "From PHP container to Redis:"
docker exec app ping -c 2 redis 2>&1 || echo "Ping test failed"
echo ""
echo "From PHP container to Redis (port 6379):"
docker exec app nc -zv redis 6379 2>&1 || echo "Port test failed"
args:
executable: /bin/bash
register: network_test
ignore_errors: yes
failed_when: false
- name: Display network connectivity test
debug:
msg: "{{ network_test.stdout_lines }}"
- name: Check Application Logs for Redis Errors
shell: |
cd ~/deployment/stacks/application
echo "=== Application Logs (Last 50 lines, Redis-related) ==="
docker compose logs app --tail=50 2>&1 | grep -i redis || echo "No Redis-related logs found"
args:
executable: /bin/bash
register: app_logs
ignore_errors: yes
failed_when: false
- name: Display application logs
debug:
msg: "{{ app_logs.stdout_lines }}"
- name: Check .env file configuration
shell: |
cd ~/deployment/stacks/application
echo "=== .env file Redis Configuration ==="
if [ -f .env ]; then
grep -E "(REDIS_|CACHE_|SESSION_|QUEUE_)" .env | grep -v "^#" || echo "No Redis config found in .env"
else
echo ".env file not found"
fi
echo ""
echo "=== Checking for application.env file ==="
if [ -f application.env ]; then
echo "application.env exists"
grep -E "(REDIS_|CACHE_|SESSION_|QUEUE_)" application.env | grep -v "^#" || echo "No Redis config found in application.env"
else
echo "application.env file not found"
fi
args:
executable: /bin/bash
register: env_file_config
ignore_errors: yes
failed_when: false
- name: Display .env file configuration
debug:
msg: "{{ env_file_config.stdout_lines }}"

View File

@@ -1,49 +0,0 @@
---
- name: Check Staging 500 Error
hosts: production
gather_facts: yes
become: no
tasks:
- name: Get recent PHP errors from staging-app
shell: |
cd ~/deployment/stacks/staging
echo "=== Recent PHP errors (last 50 lines) ==="
docker compose exec -T staging-app tail -100 /var/www/html/storage/logs/php-errors.log 2>&1 | tail -50
args:
executable: /bin/bash
register: php_errors
ignore_errors: yes
failed_when: false
- name: Display PHP errors
debug:
msg: "{{ php_errors.stdout_lines }}"
- name: Get docker compose logs for staging-app
shell: |
cd ~/deployment/stacks/staging
echo "=== Recent staging-app container logs ==="
docker compose logs --tail=50 staging-app 2>&1 | tail -50
args:
executable: /bin/bash
register: container_logs
ignore_errors: yes
failed_when: false
- name: Display container logs
debug:
msg: "{{ container_logs.stdout_lines }}"
- name: Test health endpoint
shell: |
curl -H "User-Agent: Mozilla/5.0" -s https://staging.michaelschiemer.de/health 2>&1
args:
executable: /bin/bash
register: health_test
ignore_errors: yes
failed_when: false
- name: Display health endpoint result
debug:
msg: "{{ health_test.stdout }}"

View File

@@ -1,66 +0,0 @@
---
- name: Check Staging Container Logs
hosts: production
gather_facts: yes
become: no
tasks:
- name: Check staging container status
shell: |
cd ~/deployment/stacks/staging && docker compose ps
args:
executable: /bin/bash
register: container_status
changed_when: false
- name: Get staging-app logs
shell: |
cd ~/deployment/stacks/staging && docker compose logs --tail=100 staging-app
args:
executable: /bin/bash
register: staging_app_logs
changed_when: false
- name: Get staging-nginx logs
shell: |
cd ~/deployment/stacks/staging && docker compose logs --tail=100 staging-nginx
args:
executable: /bin/bash
register: staging_nginx_logs
changed_when: false
- name: Get staging-queue-worker logs
shell: |
cd ~/deployment/stacks/staging && docker compose logs --tail=100 staging-queue-worker
args:
executable: /bin/bash
register: staging_queue_logs
changed_when: false
- name: Get staging-scheduler logs
shell: |
cd ~/deployment/stacks/staging && docker compose logs --tail=100 staging-scheduler
args:
executable: /bin/bash
register: staging_scheduler_logs
changed_when: false
- name: Display container status
debug:
msg: "{{ container_status.stdout_lines }}"
- name: Display staging-app logs
debug:
msg: "{{ staging_app_logs.stdout_lines }}"
- name: Display staging-nginx logs
debug:
msg: "{{ staging_nginx_logs.stdout_lines }}"
- name: Display staging-queue-worker logs
debug:
msg: "{{ staging_queue_logs.stdout_lines }}"
- name: Display staging-scheduler logs
debug:
msg: "{{ staging_scheduler_logs.stdout_lines }}"

View File

@@ -1,66 +0,0 @@
---
- name: Check Staging PHP Logs in Volume
hosts: production
gather_facts: yes
become: no
tasks:
- name: Check PHP log files in staging-app container
shell: |
echo "=== Checking log directory in staging-app container ==="
docker exec staging-app ls -lah /var/www/html/storage/logs/ 2>&1 || echo "Could not list logs directory"
echo ""
echo "=== Recent PHP error logs ==="
docker exec staging-app tail -50 /var/www/html/storage/logs/php-errors.log 2>&1 | tail -30 || echo "Could not read php-errors.log"
echo ""
echo "=== Recent application logs ==="
docker exec staging-app find /var/www/html/storage/logs -name "*.log" -type f -exec ls -lh {} \; 2>&1 | head -20 || echo "Could not find log files"
echo ""
echo "=== Staging log file (if exists) ==="
docker exec staging-app tail -50 /var/www/html/storage/logs/staging.log 2>&1 | tail -30 || echo "staging.log not found or empty"
echo ""
echo "=== Check log volume mount ==="
docker exec staging-app df -h /var/www/html/storage/logs 2>&1 || echo "Could not check volume"
args:
executable: /bin/bash
register: php_logs_check
ignore_errors: yes
failed_when: false
- name: Display PHP logs check results
debug:
msg: "{{ php_logs_check.stdout_lines }}"
- name: Check if we can access log volume directly
shell: |
echo "=== Docker volume inspect ==="
docker volume inspect staging-logs 2>&1 || echo "Volume not found"
echo ""
echo "=== Try to access volume through temporary container ==="
docker run --rm -v staging-logs:/logs alpine ls -lah /logs 2>&1 | head -30 || echo "Could not access volume"
args:
executable: /bin/bash
register: volume_check
ignore_errors: yes
failed_when: false
- name: Display volume check results
debug:
msg: "{{ volume_check.stdout_lines }}"
- name: Check PHP configuration for logging
shell: |
echo "=== PHP error_log setting ==="
docker exec staging-app php -i | grep -E "(error_log|log_errors)" || echo "Could not get PHP config"
echo ""
echo "=== PHP-FPM error log location ==="
docker exec staging-app grep -E "(error_log|catch_workers_output)" /usr/local/etc/php-fpm.d/www.conf | head -5 || echo "Could not read PHP-FPM config"
args:
executable: /bin/bash
register: php_config_check
ignore_errors: yes
failed_when: false
- name: Display PHP configuration
debug:
msg: "{{ php_config_check.stdout_lines }}"

View File

@@ -1,255 +0,0 @@
---
- name: Check Redis Connection and Environment Variables in Staging PHP Container
hosts: production
gather_facts: yes
become: no
tasks:
- name: Check if staging stack is running
shell: |
cd ~/deployment/stacks/staging
docker compose ps
args:
executable: /bin/bash
register: stack_status
ignore_errors: yes
failed_when: false
- name: Display stack status
debug:
msg: "{{ stack_status.stdout_lines }}"
- name: Check PHP container exists
shell: |
docker ps --filter "name=staging-app"
args:
executable: /bin/bash
register: php_container
ignore_errors: yes
failed_when: false
- name: Display PHP container status
debug:
msg: "{{ php_container.stdout_lines }}"
- name: Check Environment Variables in Staging PHP Container
shell: |
echo "=== Redis Environment Variables in Staging PHP Container ==="
docker exec staging-app env | grep -E "(REDIS_|CACHE_|SESSION_|QUEUE_)" || echo "Container not accessible or no Redis vars found"
echo ""
echo "=== All Environment Variables in Staging PHP Container ==="
docker exec staging-app env | sort | head -50 || echo "Container not accessible"
args:
executable: /bin/bash
register: env_vars
ignore_errors: yes
failed_when: false
- name: Display environment variables
debug:
msg: "{{ env_vars.stdout_lines }}"
- name: Test Redis Connection from Staging PHP Container
shell: |
echo "=== Testing Redis Connection from Staging PHP Container ==="
echo "Test 1: Check if Redis is reachable"
docker exec staging-app php -r "
\$redis_host = getenv('REDIS_HOST') ?: 'staging-redis';
\$redis_port = (int)(getenv('REDIS_PORT') ?: 6379);
\$redis_password = getenv('REDIS_PASSWORD');
echo \\\"REDIS_HOST: \\\" . \$redis_host . \\\"\\\\n\\\";
echo \\\"REDIS_PORT: \\\" . \$redis_port . \\\"\\\\n\\\";
echo \\\"REDIS_PASSWORD: \\\" . (\$redis_password ? 'SET (length: ' . strlen(\$redis_password) . ')' : 'NOT SET') . \\\"\\\\n\\\";
// Test TCP connection
\$socket = @fsockopen(\$redis_host, \$redis_port, \$errno, \$errstr, 2);
if (\$socket) {
echo \\\"TCP Connection: OK\\\\n\\\";
fclose(\$socket);
} else {
echo \\\"TCP Connection: FAILED (errno: \$errno, errstr: \$errstr)\\\\n\\\";
}
" || echo "Could not execute PHP test"
args:
executable: /bin/bash
register: redis_test
ignore_errors: yes
failed_when: false
- name: Display Redis connection test results
debug:
msg: "{{ redis_test.stdout_lines }}"
- name: Test Redis connection with actual php-redis extension
shell: |
docker exec staging-app php -r "
// Get environment variables
\$redis_host = getenv('REDIS_HOST') ?: 'staging-redis';
\$redis_port = (int)(getenv('REDIS_PORT') ?: 6379);
\$redis_password = getenv('REDIS_PASSWORD');
echo '=== Staging Redis Connection Test ===' . PHP_EOL;
echo 'REDIS_HOST: ' . \$redis_host . PHP_EOL;
echo 'REDIS_PORT: ' . \$redis_port . PHP_EOL;
echo 'REDIS_PASSWORD: ' . (\$redis_password ? 'SET (length: ' . strlen(\$redis_password) . ')' : 'NOT SET') . PHP_EOL;
echo PHP_EOL;
if (!extension_loaded('redis')) {
echo 'ERROR: php-redis extension is not loaded!' . PHP_EOL;
exit(1);
}
if (!class_exists('Redis')) {
echo 'ERROR: Redis class is not available!' . PHP_EOL;
exit(1);
}
try {
\$redis = new Redis();
echo 'Created Redis instance' . PHP_EOL;
// Connect
\$success = \$redis->connect(\$redis_host, \$redis_port, 2.0);
if (!\$success) {
echo 'ERROR: Failed to connect to Redis server' . PHP_EOL;
echo 'Host: ' . \$redis_host . ', Port: ' . \$redis_port . PHP_EOL;
exit(1);
}
echo 'Connected to Redis server' . PHP_EOL;
// Authenticate if password is provided
if (\$redis_password) {
\$auth_result = \$redis->auth(\$redis_password);
if (!\$auth_result) {
echo 'ERROR: Redis authentication failed' . PHP_EOL;
echo 'Password used: ' . substr(\$redis_password, 0, 5) . '...' . PHP_EOL;
exit(1);
}
echo 'Authenticated with Redis' . PHP_EOL;
}
// Test PING
\$ping_result = \$redis->ping();
echo 'Redis PING: ' . \$ping_result . PHP_EOL;
// Test SET/GET
\$test_key = 'test_connection_' . time();
\$test_value = 'test_value';
\$set_result = \$redis->set(\$test_key, \$test_value);
echo 'SET test: ' . (\$set_result ? 'OK' : 'FAILED') . PHP_EOL;
\$get_result = \$redis->get(\$test_key);
echo 'GET test: ' . (\$get_result === \$test_value ? 'OK' : 'FAILED') . PHP_EOL;
// Cleanup
\$redis->del(\$test_key);
\$redis->close();
echo PHP_EOL . '? All tests passed!' . PHP_EOL;
} catch (Exception \$e) {
echo 'ERROR: ' . \$e->getMessage() . PHP_EOL;
echo 'Exception type: ' . get_class(\$e) . PHP_EOL;
exit(1);
}
"
args:
executable: /bin/bash
register: redis_direct_test
ignore_errors: yes
failed_when: false
- name: Display Redis direct connection test results
debug:
msg: "{{ redis_direct_test.stdout_lines }}"
- name: Check Staging Redis Container Configuration
shell: |
echo "=== Staging Redis Container Status ==="
docker ps --filter "name=staging-redis"
echo ""
echo "=== Staging Redis Container Environment ==="
docker exec staging-redis env | grep -E "(REDIS_|REQUIREPASS)" || echo "No Redis env vars found"
echo ""
echo "=== Test Redis Password ==="
REDIS_PASSWORD=$(cd ~/deployment/stacks/staging && grep REDIS_PASSWORD .env | cut -d '=' -f2 | tr -d ' ' || echo "")
if [ -n "$REDIS_PASSWORD" ]; then
PASSWORD_LEN=$(echo -n "$REDIS_PASSWORD" | wc -c)
echo "REDIS_PASSWORD from .env file: SET (length: $PASSWORD_LEN)"
docker exec staging-redis redis-cli -a "$REDIS_PASSWORD" PING || echo "Redis password test failed"
else
echo "REDIS_PASSWORD from .env file: NOT SET"
docker exec staging-redis redis-cli PING || echo "Redis connection test failed (no password)"
fi
args:
executable: /bin/bash
register: redis_config
ignore_errors: yes
failed_when: false
- name: Display Redis container configuration
debug:
msg: "{{ redis_config.stdout_lines }}"
- name: Check Docker Network Connectivity for Staging
shell: |
echo "=== Docker Network: staging-internal ==="
docker network inspect staging-internal 2>&1 | grep -E "(Name|Subnet|Containers)" | head -20 || echo "Network not found"
echo ""
echo "=== Testing Network Connectivity ==="
echo "From Staging PHP container to Redis:"
docker exec staging-app php -r "echo gethostbyname('staging-redis') . PHP_EOL;" 2>&1 || echo "DNS test failed"
echo ""
echo "Testing connection from staging-app to staging-redis:"
docker exec staging-app php -r "\$socket = @fsockopen('staging-redis', 6379, \$errno, \$errstr, 2); if (\$socket) { echo 'Port 6379: OK' . PHP_EOL; fclose(\$socket); } else { echo 'Port 6379: FAILED (errno: ' . \$errno . ', errstr: ' . \$errstr . ')' . PHP_EOL; }"
args:
executable: /bin/bash
register: network_test
ignore_errors: yes
failed_when: false
- name: Display network connectivity test
debug:
msg: "{{ network_test.stdout_lines }}"
- name: Check Staging Application Logs for Redis Errors
shell: |
cd ~/deployment/stacks/staging
echo "=== Staging Application Logs (Last 50 lines, Redis-related) ==="
docker compose logs staging-app --tail=50 2>&1 | grep -i -E "(redis|connection|error)" | tail -20 || echo "No Redis-related logs found"
args:
executable: /bin/bash
register: app_logs
ignore_errors: yes
failed_when: false
- name: Display application logs
debug:
msg: "{{ app_logs.stdout_lines }}"
- name: Check Staging .env file configuration
shell: |
cd ~/deployment/stacks/staging
echo "=== .env file Redis Configuration ==="
if [ -f .env ]; then
grep -E "(REDIS_|CACHE_|SESSION_|QUEUE_)" .env | grep -v "^#" || echo "No Redis config found in .env"
else
echo ".env file not found"
fi
echo ""
echo "=== Checking for application.env file ==="
if [ -f application.env ]; then
echo "application.env exists"
grep -E "(REDIS_|CACHE_|SESSION_|QUEUE_)" application.env | grep -v "^#" || echo "No Redis config found in application.env"
else
echo "application.env file not found"
fi
args:
executable: /bin/bash
register: env_file_config
ignore_errors: yes
failed_when: false
- name: Display .env file configuration
debug:
msg: "{{ env_file_config.stdout_lines }}"

View File

@@ -1,135 +0,0 @@
---
- name: Check Staging Redis Environment File and Container Password
hosts: production
gather_facts: yes
become: no
tasks:
- name: Check .env file exists and contains REDIS_PASSWORD
shell: |
cd ~/deployment/stacks/staging
echo "=== Checking .env file ==="
if [ -f .env ]; then
echo ".env file exists"
echo ""
echo "=== REDIS_PASSWORD from .env ==="
REDIS_PASSWORD_FROM_ENV=$(grep "^REDIS_PASSWORD=" .env | cut -d '=' -f2- | tr -d ' ' || echo "")
if [ -n "$REDIS_PASSWORD_FROM_ENV" ]; then
PASSWORD_LEN=$(echo -n "$REDIS_PASSWORD_FROM_ENV" | wc -c)
echo "REDIS_PASSWORD found in .env (length: $PASSWORD_LEN)"
echo "First 10 chars: ${REDIS_PASSWORD_FROM_ENV:0:10}..."
echo "Last 10 chars: ...${REDIS_PASSWORD_FROM_ENV: -10}"
else
echo "REDIS_PASSWORD NOT FOUND in .env file!"
fi
else
echo ".env file NOT FOUND!"
fi
args:
executable: /bin/bash
register: env_file_check
ignore_errors: yes
failed_when: false
- name: Display .env file check
debug:
msg: "{{ env_file_check.stdout_lines }}"
- name: Check how Redis container was started
shell: |
echo "=== Checking Redis container command ==="
docker inspect staging-redis --format '{{ '{{' }}.Config.Cmd{{ '}}' }}' || echo "Could not inspect container"
echo ""
echo "=== Checking if Redis actually requires password ==="
# Try without password first
docker exec staging-redis redis-cli PING 2>&1 || echo "Connection failed (expected if password required)"
echo ""
# Try with password from .env
cd ~/deployment/stacks/staging
REDIS_PASSWORD=$(grep "^REDIS_PASSWORD=" .env | cut -d '=' -f2- | tr -d ' ' || echo "")
if [ -n "$REDIS_PASSWORD" ]; then
echo "Testing with password from .env:"
docker exec staging-redis redis-cli -a "$REDIS_PASSWORD" PING 2>&1 || echo "Password test failed"
else
echo "Cannot test with password - REDIS_PASSWORD not found in .env"
fi
args:
executable: /bin/bash
register: redis_startup_check
ignore_errors: yes
failed_when: false
- name: Display Redis startup check
debug:
msg: "{{ redis_startup_check.stdout_lines }}"
- name: Test actual connection from PHP container
shell: |
cd ~/deployment/stacks/staging
REDIS_PASSWORD_ENV=$(grep "^REDIS_PASSWORD=" .env | cut -d '=' -f2- | tr -d ' ' || echo "")
docker exec staging-app php -r "
\$redis_host = 'staging-redis';
\$redis_port = 6379;
\$redis_password = getenv('REDIS_PASSWORD');
\$redis_password_env_file = '$REDIS_PASSWORD_ENV';
echo '=== Password Comparison ===' . PHP_EOL;
echo 'REDIS_PASSWORD from environment: ' . (\$redis_password ? 'SET (length: ' . strlen(\$redis_password) . ')' : 'NOT SET') . PHP_EOL;
echo 'REDIS_PASSWORD from .env file: ' . (\$redis_password_env_file ? 'SET (length: ' . strlen(\$redis_password_env_file) . ')' : 'NOT SET') . PHP_EOL;
if (\$redis_password && \$redis_password_env_file) {
if (\$redis_password === \$redis_password_env_file) {
echo 'Passwords MATCH!' . PHP_EOL;
} else {
echo 'Passwords DO NOT MATCH!' . PHP_EOL;
echo 'Env password first 10: ' . substr(\$redis_password, 0, 10) . PHP_EOL;
echo '.env password first 10: ' . substr(\$redis_password_env_file, 0, 10) . PHP_EOL;
}
}
echo PHP_EOL . '=== Connection Test ===' . PHP_EOL;
if (!extension_loaded('redis')) {
echo 'ERROR: php-redis extension not loaded!' . PHP_EOL;
exit(1);
}
try {
\$redis = new Redis();
\$success = \$redis->connect(\$redis_host, \$redis_port, 2.0);
if (!\$success) {
echo 'ERROR: Failed to connect to Redis' . PHP_EOL;
exit(1);
}
echo 'Connected to Redis' . PHP_EOL;
if (\$redis_password) {
\$auth_result = \$redis->auth(\$redis_password);
if (\$auth_result) {
echo 'Authentication: SUCCESS' . PHP_EOL;
\$ping = \$redis->ping();
echo 'PING: ' . \$ping . PHP_EOL;
} else {
echo 'Authentication: FAILED' . PHP_EOL;
echo 'Tried password: ' . substr(\$redis_password, 0, 10) . '...' . PHP_EOL;
}
} else {
echo 'WARNING: No password set in environment!' . PHP_EOL;
}
\$redis->close();
} catch (Exception \$e) {
echo 'ERROR: ' . \$e->getMessage() . PHP_EOL;
exit(1);
}
"
args:
executable: /bin/bash
register: password_comparison
ignore_errors: yes
failed_when: false
- name: Display password comparison
debug:
msg: "{{ password_comparison.stdout_lines }}"

View File

@@ -1,52 +0,0 @@
---
- name: Check Staging Status Complete
hosts: production
gather_facts: yes
become: no
tasks:
- name: Check nginx error log for recent 502s
shell: |
cd ~/deployment/stacks/staging
docker compose exec -T staging-nginx tail -20 /var/log/nginx/error.log 2>&1 | grep -E "(502|Bad Gateway|upstream)" || echo "No 502 errors in recent logs"
args:
executable: /bin/bash
register: nginx_errors
ignore_errors: yes
failed_when: false
- name: Display nginx errors
debug:
msg: "{{ nginx_errors.stdout_lines }}"
- name: Verify upstream configuration one more time
shell: |
cd ~/deployment/stacks/staging
docker compose exec -T staging-nginx grep -A 3 "upstream php-upstream" /etc/nginx/sites-available/default
docker compose exec -T staging-nginx grep "fastcgi_pass" /etc/nginx/sites-available/default | head -3
args:
executable: /bin/bash
register: upstream_check
ignore_errors: yes
failed_when: false
- name: Display upstream check
debug:
msg: "{{ upstream_check.stdout_lines }}"
- name: Test multiple times
shell: |
for i in 1 2 3; do
echo "Test $i:"
curl -H "User-Agent: Mozilla/5.0" -H "Cache-Control: no-cache" -s -o /dev/null -w " HTTP %{http_code}\n" https://staging.michaelschiemer.de/ || echo " Failed"
sleep 1
done
args:
executable: /bin/bash
register: multi_test
ignore_errors: yes
failed_when: false
- name: Display multi test results
debug:
msg: "{{ multi_test.stdout_lines }}"

View File

@@ -1,70 +0,0 @@
---
- name: Check Traefik Gitea Configuration
hosts: production
gather_facts: yes
become: no
vars:
traefik_stack_path: "{{ stacks_base_path }}/traefik"
tasks:
- name: Check Traefik logs for Gitea errors
shell: |
cd {{ traefik_stack_path }}
echo "=== Traefik Logs - Gitea errors (Last 50 lines) ==="
docker compose logs --tail=100 traefik 2>&1 | grep -i "gitea\|502\|bad gateway" | tail -50 || echo "No Gitea-related errors found"
args:
executable: /bin/bash
register: traefik_errors
ignore_errors: yes
failed_when: false
- name: Display Traefik errors
debug:
msg: "{{ traefik_errors.stdout_lines }}"
- name: Check dynamic Gitea configuration on server
shell: |
cat {{ traefik_stack_path }}/dynamic/gitea.yml 2>&1 || echo "File not found"
register: gitea_dynamic_config
ignore_errors: yes
failed_when: false
- name: Display dynamic Gitea config
debug:
msg: "{{ gitea_dynamic_config.stdout_lines }}"
- name: Test if Traefik can resolve gitea hostname
shell: |
docker exec traefik getent hosts gitea 2>&1 || echo "Cannot resolve gitea hostname"
register: traefik_resolve
ignore_errors: yes
failed_when: false
- name: Display Traefik resolve result
debug:
msg: "{{ traefik_resolve.stdout_lines }}"
- name: Get Gitea container IP
shell: |
docker inspect gitea --format '{{ '{{' }}range.NetworkSettings.Networks{{ '}}' }}{{ '{{' }}if eq .NetworkID (printf "%s" (docker network inspect traefik-public --format "{{ '{{' }}.Id{{ '}}' }}")){{ '}}' }}{{ '{{' }}.IPAddress{{ '}}' }}{{ '{{' }}end{{ '}}' }}{{ '{{' }}end{{ '}}' }}' 2>&1 || echo "Could not get IP"
register: gitea_ip
ignore_errors: yes
failed_when: false
- name: Display Gitea IP
debug:
msg: "Gitea IP in traefik-public network: {{ gitea_ip.stdout }}"
- name: Test connectivity from Traefik to Gitea IP
shell: |
GITEA_IP="{{ gitea_ip.stdout | default('172.21.0.3') }}"
docker exec traefik wget -qO- --timeout=5 "http://$GITEA_IP:3000/api/healthz" 2>&1 || echo "Cannot connect to Gitea at $GITEA_IP:3000"
register: traefik_connect
ignore_errors: yes
failed_when: false
when: gitea_ip.stdout is defined and gitea_ip.stdout != ""
- name: Display connectivity result
debug:
msg: "{{ traefik_connect.stdout_lines }}"

View File

@@ -1,63 +0,0 @@
---
- name: Check VPN Routing Status - Client-Side Diagnostics
hosts: production
gather_facts: no
become: no
tasks:
- name: Instructions for client-side DNS check
debug:
msg:
- "=== CLIENT-SIDE DNS CHECK ==="
- "Bitte f?hre diese Befehle auf deinem Client-System aus (nicht auf dem Server):"
- ""
- "1. Pr?fe aktive DNS-Server:"
- " Linux/Mac: cat /etc/resolv.conf"
- " Windows: Get-DnsClientServerAddress | Select-Object InterfaceAlias, ServerAddresses"
- ""
- "2. Teste DNS-Aufl?sung:"
- " dig +short grafana.michaelschiemer.de"
- " Oder: nslookup grafana.michaelschiemer.de"
- ""
- "3. Teste DNS-Aufl?sung ?ber VPN-DNS:"
- " dig +short grafana.michaelschiemer.de @10.8.0.1"
- " Sollte zur?ckgeben: 10.8.0.1"
- ""
- "4. Pr?fe WireGuard Config:"
- " ?ffne deine WireGuard-Config und pr?fe:"
- " [Interface]"
- " DNS = 10.8.0.1"
- ""
- "Teile mir die Ergebnisse mit, dann kann ich dir helfen, das zu fixen!"
- name: Check server-side WireGuard configuration
shell: |
echo "=== Server-Side WireGuard Config ==="
sudo cat /etc/wireguard/wg0.conf | grep -A 5 "\[Interface\]" | head -10
args:
executable: /bin/bash
register: wg_server_config
ignore_errors: yes
failed_when: false
- name: Display server-side WireGuard config
debug:
msg: "{{ wg_server_config.stdout_lines }}"
- name: Check server-side DNS resolution
shell: |
echo "=== Server-Side DNS Tests ==="
echo "1. Grafana via VPN DNS (10.8.0.1):"
dig +short grafana.michaelschiemer.de @10.8.0.1 2>&1 || echo "Failed"
echo ""
echo "2. Grafana via public DNS (8.8.8.8):"
dig +short grafana.michaelschiemer.de @8.8.8.8 2>&1 || echo "Failed"
args:
executable: /bin/bash
register: server_dns
ignore_errors: yes
failed_when: false
- name: Display server-side DNS tests
debug:
msg: "{{ server_dns.stdout_lines }}"

View File

@@ -1,116 +0,0 @@
---
- name: Check VPN Test Results from Client
hosts: production
gather_facts: no
become: no
tasks:
- name: Check latest Grafana access attempts
shell: |
cd ~/deployment/stacks/traefik
echo "=== Last 20 Grafana Access Attempts ==="
tail -500 logs/access.log | grep -i grafana | tail -20 | while IFS= read -r line; do
time=$(echo "$line" | grep -oP '"time":"[^"]*"' | sed 's/"time":"//;s/"//' | cut -d'T' -f2 | cut -d'+' -f1 | cut -d':' -f1-2)
client=$(echo "$line" | grep -oP '"ClientHost":"[^"]*"' | sed 's/"ClientHost":"//;s/"//')
status=$(echo "$line" | grep -oP '"DownstreamStatus":[0-9]+' | sed 's/"DownstreamStatus"://')
method=$(echo "$line" | grep -oP '"RequestMethod":"[^"]*"' | sed 's/"RequestMethod":"//;s/"//')
path=$(echo "$line" | grep -oP '"RequestPath":"[^"]*"' | sed 's/"RequestPath":"//;s/"//')
if [[ "$client" =~ ^10\.8\.0\.[0-9]+$ ]]; then
echo "? $time | ClientHost: $client | Status: $status | $method $path ? VPN-IP (Traffic kommt ?ber VPN!)"
elif [[ "$client" == "89.246.96.244" ]]; then
echo "? $time | ClientHost: $client | Status: $status | $method $path ? ?ffentliche IP (Traffic kommt NICHT ?ber VPN)"
else
echo "? $time | ClientHost: $client | Status: $status | $method $path ? Unbekannt"
fi
done
args:
executable: /bin/bash
register: recent_access
ignore_errors: yes
failed_when: false
- name: Display recent access attempts
debug:
msg: "{{ recent_access.stdout_lines }}"
- name: Extract unique client IPs from recent requests
shell: |
cd ~/deployment/stacks/traefik
tail -100 logs/access.log | grep -i grafana | tail -20 | grep -oP '"ClientHost":"[^"]*"' | sed 's/"ClientHost":"//;s/"//' | sort -u
args:
executable: /bin/bash
register: unique_ips
ignore_errors: yes
failed_when: false
- name: Display unique client IPs
debug:
msg: "{{ unique_ips.stdout_lines }}"
- name: Analyze client IP distribution
shell: |
cd ~/deployment/stacks/traefik
echo "=== Client IP Analysis (Last 20 requests) ==="
VPN_COUNT=$(tail -100 logs/access.log | grep -i grafana | tail -20 | grep -oP '"ClientHost":"10\.8\.0\.[0-9]+"' | wc -l)
PUBLIC_COUNT=$(tail -100 logs/access.log | grep -i grafana | tail -20 | grep -oP '"ClientHost":"89\.246\.96\.244"' | wc -l)
TOTAL=$(tail -100 logs/access.log | grep -i grafana | tail -20 | wc -l)
echo "Total Grafana requests (last 20): $TOTAL"
echo "VPN IP requests (10.8.0.x): $VPN_COUNT"
echo "Public IP requests (89.246.96.244): $PUBLIC_COUNT"
if [ "$VPN_COUNT" -gt 0 ]; then
echo ""
echo "? SUCCESS: Traffic is coming through VPN!"
echo " $VPN_COUNT out of $TOTAL requests used VPN IP"
elif [ "$PUBLIC_COUNT" -gt 0 ]; then
echo ""
echo "? PROBLEM: Traffic is NOT coming through VPN"
echo " $PUBLIC_COUNT out of $TOTAL requests used public IP"
echo " Check VPN routing configuration on client"
else
echo ""
echo "?? No recent Grafana requests found"
fi
args:
executable: /bin/bash
register: ip_analysis
ignore_errors: yes
failed_when: false
- name: Display IP analysis
debug:
msg: "{{ ip_analysis.stdout_lines }}"
- name: Check WireGuard peer status for test-client
shell: |
echo "=== WireGuard Peer Status ==="
sudo wg show | grep -A 5 "YbjBipkOHjLfcIYQKDReJ1swseczqHolTCRv7+LHnmw=" || echo "Peer not found or not connected"
args:
executable: /bin/bash
register: wg_peer_status
ignore_errors: yes
failed_when: false
- name: Display WireGuard peer status
debug:
msg: "{{ wg_peer_status.stdout_lines }}"
- name: Final recommendations
debug:
msg:
- ""
- "=== ERGEBNIS ==="
- ""
- "Pr?fe die obigen Zeilen:"
- ""
- "? Wenn ClientHost: 10.8.0.x ? Traffic kommt ?ber VPN!"
- " ? Dann k?nnen wir die tempor?re IP-Erlaubnis entfernen!"
- ""
- "? Wenn ClientHost: 89.246.96.244 ? Traffic kommt NICHT ?ber VPN"
- " ? Dann m?ssen wir das VPN-Routing auf dem Client pr?fen"
- ""
- "N?chste Schritte:"
- "1. Wenn VPN funktioniert: Temporary IP-Erlaubnis entfernen"
- "2. Wenn VPN nicht funktioniert: Route-Tabellen auf Client pr?fen"

View File

@@ -1,125 +0,0 @@
---
- name: Debug Grafana 403 Error
hosts: production
gather_facts: yes
become: no
# This playbook requires the production inventory file
# Run with: ansible-playbook -i ../inventory/production.yml debug-grafana-403.yml
tasks:
- name: Check Traefik logs for recent Grafana access attempts
shell: |
cd ~/deployment/stacks/traefik
echo "=== Recent Traefik Access Logs (last 50 lines with grafana) ==="
docker compose logs --tail=100 traefik 2>&1 | grep -i grafana | tail -50 || echo "No grafana entries found"
args:
executable: /bin/bash
register: traefik_logs
ignore_errors: yes
failed_when: false
- name: Display Traefik logs
debug:
msg: "{{ traefik_logs.stdout_lines }}"
- name: Check Traefik access log file
shell: |
cd ~/deployment/stacks/traefik
echo "=== Recent Traefik Access Log (last 50 lines) ==="
tail -50 logs/access.log 2>&1 | tail -50 || echo "Access log not found"
args:
executable: /bin/bash
register: access_log
ignore_errors: yes
failed_when: false
- name: Display access log
debug:
msg: "{{ access_log.stdout_lines }}"
- name: Check Grafana container status
shell: |
cd ~/deployment/stacks/monitoring
docker compose ps grafana
args:
executable: /bin/bash
register: grafana_status
ignore_errors: yes
failed_when: false
- name: Display Grafana status
debug:
msg: "{{ grafana_status.stdout_lines }}"
- name: Check Grafana Traefik labels
shell: |
cd ~/deployment/stacks/monitoring
docker compose config | grep -A 20 "grafana:" | grep -E "(ipwhitelist|middleware|sourcerange)" || echo "No IP whitelist labels found"
args:
executable: /bin/bash
register: grafana_labels
ignore_errors: yes
failed_when: false
- name: Display Grafana labels
debug:
msg: "{{ grafana_labels.stdout_lines }}"
- name: Check CoreDNS configuration
shell: |
cd ~/deployment/stacks/dns
echo "=== CoreDNS Corefile ==="
cat Corefile 2>&1 || echo "Corefile not found"
args:
executable: /bin/bash
register: coredns_config
ignore_errors: yes
failed_when: false
- name: Display CoreDNS configuration
debug:
msg: "{{ coredns_config.stdout_lines }}"
- name: Check monitoring stack environment variables
shell: |
cd ~/deployment/stacks/monitoring
echo "=== MONITORING_VPN_IP_WHITELIST ==="
grep MONITORING_VPN_IP_WHITELIST .env 2>&1 || echo "Variable not found in .env"
args:
executable: /bin/bash
register: monitoring_env
ignore_errors: yes
failed_when: false
- name: Display monitoring environment
debug:
msg: "{{ monitoring_env.stdout_lines }}"
- name: Test DNS resolution for grafana.michaelschiemer.de
shell: |
echo "=== DNS Resolution Test ==="
dig +short grafana.michaelschiemer.de @10.8.0.1 2>&1 || echo "DNS resolution failed"
args:
executable: /bin/bash
register: dns_test
ignore_errors: yes
failed_when: false
- name: Display DNS test result
debug:
msg: "{{ dns_test.stdout_lines }}"
- name: Check WireGuard interface status
shell: |
echo "=== WireGuard Interface Status ==="
sudo wg show 2>&1 || echo "WireGuard not running or no permissions"
args:
executable: /bin/bash
register: wg_status
ignore_errors: yes
failed_when: false
- name: Display WireGuard status
debug:
msg: "{{ wg_status.stdout_lines }}"

View File

@@ -1,250 +0,0 @@
---
- name: Debug Staging Redis Secrets Configuration
hosts: production
gather_facts: yes
become: no
tasks:
- name: Check staging stack directory
shell: |
cd ~/deployment/stacks/staging
echo "=== Staging Stack Directory ==="
pwd
ls -la
register: dir_check
ignore_errors: yes
- name: Display directory contents
debug:
msg: "{{ dir_check.stdout_lines }}"
- name: Check if docker-compose files exist
stat:
path: "{{ item }}"
vars:
deployment_path: "~/deployment/stacks/staging"
with_items:
- "{{ deployment_path }}/docker-compose.base.yml"
- "{{ deployment_path }}/docker-compose.staging.yml"
register: compose_files
- name: Display compose file status
debug:
msg: "{{ item.stat.exists | default(false) }}: {{ item.item }}"
with_items: "{{ compose_files.results }}"
- name: Check docker-compose.staging.yml configuration
shell: |
cd ~/deployment/stacks/staging
echo "=== Checking docker-compose.staging.yml for Redis secrets ==="
if [ -f docker-compose.staging.yml ]; then
echo "--- REDIS_PASSWORD_FILE in environment ---"
grep -A 5 "staging-app:" docker-compose.staging.yml | grep -A 10 "environment:" | grep "REDIS_PASSWORD_FILE" || echo "REDIS_PASSWORD_FILE not found in staging-app environment"
echo ""
echo "--- Secrets section for staging-app ---"
grep -A 10 "staging-app:" docker-compose.staging.yml | grep -A 15 "secrets:" | head -10 || echo "Secrets section not found"
echo ""
echo "--- Secrets definitions at bottom ---"
tail -30 docker-compose.staging.yml | grep -A 5 "redis_password:" || echo "redis_password secret definition not found"
else
echo "docker-compose.staging.yml NOT FOUND"
fi
register: compose_config
ignore_errors: yes
- name: Display compose configuration
debug:
msg: "{{ compose_config.stdout_lines }}"
- name: Check if secrets directory and files exist
shell: |
cd ~/deployment/stacks/staging
echo "=== Secrets Directory ==="
if [ -d secrets ]; then
echo "secrets/ directory exists"
ls -la secrets/
echo ""
echo "--- redis_password.txt content check ---"
if [ -f secrets/redis_password.txt ]; then
echo "secrets/redis_password.txt exists"
FILE_SIZE=$(stat -f%z secrets/redis_password.txt 2>/dev/null || stat -c%s secrets/redis_password.txt 2>/dev/null || echo "unknown")
CONTENT_LENGTH=$(wc -c < secrets/redis_password.txt | tr -d ' ')
echo "File size: $FILE_SIZE bytes"
echo "Content length: $CONTENT_LENGTH bytes"
# Show first 5 chars (for debugging)
FIRST_CHARS=$(head -c 5 secrets/redis_password.txt)
echo "First 5 chars: $FIRST_CHARS"
else
echo "secrets/redis_password.txt NOT FOUND"
fi
else
echo "secrets/ directory DOES NOT EXIST"
fi
register: secrets_check
ignore_errors: yes
- name: Display secrets check
debug:
msg: "{{ secrets_check.stdout_lines }}"
- name: Check if staging-app container is running
shell: |
docker ps --filter "name=staging-app" --format "{{.Names}}\t{{.Status}}\t{{.Image}}"
register: container_status
ignore_errors: yes
- name: Display container status
debug:
msg: "{{ container_status.stdout_lines }}"
- name: Check Docker secrets mounted in staging-app container
shell: |
echo "=== Docker Secrets in staging-app Container ==="
if docker ps --filter "name=staging-app" --format "{{.Names}}" | grep -q staging-app; then
echo "--- Checking /run/secrets/ directory ---"
docker exec staging-app ls -la /run/secrets/ 2>&1 || echo "Cannot access /run/secrets/"
echo ""
echo "--- Checking redis_password secret file ---"
docker exec staging-app cat /run/secrets/redis_password 2>&1 | head -c 20 || echo "redis_password secret NOT FOUND or NOT READABLE"
echo "..."
echo ""
echo "--- File exists check ---"
docker exec staging-app test -f /run/secrets/redis_password && echo "redis_password file EXISTS" || echo "redis_password file DOES NOT EXIST"
docker exec staging-app test -r /run/secrets/redis_password && echo "redis_password file is READABLE" || echo "redis_password file is NOT READABLE"
else
echo "staging-app container is NOT RUNNING"
fi
register: secrets_mounted
ignore_errors: yes
- name: Display secrets mount status
debug:
msg: "{{ secrets_mounted.stdout_lines }}"
- name: Check Environment Variables in staging-app container
shell: |
echo "=== Environment Variables in staging-app ==="
if docker ps --filter "name=staging-app" --format "{{.Names}}" | grep -q staging-app; then
echo "--- Redis-related environment variables ---"
docker exec staging-app env | grep -E "(REDIS_|CACHE_|SESSION_|QUEUE_)" || echo "No Redis env vars found"
echo ""
echo "--- *_FILE environment variables ---"
docker exec staging-app env | grep "_FILE" || echo "No _FILE env vars found"
echo ""
echo "--- All environment variables (first 50) ---"
docker exec staging-app env | sort | head -50 || echo "Cannot read environment"
else
echo "Container not running"
fi
register: env_vars
ignore_errors: yes
- name: Display environment variables
debug:
msg: "{{ env_vars.stdout_lines }}"
- name: Test PHP environment resolution (check DockerSecretsResolver)
shell: |
echo "=== Testing PHP Environment Resolution ==="
docker exec staging-app php -r "
// Simulate what the Framework does
echo '=== System Environment Check ===' . PHP_EOL;
echo 'getenv(\"REDIS_PASSWORD_FILE\"): ' . (getenv('REDIS_PASSWORD_FILE') ?: 'NOT SET') . PHP_EOL;
echo 'getenv(\"REDIS_PASSWORD\"): ' . (getenv('REDIS_PASSWORD') ? 'SET (length: ' . strlen(getenv('REDIS_PASSWORD')) . ')' : 'NOT SET') . PHP_EOL;
echo PHP_EOL;
echo '=== $_ENV Check ===' . PHP_EOL;
echo 'isset($_ENV[\"REDIS_PASSWORD_FILE\"]): ' . (isset(\$_ENV['REDIS_PASSWORD_FILE']) ? 'YES: ' . \$_ENV['REDIS_PASSWORD_FILE'] : 'NO') . PHP_EOL;
echo 'isset($_ENV[\"REDIS_PASSWORD\"]): ' . (isset(\$_ENV['REDIS_PASSWORD']) ? 'YES (length: ' . strlen(\$_ENV['REDIS_PASSWORD']) . ')' : 'NO') . PHP_EOL;
echo PHP_EOL;
echo '=== $_SERVER Check ===' . PHP_EOL;
echo 'isset($_SERVER[\"REDIS_PASSWORD_FILE\"]): ' . (isset(\$_SERVER['REDIS_PASSWORD_FILE']) ? 'YES: ' . \$_SERVER['REDIS_PASSWORD_FILE'] : 'NO') . PHP_EOL;
echo 'isset($_SERVER[\"REDIS_PASSWORD\"]): ' . (isset(\$_SERVER['REDIS_PASSWORD']) ? 'YES (length: ' . strlen(\$_SERVER['REDIS_PASSWORD']) . ')' : 'NO') . PHP_EOL;
echo PHP_EOL;
echo '=== Docker Secrets File Check ===' . PHP_EOL;
\$secret_file = '/run/secrets/redis_password';
echo 'File path: ' . \$secret_file . PHP_EOL;
echo 'File exists: ' . (file_exists(\$secret_file) ? 'YES' : 'NO') . PHP_EOL;
if (file_exists(\$secret_file)) {
echo 'File readable: ' . (is_readable(\$secret_file) ? 'YES' : 'NO') . PHP_EOL;
\$content = file_get_contents(\$secret_file);
if (\$content !== false) {
echo 'File content length: ' . strlen(trim(\$content)) . PHP_EOL;
echo 'File content (first 10 chars): ' . substr(trim(\$content), 0, 10) . '...' . PHP_EOL;
} else {
echo 'File content: COULD NOT READ' . PHP_EOL;
}
}
echo PHP_EOL;
// Test DockerSecretsResolver logic
echo '=== DockerSecretsResolver Simulation ===' . PHP_EOL;
\$variables = getenv();
\$file_key = 'REDIS_PASSWORD_FILE';
if (isset(\$variables[\$file_key])) {
\$file_path = \$variables[\$file_key];
echo 'REDIS_PASSWORD_FILE found: ' . \$file_path . PHP_EOL;
if (file_exists(\$file_path) && is_readable(\$file_path)) {
\$secret_value = trim(file_get_contents(\$file_path));
echo 'Secret resolved: YES (length: ' . strlen(\$secret_value) . ')' . PHP_EOL;
echo 'Secret value (first 10 chars): ' . substr(\$secret_value, 0, 10) . '...' . PHP_EOL;
} else {
echo 'Secret resolved: NO (file not accessible)' . PHP_EOL;
}
} else {
echo 'REDIS_PASSWORD_FILE NOT FOUND in environment' . PHP_EOL;
}
" 2>&1
register: php_test
ignore_errors: yes
- name: Display PHP environment test
debug:
msg: "{{ php_test.stdout_lines }}"
- name: Check staging-redis container configuration
shell: |
echo "=== Staging Redis Container ==="
docker ps --filter "name=staging-redis" --format "{{.Names}}\t{{.Status}}"
echo ""
echo "=== Redis password requirement ==="
docker exec staging-redis redis-cli CONFIG GET requirepass 2>&1 || echo "Cannot check Redis config"
echo ""
echo "=== Test Redis connection without password ==="
docker exec staging-redis redis-cli PING 2>&1 || echo "Connection failed (password required)"
register: redis_config
ignore_errors: yes
- name: Display Redis configuration
debug:
msg: "{{ redis_config.stdout_lines }}"
- name: Check recent staging-app logs for Redis errors
shell: |
cd ~/deployment/stacks/staging
echo "=== Recent staging-app logs (Redis-related) ==="
docker compose -f docker-compose.base.yml -f docker-compose.staging.yml logs staging-app --tail=100 2>&1 | grep -i -E "(redis|password|secret|auth|noauth)" | tail -30 || echo "No Redis-related logs found"
register: app_logs
ignore_errors: yes
- name: Display application logs
debug:
msg: "{{ app_logs.stdout_lines }}"
- name: Summary and Recommendations
debug:
msg:
- "========================================"
- "DEBUG SUMMARY"
- "========================================"
- "Check the output above for:"
- "1. docker-compose.staging.yml has REDIS_PASSWORD_FILE=/run/secrets/redis_password"
- "2. secrets/redis_password.txt exists and is readable"
- "3. Container has /run/secrets/redis_password file mounted"
- "4. Container environment has REDIS_PASSWORD_FILE variable set"
- "5. PHP can read the secret file and resolve REDIS_PASSWORD"
- "6. Redis container requires password (requirepass set)"
- ""
- "If any check fails, the issue is identified above."

View File

@@ -1,240 +0,0 @@
---
- name: Deploy Application Update via Docker Compose
hosts: production
gather_facts: yes
become: no
vars:
# These should be passed via -e from CI/CD
image_tag: "{{ image_tag | default('latest') }}"
git_commit_sha: "{{ git_commit_sha | default('unknown') }}"
deployment_timestamp: "{{ deployment_timestamp | default(ansible_date_time.iso8601) }}"
# app_stack_path is now defined in group_vars/production.yml
pre_tasks:
- name: Optionally load registry credentials from encrypted vault
include_vars:
file: "{{ playbook_dir }}/../secrets/production.vault.yml"
no_log: yes
ignore_errors: yes
delegate_to: localhost
become: no
- name: Derive docker registry credentials from vault when not provided
set_fact:
docker_registry_username: "{{ docker_registry_username | default(vault_docker_registry_username | default(docker_registry_username_default)) }}"
docker_registry_password: "{{ docker_registry_password | default(vault_docker_registry_password | default(docker_registry_password_default)) }}"
- name: Ensure system packages are up to date
include_role:
name: system
when: system_update_packages | bool
- name: Verify Docker is running
systemd:
name: docker
state: started
register: docker_service
become: yes
- name: Fail if Docker is not running
fail:
msg: "Docker service is not running"
when: docker_service.status.ActiveState != 'active'
- name: Ensure application stack directory exists
file:
path: "{{ app_stack_path }}"
state: directory
owner: "{{ ansible_user }}"
group: "{{ ansible_user }}"
mode: '0755'
- name: Determine application environment
set_fact:
application_environment: "{{ APP_ENV | default('production') }}"
application_compose_suffix: "{{ 'staging.yml' if application_environment == 'staging' else 'production.yml' }}"
- name: Check if docker-compose.base.yml exists in application stack
stat:
path: "{{ app_stack_path }}/docker-compose.base.yml"
register: compose_base_exists
when: not (application_sync_files | default(false) | bool)
- name: Check if docker-compose override file exists in application stack (production or staging)
stat:
path: "{{ app_stack_path }}/docker-compose.{{ application_compose_suffix }}"
register: compose_override_exists
when: not (application_sync_files | default(false) | bool)
- name: Fail if docker-compose files don't exist
fail:
msg: |
Application Stack docker-compose files not found at {{ app_stack_path }}
Required files:
- docker-compose.base.yml
- docker-compose.{{ application_compose_suffix }}
The Application Stack must be deployed first via:
ansible-playbook -i inventory/production.yml playbooks/setup-infrastructure.yml
This will create the application stack with docker-compose files and .env file.
when:
- not (application_sync_files | default(false) | bool)
- (not compose_base_exists.stat.exists or not compose_override_exists.stat.exists)
- name: Create backup directory
file:
path: "{{ backups_path }}/{{ deployment_timestamp | regex_replace(':', '-') }}"
state: directory
owner: "{{ ansible_user }}"
group: "{{ ansible_user }}"
mode: '0755'
tasks:
- name: Verify docker-compose files exist
stat:
path: "{{ app_stack_path }}/docker-compose.base.yml"
register: compose_base_check
when: not (application_sync_files | default(false) | bool)
- name: Verify docker-compose override file exists (production or staging)
stat:
path: "{{ app_stack_path }}/docker-compose.{{ application_compose_suffix }}"
register: compose_override_check
when: not (application_sync_files | default(false) | bool)
- name: Fail if docker-compose files don't exist
fail:
msg: |
Application Stack docker-compose files not found at {{ app_stack_path }}
Required files:
- docker-compose.base.yml
- docker-compose.{{ application_compose_suffix }}
The Application Stack must be deployed first via:
ansible-playbook -i inventory/production.yml playbooks/setup-infrastructure.yml
This will create the application stack with docker-compose files and .env file.
when:
- not (application_sync_files | default(false) | bool)
- (not compose_base_check.stat.exists or not compose_override_check.stat.exists)
- name: Backup current deployment metadata
shell: |
docker compose -f {{ app_stack_path }}/docker-compose.base.yml -f {{ app_stack_path }}/docker-compose.{{ application_compose_suffix }} ps --format json 2>/dev/null > {{ backups_path }}/{{ deployment_timestamp | regex_replace(':', '-') }}/current_containers.json || true
docker compose -f {{ app_stack_path }}/docker-compose.base.yml -f {{ app_stack_path }}/docker-compose.{{ application_compose_suffix }} config 2>/dev/null > {{ backups_path }}/{{ deployment_timestamp | regex_replace(':', '-') }}/docker-compose-config.yml || true
args:
executable: /bin/bash
changed_when: false
ignore_errors: yes
when:
- not (application_sync_files | default(false) | bool)
- compose_base_exists.stat.exists | default(false)
- compose_override_exists.stat.exists | default(false)
- name: Login to Docker registry (if credentials provided)
community.docker.docker_login:
registry_url: "{{ docker_registry_url }}"
username: "{{ docker_registry_username }}"
password: "{{ docker_registry_password }}"
no_log: yes
ignore_errors: yes
when:
- docker_registry_username is defined
- docker_registry_password is defined
- docker_registry_username | length > 0
- docker_registry_password | length > 0
register: registry_login
- name: Pull new Docker image
community.docker.docker_image:
name: "{{ app_image }}"
tag: "{{ image_tag }}"
source: pull
force_source: yes
register: image_pull
- name: Verify image was pulled successfully
fail:
msg: "Failed to pull image {{ app_image }}:{{ image_tag }}"
when: image_pull.failed
# Sync files first if application_sync_files=true (before updating docker-compose.production.yml)
- name: Sync application stack files
import_role:
name: application
vars:
application_sync_files: "{{ application_sync_files | default(false) }}"
application_compose_recreate: "never" # Don't recreate yet, just sync files
application_remove_orphans: false
when: application_sync_files | default(false) | bool
- name: Update docker-compose override file with new image tag (all services)
replace:
path: "{{ app_stack_path }}/docker-compose.{{ application_compose_suffix }}"
# Match both localhost:5000 and registry.michaelschiemer.de (or any registry URL)
regexp: '^(\s+image:\s+)(localhost:5000|registry\.michaelschiemer\.de|{{ docker_registry }})/{{ app_name }}:.*$'
replace: '\1{{ app_image }}:{{ image_tag }}'
# Always update to ensure localhost:5000 is used (registry only accessible via localhost)
when: true
register: compose_updated
- name: Redeploy application stack with new image
import_role:
name: application
vars:
application_sync_files: false # Already synced above, don't sync again
application_compose_recreate: "always"
application_remove_orphans: true
- name: Get deployed image information
shell: |
docker compose -f {{ app_stack_path }}/docker-compose.base.yml -f {{ app_stack_path }}/docker-compose.production.yml config | grep -E "^\s+image:" | head -1 | awk '{print $2}' || echo "unknown"
args:
executable: /bin/bash
register: deployed_image
changed_when: false
- name: Record deployment metadata
copy:
content: |
Deployment Timestamp: {{ deployment_timestamp }}
Git Commit: {{ git_commit_sha }}
Image Tag: {{ image_tag }}
Deployed Image: {{ deployed_image.stdout }}
Image Pull: {{ 'SUCCESS' if image_pull.changed else 'SKIPPED (already exists)' }}
Stack Deploy: {{ 'UPDATED' if application_stack_changed else 'NO_CHANGE' }}
Health Status: {{ application_health_output if application_health_output != '' else 'All services healthy' }}
Health Check HTTP Status: {{ application_healthcheck_status }}
dest: "{{ backups_path }}/{{ deployment_timestamp | regex_replace(':', '-') }}/deployment_metadata.txt"
owner: "{{ ansible_user }}"
group: "{{ ansible_user }}"
mode: '0644'
- name: Cleanup old backups (keep last {{ max_rollback_versions }})
shell: |
cd {{ backups_path }}
ls -dt */ 2>/dev/null | tail -n +{{ max_rollback_versions + 1 }} | xargs -r rm -rf
args:
executable: /bin/bash
changed_when: false
ignore_errors: yes
post_tasks:
- name: Display deployment summary
debug:
msg:
- "=== Deployment Summary ==="
- "Image: {{ app_image }}:{{ image_tag }}"
- "Commit: {{ git_commit_sha }}"
- "Timestamp: {{ deployment_timestamp }}"
- "Image Pull: {{ 'SUCCESS' if image_pull.changed else 'SKIPPED' }}"
- "Stack Deploy: {{ 'UPDATED' if application_stack_changed else 'NO_CHANGE' }}"
- "Health Output: {{ application_health_output if application_health_output != '' else 'All services healthy' }}"
- "Health Check HTTP Status: {{ application_healthcheck_status }}"
- "Health Check URL: {{ health_check_url }}"
- ""
- "Next: Verify application is healthy"

View File

@@ -1,239 +0,0 @@
---
- name: Deploy Application Update to Production via Docker Compose
hosts: production
gather_facts: yes
become: no
vars:
# These should be passed via -e from CI/CD
application_environment: production
application_compose_suffix: production.yml
# app_stack_path is now defined in group_vars/production.yml
pre_tasks:
- name: Set deployment variables
set_fact:
image_tag: "{{ image_tag | default('latest') }}"
git_commit_sha: "{{ git_commit_sha | default('unknown') }}"
deployment_timestamp: "{{ deployment_timestamp | default(ansible_date_time.iso8601) }}"
- name: Optionally load registry credentials from encrypted vault
include_vars:
file: "{{ playbook_dir }}/../../secrets/production.vault.yml"
no_log: yes
ignore_errors: yes
delegate_to: localhost
become: no
- name: Derive docker registry credentials from vault when not provided
set_fact:
docker_registry_username: "{{ docker_registry_username | default(vault_docker_registry_username | default(docker_registry_username_default)) }}"
docker_registry_password: "{{ docker_registry_password | default(vault_docker_registry_password | default(docker_registry_password_default)) }}"
- name: Ensure system packages are up to date
include_role:
name: system
when: system_update_packages | bool
- name: Verify Docker is running
systemd:
name: docker
state: started
register: docker_service
become: yes
- name: Fail if Docker is not running
fail:
msg: "Docker service is not running"
when: docker_service.status.ActiveState != 'active'
- name: Ensure application stack directory exists
file:
path: "{{ app_stack_path }}"
state: directory
owner: "{{ ansible_user }}"
group: "{{ ansible_user }}"
mode: '0755'
- name: Check if docker-compose.base.yml exists in application stack
stat:
path: "{{ app_stack_path }}/docker-compose.base.yml"
register: compose_base_exists
when: not (application_sync_files | default(false) | bool)
- name: Check if docker-compose.production.yml exists in application stack
stat:
path: "{{ app_stack_path }}/docker-compose.production.yml"
register: compose_override_exists
when: not (application_sync_files | default(false) | bool)
- name: Fail if docker-compose files don't exist
fail:
msg: |
Application Stack docker-compose files not found at {{ app_stack_path }}
Required files:
- docker-compose.base.yml
- docker-compose.production.yml
The Application Stack must be deployed first via:
ansible-playbook -i inventory/production.yml playbooks/setup-infrastructure.yml
This will create the application stack with docker-compose files and .env file.
when:
- not (application_sync_files | default(false) | bool)
- (not compose_base_exists.stat.exists or not compose_override_exists.stat.exists)
- name: Create backup directory
file:
path: "{{ backups_path }}/{{ deployment_timestamp | regex_replace(':', '-') }}"
state: directory
owner: "{{ ansible_user }}"
group: "{{ ansible_user }}"
mode: '0755'
tasks:
- name: Verify docker-compose files exist
stat:
path: "{{ app_stack_path }}/docker-compose.base.yml"
register: compose_base_check
when: not (application_sync_files | default(false) | bool)
- name: Verify docker-compose.production.yml exists
stat:
path: "{{ app_stack_path }}/docker-compose.production.yml"
register: compose_override_check
when: not (application_sync_files | default(false) | bool)
- name: Fail if docker-compose files don't exist
fail:
msg: |
Application Stack docker-compose files not found at {{ app_stack_path }}
Required files:
- docker-compose.base.yml
- docker-compose.production.yml
The Application Stack must be deployed first via:
ansible-playbook -i inventory/production.yml playbooks/setup-infrastructure.yml
This will create the application stack with docker-compose files and .env file.
when:
- not (application_sync_files | default(false) | bool)
- (not compose_base_check.stat.exists or not compose_override_check.stat.exists)
- name: Backup current deployment metadata
shell: |
docker compose -f {{ app_stack_path }}/docker-compose.base.yml -f {{ app_stack_path }}/docker-compose.production.yml ps --format json 2>/dev/null > {{ backups_path }}/{{ deployment_timestamp | regex_replace(':', '-') }}/current_containers.json || true
docker compose -f {{ app_stack_path }}/docker-compose.base.yml -f {{ app_stack_path }}/docker-compose.production.yml config 2>/dev/null > {{ backups_path }}/{{ deployment_timestamp | regex_replace(':', '-') }}/docker-compose-config.yml || true
args:
executable: /bin/bash
changed_when: false
ignore_errors: yes
when:
- not (application_sync_files | default(false) | bool)
- compose_base_exists.stat.exists | default(false)
- compose_override_exists.stat.exists | default(false)
- name: Login to Docker registry (if credentials provided)
community.docker.docker_login:
registry_url: "{{ docker_registry_url }}"
username: "{{ docker_registry_username }}"
password: "{{ docker_registry_password }}"
no_log: yes
ignore_errors: yes
when:
- docker_registry_username is defined
- docker_registry_password is defined
- docker_registry_username | length > 0
- docker_registry_password | length > 0
register: registry_login
- name: Pull new Docker image
community.docker.docker_image:
name: "{{ app_image }}"
tag: "{{ image_tag }}"
source: pull
force_source: yes
register: image_pull
- name: Verify image was pulled successfully
fail:
msg: "Failed to pull image {{ app_image }}:{{ image_tag }}"
when: image_pull.failed
# Sync files first if application_sync_files=true (before updating docker-compose.production.yml)
- name: Sync application stack files
import_role:
name: application
vars:
application_sync_files: "{{ application_sync_files | default(false) }}"
application_compose_recreate: "never" # Don't recreate yet, just sync files
application_remove_orphans: false
when: application_sync_files | default(false) | bool
- name: Update docker-compose.production.yml with new image tag (all services)
replace:
path: "{{ app_stack_path }}/docker-compose.production.yml"
# Match both localhost:5000 and registry.michaelschiemer.de (or any registry URL)
regexp: '^(\s+image:\s+)(localhost:5000|registry\.michaelschiemer\.de|{{ docker_registry }})/{{ app_name }}:.*$'
replace: '\1{{ app_image }}:{{ image_tag }}'
# Always update to ensure localhost:5000 is used (registry only accessible via localhost)
when: true
register: compose_updated
- name: Redeploy application stack with new image
import_role:
name: application
vars:
application_sync_files: false # Already synced above, don't sync again
application_compose_recreate: "always"
application_remove_orphans: true
- name: Get deployed image information
shell: |
docker compose -f {{ app_stack_path }}/docker-compose.base.yml -f {{ app_stack_path }}/docker-compose.production.yml config | grep -E "^\s+image:" | head -1 | awk '{print $2}' || echo "unknown"
args:
executable: /bin/bash
register: deployed_image
changed_when: false
- name: Record deployment metadata
copy:
content: |
Deployment Timestamp: {{ deployment_timestamp }}
Git Commit: {{ git_commit_sha }}
Image Tag: {{ image_tag }}
Deployed Image: {{ deployed_image.stdout }}
Image Pull: {{ 'SUCCESS' if image_pull.changed else 'SKIPPED (already exists)' }}
Stack Deploy: {{ 'UPDATED' if application_stack_changed else 'NO_CHANGE' }}
Health Status: {{ application_health_output if application_health_output != '' else 'All services healthy' }}
Health Check HTTP Status: {{ application_healthcheck_status }}
dest: "{{ backups_path }}/{{ deployment_timestamp | regex_replace(':', '-') }}/deployment_metadata.txt"
owner: "{{ ansible_user }}"
group: "{{ ansible_user }}"
mode: '0644'
- name: Cleanup old backups (keep last {{ max_rollback_versions | default(5) }})
shell: |
cd {{ backups_path }}
ls -dt */ 2>/dev/null | tail -n +{{ max_rollback_versions | default(5) + 1 }} | xargs -r rm -rf
args:
executable: /bin/bash
changed_when: false
ignore_errors: yes
post_tasks:
- name: Display deployment summary
debug:
msg:
- "=== Production Deployment Summary ==="
- "Image: {{ app_image }}:{{ image_tag }}"
- "Commit: {{ git_commit_sha }}"
- "Timestamp: {{ deployment_timestamp }}"
- "Image Pull: {{ 'SUCCESS' if image_pull.changed else 'SKIPPED' }}"
- "Stack Deploy: {{ 'UPDATED' if application_stack_changed else 'NO_CHANGE' }}"
- "Health Output: {{ application_health_output if application_health_output != '' else 'All services healthy' }}"
- "Health Check HTTP Status: {{ application_healthcheck_status }}"
- "Health Check URL: {{ health_check_url | default('https://michaelschiemer.de/health') }}"
- ""
- "Next: Verify application is healthy"

View File

@@ -1,226 +0,0 @@
---
- name: Deploy Application Update to Staging via Docker Compose
hosts: production
gather_facts: yes
become: no
vars:
# These should be passed via -e from CI/CD
application_environment: staging
application_compose_suffix: staging.yml
# app_stack_path is now defined in group_vars/production.yml
pre_tasks:
- name: Set deployment variables
set_fact:
image_tag: "{{ image_tag | default('latest') }}"
git_commit_sha: "{{ git_commit_sha | default('unknown') }}"
deployment_timestamp: "{{ deployment_timestamp | default(ansible_date_time.iso8601) }}"
- name: Optionally load registry credentials from encrypted vault
include_vars:
file: "{{ playbook_dir }}/../../secrets/production.vault.yml"
no_log: yes
ignore_errors: yes
delegate_to: localhost
become: no
- name: Derive docker registry credentials from vault when not provided
set_fact:
docker_registry_username: "{{ docker_registry_username | default(vault_docker_registry_username | default(docker_registry_username_default)) }}"
docker_registry_password: "{{ docker_registry_password | default(vault_docker_registry_password | default(docker_registry_password_default)) }}"
- name: Ensure system packages are up to date
include_role:
name: system
when: system_update_packages | bool
- name: Verify Docker is running
systemd:
name: docker
state: started
register: docker_service
become: yes
- name: Fail if Docker is not running
fail:
msg: "Docker service is not running"
when: docker_service.status.ActiveState != 'active'
- name: Set staging stack path
set_fact:
app_stack_path: "{{ staging_stack_path | default(stacks_base_path + '/staging') }}"
backups_path: "{{ backups_base_path | default('~/deployment/backups') }}"
- name: Ensure application stack directory exists
file:
path: "{{ app_stack_path }}"
state: directory
owner: "{{ ansible_user }}"
group: "{{ ansible_user }}"
mode: '0755'
- name: Check if docker-compose.base.yml exists in staging stack
stat:
path: "{{ app_stack_path }}/docker-compose.base.yml"
register: compose_base_exists
- name: Check if docker-compose.staging.yml exists in staging stack
stat:
path: "{{ app_stack_path }}/docker-compose.staging.yml"
register: compose_override_exists
- name: Fail if docker-compose files don't exist
fail:
msg: |
Staging Stack docker-compose files not found at {{ app_stack_path }}
Required files:
- docker-compose.base.yml
- docker-compose.staging.yml
The Staging Stack must be deployed first via:
ansible-playbook -i inventory/production.yml playbooks/setup-infrastructure.yml
This will create the staging stack with docker-compose files and .env file.
when:
- not compose_base_exists.stat.exists or not compose_override_exists.stat.exists
- name: Create backup directory
file:
path: "{{ backups_path }}/{{ deployment_timestamp | regex_replace(':', '-') }}"
state: directory
owner: "{{ ansible_user }}"
group: "{{ ansible_user }}"
mode: '0755'
tasks:
- name: Verify docker-compose files exist
stat:
path: "{{ app_stack_path }}/docker-compose.base.yml"
register: compose_base_check
- name: Verify docker-compose.staging.yml exists
stat:
path: "{{ app_stack_path }}/docker-compose.staging.yml"
register: compose_override_check
- name: Fail if docker-compose files don't exist
fail:
msg: |
Staging Stack docker-compose files not found at {{ app_stack_path }}
Required files:
- docker-compose.base.yml
- docker-compose.staging.yml
The Staging Stack must be deployed first via:
ansible-playbook -i inventory/production.yml playbooks/setup-infrastructure.yml
This will create the staging stack with docker-compose files and .env file.
when:
- not compose_base_check.stat.exists or not compose_override_check.stat.exists
- name: Backup current deployment metadata
shell: |
docker compose -f {{ app_stack_path }}/docker-compose.base.yml -f {{ app_stack_path }}/docker-compose.staging.yml ps --format json 2>/dev/null > {{ backups_path }}/{{ deployment_timestamp | regex_replace(':', '-') }}/current_containers.json || true
docker compose -f {{ app_stack_path }}/docker-compose.base.yml -f {{ app_stack_path }}/docker-compose.staging.yml config 2>/dev/null > {{ backups_path }}/{{ deployment_timestamp | regex_replace(':', '-') }}/docker-compose-config.yml || true
args:
executable: /bin/bash
changed_when: false
ignore_errors: yes
- name: Login to Docker registry (if credentials provided)
community.docker.docker_login:
registry_url: "{{ docker_registry_url }}"
username: "{{ docker_registry_username }}"
password: "{{ docker_registry_password }}"
no_log: yes
ignore_errors: yes
when:
- docker_registry_username is defined
- docker_registry_password is defined
- docker_registry_username | length > 0
- docker_registry_password | length > 0
register: registry_login
- name: Pull new Docker image
community.docker.docker_image:
name: "{{ app_image }}"
tag: "{{ image_tag }}"
source: pull
force_source: yes
register: image_pull
- name: Verify image was pulled successfully
fail:
msg: "Failed to pull image {{ app_image }}:{{ image_tag }}"
when: image_pull.failed
- name: Update docker-compose.staging.yml with new image tag (all services)
replace:
path: "{{ app_stack_path }}/docker-compose.staging.yml"
# Match both localhost:5000 and registry.michaelschiemer.de (or any registry URL)
regexp: '^(\s+image:\s+)(localhost:5000|registry\.michaelschiemer\.de|{{ docker_registry }})/{{ app_name }}:.*$'
replace: '\1{{ app_image }}:{{ image_tag }}'
register: compose_updated
- name: Redeploy staging stack with new image
import_role:
name: application
vars:
application_sync_files: false
application_compose_recreate: "always"
application_remove_orphans: true
application_stack_path: "{{ app_stack_path }}"
application_compose_files:
- "{{ app_stack_path }}/docker-compose.base.yml"
- "{{ app_stack_path }}/docker-compose.staging.yml"
- name: Get deployed image information
shell: |
docker compose -f {{ app_stack_path }}/docker-compose.base.yml -f {{ app_stack_path }}/docker-compose.staging.yml config | grep -E "^\s+image:" | head -1 | awk '{print $2}' || echo "unknown"
args:
executable: /bin/bash
register: deployed_image
changed_when: false
- name: Record deployment metadata
copy:
content: |
Deployment Timestamp: {{ deployment_timestamp }}
Git Commit: {{ git_commit_sha }}
Image Tag: {{ image_tag }}
Deployed Image: {{ deployed_image.stdout }}
Image Pull: {{ 'SUCCESS' if image_pull.changed else 'SKIPPED (already exists)' }}
Stack Deploy: {{ 'UPDATED' if application_stack_changed else 'NO_CHANGE' }}
Health Status: {{ application_health_output if application_health_output != '' else 'All services healthy' }}
Health Check HTTP Status: {{ application_healthcheck_status }}
dest: "{{ backups_path }}/{{ deployment_timestamp | regex_replace(':', '-') }}/deployment_metadata.txt"
owner: "{{ ansible_user }}"
group: "{{ ansible_user }}"
mode: '0644'
- name: Cleanup old backups (keep last {{ max_rollback_versions | default(5) }})
shell: |
cd {{ backups_path }}
ls -dt */ 2>/dev/null | tail -n +{{ max_rollback_versions | default(5) + 1 }} | xargs -r rm -rf
args:
executable: /bin/bash
changed_when: false
ignore_errors: yes
post_tasks:
- name: Display deployment summary
debug:
msg:
- "=== Staging Deployment Summary ==="
- "Image: {{ app_image }}:{{ image_tag }}"
- "Commit: {{ git_commit_sha }}"
- "Timestamp: {{ deployment_timestamp }}"
- "Image Pull: {{ 'SUCCESS' if image_pull.changed else 'SKIPPED' }}"
- "Stack Deploy: {{ 'UPDATED' if application_stack_changed else 'NO_CHANGE' }}"
- "Health Output: {{ application_health_output if application_health_output != '' else 'All services healthy' }}"
- "Health Check HTTP Status: {{ application_healthcheck_status }}"
- "Health Check URL: {{ health_check_url | default('https://staging.michaelschiemer.de/health') }}"
- ""
- "Next: Verify application is healthy"

View File

@@ -1,108 +0,0 @@
---
- name: Diagnose Staging 502 Bad Gateway Error
hosts: production
gather_facts: yes
become: no
tasks:
- name: Check nginx error logs for 502 errors
shell: |
cd ~/deployment/stacks/staging
echo "=== Nginx Error Logs (Last 50 lines) ==="
docker compose logs --tail=50 staging-nginx 2>&1 | grep -iE "(error|502|bad gateway|php|fpm|upstream)" || echo "No obvious errors in logs"
args:
executable: /bin/bash
register: nginx_errors
ignore_errors: yes
failed_when: false
- name: Display nginx errors
debug:
msg: "{{ nginx_errors.stdout_lines }}"
- name: Check PHP-FPM status in staging-app
shell: |
cd ~/deployment/stacks/staging
echo "=== PHP-FPM Status Check ==="
docker compose exec -T staging-app php-fpm-healthcheck 2>&1 || echo "PHP-FPM healthcheck failed"
echo ""
echo "=== Check if PHP-FPM is running ==="
docker compose exec -T staging-app ps aux | grep -E "php-fpm|php" | head -10 || echo "Could not check processes"
args:
executable: /bin/bash
register: php_fpm_status
ignore_errors: yes
failed_when: false
- name: Display PHP-FPM status
debug:
msg: "{{ php_fpm_status.stdout_lines }}"
- name: Test PHP-FPM connection from nginx container
shell: |
cd ~/deployment/stacks/staging
echo "=== Test connection from nginx to PHP-FPM ==="
docker compose exec -T staging-nginx sh -c "nc -zv staging-app 9000 2>&1 || echo 'Connection test failed'" || echo "Connection test failed"
echo ""
echo "=== Try curl from nginx to PHP-FPM ==="
docker compose exec -T staging-nginx sh -c "curl -v http://staging-app:9000 2>&1 | head -20" || echo "Curl test failed"
args:
executable: /bin/bash
register: connection_test
ignore_errors: yes
failed_when: false
- name: Display connection test results
debug:
msg: "{{ connection_test.stdout_lines }}"
- name: Check nginx configuration for PHP-FPM upstream
shell: |
cd ~/deployment/stacks/staging
echo "=== Nginx Configuration ==="
docker compose exec -T staging-nginx cat /etc/nginx/conf.d/default.conf 2>&1 | grep -A 10 -B 5 "staging-app\|9000\|fastcgi_pass" || echo "Could not read nginx config"
args:
executable: /bin/bash
register: nginx_config
ignore_errors: yes
failed_when: false
- name: Display nginx configuration
debug:
msg: "{{ nginx_config.stdout_lines }}"
- name: Check if PHP-FPM is listening on port 9000
shell: |
cd ~/deployment/stacks/staging
echo "=== Check PHP-FPM port 9000 ==="
docker compose exec -T staging-app sh -c "netstat -tlnp 2>/dev/null | grep 9000 || ss -tlnp 2>/dev/null | grep 9000 || echo 'Port 9000 not found'" || echo "Could not check ports"
echo ""
echo "=== PHP-FPM pool config ==="
docker compose exec -T staging-app cat /usr/local/etc/php-fpm.d/www.conf 2>&1 | grep -E "(listen|listen.allowed_clients)" | head -5 || echo "Could not read PHP-FPM config"
args:
executable: /bin/bash
register: php_fpm_port
ignore_errors: yes
failed_when: false
- name: Display PHP-FPM port check
debug:
msg: "{{ php_fpm_port.stdout_lines }}"
- name: Check network connectivity between containers
shell: |
cd ~/deployment/stacks/staging
echo "=== Network connectivity ==="
docker compose exec -T staging-nginx ping -c 2 staging-app 2>&1 || echo "Ping failed"
echo ""
echo "=== Check docker networks ==="
docker network inspect staging-internal 2>&1 | grep -A 5 "staging-app\|staging-nginx" || echo "Could not inspect network"
args:
executable: /bin/bash
register: network_check
ignore_errors: yes
failed_when: false
- name: Display network check
debug:
msg: "{{ network_check.stdout_lines }}"

View File

@@ -1,125 +0,0 @@
---
- name: Diagnose Staging Logs Issue
hosts: production
gather_facts: yes
become: no
tasks:
- name: Check if staging stack directory exists
shell: |
test -d ~/deployment/stacks/staging && echo "? Directory exists" || echo "? Directory missing"
args:
executable: /bin/bash
register: dir_check
- name: Display directory check result
debug:
msg: "{{ dir_check.stdout }}"
- name: Check if docker-compose.yml exists
shell: |
test -f ~/deployment/stacks/staging/docker-compose.yml && echo "? docker-compose.yml exists" || echo "? docker-compose.yml missing"
args:
executable: /bin/bash
register: compose_check
- name: Display compose file check result
debug:
msg: "{{ compose_check.stdout }}"
- name: List staging directory contents
shell: |
cd ~/deployment/stacks/staging && ls -la
args:
executable: /bin/bash
register: dir_contents
ignore_errors: yes
- name: Display directory contents
debug:
msg: "{{ dir_contents.stdout_lines }}"
- name: Check Docker Compose version
shell: |
cd ~/deployment/stacks/staging && docker compose version
args:
executable: /bin/bash
register: compose_version
ignore_errors: yes
- name: Display Docker Compose version
debug:
msg: "{{ compose_version.stdout }}"
- name: Check if containers exist
shell: |
docker ps -a | grep staging || echo "No staging containers found"
args:
executable: /bin/bash
register: container_list
ignore_errors: yes
- name: Display container list
debug:
msg: "{{ container_list.stdout_lines }}"
- name: Try docker compose ps (this is where it might fail)
shell: |
cd ~/deployment/stacks/staging && docker compose ps
args:
executable: /bin/bash
register: compose_ps
ignore_errors: yes
- name: Display docker compose ps result
debug:
msg: "{{ compose_ps.stdout_lines }}"
msg_also: "{{ compose_ps.stderr_lines }}"
- name: Try docker compose logs directly (without service name)
shell: |
cd ~/deployment/stacks/staging && docker compose logs --tail=50 2>&1 | head -100
args:
executable: /bin/bash
register: compose_logs_generic
ignore_errors: yes
- name: Display generic compose logs
debug:
msg: "{{ (compose_logs_generic.stdout_lines | default([])) + (compose_logs_generic.stderr_lines | default([])) }}"
- name: Try individual container logs using docker logs
shell: |
docker logs staging-app --tail=50 2>&1 || echo "? Could not get staging-app logs"
args:
executable: /bin/bash
register: direct_app_logs
ignore_errors: yes
- name: Display direct staging-app logs
debug:
msg: "{{ direct_app_logs.stdout_lines }}"
msg_also: "{{ direct_app_logs.stderr_lines }}"
- name: Check current working directory permissions
shell: |
ls -ld ~/deployment/stacks/staging && pwd && whoami
args:
executable: /bin/bash
register: perm_check
- name: Display permission check
debug:
msg: "{{ perm_check.stdout_lines }}"
- name: Check if we can read docker-compose.yml
shell: |
cd ~/deployment/stacks/staging && head -20 docker-compose.yml
args:
executable: /bin/bash
register: compose_read
ignore_errors: yes
- name: Display docker-compose.yml content (first 20 lines)
debug:
msg: "{{ (compose_read.stdout_lines | default([])) + (compose_read.stderr_lines | default([])) }}"

View File

@@ -1,172 +0,0 @@
---
- name: Diagnose VPN Routing Problem f?r Grafana
hosts: production
gather_facts: yes
become: yes
become_user: root
tasks:
- name: Check WireGuard interface status
shell: |
echo "=== WireGuard Interface Status ==="
ip addr show wg0 2>&1 || echo "WireGuard interface not found"
echo ""
echo "=== WireGuard Peers ==="
wg show 2>&1 || echo "WireGuard not running"
register: wg_status
ignore_errors: yes
failed_when: false
- name: Display WireGuard status
debug:
msg: "{{ wg_status.stdout_lines }}"
- name: Check routing table for VPN network
shell: |
echo "=== Routing Table for 10.8.0.0/24 ==="
ip route show | grep 10.8.0 || echo "No routes found for 10.8.0.0/24"
echo ""
echo "=== Default Route ==="
ip route show default || echo "No default route"
register: routing_info
ignore_errors: yes
failed_when: false
- name: Display routing information
debug:
msg: "{{ routing_info.stdout_lines }}"
- name: Check Traefik access logs for recent Grafana requests
shell: |
cd ~/deployment/stacks/traefik
echo "=== Recent Grafana Access (Last 10 requests) ==="
tail -50 logs/access.log | grep grafana | tail -10 | jq -r '[.ClientAddr, .ClientHost, .RequestHost, .DownstreamStatus] | @tsv' 2>&1 || tail -50 logs/access.log | grep grafana | tail -10
args:
executable: /bin/bash
register: traefik_access
ignore_errors: yes
failed_when: false
- name: Display Traefik access logs
debug:
msg: "{{ traefik_access.stdout_lines }}"
- name: Test DNS resolution from server
shell: |
echo "=== DNS Resolution Tests ==="
echo "1. Grafana via VPN DNS (10.8.0.1):"
dig +short grafana.michaelschiemer.de @10.8.0.1 2>&1 || echo "Failed"
echo ""
echo "2. Grafana via public DNS (8.8.8.8):"
dig +short grafana.michaelschiemer.de @8.8.8.8 2>&1 || echo "Failed"
echo ""
echo "3. Grafana via system DNS:"
dig +short grafana.michaelschiemer.de 2>&1 || echo "Failed"
args:
executable: /bin/bash
register: dns_tests
ignore_errors: yes
failed_when: false
- name: Display DNS test results
debug:
msg: "{{ dns_tests.stdout_lines }}"
- name: Check firewall rules for WireGuard
shell: |
echo "=== Firewall Rules for WireGuard (port 51820) ==="
sudo ufw status | grep 51820 || sudo iptables -L -n | grep 51820 || echo "No firewall rules found"
echo ""
echo "=== Allowed IPs in WireGuard Config ==="
grep -E "AllowedIPs" /etc/wireguard/wg0.conf 2>&1 || echo "WireGuard config not found"
args:
executable: /bin/bash
register: firewall_info
ignore_errors: yes
failed_when: false
- name: Display firewall information
debug:
msg: "{{ firewall_info.stdout_lines }}"
- name: Check Traefik forwardedHeaders configuration
shell: |
cd ~/deployment/stacks/traefik
echo "=== Traefik forwardedHeaders Config ==="
grep -A 10 "forwardedHeaders:" traefik.yml || echo "Not found"
args:
executable: /bin/bash
register: forwarded_headers
ignore_errors: yes
failed_when: false
- name: Display forwardedHeaders configuration
debug:
msg: "{{ forwarded_headers.stdout_lines }}"
- name: Check Grafana middleware configuration
shell: |
cd ~/deployment/stacks/traefik/dynamic
echo "=== Grafana VPN Only Middleware ==="
grep -A 6 "grafana-vpn-only:" middlewares.yml || echo "Not found"
args:
executable: /bin/bash
register: grafana_middleware
ignore_errors: yes
failed_when: false
- name: Display Grafana middleware configuration
debug:
msg: "{{ grafana_middleware.stdout_lines }}"
- name: Check CoreDNS configuration
shell: |
cd ~/deployment/stacks/dns
echo "=== CoreDNS Corefile ==="
cat Corefile 2>&1 || echo "Not found"
args:
executable: /bin/bash
register: coredns_config
ignore_errors: yes
failed_when: false
- name: Display CoreDNS configuration
debug:
msg: "{{ coredns_config.stdout_lines }}"
- name: Test connection to Grafana from server via VPN IP
shell: |
echo "=== Test Connection to Grafana via VPN IP (10.8.0.1) ==="
curl -k -H "User-Agent: Mozilla/5.0" -s -o /dev/null -w "HTTP %{http_code}\n" https://10.8.0.1:443 -H "Host: grafana.michaelschiemer.de" 2>&1 || echo "Connection failed"
echo ""
echo "=== Test Connection via Domain ==="
curl -k -H "User-Agent: Mozilla/5.0" -s -o /dev/null -w "HTTP %{http_code}\n" https://grafana.michaelschiemer.de/ 2>&1 || echo "Connection failed"
args:
executable: /bin/bash
register: connection_tests
ignore_errors: yes
failed_when: false
- name: Display connection test results
debug:
msg: "{{ connection_tests.stdout_lines }}"
- name: Monitor Traefik access logs in real-time (for next request)
shell: |
echo "=== Instructions ==="
echo "1. Connect to VPN with your WireGuard client"
echo "2. Ensure DNS is set to 10.8.0.1 in WireGuard config"
echo "3. Access https://grafana.michaelschiemer.de in your browser"
echo "4. Check the ClientAddr in the access logs below"
echo ""
echo "=== Last Grafana Access Attempt ==="
tail -1 ~/deployment/stacks/traefik/logs/access.log 2>&1 | jq -r '[.ClientAddr, .ClientHost, .DownstreamStatus] | @tsv' || tail -1 ~/deployment/stacks/traefik/logs/access.log
args:
executable: /bin/bash
register: monitoring_info
ignore_errors: yes
failed_when: false
- name: Display monitoring instructions
debug:
msg: "{{ monitoring_info.stdout_lines }}"

View File

@@ -1,81 +0,0 @@
---
- name: Fix Gitea Actions Configuration (non-destructive)
hosts: production
become: no
gather_facts: yes
tasks:
- name: Check current Gitea Actions configuration
shell: |
docker exec gitea cat /data/gitea/conf/app.ini 2>/dev/null | grep -A 5 "\[actions\]" || echo "No actions section found"
register: current_config
changed_when: false
ignore_errors: yes
- name: Backup existing app.ini
shell: |
docker exec gitea cp /data/gitea/conf/app.ini /data/gitea/conf/app.ini.backup.$(date +%Y%m%d_%H%M%S)
changed_when: false
ignore_errors: yes
- name: Copy app.ini from container for editing
shell: |
docker cp gitea:/data/gitea/conf/app.ini /tmp/gitea_app_ini_$$
register: copy_result
- name: Update app.ini Actions section
shell: |
# Remove DEFAULT_ACTIONS_URL line if it exists in [actions] section
sed -i '/^\[actions\]/,/^\[/{ /^DEFAULT_ACTIONS_URL/d; }' /tmp/gitea_app_ini_$$
# Ensure ENABLED = true in [actions] section
if grep -q "^\[actions\]" /tmp/gitea_app_ini_$$; then
# Section exists - ensure ENABLED = true
sed -i '/^\[actions\]/,/^\[/{ s/^ENABLED.*/ENABLED = true/; }' /tmp/gitea_app_ini_$$
# If ENABLED line doesn't exist, add it
if ! grep -A 10 "^\[actions\]" /tmp/gitea_app_ini_$$ | grep -q "^ENABLED"; then
sed -i '/^\[actions\]/a ENABLED = true' /tmp/gitea_app_ini_$$
fi
else
# Section doesn't exist - add it
echo "" >> /tmp/gitea_app_ini_$$
echo "[actions]" >> /tmp/gitea_app_ini_$$
echo "ENABLED = true" >> /tmp/gitea_app_ini_$$
fi
args:
executable: /bin/bash
register: config_updated
- name: Copy updated app.ini back to container
shell: |
docker cp /tmp/gitea_app_ini_$$ gitea:/data/gitea/conf/app.ini
rm -f /tmp/gitea_app_ini_$$
when: config_updated.changed | default(false)
- name: Verify Actions configuration after update
shell: |
docker exec gitea cat /data/gitea/conf/app.ini | grep -A 5 "\[actions\]"
register: updated_config
changed_when: false
- name: Restart Gitea to apply configuration
shell: |
cd {{ stacks_base_path }}/gitea
docker compose restart gitea
when: config_updated.changed | default(false)
- name: Wait for Gitea to be ready
wait_for:
timeout: 60
when: config_updated.changed | default(false)
- name: Display configuration result
debug:
msg:
- "=== Gitea Actions Configuration Fixed ==="
- ""
- "Current [actions] configuration:"
- "{{ updated_config.stdout }}"
- ""
- "Configuration updated: {{ 'Yes' if config_updated.changed else 'No changes needed' }}"
- "Gitea restarted: {{ 'Yes' if config_updated.changed else 'No' }}"

View File

@@ -1,49 +0,0 @@
---
- name: Remove DEFAULT_ACTIONS_URL from Gitea configuration
hosts: production
become: no
gather_facts: yes
tasks:
- name: Check if DEFAULT_ACTIONS_URL exists in app.ini
shell: |
docker exec gitea cat /data/gitea/conf/app.ini 2>/dev/null | grep -q "DEFAULT_ACTIONS_URL" && echo "exists" || echo "not_found"
register: url_check
changed_when: false
ignore_errors: yes
- name: Remove DEFAULT_ACTIONS_URL from app.ini
shell: |
docker exec gitea sh -c 'sed -i "/^DEFAULT_ACTIONS_URL/d" /data/gitea/conf/app.ini'
when: url_check.stdout == "exists"
register: url_removed
- name: Restart Gitea to apply configuration changes
shell: |
cd {{ stacks_base_path }}/gitea
docker compose restart gitea
when: url_removed.changed | default(false)
- name: Wait for Gitea to be ready
wait_for:
timeout: 60
when: url_removed.changed | default(false)
- name: Verify Gitea Actions configuration
shell: |
docker exec gitea cat /data/gitea/conf/app.ini 2>/dev/null | grep -A 3 "\[actions\]" || echo "Config not accessible"
register: gitea_config
changed_when: false
ignore_errors: yes
- name: Display Gitea Actions configuration
debug:
msg:
- "=== Gitea Configuration Fix Complete ==="
- "DEFAULT_ACTIONS_URL removed: {{ 'Yes' if url_removed.changed else 'No (not found or already removed)' }}"
- "Container restarted: {{ 'Yes' if url_removed.changed else 'No' }}"
- ""
- "Current Actions configuration:"
- "{{ gitea_config.stdout if gitea_config.stdout else 'Could not read config' }}"
- ""
- "Gitea will now use its own instance for actions by default (no GitHub fallback)."

View File

@@ -1,143 +0,0 @@
---
- name: Fix Gitea Traefik Configuration - Remove Dynamic Config and Use Labels
hosts: production
gather_facts: yes
become: no
vars:
traefik_stack_path: "{{ stacks_base_path }}/traefik"
gitea_stack_path: "{{ stacks_base_path }}/gitea"
tasks:
- name: Backup dynamic Gitea configuration
shell: |
cd {{ traefik_stack_path }}/dynamic
if [ -f gitea.yml ]; then
cp gitea.yml gitea.yml.backup-$(date +%Y%m%d-%H%M%S)
echo "Backed up to gitea.yml.backup-$(date +%Y%m%d-%H%M%S)"
else
echo "File not found, nothing to backup"
fi
args:
executable: /bin/bash
register: backup_result
ignore_errors: yes
failed_when: false
- name: Display backup result
debug:
msg: "{{ backup_result.stdout_lines }}"
- name: Remove dynamic Gitea configuration
file:
path: "{{ traefik_stack_path }}/dynamic/gitea.yml"
state: absent
register: remove_config
- name: Restart Traefik to reload configuration
community.docker.docker_compose_v2:
project_src: "{{ traefik_stack_path }}"
state: present
pull: never
recreate: always
services:
- traefik
register: traefik_restart
when: remove_config.changed
- name: Wait for Traefik to be ready
wait_for:
port: 443
host: localhost
timeout: 30
delegate_to: localhost
when: traefik_restart.changed
ignore_errors: yes
- name: Check if Gitea docker-compose.yml already has Traefik labels
shell: |
grep -q "traefik.enable=true" {{ gitea_stack_path }}/docker-compose.yml && echo "Labels already present" || echo "Labels missing"
register: labels_check
ignore_errors: yes
failed_when: false
- name: Copy docker-compose.yml from local to ensure labels are present
copy:
src: "{{ playbook_dir }}/../../stacks/gitea/docker-compose.yml"
dest: "{{ gitea_stack_path }}/docker-compose.yml"
owner: "{{ ansible_user }}"
group: "{{ ansible_user }}"
mode: '0644'
register: labels_added
when: "'Labels missing' in labels_check.stdout"
- name: Recreate Gitea container with labels
community.docker.docker_compose_v2:
project_src: "{{ gitea_stack_path }}"
state: present
pull: never
recreate: always
remove_orphans: no
register: gitea_recreate
when: labels_added.changed
- name: Wait for Gitea to be healthy
shell: |
for i in {1..30}; do
if docker exec gitea curl -f http://localhost:3000/api/healthz >/dev/null 2>&1; then
echo "Gitea is healthy"
exit 0
fi
echo "Waiting for Gitea... ($i/30)"
sleep 2
done
echo "Health check timeout"
exit 1
args:
executable: /bin/bash
register: health_wait
ignore_errors: yes
failed_when: false
when: gitea_recreate.changed
- name: Display health wait result
debug:
msg: "{{ health_wait.stdout_lines }}"
when: gitea_recreate.changed
- name: Check Traefik service registration
shell: |
sleep 5 # Give Traefik time to discover
docker exec traefik wget -qO- http://localhost:8080/api/http/services 2>&1 | grep -i gitea || echo "Service not found (may take a few seconds)"
register: traefik_service
ignore_errors: yes
failed_when: false
- name: Display Traefik service registration
debug:
msg: "{{ traefik_service.stdout_lines }}"
- name: Test external Gitea access
shell: |
sleep 3 # Give Traefik time to update routing
curl -k -H "User-Agent: Mozilla/5.0" -s -o /dev/null -w "HTTP Status: %{http_code}\n" https://git.michaelschiemer.de/ 2>&1 || echo "Connection failed"
args:
executable: /bin/bash
register: external_test
ignore_errors: yes
failed_when: false
- name: Display external test result
debug:
msg: "{{ external_test.stdout_lines }}"
- name: Summary
debug:
msg:
- "=== FIX SUMMARY ==="
- "Dynamic config removed: {{ 'Yes' if remove_config.changed else 'Already removed' }}"
- "Labels added to docker-compose.yml: {{ 'Yes' if labels_added.changed else 'Already present' }}"
- "Gitea container recreated: {{ 'Yes' if gitea_recreate.changed else 'No' }}"
- ""
- "Gitea should now be accessible via https://git.michaelschiemer.de"
- "If issue persists, check Traefik logs for errors"

View File

@@ -1,139 +0,0 @@
---
- name: Fix Gitea Traefik Labels
hosts: production
gather_facts: yes
become: no
vars:
gitea_stack_path: "{{ stacks_base_path }}/gitea"
tasks:
- name: Check current Gitea container status
shell: |
cd {{ gitea_stack_path }}
docker compose ps gitea
args:
executable: /bin/bash
register: gitea_status_before
ignore_errors: yes
failed_when: false
- name: Display current status
debug:
msg: "{{ gitea_status_before.stdout_lines }}"
- name: Check current Traefik labels
shell: |
docker inspect gitea --format '{{ '{{' }}range .Config.Labels{{ '}}' }}{{ '{{' }}.Key{{ '}}' }}={{ '{{' }}.Value{{ '}}' }}{{ '{{' }}\n{{ '}}' }}{{ '{{' }}end{{ '}}' }}' 2>&1 | grep -i traefik || echo "No Traefik labels found"
register: current_labels
ignore_errors: yes
failed_when: false
- name: Display current labels
debug:
msg: "{{ current_labels.stdout_lines }}"
- name: Recreate Gitea container with Traefik labels
community.docker.docker_compose_v2:
project_src: "{{ gitea_stack_path }}"
state: present
pull: never
recreate: always
remove_orphans: no
register: gitea_recreate
- name: Wait for Gitea to be ready
wait_for:
port: 3000
host: localhost
timeout: 60
delegate_to: localhost
when: gitea_recreate.changed
ignore_errors: yes
- name: Wait for Gitea health check
shell: |
for i in {1..30}; do
if docker exec gitea curl -f http://localhost:3000/api/healthz >/dev/null 2>&1; then
echo "Gitea is healthy"
exit 0
fi
echo "Waiting for Gitea to be healthy... ($i/30)"
sleep 2
done
echo "Gitea health check timeout"
exit 1
args:
executable: /bin/bash
register: health_wait
ignore_errors: yes
failed_when: false
when: gitea_recreate.changed
- name: Display health wait result
debug:
msg: "{{ health_wait.stdout_lines }}"
when: gitea_recreate.changed
- name: Check new Gitea container status
shell: |
cd {{ gitea_stack_path }}
docker compose ps gitea
args:
executable: /bin/bash
register: gitea_status_after
ignore_errors: yes
failed_when: false
- name: Display new status
debug:
msg: "{{ gitea_status_after.stdout_lines }}"
- name: Check new Traefik labels
shell: |
docker inspect gitea --format '{{ '{{' }}range .Config.Labels{{ '}}' }}{{ '{{' }}.Key{{ '}}' }}={{ '{{' }}.Value{{ '}}' }}{{ '{{' }}\n{{ '}}' }}{{ '{{' }}end{{ '}}' }}' 2>&1 | grep -i traefik || echo "No Traefik labels found"
register: new_labels
ignore_errors: yes
failed_when: false
- name: Display new labels
debug:
msg: "{{ new_labels.stdout_lines }}"
- name: Check Traefik service registration
shell: |
docker exec traefik wget -qO- http://localhost:8080/api/http/services 2>&1 | grep -i gitea || echo "Gitea service not found (may take a few seconds to register)"
register: traefik_service
ignore_errors: yes
failed_when: false
- name: Display Traefik service registration
debug:
msg: "{{ traefik_service.stdout_lines }}"
- name: Test external Gitea access
shell: |
echo "Testing external access..."
sleep 5 # Give Traefik time to update
curl -k -H "User-Agent: Mozilla/5.0" -s -o /dev/null -w "HTTP Status: %{http_code}\n" https://git.michaelschiemer.de/ 2>&1 || echo "Connection failed"
args:
executable: /bin/bash
register: external_test
ignore_errors: yes
failed_when: false
- name: Display external test result
debug:
msg: "{{ external_test.stdout_lines }}"
- name: Summary
debug:
msg:
- "=== FIX SUMMARY ==="
- "Container recreated: {{ 'Yes' if gitea_recreate.changed else 'No' }}"
- "Traefik labels: {{ 'Fixed' if 'traefik' in new_labels.stdout|lower else 'Still missing' }}"
- ""
- "If the issue persists:"
- "1. Check Traefik logs: cd {{ stacks_base_path }}/traefik && docker compose logs traefik"
- "2. Verify Traefik can reach Gitea: docker exec traefik ping -c 2 gitea"
- "3. Check Gitea logs for errors: cd {{ gitea_stack_path }} && docker compose logs gitea"

View File

@@ -1,67 +0,0 @@
---
- name: Fix Grafana VPN Access - Update Middleware to ipAllowList
hosts: production
gather_facts: no
become: no
tasks:
- name: Backup current middlewares.yml
shell: |
cd ~/deployment/stacks/traefik/dynamic
cp middlewares.yml middlewares.yml.backup.$(date +%Y%m%d_%H%M%S)
args:
executable: /bin/bash
- name: Update middlewares.yml - Change ipWhiteList to ipAllowList
shell: |
cd ~/deployment/stacks/traefik/dynamic
sed -i 's/ipWhiteList:/ipAllowList:/g' middlewares.yml
sed -i 's/ipWhitelist/ipAllowList/g' middlewares.yml
# Validate YAML syntax
python3 -c "import yaml; yaml.safe_load(open('middlewares.yml')); print('YAML valid')"
args:
executable: /bin/bash
- name: Display updated grafana-vpn-only middleware
shell: |
cd ~/deployment/stacks/traefik/dynamic
grep -A 6 'grafana-vpn-only:' middlewares.yml
args:
executable: /bin/bash
register: grafana_middleware
- name: Show updated middleware
debug:
msg: "{{ grafana_middleware.stdout_lines }}"
- name: Restart Traefik to apply changes
command: docker compose restart traefik
args:
chdir: ~/deployment/stacks/traefik
register: traefik_restart
- name: Wait for Traefik to restart
pause:
seconds: 5
- name: Check Traefik logs for deprecation warnings
shell: |
cd ~/deployment/stacks/traefik
docker compose logs traefik --tail=20 2>&1 | grep -i 'allowlist\|whitelist\|deprecated' || echo "No warnings found"
args:
executable: /bin/bash
register: traefik_warnings
- name: Display Traefik warnings
debug:
msg: "{{ traefik_warnings.stdout_lines }}"
- name: Verify Traefik status
command: docker compose ps traefik
args:
chdir: ~/deployment/stacks/traefik
register: traefik_status
- name: Display Traefik status
debug:
msg: "{{ traefik_status.stdout_lines }}"

View File

@@ -1,80 +0,0 @@
---
- name: Fix Grafana VPN Routing and Remove Temporary IP Allow
hosts: production
gather_facts: no
become: no
tasks:
- name: Check recent Grafana access attempts
shell: |
cd ~/deployment/stacks/traefik
echo "=== Recent Grafana Access (Last 10 attempts) ==="
tail -50 logs/access.log | grep grafana | tail -10 | while read line; do
echo "$line" | grep -oP '"ClientHost":"[^"]*"' || echo "Could not parse"
done
args:
executable: /bin/bash
register: recent_access
ignore_errors: yes
failed_when: false
- name: Display recent access attempts
debug:
msg: "{{ recent_access.stdout_lines }}"
- name: Check if traffic comes from VPN
shell: |
cd ~/deployment/stacks/traefik
echo "=== Checking if recent traffic comes from VPN (10.8.0.0/24) ==="
tail -20 logs/access.log | grep grafana | tail -5 | grep -oP '"ClientHost":"[^"]*"' | sed 's/"ClientHost":"//;s/"//' | while read ip; do
if [[ "$ip" =~ ^10\.8\.0\.[0-9]+$ ]]; then
echo "? $ip -> VPN traffic (10.8.0.0/24)"
else
echo "? $ip -> Public IP (not VPN)"
fi
done
args:
executable: /bin/bash
register: vpn_check
ignore_errors: yes
failed_when: false
- name: Display VPN check results
debug:
msg: "{{ vpn_check.stdout_lines }}"
- name: Backup current middlewares.yml
shell: |
cd ~/deployment/stacks/traefik/dynamic
cp middlewares.yml middlewares.yml.backup.$(date +%Y%m%d_%H%M%S)
args:
executable: /bin/bash
when: false # Skip for now - we'll do this manually
- name: Check current middleware configuration
shell: |
cd ~/deployment/stacks/traefik/dynamic
echo "=== Current grafana-vpn-only Middleware ==="
grep -A 8 "grafana-vpn-only:" middlewares.yml
args:
executable: /bin/bash
register: middleware_config
ignore_errors: yes
failed_when: false
- name: Display middleware configuration
debug:
msg: "{{ middleware_config.stdout_lines }}"
- name: Instructions for removing temporary IP
debug:
msg:
- "=== TO REMOVE TEMPORARY IP ALLOWLIST ==="
- "1. Make sure VPN routing works (DNS = 10.8.0.1 or use hosts file)"
- "2. Test that traffic comes from VPN (ClientHost: 10.8.0.7)"
- "3. Remove temporary IP from middlewares.yml:"
- " cd ~/deployment/stacks/traefik/dynamic"
- " sed -i '/89.246.96.244\/32/d' middlewares.yml"
- "4. Restart Traefik:"
- " cd ~/deployment/stacks/traefik && docker compose restart traefik"
- "5. Test: With VPN = OK, Without VPN = 403"

View File

@@ -1,79 +0,0 @@
---
- name: Fix Nginx Upstream Configuration
hosts: production
gather_facts: yes
become: no
tasks:
- name: Check which nginx config files exist
shell: |
cd ~/deployment/stacks/staging
echo "=== Check nginx config files ==="
docker compose exec -T staging-nginx find /etc/nginx -name "*.conf" -type f 2>&1 | head -20
echo ""
echo "=== Check sites-enabled ==="
docker compose exec -T staging-nginx ls -la /etc/nginx/sites-enabled/ 2>&1 || echo "sites-enabled not found"
echo ""
echo "=== Check sites-available ==="
docker compose exec -T staging-nginx ls -la /etc/nginx/sites-available/ 2>&1 || echo "sites-available not found"
echo ""
echo "=== Check nginx.conf includes ==="
docker compose exec -T staging-nginx grep -E "include|conf.d|sites" /etc/nginx/nginx.conf 2>&1 | head -10
args:
executable: /bin/bash
register: nginx_config_check
ignore_errors: yes
failed_when: false
- name: Display nginx config check
debug:
msg: "{{ nginx_config_check.stdout_lines }}"
- name: Find all fastcgi_pass directives
shell: |
cd ~/deployment/stacks/staging
echo "=== Search for fastcgi_pass in all config files ==="
docker compose exec -T staging-nginx grep -r "fastcgi_pass" /etc/nginx/ 2>&1 || echo "Could not search"
args:
executable: /bin/bash
register: fastcgi_pass_search
ignore_errors: yes
failed_when: false
- name: Display fastcgi_pass search
debug:
msg: "{{ fastcgi_pass_search.stdout_lines }}"
- name: Fix all fastcgi_pass to use staging-app:9000
shell: |
cd ~/deployment/stacks/staging
echo "=== Fix fastcgi_pass in all config files ==="
docker compose exec -T staging-nginx sh -c "find /etc/nginx -name '*.conf' -type f -exec sed -i 's|fastcgi_pass 127.0.0.1:9000;|fastcgi_pass staging-app:9000;|g' {} \;" || echo "Fix failed"
docker compose exec -T staging-nginx sh -c "find /etc/nginx -name '*.conf' -type f -exec sed -i 's|fastcgi_pass localhost:9000;|fastcgi_pass staging-app:9000;|g' {} \;" || echo "Fix failed"
docker compose exec -T staging-nginx sh -c "find /etc/nginx -name '*.conf' -type f -exec sed -i 's|fastcgi_pass php-upstream;|fastcgi_pass staging-app:9000;|g' {} \;" || echo "Note: php-upstream should stay as is"
echo "=== Verify fix ==="
docker compose exec -T staging-nginx grep -r "fastcgi_pass" /etc/nginx/ 2>&1 | grep -v "staging-app" || echo "All fastcgi_pass now use staging-app"
args:
executable: /bin/bash
register: fix_result
ignore_errors: yes
failed_when: false
- name: Display fix result
debug:
msg: "{{ fix_result.stdout_lines }}"
- name: Reload nginx to apply changes
shell: |
cd ~/deployment/stacks/staging
docker compose exec -T staging-nginx nginx -t 2>&1 || echo "Config test failed"
docker compose restart staging-nginx || echo "Restart failed"
args:
executable: /bin/bash
register: nginx_reload
ignore_errors: yes
failed_when: false
- name: Display nginx reload result
debug:
msg: "{{ nginx_reload.stdout_lines }}"

View File

@@ -1,36 +0,0 @@
---
- name: Check and Fix PHP-FPM Configuration
hosts: production
gather_facts: yes
become: no
tasks:
- name: Check PHP-FPM www.conf configuration for allowed_clients
shell: |
cd ~/deployment/stacks/staging
echo "=== PHP-FPM www.conf listen.allowed_clients ==="
docker compose exec -T staging-app cat /usr/local/etc/php-fpm.d/www.conf 2>&1 | grep -E "(listen|allowed_clients|listen\.owner|listen\.group|listen\.mode)" | head -15
args:
executable: /bin/bash
register: fpm_config
ignore_errors: yes
failed_when: false
- name: Display PHP-FPM config
debug:
msg: "{{ fpm_config.stdout_lines }}"
- name: Check nginx error log for specific PHP-FPM errors
shell: |
cd ~/deployment/stacks/staging
echo "=== Nginx Error Log (all lines) ==="
docker compose logs --tail=200 staging-nginx 2>&1 | grep -iE "(502|bad gateway|upstream|php|fpm|connection)" || echo "No specific errors found"
args:
executable: /bin/bash
register: nginx_error_log
ignore_errors: yes
failed_when: false
- name: Display nginx error log
debug:
msg: "{{ nginx_error_log.stdout_lines }}"

View File

@@ -1,57 +0,0 @@
---
- name: Fix sites-available/default upstream configuration
hosts: production
gather_facts: yes
become: no
tasks:
- name: Check php-upstream definition in sites-available/default
shell: |
cd ~/deployment/stacks/staging
echo "=== Check upstream definition ==="
docker compose exec -T staging-nginx grep -A 3 "upstream php-upstream" /etc/nginx/sites-available/default 2>&1 || echo "No upstream found"
echo ""
echo "=== Full sites-available/default file ==="
docker compose exec -T staging-nginx cat /etc/nginx/sites-available/default 2>&1
args:
executable: /bin/bash
register: upstream_check
ignore_errors: yes
failed_when: false
- name: Display upstream check
debug:
msg: "{{ upstream_check.stdout_lines }}"
- name: Fix php-upstream in sites-available/default
shell: |
cd ~/deployment/stacks/staging
echo "=== Fix php-upstream definition ==="
docker compose exec -T staging-nginx sed -i 's|server 127.0.0.1:9000;|server staging-app:9000;|g' /etc/nginx/sites-available/default || echo "Fix 127.0.0.1 failed"
docker compose exec -T staging-nginx sed -i 's|server localhost:9000;|server staging-app:9000;|g' /etc/nginx/sites-available/default || echo "Fix localhost failed"
echo ""
echo "=== Verify fix ==="
docker compose exec -T staging-nginx grep -A 3 "upstream php-upstream" /etc/nginx/sites-available/default 2>&1 || echo "No upstream found"
args:
executable: /bin/bash
register: fix_upstream
ignore_errors: yes
failed_when: false
- name: Display fix result
debug:
msg: "{{ fix_upstream.stdout_lines }}"
- name: Reload nginx
shell: |
cd ~/deployment/stacks/staging
docker compose exec -T staging-nginx nginx -t && docker compose restart staging-nginx || echo "Reload failed"
args:
executable: /bin/bash
register: reload_nginx
ignore_errors: yes
failed_when: false
- name: Display reload result
debug:
msg: "{{ reload_nginx.stdout_lines }}"

View File

@@ -1,52 +0,0 @@
---
- name: Quick Fix Staging 502 Bad Gateway
hosts: production
gather_facts: yes
become: no
tasks:
- name: Fix php-upstream in sites-available/default
shell: |
cd ~/deployment/stacks/staging
echo "=== Fixing nginx upstream configuration ==="
docker compose exec -T staging-nginx sed -i '/upstream php-upstream {/,/}/s|server 127.0.0.1:9000;|server staging-app:9000;|g' /etc/nginx/sites-available/default || echo "Fix 127.0.0.1 failed"
docker compose exec -T staging-nginx sed -i '/upstream php-upstream {/,/}/s|server localhost:9000;|server staging-app:9000;|g' /etc/nginx/sites-available/default || echo "Fix localhost failed"
echo "=== Verifying fix ==="
docker compose exec -T staging-nginx grep -A 3 "upstream php-upstream" /etc/nginx/sites-available/default
args:
executable: /bin/bash
register: fix_result
ignore_errors: yes
failed_when: false
- name: Display fix result
debug:
msg: "{{ fix_result.stdout_lines }}"
- name: Reload nginx
shell: |
cd ~/deployment/stacks/staging
docker compose exec -T staging-nginx nginx -t && docker compose restart staging-nginx
args:
executable: /bin/bash
register: reload_result
ignore_errors: yes
failed_when: false
- name: Display reload result
debug:
msg: "{{ reload_result.stdout_lines }}"
- name: Test if fix worked
shell: |
sleep 3
curl -H "User-Agent: Mozilla/5.0" -s -o /dev/null -w "%{http_code}" https://staging.michaelschiemer.de/ || echo "502"
args:
executable: /bin/bash
register: test_result
ignore_errors: yes
failed_when: false
- name: Display test result
debug:
msg: "HTTP Status: {{ test_result.stdout }} (200 = OK, 502 = Still broken)"

View File

@@ -1,88 +0,0 @@
---
- name: Fix and Verify Staging 502 - Complete Fix
hosts: production
gather_facts: yes
become: no
tasks:
- name: Check current upstream configuration
shell: |
cd ~/deployment/stacks/staging
echo "=== Current upstream config in sites-available/default ==="
docker compose exec -T staging-nginx grep -A 3 "upstream php-upstream" /etc/nginx/sites-available/default 2>&1 || echo "Could not read config"
args:
executable: /bin/bash
register: current_config
ignore_errors: yes
failed_when: false
- name: Display current config
debug:
msg: "{{ current_config.stdout_lines }}"
- name: Fix upstream configuration (multiple methods)
shell: |
cd ~/deployment/stacks/staging
echo "=== Fixing nginx upstream configuration ==="
# Method 1: Fix in upstream block
docker compose exec -T staging-nginx sed -i '/upstream php-upstream {/,/}/s|server 127.0.0.1:9000;|server staging-app:9000;|g' /etc/nginx/sites-available/default || echo "Method 1 failed"
docker compose exec -T staging-nginx sed -i '/upstream php-upstream {/,/}/s|server localhost:9000;|server staging-app:9000;|g' /etc/nginx/sites-available/default || echo "Method 2 failed"
# Method 2: Fix any fastcgi_pass
docker compose exec -T staging-nginx sed -i 's|fastcgi_pass 127.0.0.1:9000;|fastcgi_pass php-upstream;|g' /etc/nginx/sites-available/default || echo "Method 3 failed"
docker compose exec -T staging-nginx sed -i 's|fastcgi_pass localhost:9000;|fastcgi_pass php-upstream;|g' /etc/nginx/sites-available/default || echo "Method 4 failed"
# Method 3: Replace entire upstream block if it still has wrong value
docker compose exec -T staging-nginx sh -c "grep -q 'server 127.0.0.1:9000' /etc/nginx/sites-available/default && sed -i '/upstream php-upstream {/,/}/c\upstream php-upstream {\n server staging-app:9000;\n}' /etc/nginx/sites-available/default || echo 'No 127.0.0.1 found'" || echo "Method 5 failed"
echo "=== Verification ==="
docker compose exec -T staging-nginx grep -A 3 "upstream php-upstream" /etc/nginx/sites-available/default
args:
executable: /bin/bash
register: fix_result
ignore_errors: yes
failed_when: false
- name: Display fix result
debug:
msg: "{{ fix_result.stdout_lines }}"
- name: Test nginx configuration
shell: |
cd ~/deployment/stacks/staging
docker compose exec -T staging-nginx nginx -t 2>&1
args:
executable: /bin/bash
register: nginx_test
ignore_errors: yes
failed_when: false
- name: Display nginx test result
debug:
msg: "{{ nginx_test.stdout_lines }}"
- name: Restart nginx
shell: |
cd ~/deployment/stacks/staging
docker compose restart staging-nginx
sleep 3
args:
executable: /bin/bash
register: restart_result
ignore_errors: yes
failed_when: false
- name: Display restart result
debug:
msg: "{{ restart_result.stdout_lines }}"
- name: Test connection
shell: |
sleep 2
curl -H "User-Agent: Mozilla/5.0" -s -o /dev/null -w "HTTP Status: %{http_code}" https://staging.michaelschiemer.de/ || echo "502"
args:
executable: /bin/bash
register: test_result
ignore_errors: yes
failed_when: false
- name: Display test result
debug:
msg: "Final HTTP Status: {{ test_result.stdout }} (200/404 = OK, 502 = Still broken)"

View File

@@ -1,71 +0,0 @@
---
- name: Complete Fix for Staging (502 + Git)
hosts: production
gather_facts: yes
become: no
tasks:
- name: Check if index.php exists in staging-app
shell: |
cd ~/deployment/stacks/staging && docker compose exec -T staging-app sh -c "ls -la /var/www/html/public/index.php 2>&1" || echo "index.php NOT FOUND"
args:
executable: /bin/bash
register: index_php_app
changed_when: false
ignore_errors: yes
- name: Check if index.php exists in staging-nginx
shell: |
cd ~/deployment/stacks/staging && docker compose exec -T staging-nginx sh -c "ls -la /var/www/html/public/index.php 2>&1" || echo "index.php NOT FOUND"
args:
executable: /bin/bash
register: index_php_nginx
changed_when: false
ignore_errors: yes
- name: Check PHP-FPM listen configuration
shell: |
cd ~/deployment/stacks/staging && docker compose exec -T staging-app sh -c "cat /usr/local/etc/php-fpm.d/www.conf | grep -E '(^listen|^listen.allowed_clients|^listen.owner|^listen.group|^user|^group)' | head -20" || echo "Could not read config"
args:
executable: /bin/bash
register: php_fpm_full_config
changed_when: false
ignore_errors: yes
- name: Test actual HTTP request to staging-app
shell: |
cd ~/deployment/stacks/staging && docker compose exec -T staging-nginx sh -c "curl -v http://staging-app:9000/index.php 2>&1 | head -30" || echo "HTTP test failed"
args:
executable: /bin/bash
register: http_test
changed_when: false
ignore_errors: yes
- name: Fix Git ownership permanently in staging-app entrypoint
shell: |
cd ~/deployment/stacks/staging && docker compose exec -T staging-app sh -c "cd /var/www/html && git config --global --add safe.directory /var/www/html && git config --global --get-all safe.directory" || echo "Git config failed"
args:
executable: /bin/bash
register: git_config_check
changed_when: false
ignore_errors: yes
- name: Display index.php check in app
debug:
msg: "{{ index_php_app.stdout_lines }}"
- name: Display index.php check in nginx
debug:
msg: "{{ index_php_nginx.stdout_lines }}"
- name: Display PHP-FPM config
debug:
msg: "{{ php_fpm_full_config.stdout_lines }}"
- name: Display HTTP test
debug:
msg: "{{ http_test.stdout_lines }}"
- name: Display git config
debug:
msg: "{{ git_config_check.stdout_lines }}"

View File

@@ -1,138 +0,0 @@
---
- name: Fix Staging docker-compose.staging.yml with Redis Secrets
hosts: production
gather_facts: yes
become: no
tasks:
- name: Check current docker-compose.staging.yml on server
shell: |
cd ~/deployment/stacks/staging
echo "=== Current staging-app environment (REDIS-related) ==="
grep -A 50 "staging-app:" docker-compose.staging.yml | grep -A 30 "environment:" | grep -E "(REDIS_|CACHE_|SESSION_)" || echo "No Redis env vars found"
echo ""
echo "=== Current secrets section for staging-app ==="
grep -A 10 "staging-app:" docker-compose.staging.yml | grep -A 15 "secrets:" || echo "Secrets section not found"
echo ""
echo "=== Secrets definitions at bottom ==="
tail -30 docker-compose.staging.yml | grep -A 10 "secrets:" || echo "Secrets definitions not found"
register: current_config
ignore_errors: yes
- name: Display current configuration
debug:
msg: "{{ current_config.stdout_lines }}"
- name: Get repository root path
shell: |
cd "{{ playbook_dir }}/../../.."
pwd
register: repo_root
changed_when: false
delegate_to: localhost
become: no
- name: Display repository root
debug:
msg: "Repository root: {{ repo_root.stdout }}"
- name: Check if docker-compose.staging.yml exists in repository
stat:
path: "{{ repo_root.stdout }}/docker-compose.staging.yml"
register: compose_file_stat
delegate_to: localhost
become: no
- name: Read docker-compose.staging.yml from repository
slurp:
src: "{{ repo_root.stdout }}/docker-compose.staging.yml"
register: compose_file_content
when: compose_file_stat.stat.exists
delegate_to: localhost
become: no
- name: Write docker-compose.staging.yml to server
copy:
content: "{{ compose_file_content.content | b64decode }}"
dest: "~/deployment/stacks/staging/docker-compose.staging.yml"
mode: '0644'
when: compose_file_stat.stat.exists
- name: Fail if docker-compose.staging.yml not found
fail:
msg: "Could not find docker-compose.staging.yml at {{ repo_root.stdout }}/docker-compose.staging.yml. Please ensure the file exists in the repository root."
when: not compose_file_stat.stat.exists
- name: Verify updated docker-compose.staging.yml on server
shell: |
cd ~/deployment/stacks/staging
echo "=== Updated staging-app environment (REDIS-related) ==="
grep -A 50 "staging-app:" docker-compose.staging.yml | grep -A 30 "environment:" | grep -E "(REDIS_|CACHE_|SESSION_|_FILE)" || echo "No Redis env vars found"
echo ""
echo "=== Updated secrets section for staging-app ==="
grep -A 10 "staging-app:" docker-compose.staging.yml | grep -A 15 "secrets:" || echo "Secrets section not found"
echo ""
echo "=== Secrets definitions at bottom ==="
tail -30 docker-compose.staging.yml | grep -A 10 "redis_password:" || echo "Secrets definitions not found"
register: updated_config
ignore_errors: yes
- name: Display updated configuration
debug:
msg: "{{ updated_config.stdout_lines }}"
- name: Restart staging containers to apply changes
shell: |
cd ~/deployment/stacks/staging
docker compose -f docker-compose.base.yml -f docker-compose.staging.yml up -d --force-recreate
register: restart_result
ignore_errors: yes
- name: Display restart result
debug:
msg: "{{ restart_result.stdout_lines }}"
- name: Wait for containers to start
pause:
seconds: 10
- name: Check container status after fix
shell: |
cd ~/deployment/stacks/staging
docker compose -f docker-compose.base.yml -f docker-compose.staging.yml ps
register: container_status
ignore_errors: yes
- name: Display container status
debug:
msg: "{{ container_status.stdout_lines }}"
- name: Verify REDIS_PASSWORD_FILE in container
shell: |
echo "=== Checking REDIS_PASSWORD_FILE in staging-app container ==="
docker exec staging-app env | grep REDIS_PASSWORD || echo "REDIS_PASSWORD variables not found"
docker exec staging-app env | grep "_FILE" | grep REDIS || echo "REDIS_PASSWORD_FILE not found"
echo ""
echo "=== Checking /run/secrets/redis_password ==="
docker exec staging-app ls -la /run/secrets/redis_password 2>&1 || echo "Secret file not found"
register: container_check
ignore_errors: yes
- name: Display container verification
debug:
msg: "{{ container_check.stdout_lines }}"
- name: Summary
debug:
msg:
- "========================================"
- "FIX SUMMARY"
- "========================================"
- "1. Updated docker-compose.staging.yml on server"
- "2. Restarted staging containers"
- "3. Verified REDIS_PASSWORD_FILE configuration"
- ""
- "Next steps:"
- "- Check staging-app logs: docker logs staging-app"
- "- Test Redis connection from staging-app container"
- "- Verify no more NOAUTH errors in logs"

View File

@@ -1,83 +0,0 @@
---
- name: Fix Staging Issues (502 Error + Git Ownership)
hosts: production
gather_facts: yes
become: no
tasks:
- name: Get recent nginx error logs
shell: |
cd ~/deployment/stacks/staging && docker compose logs --tail=100 staging-nginx 2>&1
args:
executable: /bin/bash
register: nginx_all_logs
changed_when: false
- name: Test PHP-FPM connection with curl
shell: |
cd ~/deployment/stacks/staging && docker compose exec -T staging-nginx sh -c "curl -v http://staging-app:9000 2>&1 | head -20" || echo "Connection test completed"
args:
executable: /bin/bash
register: curl_test
changed_when: false
ignore_errors: yes
- name: Check PHP-FPM configuration
shell: |
cd ~/deployment/stacks/staging && docker compose exec -T staging-app sh -c "cat /usr/local/etc/php-fpm.d/www.conf | grep -E '(listen|listen.allowed_clients)' | head -10" || echo "Could not read PHP-FPM config"
args:
executable: /bin/bash
register: php_fpm_config
changed_when: false
ignore_errors: yes
- name: Fix Git ownership issue in staging-app
shell: |
cd ~/deployment/stacks/staging && docker compose exec -T staging-app sh -c "cd /var/www/html && git config --global --add safe.directory /var/www/html && echo 'Git safe.directory configured'" || echo "Git config failed"
args:
executable: /bin/bash
register: git_fix
changed_when: false
ignore_errors: yes
- name: Test if nginx can reach PHP-FPM
shell: |
cd ~/deployment/stacks/staging && docker compose exec -T staging-nginx sh -c "echo 'GET /index.php HTTP/1.0' | nc staging-app 9000 2>&1 | head -10" || docker compose exec -T staging-nginx sh -c "timeout 2 bash -c '</dev/tcp/staging-app/9000' && echo 'Port 9000 is reachable' || echo 'Port 9000 not reachable'" || echo "Could not test connection"
args:
executable: /bin/bash
register: port_test
changed_when: false
ignore_errors: yes
- name: Check if nginx can read public directory
shell: |
cd ~/deployment/stacks/staging && docker compose exec -T staging-nginx sh -c "ls -la /var/www/html/public/ | head -10" || echo "Could not list public directory"
args:
executable: /bin/bash
register: public_dir_check
changed_when: false
ignore_errors: yes
- name: Display all nginx logs
debug:
msg: "{{ nginx_all_logs.stdout_lines[-30:] }}"
- name: Display curl test
debug:
msg: "{{ curl_test.stdout_lines }}"
- name: Display PHP-FPM config
debug:
msg: "{{ php_fpm_config.stdout_lines }}"
- name: Display git fix result
debug:
msg: "{{ git_fix.stdout_lines }}"
- name: Display port test
debug:
msg: "{{ port_test.stdout_lines }}"
- name: Display public directory check
debug:
msg: "{{ public_dir_check.stdout_lines }}"

View File

@@ -1,82 +0,0 @@
---
- name: Fix Staging Secrets Permissions
hosts: production
gather_facts: yes
become: no
tasks:
- name: Check secrets file permissions in staging-app container
shell: |
echo "=== Checking /run/secrets/redis_password permissions ==="
docker exec staging-app ls -la /run/secrets/redis_password 2>&1 || echo "File not found"
echo ""
echo "=== Checking /run/secrets directory permissions ==="
docker exec staging-app ls -la /run/secrets/ | head -10
echo ""
echo "=== Current user ==="
docker exec staging-app whoami
echo ""
echo "=== Testing file read access ==="
docker exec staging-app cat /run/secrets/redis_password 2>&1 | head -c 20 || echo "Cannot read file"
echo "..."
register: permissions_check
ignore_errors: yes
- name: Display permissions check
debug:
msg: "{{ permissions_check.stdout_lines }}"
- name: Try to fix permissions via entrypoint modification
shell: |
cd ~/deployment/stacks/staging
# Check if staging-app has an entrypoint that can be modified
grep -A 5 "staging-app:" docker-compose.staging.yml | grep -A 10 "entrypoint:" | head -5
register: entrypoint_check
ignore_errors: yes
- name: Display entrypoint check
debug:
msg: "{{ entrypoint_check.stdout_lines }}"
- name: Check if we can read secrets as root in container
shell: |
echo "=== Reading secret as root ==="
docker exec -u root staging-app cat /run/secrets/redis_password 2>&1 | head -c 20 || echo "Cannot read even as root"
echo "..."
echo ""
echo "=== Checking file owner ==="
docker exec -u root staging-app stat -c "%U:%G %a" /run/secrets/redis_password 2>&1 || echo "Cannot stat"
register: root_check
ignore_errors: yes
- name: Display root check
debug: "{{ root_check.stdout_lines }}"
debug:
msg: "{{ root_check.stdout_lines }}"
- name: Check container user configuration
shell: |
cd ~/deployment/stacks/staging
echo "=== staging-app user configuration ==="
grep -A 20 "staging-app:" docker-compose.staging.yml | grep -E "(user:|USER)" || echo "No user specified (defaults to www-data)"
register: user_config
ignore_errors: yes
- name: Display user configuration
debug:
msg: "{{ user_config.stdout_lines }}"
- name: Summary and Recommendations
debug:
msg:
- "========================================"
- "PERMISSIONS ISSUE ANALYSIS"
- "========================================"
- "The secret file exists but is not readable by the PHP process."
- ""
- "Possible solutions:"
- "1. Run PHP-FPM as root (NOT RECOMMENDED for security)"
- "2. Create a wrapper script that reads secrets as root and exports them"
- "3. Modify entrypoint to chmod/chown secrets (may not work on /run/secrets)"
- "4. Use environment variables instead of file-based secrets"
- "5. Modify docker-compose to use a different secrets mount path with proper permissions"

View File

@@ -1,142 +0,0 @@
---
- name: Fix Traefik Configuration
hosts: production
gather_facts: no
become: no
tasks:
- name: Backup current traefik.yml
shell: |
cd ~/deployment/stacks/traefik
cp traefik.yml traefik.yml.backup.$(date +%Y%m%d_%H%M%S)
args:
executable: /bin/bash
- name: Create correct traefik.yml
copy:
content: |
# Static Configuration for Traefik
# Global Configuration
global:
checkNewVersion: true
sendAnonymousUsage: false
# API and Dashboard
# Note: insecure: false means API is only accessible via HTTPS (through Traefik itself)
# No port 8080 needed - dashboard accessible via HTTPS at traefik.michaelschiemer.de
api:
dashboard: true
insecure: false
# Dashboard accessible via HTTPS router (no separate HTTP listener needed)
# Entry Points
entryPoints:
web:
address: ":80"
# No global redirect - ACME challenges need HTTP access
# Redirects are handled per-router via middleware
websecure:
address: ":443"
http:
tls:
certResolver: letsencrypt
domains:
- main: michaelschiemer.de
sans:
- "*.michaelschiemer.de"
# Certificate Resolvers
certificatesResolvers:
letsencrypt:
acme:
email: kontakt@michaelschiemer.de
storage: /acme.json
caServer: https://acme-v02.api.letsencrypt.org/directory
# Use HTTP-01 challenge (requires port 80 accessible)
httpChallenge:
entryPoint: web
# Uncomment for DNS challenge (requires DNS provider)
# dnsChallenge:
# provider: cloudflare
# delayBeforeCheck: 30
# Providers
providers:
docker:
endpoint: "unix:///var/run/docker.sock"
exposedByDefault: false
# Network mode is 'host', so we don't specify a network here
# Traefik can reach containers directly via their IPs in host network mode
watch: true
file:
directory: /dynamic
watch: true
# Forwarded Headers Configuration
# This ensures Traefik correctly identifies the real client IP
# Important for VPN access where requests come from WireGuard interface
forwardedHeaders:
trustedIPs:
- "10.8.0.0/24" # WireGuard VPN network
- "127.0.0.1/32" # Localhost
- "172.17.0.0/16" # Docker bridge network
- "172.18.0.0/16" # Docker user-defined networks
insecure: false
# Logging
log:
level: INFO
filePath: /logs/traefik.log
format: json
# Access Logs
accessLog:
filePath: /logs/access.log
format: json
bufferingSize: 100
filters:
statusCodes:
- "400-499"
- "500-599"
# Metrics
metrics:
prometheus:
addEntryPointsLabels: true
addRoutersLabels: true
addServicesLabels: true
# Ping
ping:
entryPoint: web
dest: ~/deployment/stacks/traefik/traefik.yml
mode: '0644'
- name: Validate YAML syntax
command: python3 -c "import yaml; yaml.safe_load(open('traefik.yml')); print('YAML valid')"
args:
chdir: ~/deployment/stacks/traefik
changed_when: false
- name: Restart Traefik
command: docker compose up -d traefik
args:
chdir: ~/deployment/stacks/traefik
register: traefik_restart
- name: Wait for Traefik to start
pause:
seconds: 5
- name: Check Traefik status
command: docker compose ps traefik
args:
chdir: ~/deployment/stacks/traefik
register: traefik_status
- name: Display Traefik status
debug:
msg: "{{ traefik_status.stdout_lines }}"

View File

@@ -1,95 +0,0 @@
---
- name: Live Monitor Grafana Access - Watch Traefik Logs in Real-Time
hosts: production
gather_facts: no
become: no
tasks:
- name: Clear previous Grafana access attempts count
shell: |
echo "Starting live monitoring. Make a request to https://grafana.michaelschiemer.de now!"
echo "Waiting 10 seconds for you to make a request..."
sleep 10
args:
executable: /bin/bash
- name: Show recent Grafana access attempts
shell: |
cd ~/deployment/stacks/traefik
echo "=== Last 5 Grafana Access Attempts ==="
tail -100 logs/access.log | grep -i grafana | tail -5
args:
executable: /bin/bash
register: recent_access
ignore_errors: yes
failed_when: false
- name: Display recent access attempts
debug:
msg: "{{ recent_access.stdout_lines }}"
- name: Check current client IP pattern
shell: |
cd ~/deployment/stacks/traefik
echo "=== Client IPs in recent Grafana requests ==="
tail -50 logs/access.log | grep -i grafana | tail -10 | grep -oP '"ClientHost":"[^"]*"' | head -5
args:
executable: /bin/bash
register: client_ips
ignore_errors: yes
failed_when: false
- name: Display client IPs
debug:
msg: "{{ client_ips.stdout_lines }}"
- name: Extract and check client IPs
shell: |
cd ~/deployment/stacks/traefik
echo "=== Checking if client IPs are in VPN range (10.8.0.0/24) ==="
tail -20 logs/access.log | grep -i grafana | tail -3 | grep -oP '"ClientHost":"[^"]*"' | sed 's/"ClientHost":"//;s/"//' | while read ip; do
if [[ "$ip" =~ ^10\.8\.0\.[0-9]+$ ]]; then
echo "$ip -> In VPN range (10.8.0.0/24): YES"
else
echo "$ip -> In VPN range (10.8.0.0/24): NO (this is the problem!)"
fi
done
args:
executable: /bin/bash
register: vpn_check
ignore_errors: yes
failed_when: false
- name: Display VPN range check
debug:
msg: "{{ vpn_check.stdout_lines }}"
- name: Show Traefik middleware errors
shell: |
cd ~/deployment/stacks/traefik
echo "=== Traefik Middleware Errors (if any) ==="
tail -50 logs/traefik.log | grep -iE "(grafana|ipallowlist|403|middleware)" | tail -10 || echo "No middleware errors found"
args:
executable: /bin/bash
register: middleware_errors
ignore_errors: yes
failed_when: false
- name: Display middleware errors
debug:
msg: "{{ middleware_errors.stdout_lines }}"
- name: Verify middleware configuration
shell: |
cd ~/deployment/stacks/traefik/dynamic
echo "=== Current grafana-vpn-only Middleware ==="
grep -A 6 "grafana-vpn-only:" middlewares.yml
args:
executable: /bin/bash
register: middleware_config
ignore_errors: yes
failed_when: false
- name: Display middleware configuration
debug: |
msg: "{{ middleware_config.stdout_lines }}"

View File

@@ -1,75 +0,0 @@
---
- name: Monitor Grafana Client IP - Wait for Next Request
hosts: production
gather_facts: no
become: no
tasks:
- name: Instructions
debug:
msg:
- "=== LIVE MONITORING ==="
- "Bitte mache JETZT einen Zugriff auf https://grafana.michaelschiemer.de im Browser"
- "Ich warte 30 Sekunden und pr?fe dann die Logs..."
- ""
- name: Wait for access attempt
pause:
seconds: 30
- name: Check recent Grafana access attempts
shell: |
cd ~/deployment/stacks/traefik
echo "=== Last 10 Grafana Access Attempts ==="
tail -100 logs/access.log | grep -i grafana | tail -10
args:
executable: /bin/bash
register: recent_access
ignore_errors: yes
failed_when: false
- name: Extract client IPs
shell: |
cd ~/deployment/stacks/traefik
echo "=== Client IPs in recent requests ==="
tail -20 logs/access.log | grep -i grafana | tail -10 | grep -oP '"ClientHost":"[^"]*"' | sed 's/"ClientHost":"//;s/"//' | sort -u
args:
executable: /bin/bash
register: client_ips
ignore_errors: yes
failed_when: false
- name: Display client IPs
debug:
msg: "{{ client_ips.stdout_lines }}"
- name: Check if IP is VPN IP
shell: |
cd ~/deployment/stacks/traefik
tail -10 logs/access.log | grep -i grafana | tail -5 | grep -oP '"ClientHost":"[^"]*"' | sed 's/"ClientHost":"//;s/"//' | while read ip; do
if [[ "$ip" =~ ^10\.8\.0\.[0-9]+$ ]]; then
echo "? $ip -> VPN IP (10.8.0.0/24) - Traffic kommt ?ber VPN!"
else
echo "? $ip -> Public IP (nicht VPN) - Traffic kommt NICHT ?ber VPN"
fi
done
args:
executable: /bin/bash
register: vpn_check
ignore_errors: yes
failed_when: false
- name: Display VPN check
debug:
msg: "{{ vpn_check.stdout_lines }}"
- name: Recommendations
debug:
msg:
- ""
- "=== ERGEBNIS ==="
- "Wenn ClientHost: 10.8.0.7 (VPN-IP) ? Traffic kommt ?ber VPN ?"
- "Dann k?nnen wir die tempor?re IP-Erlaubnis entfernen!"
- ""
- "Wenn ClientHost: 89.246.96.244 (?ffentliche IP) ? Traffic kommt NICHT ?ber VPN ?"
- "Dann m?ssen wir VPN-Routing noch weiter fixen"

View File

@@ -1,80 +0,0 @@
---
- name: Monitor Grafana Access Live - Check Latest Request
hosts: production
gather_facts: no
become: no
tasks:
- name: Get timestamp of last log entry
shell: |
cd ~/deployment/stacks/traefik
tail -1 logs/access.log | grep -oP '"time":"[^"]*"'
args:
executable: /bin/bash
register: last_log_time
ignore_errors: yes
failed_when: false
- name: Display last log time
debug:
msg: "{{ last_log_time.stdout }}"
- name: Get last 30 Grafana access attempts
shell: |
cd ~/deployment/stacks/traefik
tail -1000 logs/access.log | grep -i grafana | tail -30
args:
executable: /bin/bash
register: grafana_logs
ignore_errors: yes
failed_when: false
- name: Extract client IPs from last 10 Grafana requests
shell: |
cd ~/deployment/stacks/traefik
tail -200 logs/access.log | grep -i grafana | tail -10 | grep -oP '"ClientHost":"[^"]*"' | sed 's/"ClientHost":"//;s/"//' | sort -u
args:
executable: /bin/bash
register: client_ips
ignore_errors: yes
failed_when: false
- name: Display client IPs found
debug:
msg: "{{ client_ips.stdout_lines }}"
- name: Analyze last 5 Grafana requests
shell: |
cd ~/deployment/stacks/traefik
tail -100 logs/access.log | grep -i grafana | tail -5 | while IFS= read -r line; do
time=$(echo "$line" | grep -oP '"time":"[^"]*"' | sed 's/"time":"//;s/"//' | cut -d'T' -f2 | cut -d'+' -f1)
client=$(echo "$line" | grep -oP '"ClientHost":"[^"]*"' | sed 's/"ClientHost":"//;s/"//')
status=$(echo "$line" | grep -oP '"DownstreamStatus":[0-9]+' | sed 's/"DownstreamStatus"://')
if [[ "$client" =~ ^10\.8\.0\.[0-9]+$ ]]; then
echo "$time | ClientHost: $client | Status: $status ? VPN-IP"
elif [[ "$client" == "89.246.96.244" ]]; then
echo "$time | ClientHost: $client | Status: $status ? Public IP"
else
echo "$time | ClientHost: $client | Status: $status ? Unknown"
fi
done
args:
executable: /bin/bash
register: analysis
ignore_errors: yes
failed_when: false
- name: Display analysis
debug:
msg: "{{ analysis.stdout_lines }}"
- name: Recommendations
debug:
msg:
- ""
- "=== ERGEBNIS ==="
- "Wenn ClientHost: 10.8.0.7 ? Traffic kommt ?ber VPN ?"
- "? Dann k?nnen wir die tempor?re IP-Erlaubnis entfernen!"
- ""
- "Wenn ClientHost: 89.246.96.244 ? Traffic kommt NICHT ?ber VPN ?"
- "? Dann m?ssen wir weiter debuggen (Route, AllowedIPs, etc.)"

View File

@@ -1,86 +0,0 @@
---
- name: Monitor Grafana Access After Firewall Changes
hosts: production
gather_facts: no
become: no
tasks:
- name: Instructions
debug:
msg:
- "=== LIVE MONITORING NACH FIREWALL-?NDERUNGEN ==="
- "Firewall-Regeln wurden erstellt ?"
- "WireGuard ist verbunden ?"
- ""
- "Bitte mache JETZT einen neuen Zugriff auf https://grafana.michaelschiemer.de im Browser"
- "Ich warte 20 Sekunden und pr?fe dann die Logs..."
- ""
- name: Get current log timestamp
shell: |
cd ~/deployment/stacks/traefik
tail -1 logs/access.log | grep -oP '"time":"[^"]*"'
args:
executable: /bin/bash
register: current_timestamp
failed_when: false
- name: Display current timestamp
debug:
msg: "Letztes Log: {{ current_timestamp.stdout }}"
- name: Wait for access attempt
pause:
seconds: 20
- name: Check for new Grafana access
shell: |
cd ~/deployment/stacks/traefik
tail -200 logs/access.log | grep -i grafana | tail -20 | while IFS= read -r line; do
time=$(echo "$line" | grep -oP '"time":"[^"]*"' | sed 's/"time":"//;s/"//' | cut -d'T' -f2 | cut -d'+' -f1 | cut -d':' -f1-2)
client=$(echo "$line" | grep -oP '"ClientHost":"[^"]*"' | sed 's/"ClientHost":"//;s/"//')
status=$(echo "$line" | grep -oP '"DownstreamStatus":[0-9]+' | sed 's/"DownstreamStatus"://')
if echo "$client" | grep -q '^10\.8\.0\.'; then
echo "$time | ClientHost: $client | Status: $status ? VPN-IP (Traffic kommt ?ber VPN!)"
elif [ "$client" = "89.246.96.244" ]; then
echo "$time | ClientHost: $client | Status: $status ? Public IP (Traffic kommt NICHT ?ber VPN)"
else
echo "$time | ClientHost: $client | Status: $status ? Unknown IP"
fi
done
args:
executable: /bin/bash
register: analysis
ignore_errors: yes
failed_when: false
- name: Display analysis
debug:
msg: "{{ analysis.stdout_lines }}"
- name: Get unique client IPs from last 10 requests
shell: |
cd ~/deployment/stacks/traefik
tail -100 logs/access.log | grep -i grafana | tail -10 | grep -oP '"ClientHost":"[^"]*"' | sed 's/"ClientHost":"//;s/"//' | sort -u
args:
executable: /bin/bash
register: unique_ips
ignore_errors: yes
failed_when: false
- name: Display unique IPs
debug:
msg: "{{ unique_ips.stdout_lines }}"
- name: Final verdict
debug:
msg:
- ""
- "=== ERGEBNIS ==="
- "Pr?fe die obigen Zeilen:"
- ""
- "? Wenn ClientHost: 10.8.0.7 ? Traffic kommt ?ber VPN!"
- " ? Dann k?nnen wir die tempor?re IP-Erlaubnis entfernen!"
- ""
- "? Wenn ClientHost: 89.246.96.244 ? Traffic kommt NICHT ?ber VPN"
- " ? Dann m?ssen wir weitere Optionen probieren (Route explizit setzen, etc.)"

View File

@@ -1,70 +0,0 @@
---
- name: Monitor Live Grafana Access
hosts: production
gather_facts: no
become: no
tasks:
- name: Instructions
debug:
msg:
- "=== LIVE MONITORING ==="
- "Bitte mache JETZT einen Zugriff auf https://grafana.michaelschiemer.de im Browser"
- "Ich warte 20 Sekunden und pr?fe dann die Logs..."
- ""
- name: Get current log timestamp
shell: |
cd ~/deployment/stacks/traefik
tail -1 logs/access.log | grep -oP '"time":"[^"]*"'
args:
executable: /bin/bash
register: current_timestamp
failed_when: false
- name: Display current timestamp
debug:
msg: "Letztes Log: {{ current_timestamp.stdout }}"
- name: Wait for access attempt
pause:
seconds: 20
- name: Check for new Grafana access
shell: |
cd ~/deployment/stacks/traefik
tail -500 logs/access.log | grep -i grafana | tail -10 | while IFS= read -r line; do
time=$(echo "$line" | grep -oP '"time":"[^"]*"' | sed 's/"time":"//;s/"//' | cut -d'T' -f2 | cut -d'+' -f1)
client=$(echo "$line" | grep -oP '"ClientHost":"[^"]*"' | sed 's/"ClientHost":"//;s/"//')
status=$(echo "$line" | grep -oP '"DownstreamStatus":[0-9]+' | sed 's/"DownstreamStatus"://')
if echo "$client" | grep -q "^10\.8\.0\."; then
echo "$time | ClientHost: $client | Status: $status ? VPN-IP (Traffic kommt ?ber VPN!)"
elif [ "$client" = "89.246.96.244" ]; then
echo "$time | ClientHost: $client | Status: $status ? Public IP (Traffic kommt NICHT ?ber VPN)"
else
echo "$time | ClientHost: $client | Status: $status ? Unknown"
fi
done
args:
executable: /bin/bash
register: analysis
ignore_errors: yes
failed_when: false
- name: Display analysis
debug:
msg: "{{ analysis.stdout_lines }}"
- name: Get unique client IPs
shell: |
cd ~/deployment/stacks/traefik
tail -100 logs/access.log | grep -i grafana | tail -10 | grep -oP '"ClientHost":"[^"]*"' | sed 's/"ClientHost":"//;s/"//' | sort -u
args:
executable: /bin/bash
register: unique_ips
ignore_errors: yes
failed_when: false
- name: Display unique IPs
debug:
msg: "{{ unique_ips.stdout_lines }}"

View File

@@ -1,28 +0,0 @@
---
- name: Quick Staging Logs Check
hosts: production
gather_facts: yes
become: no
tasks:
- name: Quick container status check
shell: |
echo "=== Method 1: Direct docker ps ==="
docker ps --filter "name=staging" || echo "No staging containers found"
echo ""
echo "=== Method 2: Docker compose ps ==="
cd ~/deployment/stacks/staging && docker compose ps 2>&1 || echo "Docker compose failed"
echo ""
echo "=== Method 3: Direct logs access ==="
docker logs staging-app --tail=30 2>&1 | tail -20 || echo "Could not get staging-app logs"
args:
executable: /bin/bash
register: quick_check
ignore_errors: yes
failed_when: false
- name: Display results
debug:
msg: "{{ (quick_check.stdout_lines | default([])) + (quick_check.stderr_lines | default([])) }}"
when: quick_check is defined
failed_when: false

View File

@@ -1,206 +0,0 @@
---
- name: Regenerate WireGuard Client - Fresh Config
hosts: production
become: yes
gather_facts: yes
vars:
wireguard_interface: "wg0"
wireguard_config_path: "/etc/wireguard"
wireguard_config_file: "{{ wireguard_config_path }}/{{ wireguard_interface }}.conf"
wireguard_client_configs_path: "/etc/wireguard/clients"
wireguard_local_client_configs_dir: "{{ playbook_dir }}/../wireguard-clients"
tasks:
- name: Validate client name
fail:
msg: "client_name is required. Usage: ansible-playbook ... -e 'client_name=myclient'"
when: client_name is not defined or client_name == ""
- name: Check if old client config exists
stat:
path: "{{ wireguard_client_configs_path }}/{{ client_name }}.conf"
register: old_client_config
failed_when: false
- name: Backup old client config
copy:
src: "{{ wireguard_client_configs_path }}/{{ client_name }}.conf"
dest: "{{ wireguard_client_configs_path }}/{{ client_name }}.conf.backup-{{ ansible_date_time.epoch }}"
remote_src: yes
when: old_client_config.stat.exists
register: backup_result
failed_when: false
- name: Display backup info
debug:
msg: "Alte Config wurde gesichert als: {{ backup_result.dest | default('N/A') }}"
when: old_client_config.stat.exists
- name: Remove old client from WireGuard server config
shell: |
# Entferne den [Peer] Block f?r den Client aus wg0.conf
sed -i '/# BEGIN ANSIBLE MANAGED BLOCK - Client: {{ client_name }}/,/^# END ANSIBLE MANAGED BLOCK - Client: {{ client_name }}/d' {{ wireguard_config_file }}
# Fallback: Entferne auch ohne Marker
sed -i '/# Client: {{ client_name }}/,/{/d' {{ wireguard_config_file }}
sed -i '/PublicKey = .*/d' {{ wireguard_config_file }} || true
sed -i '/AllowedIPs = .*\/32$/d' {{ wireguard_config_file }} || true
args:
executable: /bin/bash
register: remove_result
failed_when: false
changed_when: false
- name: Set WireGuard network
set_fact:
wireguard_network: "{{ wireguard_network | default('10.8.0.0/24') }}"
- name: Set WireGuard other variables with defaults
set_fact:
wireguard_port: "{{ wireguard_port | default(51820) }}"
client_ip: "{{ client_ip | default('') }}"
allowed_ips: "{{ allowed_ips | default(wireguard_network) }}"
- name: Get server external IP address
uri:
url: https://api.ipify.org
return_content: yes
register: server_external_ip
changed_when: false
failed_when: false
- name: Set server external IP
set_fact:
server_external_ip_content: "{{ ansible_host | default(server_external_ip.content | default('')) }}"
- name: Read WireGuard server config
slurp:
src: "{{ wireguard_config_file }}"
register: wireguard_server_config_read
- name: Extract server IP from config
set_fact:
server_vpn_ip: "{{ (wireguard_server_config_read.content | b64decode | regex_findall('Address\\s*=\\s*([0-9.]+)') | first) | default('10.8.0.1') }}"
failed_when: false
- name: Extract WireGuard server IP octets
set_fact:
wireguard_server_ip_octets: "{{ (server_vpn_ip | default('')).split('.') }}"
when: client_ip == ""
- name: Fail if server VPN IP is invalid
fail:
msg: "Server VPN IP '{{ server_vpn_ip }}' ist ungültig bitte wg0.conf prüfen."
when: client_ip == "" and (wireguard_server_ip_octets | length) < 4
- name: Gather existing client addresses
set_fact:
existing_client_ips: "{{ (wireguard_server_config_read.content | b64decode | regex_findall('AllowedIPs = ([0-9A-Za-z.]+)/32', '\\\\1')) }}"
when: client_ip == ""
- name: Calculate client IP if not provided
vars:
existing_last_octets: "{{ (existing_client_ips | default([])) | map('regex_replace', '^(?:\\\\d+\\\\.\\\\d+\\\\.\\\\d+\\\\.)', '') | select('match', '^[0-9]+$') | map('int') | list }}"
server_last_octet: "{{ wireguard_server_ip_octets[3] | int }}"
next_octet_candidate: "{{ (existing_last_octets + [server_last_octet]) | map('int') | list | max + 1 if (existing_client_ips | default([]) | length > 0) else server_last_octet + 1 }}"
set_fact:
client_ip: "{{ [
wireguard_server_ip_octets[0],
wireguard_server_ip_octets[1],
wireguard_server_ip_octets[2],
next_octet_candidate
] | join('.') }}"
when: client_ip == "" and (wireguard_server_ip_octets | length) >= 4
- name: Generate NEW client private key
command: "wg genkey"
register: client_private_key
changed_when: true
no_log: yes
- name: Generate NEW client public key
command: "wg pubkey"
args:
stdin: "{{ client_private_key.stdout }}"
register: client_public_key
changed_when: false
no_log: yes
- name: Add NEW client to WireGuard server config
blockinfile:
path: "{{ wireguard_config_file }}"
block: |
# Client: {{ client_name }}
[Peer]
PublicKey = {{ client_public_key.stdout }}
AllowedIPs = {{ client_ip }}/32
marker: "# {mark} ANSIBLE MANAGED BLOCK - Client: {{ client_name }}"
register: wireguard_client_block
- name: Ensure client configs directory exists
file:
path: "{{ wireguard_client_configs_path }}"
state: directory
mode: '0700'
owner: root
group: root
- name: Ensure local client configs directory exists
file:
path: "{{ wireguard_local_client_configs_dir }}"
state: directory
mode: '0700'
delegate_to: localhost
become: no
run_once: true
- name: Get server public key
shell: "cat {{ wireguard_config_path }}/{{ wireguard_interface }}_private.key | wg pubkey"
register: server_public_key_cmd
changed_when: false
no_log: yes
failed_when: false
- name: Create NEW client configuration file
template:
src: "{{ playbook_dir }}/../templates/wireguard-client.conf.j2"
dest: "{{ wireguard_client_configs_path }}/{{ client_name }}.conf"
mode: '0600'
owner: root
group: root
- name: Download NEW client configuration to control machine
fetch:
src: "{{ wireguard_client_configs_path }}/{{ client_name }}.conf"
dest: "{{ wireguard_local_client_configs_dir }}/{{ client_name }}.conf"
flat: yes
mode: '0600'
- name: Restart WireGuard service
systemd:
name: "wg-quick@{{ wireguard_interface }}"
state: restarted
- name: Display NEW client configuration
debug:
msg: |
========================================
WireGuard Client REGENERATED: {{ client_name }}
========================================
Neue Client-IP: {{ client_ip }}
Server Endpoint: {{ server_external_ip_content }}:{{ wireguard_port }}
Neue Client-Konfiguration:
{{ wireguard_local_client_configs_dir }}/{{ client_name }}.conf
WICHTIG:
1. Lade die neue Config-Datei herunter
2. Importiere sie in WireGuard (ersetze die alte!)
3. Verbinde mit dem VPN
4. Teste: ping 10.8.0.1
5. Teste: https://grafana.michaelschiemer.de
Alte Config gesichert als:
{{ backup_result.dest | default('N/A') }}
========================================

View File

@@ -1,165 +0,0 @@
---
- name: Remove framework-production Stack from Production Server
hosts: production
become: no
gather_facts: yes
vars:
stack_name: framework-production
stack_path: "~/framework-production"
tasks:
- name: Check if Docker is running
systemd:
name: docker
state: started
register: docker_service
become: yes
- name: Fail if Docker is not running
fail:
msg: "Docker service is not running"
when: docker_service.status.ActiveState != 'active'
- name: Check if framework-production stack directory exists
stat:
path: "{{ stack_path }}"
register: stack_dir
- name: Check if framework-production containers exist (all states)
shell: |
docker ps -a --filter "name={{ stack_name }}" --format "{{ '{{' }}.Names{{ '}}' }}"
args:
executable: /bin/bash
register: all_containers
changed_when: false
failed_when: false
- name: Display all containers found
debug:
msg: "Found containers: {{ all_containers.stdout_lines if all_containers.stdout_lines | length > 0 else 'None' }}"
- name: List all containers to find framework-production related ones
shell: |
docker ps -a --format "{{ '{{' }}.Names{{ '}}' }}\t{{ '{{' }}.Image{{ '}}' }}\t{{ '{{' }}.Status{{ '}}' }}"
args:
executable: /bin/bash
register: all_containers_list
changed_when: false
failed_when: false
- name: Display all containers
debug:
msg: "{{ all_containers_list.stdout_lines }}"
- name: Check for containers with framework-production in name or image
shell: |
docker ps -a --format "{{ '{{' }}.Names{{ '}}' }}" | grep -iE "(framework-production|^db$|^php$|^web$)" || echo ""
args:
executable: /bin/bash
register: matching_containers
changed_when: false
failed_when: false
- name: Check for containers with framework-production images
shell: |
docker ps -a --format "{{ '{{' }}.Names{{ '}}' }}\t{{ '{{' }}.Image{{ '}}' }}" | grep -i "framework-production" | cut -f1 || echo ""
args:
executable: /bin/bash
register: image_based_containers
changed_when: false
failed_when: false
- name: Display found containers
debug:
msg:
- "Containers by name pattern: {{ matching_containers.stdout_lines if matching_containers.stdout_lines | length > 0 else 'None' }}"
- "Containers by image: {{ image_based_containers.stdout_lines if image_based_containers.stdout_lines | length > 0 else 'None' }}"
- name: Stop and remove containers using docker-compose if stack directory exists
shell: |
cd {{ stack_path }}
docker-compose down -v
args:
executable: /bin/bash
when: stack_dir.stat.exists
register: compose_down_result
changed_when: true
ignore_errors: yes
- name: Stop and remove containers by name pattern and image
shell: |
REMOVED_CONTAINERS=""
# Method 1: Remove containers with framework-production in image name
while IFS=$'\t' read -r container image; do
if [[ "$image" == *"framework-production"* ]]; then
echo "Stopping and removing container '$container' (image: $image)"
docker stop "$container" 2>/dev/null || true
docker rm "$container" 2>/dev/null || true
REMOVED_CONTAINERS="$REMOVED_CONTAINERS $container"
fi
done < <(docker ps -a --format "{{ '{{' }}.Names{{ '}}' }}\t{{ '{{' }}.Image{{ '}}' }}")
# Method 2: Remove containers with specific names that match the pattern
for container_name in "db" "php" "web"; do
# Check if container exists and has framework-production image
container_info=$(docker ps -a --filter "name=^${container_name}$" --format "{{ '{{' }}.Names{{ '}}' }}\t{{ '{{' }}.Image{{ '}}' }}" 2>/dev/null || echo "")
if [[ -n "$container_info" ]]; then
image=$(echo "$container_info" | cut -f2)
if [[ "$image" == *"framework-production"* ]] || [[ "$image" == *"mariadb"* ]]; then
echo "Stopping and removing container '$container_name' (image: $image)"
docker stop "$container_name" 2>/dev/null || true
docker rm "$container_name" 2>/dev/null || true
REMOVED_CONTAINERS="$REMOVED_CONTAINERS $container_name"
fi
fi
done
# Method 3: Remove containers with framework-production in name
docker ps -a --format "{{ '{{' }}.Names{{ '}}' }}" | grep -i framework-production | while read container; do
if [ ! -z "$container" ]; then
echo "Stopping and removing container '$container'"
docker stop "$container" 2>/dev/null || true
docker rm "$container" 2>/dev/null || true
REMOVED_CONTAINERS="$REMOVED_CONTAINERS $container"
fi
done
# Output removed containers
if [[ -n "$REMOVED_CONTAINERS" ]]; then
echo "Removed containers:$REMOVED_CONTAINERS"
else
echo "No containers were removed"
fi
args:
executable: /bin/bash
register: direct_remove_result
changed_when: "'Removed containers' in direct_remove_result.stdout"
failed_when: false
- name: Remove stack directory if it exists
file:
path: "{{ stack_path }}"
state: absent
when: stack_dir.stat.exists
register: dir_removed
- name: Verify all framework-production containers are removed
shell: |
docker ps -a --format "{{ '{{' }}.Names{{ '}}' }}" | grep -i framework-production || echo ""
args:
executable: /bin/bash
register: remaining_containers
changed_when: false
failed_when: false
- name: Display removal status
debug:
msg:
- "=== framework-production Stack Removal Complete ==="
- "Stack directory removed: {{ 'Yes' if dir_removed.changed else 'No (did not exist)' }}"
- "Containers removed: {{ 'Yes' if (compose_down_result.changed or direct_remove_result.changed) else 'No (none found)' }}"
- "Remaining containers: {{ remaining_containers.stdout if remaining_containers.stdout else 'None' }}"
- ""
- "Stack '{{ stack_name }}' has been successfully removed."

View File

@@ -1,119 +0,0 @@
---
- name: Remove Temporary IP Allowlist from Grafana - Make VPN-Only
hosts: production
gather_facts: no
become: no
tasks:
- name: Check recent Grafana access attempts
shell: |
cd ~/deployment/stacks/traefik
echo "=== Recent Grafana Access (Last 5 attempts) ==="
tail -30 logs/access.log | grep grafana | tail -5 | grep -oP '"ClientHost":"[^"]*"' | head -5
args:
executable: /bin/bash
register: recent_ips
ignore_errors: yes
failed_when: false
- name: Display recent client IPs
debug:
msg: "{{ recent_ips.stdout_lines }}"
- name: Check if any traffic comes from VPN
shell: |
cd ~/deployment/stacks/traefik
tail -20 logs/access.log | grep grafana | tail -5 | grep -oP '"ClientHost":"[^"]*"' | sed 's/"ClientHost":"//;s/"//' | while read ip; do
if [[ "$ip" =~ ^10\.8\.0\.[0-9]+$ ]]; then
echo "? Found VPN IP: $ip"
else
echo "? Found public IP: $ip (not VPN)"
fi
done
args:
executable: /bin/bash
register: vpn_check
ignore_errors: yes
failed_when: false
- name: Display VPN check
debug:
msg: "{{ vpn_check.stdout_lines }}"
- name: Backup middlewares.yml
shell: |
cd ~/deployment/stacks/traefik/dynamic
cp middlewares.yml middlewares.yml.backup.before-remove-temp-ip.$(date +%Y%m%d_%H%M%S)
echo "Backup created"
args:
executable: /bin/bash
- name: Remove temporary IP from grafana-vpn-only middleware
shell: |
cd ~/deployment/stacks/traefik/dynamic
sed -i '/89.246.96.244\/32/d' middlewares.yml
echo "Temporary IP removed"
args:
executable: /bin/bash
- name: Verify middleware configuration
shell: |
cd ~/deployment/stacks/traefik/dynamic
echo "=== Updated grafana-vpn-only Middleware ==="
grep -A 6 "grafana-vpn-only:" middlewares.yml
args:
executable: /bin/bash
register: updated_middleware
ignore_errors: yes
failed_when: false
- name: Display updated middleware
debug:
msg: "{{ updated_middleware.stdout_lines }}"
- name: Validate YAML syntax
command: python3 -c "import yaml; yaml.safe_load(open('middlewares.yml')); print('YAML valid')"
args:
chdir: ~/deployment/stacks/traefik/dynamic
register: yaml_validation
ignore_errors: yes
failed_when: false
- name: Display YAML validation
debug:
msg: "{{ yaml_validation.stdout_lines }}"
- name: Restart Traefik to apply changes
command: docker compose restart traefik
args:
chdir: ~/deployment/stacks/traefik
register: traefik_restart
- name: Wait for Traefik to restart
pause:
seconds: 5
- name: Verify Traefik status
command: docker compose ps traefik
args:
chdir: ~/deployment/stacks/traefik
register: traefik_status
- name: Display Traefik status
debug:
msg: "{{ traefik_status.stdout_lines }}"
- name: Final instructions
debug:
msg:
- "=== TEMPORARY IP REMOVED ==="
- "Grafana should now be VPN-only"
- ""
- "Test:"
- "1. With VPN: https://grafana.michaelschiemer.de should work ?"
- "2. Without VPN: https://grafana.michaelschiemer.de should give 403 ?"
- ""
- "If it doesn't work:"
- "- Check that VPN routing works (DNS = 10.8.0.1 or use hosts file)"
- "- Check Traefik logs: tail -f ~/deployment/stacks/traefik/logs/access.log | grep grafana"
- "- Restore backup if needed: cp middlewares.yml.backup.* middlewares.yml"

View File

@@ -1,150 +0,0 @@
---
- name: Deploy Infrastructure Stacks on Production Server
hosts: production
become: no
gather_facts: yes
vars:
# All deployment variables are now defined in group_vars/production.yml
# Variables can be overridden via -e flag if needed
tasks:
- name: Debug - Show variables
debug:
msg:
- "stacks_base_path: {{ stacks_base_path | default('NOT SET') }}"
- "deploy_user_home: {{ deploy_user_home | default('NOT SET') }}"
when: false # Only enable for debugging
- name: Check if deployment stacks directory exists
stat:
path: "{{ stacks_base_path }}"
register: stacks_dir
- name: Fail if stacks directory doesn't exist
fail:
msg: "Deployment stacks directory not found at {{ stacks_base_path }}"
when: not stacks_dir.stat.exists
- name: Ensure system packages are up to date
include_role:
name: system
when: system_update_packages | bool
# Create external networks required by all stacks
- name: Create traefik-public network
community.docker.docker_network:
name: traefik-public
driver: bridge
state: present
- name: Create app-internal network
community.docker.docker_network:
name: app-internal
driver: bridge
state: present
# 1. Deploy Traefik (Reverse Proxy & SSL)
- name: Deploy Traefik stack
import_role:
name: traefik
# 2. Deploy PostgreSQL (Database)
- name: Deploy PostgreSQL stack
import_role:
name: postgresql
# 3. Deploy Docker Registry (Private Registry)
- name: Deploy Docker Registry stack
import_role:
name: registry
# 4. Deploy DNS (CoreDNS for WireGuard clients)
- name: Deploy DNS stack
import_role:
name: dns
# 5. Deploy MinIO (Object Storage)
- name: Deploy MinIO stack
import_role:
name: minio
# 6. Deploy Gitea (CRITICAL - Git Server + MySQL + Redis)
- name: Deploy Gitea stack
import_role:
name: gitea
# 7. Deploy Monitoring (Portainer + Grafana + Prometheus)
- name: Deploy Monitoring stack
import_role:
name: monitoring
# Verification
- name: List all running containers
command: >
docker ps --format 'table {{ "{{" }}.Names{{ "}}" }}\t{{ "{{" }}.Status{{ "}}" }}\t{{ "{{" }}.Ports{{ "}}" }}'
register: docker_ps_output
- name: Display running containers
debug:
msg: "{{ docker_ps_output.stdout_lines }}"
- name: Verify Gitea accessibility via HTTPS
uri:
url: "https://{{ gitea_domain }}"
method: GET
validate_certs: no
status_code: 200
timeout: 10
register: gitea_http_check
ignore_errors: yes
- name: Display Gitea accessibility status
debug:
msg: "Gitea HTTPS check: {{ 'SUCCESS' if gitea_http_check.status == 200 else 'FAILED - Status: ' + (gitea_http_check.status|string) }}"
# 8. Deploy Application Stack
- name: Deploy Application Stack
import_role:
name: application
- name: Display application health status
debug:
msg: "Application health: {{ application_health_output if application_health_output != '' else 'All services healthy or starting' }}"
- name: Display migration result
debug:
msg: |
Migration Result:
{{ application_migration_stdout if application_migration_stdout != '' else 'Migration may have failed - check logs with: docker compose -f ' + application_stack_dest + '/docker-compose.yml logs app' }}
when: application_stack_changed and application_run_migrations
- name: Display application accessibility status
debug:
msg: >-
Application health check: {{
'SUCCESS (HTTP ' + (application_healthcheck_status | string) + ')'
if application_healthcheck_status == 200 else
'FAILED or not ready yet (HTTP ' + (application_healthcheck_status | string) + ')'
}}
when: application_stack_changed and application_healthcheck_url | length > 0
- name: Summary
debug:
msg:
- "=== Infrastructure Deployment Complete ==="
- "Traefik: {{ 'Deployed' if traefik_stack_changed else 'Already running' }}"
- "PostgreSQL: {{ 'Deployed' if postgresql_stack_changed else 'Already running' }}"
- "Docker Registry: {{ 'Deployed' if registry_stack_changed else 'Already running' }}"
- "DNS: {{ 'Deployed' if dns_stack_changed else 'Already running' }}"
- "MinIO: {{ 'Deployed' if minio_stack_changed else 'Already running' }}"
- "Gitea: {{ 'Deployed' if gitea_stack_changed else 'Already running' }}"
- "Monitoring: {{ 'Deployed' if monitoring_stack_changed else 'Already running' }}"
- "Application: {{ 'Deployed' if application_stack_changed else 'Already running' }}"
- ""
- "Next Steps:"
- "1. Access Gitea at: https://{{ gitea_domain }}"
- "2. Complete Gitea setup wizard if first-time deployment"
- "3. Navigate to Admin > Actions > Runners to get registration token"
- "4. Continue with Phase 1 - Gitea Runner Setup"
- "5. Access Application at: https://{{ app_domain }}"

View File

@@ -1,86 +0,0 @@
---
- name: Setup Let's Encrypt SSL Certificates via Traefik
hosts: production
become: no
gather_facts: yes
vars:
# ssl_domains and acme_email are defined in group_vars/production.yml
# Can be overridden via -e flag if needed
domains: "{{ ssl_domains | default([gitea_domain, app_domain]) }}"
tasks:
- name: Check if acme.json exists and is a file
stat:
path: "{{ deploy_user_home }}/de iployment/stacks/traefik/acme.json"
register: acme_stat
- name: Remove acme.json if it's a directory
file:
path: "{{ deploy_user_home }}/deployment/stacks/traefik/acme.json"
state: absent
become: yes
when: acme_stat.stat.exists and acme_stat.stat.isdir
- name: Ensure Traefik acme.json exists and has correct permissions
file:
path: "{{ deploy_user_home }}/deployment/stacks/traefik/acme.json"
state: touch
mode: '0600'
owner: "{{ ansible_user }}"
group: "{{ ansible_user }}"
become: yes
- name: Verify Traefik is running
command: docker compose -f {{ deploy_user_home }}/deployment/stacks/traefik/docker-compose.yml ps traefik
register: traefik_status
changed_when: false
- name: Fail if Traefik is not running
fail:
msg: "Traefik is not running. Please start it first."
when: traefik_status.rc != 0 or "Up" not in traefik_status.stdout
- name: Force Traefik to reload configuration
command: docker compose -f {{ deploy_user_home }}/deployment/stacks/traefik/docker-compose.yml restart traefik
changed_when: true
- name: Wait for Traefik to be ready
wait_for:
timeout: 10
changed_when: false
- name: Trigger certificate request by accessing each domain
uri:
url: "https://{{ item }}"
method: GET
validate_certs: no
timeout: 5
status_code: [200, 301, 302, 303, 404, 502, 503]
loop: "{{ domains }}"
register: certificate_trigger
changed_when: false
ignore_errors: yes
- name: Wait for ACME certificate generation (30 seconds)
wait_for:
timeout: 30
changed_when: false
- name: Check if acme.json contains certificates
stat:
path: "{{ stacks_base_path }}/traefik/acme.json"
register: acme_file
- name: Display certificate status
debug:
msg: |
Certificate setup triggered.
Traefik will request Let's Encrypt certificates for:
{{ domains | join(', ') }}
ACME Email: {{ acme_email }}
Check Traefik logs to see certificate generation progress:
docker compose -f {{ stacks_base_path }}/traefik/docker-compose.yml logs traefik | grep -i acme
Certificates should be ready within 1-2 minutes.

View File

@@ -1,287 +0,0 @@
---
- name: Setup WireGuard VPN Server
hosts: production
become: yes
gather_facts: yes
vars:
# WireGuard variables are defined in group_vars/production.yml
# Can be overridden via -e flag if needed
wireguard_port: "{{ wireguard_port_default | default(51820) }}"
wireguard_network: "{{ wireguard_network_default | default('10.8.0.0/24') }}"
wireguard_server_ip: "{{ wireguard_server_ip_default | default('10.8.0.1') }}"
pre_tasks:
- name: Optionally load wireguard secrets from vault
include_vars:
file: "{{ playbook_dir }}/../secrets/production.vault.yml"
no_log: yes
ignore_errors: yes
delegate_to: localhost
become: no
tasks:
- name: Check if WireGuard is already installed
command: which wg
register: wireguard_installed
changed_when: false
failed_when: false
- name: Update package cache
apt:
update_cache: yes
cache_valid_time: 3600
when: not wireguard_installed.rc == 0
- name: Install WireGuard
apt:
name:
- wireguard
- wireguard-tools
- qrencode
state: present
when: not wireguard_installed.rc == 0
notify: restart wireguard
- name: Ensure WireGuard config directory exists
file:
path: "{{ wireguard_config_path }}"
state: directory
mode: '0700'
owner: root
group: root
- name: Ensure WireGuard client configs directory exists
file:
path: "{{ wireguard_client_configs_path }}"
state: directory
mode: '0700'
owner: root
group: root
- name: Check if WireGuard server keys exist
stat:
path: "{{ wireguard_private_key_file }}"
register: server_private_key_exists
- name: Generate WireGuard server private key
command: "wg genkey"
register: server_private_key
changed_when: true
when: not server_private_key_exists.stat.exists
no_log: yes
- name: Save WireGuard server private key
copy:
content: "{{ server_private_key.stdout }}"
dest: "{{ wireguard_private_key_file }}"
mode: '0600'
owner: root
group: root
when: not server_private_key_exists.stat.exists
no_log: yes
- name: Read WireGuard server private key
slurp:
src: "{{ wireguard_private_key_file }}"
register: server_private_key_content
when: server_private_key_exists.stat.exists
- name: Generate WireGuard server public key
command: "wg pubkey"
args:
stdin: "{{ server_private_key.stdout if not server_private_key_exists.stat.exists else server_private_key_content.content | b64decode | trim }}"
register: server_public_key
changed_when: false
when: not server_private_key_exists.stat.exists
no_log: yes
- name: Get existing server public key
shell: "cat {{ wireguard_private_key_file }} | wg pubkey"
register: existing_server_public_key
changed_when: false
when: server_private_key_exists.stat.exists
no_log: yes
failed_when: false
- name: Set server public key fact
set_fact:
server_public_key_value: "{{ server_public_key.stdout if not server_private_key_exists.stat.exists else existing_server_public_key.stdout }}"
- name: Save WireGuard server public key
copy:
content: "{{ server_public_key_value }}"
dest: "{{ wireguard_public_key_file }}"
mode: '0644'
owner: root
group: root
- name: Enable IP forwarding
sysctl:
name: net.ipv4.ip_forward
value: '1'
state: present
sysctl_set: yes
reload: yes
when: wireguard_enable_ip_forwarding
- name: Make IP forwarding persistent
lineinfile:
path: /etc/sysctl.conf
regexp: '^net\.ipv4\.ip_forward'
line: 'net.ipv4.ip_forward=1'
state: present
when: wireguard_enable_ip_forwarding
- name: Get server external IP address
uri:
url: https://api.ipify.org
return_content: yes
register: server_external_ip
changed_when: false
failed_when: false
- name: Set server external IP from inventory if API fails
set_fact:
server_external_ip_content: "{{ ansible_host | default(server_external_ip.content | default('')) }}"
when: server_external_ip.content is defined
- name: Get server external IP from ansible_host
set_fact:
server_external_ip_content: "{{ ansible_host }}"
when: server_external_ip.content is not defined
- name: Read server private key for config
slurp:
src: "{{ wireguard_private_key_file }}"
register: server_private_key_file_content
when: server_private_key_exists.stat.exists
- name: Set server private key for template (new key)
set_fact:
server_private_key_for_config: "{{ server_private_key.stdout }}"
when: not server_private_key_exists.stat.exists
- name: Set server private key for template (existing key)
set_fact:
server_private_key_for_config: "{{ server_private_key_file_content.content | b64decode | trim }}"
when: server_private_key_exists.stat.exists
- name: Get network interface name
shell: "ip route | grep default | awk '{print $5}' | head -1"
register: default_interface
changed_when: false
failed_when: false
- name: Set default interface
set_fact:
wireguard_interface_name: "{{ default_interface.stdout | default('eth0') }}"
- name: Check if WireGuard config exists
stat:
path: "{{ wireguard_config_file }}"
register: wireguard_config_exists
- name: Create WireGuard server configuration
template:
src: "{{ playbook_dir }}/../templates/wireguard-server.conf.j2"
dest: "{{ wireguard_config_file }}"
mode: '0600'
owner: root
group: root
notify: restart wireguard
- name: Check if WireGuard service is enabled
systemd:
name: "wg-quick@{{ wireguard_interface }}"
register: wireguard_service_status
failed_when: false
changed_when: false
- name: Enable WireGuard service
systemd:
name: "wg-quick@{{ wireguard_interface }}"
enabled: yes
daemon_reload: yes
when: not wireguard_service_status.status.ActiveState is defined or wireguard_service_status.status.ActiveState != 'active'
- name: Start WireGuard service
systemd:
name: "wg-quick@{{ wireguard_interface }}"
state: started
notify: restart wireguard
- name: Check if UFW firewall is installed
command: which ufw
register: ufw_installed
changed_when: false
failed_when: false
- name: Verify SSH access is allowed in UFW
command: "ufw status | grep -q '22/tcp' || echo 'SSH not found'"
register: ssh_ufw_check
changed_when: false
failed_when: false
when: ufw_installed.rc == 0
- name: Warn if SSH is not explicitly allowed
debug:
msg: |
?? WARNING: SSH (port 22) might not be explicitly allowed in UFW!
Please ensure SSH access is configured before proceeding.
Run: sudo ufw allow 22/tcp
when: ufw_installed.rc == 0 and 'SSH not found' in ssh_ufw_check.stdout
- name: Allow WireGuard port in UFW firewall
ufw:
rule: allow
port: "{{ wireguard_port }}"
proto: udp
comment: "WireGuard VPN"
when: ufw_installed.rc == 0
- name: Allow WireGuard port in UFW firewall (alternative)
shell: "ufw allow {{ wireguard_port }}/udp comment 'WireGuard VPN'"
when: ufw_installed.rc == 0
failed_when: false
changed_when: false
- name: Check WireGuard status
command: "wg show {{ wireguard_interface }}"
register: wireguard_status
changed_when: false
failed_when: false
- name: Display WireGuard status
debug:
msg: |
WireGuard Status:
{{ wireguard_status.stdout if wireguard_status.rc == 0 else 'WireGuard interface not active' }}
- name: Display server public key
debug:
msg: |
========================================
WireGuard Server Setup Complete!
========================================
Server Public Key:
{{ server_public_key_value }}
Server IP: {{ wireguard_server_ip }}
Server Endpoint: {{ server_external_ip_content }}:{{ wireguard_port }}
Network: {{ wireguard_network }}
To add a client, run:
ansible-playbook -i inventory/production.yml playbooks/add-wireguard-client.yml -e "client_name=myclient"
Client configs are stored in:
{{ wireguard_client_configs_path }}/
========================================
handlers:
- name: restart wireguard
systemd:
name: "wg-quick@{{ wireguard_interface }}"
state: restarted

View File

@@ -1,102 +0,0 @@
---
- name: Sync Code from Git Repository to Application Container
hosts: production
gather_facts: yes
become: no
vars:
# git_repository_url and git_branch are defined in group_vars/production.yml
# Can be overridden via -e flag if needed
git_repository_url: "{{ git_repo_url | default(git_repository_url_default) }}"
git_branch: "{{ git_branch | default(git_branch_default) }}"
pre_tasks:
- name: Optionally load secrets from vault
include_vars:
file: "{{ playbook_dir }}/../secrets/production.vault.yml"
no_log: yes
ignore_errors: yes
delegate_to: localhost
become: no
tasks:
- name: Verify application stack directory exists
stat:
path: "{{ app_stack_path }}"
register: app_stack_dir
- name: Fail if application stack directory doesn't exist
fail:
msg: "Application stack directory not found at {{ app_stack_path }}"
when: not app_stack_dir.stat.exists
- name: Check if docker-compose.yml exists
stat:
path: "{{ app_stack_path }}/docker-compose.yml"
register: compose_file_exists
- name: Fail if docker-compose.yml doesn't exist
fail:
msg: "docker-compose.yml not found. Run setup-infrastructure.yml first."
when: not compose_file_exists.stat.exists
- name: Read current .env file
slurp:
src: "{{ app_stack_path }}/.env"
register: env_file_content
failed_when: false
changed_when: false
- name: Check if Git configuration exists in .env
set_fact:
has_git_config: "{{ env_file_content.content | b64decode | regex_search('GIT_REPOSITORY_URL=') is not none }}"
when: env_file_content.content is defined
- name: Update .env with Git configuration
lineinfile:
path: "{{ app_stack_path }}/.env"
regexp: "{{ item.regex }}"
line: "{{ item.line }}"
state: present
loop:
- { regex: '^GIT_REPOSITORY_URL=', line: 'GIT_REPOSITORY_URL={{ git_repository_url }}' }
- { regex: '^GIT_BRANCH=', line: 'GIT_BRANCH={{ git_branch }}' }
- { regex: '^GIT_TOKEN=', line: 'GIT_TOKEN={{ git_token | default("") }}' }
- { regex: '^GIT_USERNAME=', line: 'GIT_USERNAME={{ git_username | default("") }}' }
- { regex: '^GIT_PASSWORD=', line: 'GIT_PASSWORD={{ git_password | default("") }}' }
when: not has_git_config | default(true)
- name: Restart application container to trigger Git pull
shell: |
cd {{ app_stack_path }}
docker compose restart app
args:
executable: /bin/bash
register: container_restart
- name: Wait for container to be ready
wait_for:
timeout: 60
when: container_restart.changed
- name: Check container logs for Git operations
shell: |
cd {{ app_stack_path }}
docker compose logs app --tail 50 | grep -E "(Git|Clone|Pull|✅|❌)" || echo "No Git-related logs found"
args:
executable: /bin/bash
register: git_logs
changed_when: false
- name: Display Git sync result
debug:
msg:
- "=== Code Sync Summary ==="
- "Repository: {{ git_repository_url }}"
- "Branch: {{ git_branch }}"
- "Container restarted: {{ 'Yes' if container_restart.changed else 'No' }}"
- ""
- "Git Logs:"
- "{{ git_logs.stdout }}"
- ""
- "Next: Check application logs to verify code was synced"

View File

@@ -1,62 +0,0 @@
---
# Check Container Health Status
- name: Check nginx container logs
shell: |
docker logs nginx --tail 50 2>&1
args:
executable: /bin/bash
register: nginx_logs
failed_when: false
- name: Display nginx logs
debug:
msg: "{{ nginx_logs.stdout_lines }}"
- name: Test nginx health check manually
shell: |
docker exec nginx wget --spider -q http://localhost/health 2>&1 || echo "Health check failed"
args:
executable: /bin/bash
register: nginx_health_test
failed_when: false
- name: Display nginx health check result
debug:
msg: "{{ nginx_health_test.stdout }}"
- name: Check queue-worker container logs
shell: |
docker logs queue-worker --tail 50 2>&1
args:
executable: /bin/bash
register: queue_worker_logs
failed_when: false
- name: Display queue-worker logs
debug:
msg: "{{ queue_worker_logs.stdout_lines }}"
- name: Check scheduler container logs
shell: |
docker logs scheduler --tail 50 2>&1
args:
executable: /bin/bash
register: scheduler_logs
failed_when: false
- name: Display scheduler logs
debug:
msg: "{{ scheduler_logs.stdout_lines }}"
- name: Check container status
shell: |
cd {{ app_stack_path }}
docker compose ps
args:
executable: /bin/bash
register: container_status
failed_when: false
- name: Display container status
debug:
msg: "{{ container_status.stdout_lines }}"

View File

@@ -1,73 +0,0 @@
---
# Diagnose 404 Errors
- name: Check nginx logs for errors
shell: |
docker logs nginx --tail 100 2>&1
args:
executable: /bin/bash
register: nginx_logs
failed_when: false
- name: Display nginx logs
debug:
msg: "{{ nginx_logs.stdout_lines }}"
- name: Check app container logs
shell: |
docker logs app --tail 100 2>&1
args:
executable: /bin/bash
register: app_logs
failed_when: false
- name: Display app container logs
debug:
msg: "{{ app_logs.stdout_lines }}"
- name: Test nginx health endpoint directly
shell: |
docker exec nginx wget -q -O - http://127.0.0.1/health 2>&1 || echo "Health check failed"
args:
executable: /bin/bash
register: nginx_health_test
failed_when: false
- name: Display nginx health check result
debug:
msg: "{{ nginx_health_test.stdout }}"
- name: Check nginx configuration
shell: |
docker exec nginx cat /etc/nginx/conf.d/default.conf 2>&1
args:
executable: /bin/bash
register: nginx_config
failed_when: false
- name: Display nginx configuration
debug:
msg: "{{ nginx_config.stdout_lines }}"
- name: Check if app container has files in /var/www/html
shell: |
docker exec app ls -la /var/www/html/ 2>&1 | head -20
args:
executable: /bin/bash
register: app_files
failed_when: false
- name: Display app container files
debug:
msg: "{{ app_files.stdout_lines }}"
- name: Check container network connectivity
shell: |
docker exec nginx ping -c 1 app 2>&1 | head -5
args:
executable: /bin/bash
register: network_check
failed_when: false
- name: Display network connectivity
debug:
msg: "{{ network_check.stdout }}"

View File

@@ -1,71 +0,0 @@
---
# Fix Container Health Checks
- name: Check if application stack directory exists
stat:
path: "{{ app_stack_path }}"
register: app_stack_dir
- name: Fail if application stack directory doesn't exist
fail:
msg: "Application stack directory not found at {{ app_stack_path }}"
when: not app_stack_dir.stat.exists
- name: Copy updated docker-compose.yml to production
copy:
src: "{{ playbook_dir }}/../../stacks/application/docker-compose.yml"
dest: "{{ app_stack_path }}/docker-compose.yml"
owner: "{{ ansible_user }}"
group: "{{ ansible_user }}"
mode: '0644'
register: compose_updated
- name: Recreate containers with new health checks
shell: |
cd {{ app_stack_path }}
docker compose up -d --force-recreate nginx queue-worker scheduler
args:
executable: /bin/bash
when: compose_updated.changed
register: containers_recreated
- name: Wait for containers to be healthy
shell: |
cd {{ app_stack_path }}
timeout=120
elapsed=0
while [ $elapsed -lt $timeout ]; do
healthy=$(docker compose ps --format json | jq -r '[.[] | select(.Name=="nginx" or .Name=="queue-worker" or .Name=="scheduler") | .Health] | all(.=="healthy" or .=="")')
if [ "$healthy" = "true" ]; then
echo "All containers are healthy"
exit 0
fi
sleep 5
elapsed=$((elapsed + 5))
done
echo "Timeout waiting for containers to become healthy"
docker compose ps
exit 1
args:
executable: /bin/bash
register: health_wait
failed_when: false
changed_when: false
- name: Check final container status
shell: |
cd {{ app_stack_path }}
docker compose ps
args:
executable: /bin/bash
register: final_status
- name: Display final container status
debug:
msg: "{{ final_status.stdout_lines }}"
- name: Display summary
debug:
msg:
- "=== Health Check Fix Complete ==="
- "Containers recreated: {{ 'Yes' if containers_recreated.changed else 'No (no changes)' }}"
- "Health wait result: {{ 'SUCCESS' if health_wait.rc == 0 else 'TIMEOUT or ERROR - check logs' }}"

View File

@@ -1,71 +0,0 @@
---
# Fix Nginx 404 by setting up shared app-code volume
- name: Check if application stack directory exists
stat:
path: "{{ app_stack_path }}"
register: app_stack_dir
- name: Fail if application stack directory doesn't exist
fail:
msg: "Application stack directory not found at {{ app_stack_path }}"
when: not app_stack_dir.stat.exists
- name: Copy updated docker-compose.yml to production
copy:
src: "{{ playbook_dir }}/../../stacks/application/docker-compose.yml"
dest: "{{ app_stack_path }}/docker-compose.yml"
owner: "{{ ansible_user }}"
group: "{{ ansible_user }}"
mode: '0644'
register: compose_updated
- name: Initialize app-code volume with files from app image
shell: |
# Stop containers first
cd {{ app_stack_path }}
docker compose down nginx || true
# Create and initialize app-code volume
docker volume create app-code || true
# Copy files from app image to volume using temporary container
docker run --rm \
-v app-code:/target \
{{ app_image_external }}:latest \
sh -c "cp -r /var/www/html/* /target/ 2>/dev/null || true"
args:
executable: /bin/bash
register: volume_init
changed_when: true
failed_when: false
- name: Start containers
shell: |
cd {{ app_stack_path }}
docker compose up -d
args:
executable: /bin/bash
register: containers_started
- name: Wait for containers to be healthy
pause:
seconds: 15
- name: Check container status
shell: |
cd {{ app_stack_path }}
docker compose ps
args:
executable: /bin/bash
register: final_status
- name: Display container status
debug:
msg: "{{ final_status.stdout_lines }}"
- name: Display summary
debug:
msg:
- "=== Nginx 404 Fix Complete ==="
- "Volume initialized: {{ 'Yes' if volume_init.changed else 'No' }}"
- "Containers restarted: {{ 'Yes' if containers_started.changed else 'No' }}"

View File

@@ -1,46 +0,0 @@
---
- name: Test FastCGI Connection Directly
hosts: production
gather_facts: yes
become: no
tasks:
- name: Install cgi-fcgi in nginx container and test FastCGI
shell: |
cd ~/deployment/stacks/staging
echo "=== Test FastCGI with a simple request ==="
# Create a simple test script
docker compose exec -T staging-nginx sh -c "echo -e 'REQUEST_METHOD=GET\nSCRIPT_FILENAME=/var/www/html/public/index.php\nSERVER_NAME=test\n' > /tmp/fcgi-test.txt && cat /tmp/fcgi-test.txt" || echo "Could not create test file"
echo ""
echo "=== Check nginx error log directly ==="
docker compose exec -T staging-nginx tail -100 /var/log/nginx/error.log 2>&1 || echo "Error log not found at /var/log/nginx/error.log"
echo ""
echo "=== Check nginx access log ==="
docker compose exec -T staging-nginx tail -50 /var/log/nginx/access.log 2>&1 | tail -20 || echo "Access log not found"
args:
executable: /bin/bash
register: fcgi_test
ignore_errors: yes
failed_when: false
- name: Display FastCGI test results
debug:
msg: "{{ fcgi_test.stdout_lines }}"
- name: Make a real HTTP request and capture full response
shell: |
cd ~/deployment/stacks/staging
echo "=== Make HTTP request from nginx container ==="
docker compose exec -T staging-nginx curl -v http://127.0.0.1/ 2>&1 || echo "Request failed"
echo ""
echo "=== Check what nginx sees ==="
docker compose exec -T staging-nginx sh -c "timeout 2 tail -f /var/log/nginx/error.log 2>&1 & sleep 1 && curl -s http://127.0.0.1/ > /dev/null && sleep 1" || echo "Log check failed"
args:
executable: /bin/bash
register: http_test
ignore_errors: yes
failed_when: false
- name: Display HTTP test results
debug:
msg: "{{ http_test.stdout_lines }}"

View File

@@ -1,99 +0,0 @@
---
- name: Test Grafana VPN Access - Final Verification
hosts: production
gather_facts: no
become: no
tasks:
- name: Clear instruction
debug:
msg:
- "=== WICHTIG ==="
- "Bitte f?hre diese Tests auf Windows aus (mit VPN verbunden):"
- ""
- "1. Pr?fe ob Hosts-Datei korrekt ist:"
- " type C:\\Windows\\System32\\drivers\\etc\\hosts | findstr grafana"
- " Sollte zeigen: 10.8.0.1 grafana.michaelschiemer.de"
- ""
- "2. Pr?fe ob VPN verbunden ist:"
- " ping 10.8.0.1"
- " Sollte funktionieren (wenn VPN verbunden ist)"
- ""
- "3. Teste Grafana-Zugriff:"
- " ?ffne https://grafana.michaelschiemer.de im Browser"
- ""
- "Dann pr?fe ich die Logs..."
- name: Wait for test
pause:
seconds: 15
- name: Check Traefik logs for Grafana access
shell: |
cd ~/deployment/stacks/traefik
echo "=== Last 10 Grafana Access Attempts ==="
tail -200 logs/access.log | grep -i grafana | tail -10 | while read line; do
client=$(echo "$line" | grep -oP '"ClientHost":"[^"]*"' | head -1)
time=$(echo "$line" | grep -oP '"time":"[^"]*"' | head -1)
status=$(echo "$line" | grep -oP '"DownstreamStatus":[0-9]+' | head -1)
echo "$time | $client | $status"
done
args:
executable: /bin/bash
register: recent_logs
ignore_errors: yes
failed_when: false
- name: Display recent logs
debug:
msg: "{{ recent_logs.stdout_lines }}"
- name: Extract unique client IPs
shell: |
cd ~/deployment/stacks/traefik
tail -50 logs/access.log | grep -i grafana | grep -oP '"ClientHost":"[^"]*"' | sed 's/"ClientHost":"//;s/"//' | sort -u
args:
executable: /bin/bash
register: unique_ips
ignore_errors: yes
failed_when: false
- name: Display unique client IPs
debug:
msg: "{{ unique_ips.stdout_lines }}"
- name: Analyze client IPs
shell: |
cd ~/deployment/stacks/traefik
tail -20 logs/access.log | grep -i grafana | tail -5 | grep -oP '"ClientHost":"[^"]*"' | sed 's/"ClientHost":"//;s/"//' | while read ip; do
if [[ -z "$ip" ]]; then
continue
fi
if [[ "$ip" =~ ^10\.8\.0\.[0-9]+$ ]]; then
echo "? $ip -> VPN IP (10.8.0.0/24) - Traffic kommt ?ber VPN!"
elif [[ "$ip" == "89.246.96.244" ]]; then
echo "? $ip -> Deine ?ffentliche IP - Traffic kommt NICHT ?ber VPN (Hosts-Datei funktioniert, aber VPN-Routing nicht)"
else
echo "? $ip -> Unbekannte IP"
fi
done
args:
executable: /bin/bash
register: ip_analysis
ignore_errors: yes
failed_when: false
- name: Display IP analysis
debug:
msg: "{{ ip_analysis.stdout_lines }}"
- name: Final recommendations
debug:
msg:
- ""
- "=== ERGEBNIS ==="
- "Wenn ClientHost: 10.8.0.7 ? Traffic kommt ?ber VPN ?"
- "? Dann k?nnen wir die tempor?re IP-Erlaubnis entfernen!"
- ""
- "Wenn ClientHost: 89.246.96.244 ? Traffic kommt NICHT ?ber VPN ?"
- "? Dann m?ssen wir pr?fen warum Hosts-Datei nicht ?ber VPN-Routing funktioniert"

View File

@@ -1,78 +0,0 @@
---
- name: Test Grafana VPN Access - Final Check
hosts: production
gather_facts: no
become: no
tasks:
- name: Instructions
debug:
msg:
- "=== FINALER TEST ==="
- "AllowedIPs ist korrekt (10.8.0.0/24) ?"
- ""
- "Bitte f?hre diese Schritte auf Windows aus:"
- "1. DNS-Cache leeren: ipconfig /flushdns"
- "2. Browser-Cache leeren oder Inkognito-Modus nutzen"
- "3. Stelle sicher, dass VPN verbunden ist"
- "4. ?ffne: https://grafana.michaelschiemer.de"
- "5. Warte 10 Sekunden"
- ""
- "Dann pr?fe ich die Logs..."
- name: Wait for access attempt
pause:
seconds: 20
- name: Check latest Grafana access
shell: |
cd ~/deployment/stacks/traefik
echo "=== Letzte 10 Grafana-Zugriffe ==="
tail -500 logs/access.log | grep -i grafana | tail -10 | while IFS= read -r line; do
time=$(echo "$line" | grep -oP '"time":"[^"]*"' | sed 's/"time":"//;s/"//' | cut -d'T' -f2 | cut -d'+' -f1 | cut -d':' -f1-2)
client=$(echo "$line" | grep -oP '"ClientHost":"[^"]*"' | sed 's/"ClientHost":"//;s/"//')
status=$(echo "$line" | grep -oP '"DownstreamStatus":[0-9]+' | sed 's/"DownstreamStatus"://')
if [[ "$client" =~ ^10\.8\.0\.[0-9]+$ ]]; then
echo "$time | ClientHost: $client | Status: $status ? VPN-IP (Traffic kommt ?ber VPN!)"
elif [[ "$client" == "89.246.96.244" ]]; then
echo "$time | ClientHost: $client | Status: $status ? ?ffentliche IP (Traffic kommt NICHT ?ber VPN)"
else
echo "$time | ClientHost: $client | Status: $status ? Unbekannt"
fi
done
args:
executable: /bin/bash
register: analysis
ignore_errors: yes
failed_when: false
- name: Display analysis
debug:
msg: "{{ analysis.stdout_lines }}"
- name: Get unique client IPs from last 10 requests
shell: |
cd ~/deployment/stacks/traefik
tail -100 logs/access.log | grep -i grafana | tail -10 | grep -oP '"ClientHost":"[^"]*"' | sed 's/"ClientHost":"//;s/"//' | sort -u
args:
executable: /bin/bash
register: unique_ips
ignore_errors: yes
failed_when: false
- name: Display unique IPs
debug:
msg: "{{ unique_ips.stdout_lines }}"
- name: Final verdict
debug:
msg:
- ""
- "=== ERGEBNIS ==="
- "Pr?fe die obigen Zeilen:"
- ""
- "? Wenn ClientHost: 10.8.0.7 ? Traffic kommt ?ber VPN!"
- " ? Dann k?nnen wir die tempor?re IP-Erlaubnis entfernen!"
- ""
- "? Wenn ClientHost: 89.246.96.244 ? Traffic kommt NICHT ?ber VPN"
- " ? Dann m?ssen wir weitere Debugging-Schritte durchf?hren"

View File

@@ -1,121 +0,0 @@
---
- name: Test Redis Connection Directly with php-redis Extension
hosts: production
gather_facts: yes
become: no
tasks:
- name: Check if php-redis extension is loaded
shell: |
docker exec app php -r "
if (extension_loaded('redis')) {
echo 'php-redis extension: LOADED' . PHP_EOL;
echo 'Redis class available: ' . (class_exists('Redis') ? 'YES' : 'NO') . PHP_EOL;
} else {
echo 'php-redis extension: NOT LOADED' . PHP_EOL;
}
"
args:
executable: /bin/bash
register: extension_check
ignore_errors: yes
failed_when: false
- name: Display extension check
debug:
msg: "{{ extension_check.stdout_lines }}"
- name: Test Redis connection with actual php-redis
shell: |
docker exec app php -r "
// Get environment variables
\$redis_host = getenv('REDIS_HOST') ?: 'redis';
\$redis_port = (int)(getenv('REDIS_PORT') ?: 6379);
\$redis_password = getenv('REDIS_PASSWORD');
echo '=== Redis Connection Test ===' . PHP_EOL;
echo 'REDIS_HOST: ' . \$redis_host . PHP_EOL;
echo 'REDIS_PORT: ' . \$redis_port . PHP_EOL;
echo 'REDIS_PASSWORD: ' . (\$redis_password ? 'SET (length: ' . strlen(\$redis_password) . ')' : 'NOT SET') . PHP_EOL;
echo PHP_EOL;
if (!extension_loaded('redis')) {
echo 'ERROR: php-redis extension is not loaded!' . PHP_EOL;
exit(1);
}
if (!class_exists('Redis')) {
echo 'ERROR: Redis class is not available!' . PHP_EOL;
exit(1);
}
try {
\$redis = new Redis();
echo 'Created Redis instance' . PHP_EOL;
// Connect
\$success = \$redis->connect(\$redis_host, \$redis_port, 2.0);
if (!\$success) {
echo 'ERROR: Failed to connect to Redis server' . PHP_EOL;
exit(1);
}
echo 'Connected to Redis server' . PHP_EOL;
// Authenticate if password is provided
if (\$redis_password) {
\$auth_result = \$redis->auth(\$redis_password);
if (!\$auth_result) {
echo 'ERROR: Redis authentication failed' . PHP_EOL;
echo 'Password used: ' . substr(\$redis_password, 0, 5) . '...' . PHP_EOL;
exit(1);
}
echo 'Authenticated with Redis' . PHP_EOL;
}
// Test PING
\$ping_result = \$redis->ping();
echo 'Redis PING: ' . \$ping_result . PHP_EOL;
// Test SET/GET
\$test_key = 'test_connection_' . time();
\$test_value = 'test_value';
\$set_result = \$redis->set(\$test_key, \$test_value);
echo 'SET test: ' . (\$set_result ? 'OK' : 'FAILED') . PHP_EOL;
\$get_result = \$redis->get(\$test_key);
echo 'GET test: ' . (\$get_result === \$test_value ? 'OK' : 'FAILED') . PHP_EOL;
// Cleanup
\$redis->del(\$test_key);
\$redis->close();
echo PHP_EOL . '? All tests passed!' . PHP_EOL;
} catch (Exception \$e) {
echo 'ERROR: ' . \$e->getMessage() . PHP_EOL;
echo 'Exception type: ' . get_class(\$e) . PHP_EOL;
exit(1);
}
"
args:
executable: /bin/bash
register: redis_test
ignore_errors: yes
failed_when: false
- name: Display Redis connection test results
debug:
msg: "{{ redis_test.stdout_lines }}"
- name: Check for Redis errors in application logs
shell: |
cd ~/deployment/stacks/application
docker compose logs app --tail=100 2>&1 | grep -i -E "(redis|connection|error)" | tail -20 || echo "No Redis errors found in logs"
args:
executable: /bin/bash
register: error_logs
ignore_errors: yes
failed_when: false
- name: Display error logs
debug:
msg: "{{ error_logs.stdout_lines }}"

View File

@@ -1,79 +0,0 @@
---
- name: Test Staging Container Access and Logs
hosts: production
gather_facts: yes
become: no
tasks:
- name: Check if we can access containers directly
shell: |
# Try direct docker commands first
echo "=== Direct Docker Container Check ==="
docker ps -a --filter "name=staging"
echo ""
echo "=== Try docker logs directly ==="
docker logs staging-app --tail=20 2>&1 || echo "Could not access staging-app logs"
echo ""
echo "=== Try docker logs for staging-nginx ==="
docker logs staging-nginx --tail=20 2>&1 || echo "Could not access staging-nginx logs"
args:
executable: /bin/bash
register: direct_access
ignore_errors: yes
- name: Display direct access results
debug:
msg: "{{ (direct_access.stdout_lines | default([])) + (direct_access.stderr_lines | default([])) }}"
- name: Try docker compose from different locations
shell: |
echo "=== Current directory ==="
pwd
echo ""
echo "=== Try from home directory ==="
cd ~ && docker compose -f ~/deployment/stacks/staging/docker-compose.yml ps 2>&1 || echo "Failed from home"
echo ""
echo "=== Try from staging directory ==="
cd ~/deployment/stacks/staging && docker compose ps 2>&1 || echo "Failed from staging directory"
args:
executable: /bin/bash
register: compose_check
ignore_errors: yes
- name: Display compose check results
debug:
msg: "{{ (compose_check.stdout_lines | default([])) + (compose_check.stderr_lines | default([])) }}"
- name: Check docker compose configuration
shell: |
cd ~/deployment/stacks/staging
echo "=== Docker Compose file exists? ==="
test -f docker-compose.yml && echo "? docker-compose.yml exists" || echo "? docker-compose.yml missing"
echo ""
echo "=== Docker Compose file first 30 lines ==="
head -30 docker-compose.yml
echo ""
echo "=== Check service names ==="
docker compose config --services 2>&1 || echo "Could not get service names"
args:
executable: /bin/bash
register: compose_config
ignore_errors: yes
- name: Display compose configuration
debug:
msg: "{{ (compose_config.stdout_lines | default([])) + (compose_config.stderr_lines | default([])) }}"
- name: Try getting logs with explicit file path
shell: |
docker compose -f ~/deployment/stacks/stacks/staging/docker-compose.yml logs --tail=20 staging-app 2>&1 || \
docker compose -f ~/deployment/stacks/staging/docker-compose.yml logs --tail=20 staging-app 2>&1 || \
echo "All methods failed"
args:
executable: /bin/bash
register: explicit_logs
ignore_errors: yes
- name: Display explicit logs attempt
debug:
msg: "{{ (explicit_logs.stdout_lines | default([])) + (explicit_logs.stderr_lines | default([])) }}"

View File

@@ -1,168 +0,0 @@
---
- name: Test WireGuard Connection from Docker Container
hosts: production
become: yes
gather_facts: yes
vars:
test_container_name: "wireguard-test-client"
wireguard_config_path: "/tmp/wireguard-test"
tasks:
- name: Validate client name
fail:
msg: "client_name is required. Usage: ansible-playbook ... -e 'client_name=grafana-test'"
when: client_name is not defined or client_name == ""
- name: Check if WireGuard client config exists
stat:
path: "{{ playbook_dir }}/../wireguard-clients/{{ client_name }}.conf"
register: client_config_exists
delegate_to: localhost
become: no
- name: Fail if client config not found
fail:
msg: "Client config not found: {{ playbook_dir }}/../wireguard-clients/{{ client_name }}.conf"
when: not client_config_exists.stat.exists
- name: Read client config
slurp:
src: "{{ playbook_dir }}/../wireguard-clients/{{ client_name }}.conf"
register: client_config_content
delegate_to: localhost
become: no
- name: Extract client IP from config
set_fact:
client_vpn_ip: "{{ (client_config_content.content | b64decode | regex_findall('Address\\s*=\\s*([0-9.]+)') | first) | default('10.8.0.7') }}"
failed_when: false
- name: Display extracted client IP
debug:
msg: "Client VPN IP: {{ client_vpn_ip }}"
- name: Stop and remove existing test container
shell: |
docker stop {{ test_container_name }} || true
docker rm {{ test_container_name }} || true
args:
executable: /bin/bash
ignore_errors: yes
failed_when: false
- name: Create temporary directory for WireGuard config
file:
path: "{{ wireguard_config_path }}"
state: directory
mode: '0700'
- name: Copy client config to server
copy:
content: "{{ client_config_content.content | b64decode }}"
dest: "{{ wireguard_config_path }}/{{ client_name }}.conf"
mode: '0600'
- name: Start WireGuard test container
shell: |
docker run -d \
--name {{ test_container_name }} \
--cap-add=NET_ADMIN \
--cap-add=SYS_MODULE \
--sysctl net.ipv4.conf.all.src_valid_mark=1 \
-v {{ wireguard_config_path }}/{{ client_name }}.conf:/etc/wireguard/{{ client_name }}.conf:ro \
--device /dev/net/tun \
ghcr.io/linuxserver/wireguard:latest
args:
executable: /bin/bash
register: container_result
ignore_errors: yes
- name: Wait for container to start
pause:
seconds: 5
- name: Check container status
shell: docker ps -a --filter "name={{ test_container_name }}" --format "{{ '{{' }}.Status{{ '}}' }}"
register: container_status
failed_when: false
- name: Display container status
debug:
msg: "Container Status: {{ container_status.stdout }}"
- name: Get container logs
shell: docker logs {{ test_container_name }} --tail 50
register: container_logs
failed_when: false
- name: Display container logs
debug:
msg: "{{ container_logs.stdout_lines }}"
- name: Test ping to VPN server from container
shell: |
docker exec {{ test_container_name }} ping -c 4 10.8.0.1 || true
register: ping_result
failed_when: false
- name: Display ping result
debug:
msg: "{{ ping_result.stdout_lines }}"
- name: Test curl to Grafana from container
shell: |
docker exec {{ test_container_name }} curl -s -o /dev/null -w "%{http_code}" --max-time 10 https://grafana.michaelschiemer.de/ || echo "FAILED"
register: curl_result
failed_when: false
- name: Display curl result
debug:
msg: "HTTP Status Code: {{ curl_result.stdout }}"
- name: Get container IP
shell: |
docker exec {{ test_container_name }} ip addr show wg0 | grep "inet " | awk '{print $2}' | cut -d/ -f1 || echo "No WireGuard IP"
register: container_wg_ip
failed_when: false
- name: Display container WireGuard IP
debug:
msg: "Container WireGuard IP: {{ container_wg_ip.stdout }}"
- name: Test DNS resolution from container
shell: |
docker exec {{ test_container_name }} nslookup grafana.michaelschiemer.de || true
register: dns_result
failed_when: false
- name: Display DNS result
debug: "{{ dns_result.stdout_lines }}"
- name: Check Traefik logs for container access
shell: |
cd ~/deployment/stacks/traefik
tail -100 logs/access.log | grep -i grafana | tail -10 | grep -oP '"ClientHost":"[^"]*"' | sed 's/"ClientHost":"//;s/"//' | sort -u
register: traefik_client_ips
failed_when: false
- name: Display Traefik client IPs
debug:
msg: "{{ traefik_client_ips.stdout_lines }}"
- name: Cleanup instructions
debug:
msg: |
========================================
TEST ABGESCHLOSSEN
========================================
Container-Name: {{ test_container_name }}
Um Container zu entfernen:
docker stop {{ test_container_name }}
docker rm {{ test_container_name }}
Um Config zu entfernen:
rm -rf {{ wireguard_config_path }}
========================================

View File

@@ -1,49 +0,0 @@
---
- name: Update Gitea Configuration and Restart
hosts: production
become: no
gather_facts: yes
vars:
gitea_stack_path: "{{ stacks_base_path }}/gitea"
tasks:
- name: Copy updated docker-compose.yml to production server
copy:
src: "{{ playbook_dir }}/../../stacks/gitea/docker-compose.yml"
dest: "{{ gitea_stack_path }}/docker-compose.yml"
owner: "{{ ansible_user }}"
group: "{{ ansible_user }}"
mode: '0644'
- name: Restart Gitea stack with updated configuration
community.docker.docker_compose_v2:
project_src: "{{ gitea_stack_path }}"
state: present
pull: never
recreate: always
remove_orphans: no
register: gitea_restart
- name: Wait for Gitea to be ready
wait_for:
timeout: 60
when: gitea_restart.changed
- name: Verify Gitea Actions configuration
shell: |
docker exec gitea cat /data/gitea/conf/app.ini 2>/dev/null | grep -A 3 "\[actions\]" || echo "Config not accessible"
register: gitea_config
changed_when: false
ignore_errors: yes
- name: Display Gitea Actions configuration
debug:
msg:
- "=== Gitea Configuration Update Complete ==="
- "Container restarted: {{ 'Yes' if gitea_restart.changed else 'No' }}"
- ""
- "Current Actions configuration:"
- "{{ gitea_config.stdout if gitea_config.stdout else 'Could not read config (container may still be starting)' }}"
- ""
- "The DEFAULT_ACTIONS_URL should now point to your Gitea instance instead of GitHub."

View File

@@ -1,44 +0,0 @@
---
- name: View Staging Logs in Readable Format
hosts: production
gather_facts: yes
become: no
tasks:
- name: Get and format staging-app logs
shell: |
cd ~/deployment/stacks/staging
echo "=== Staging App Logs (Last 50 lines, formatted) ==="
docker compose logs --tail=200 staging-app 2>&1 | \
while IFS= read -r line; do
# Try to parse JSON logs and format them nicely
if echo "$line" | grep -qE '^{|^\{'; then
echo "$line" | python3 -m json.tool 2>/dev/null || echo "$line"
else
echo "$line"
fi
done | tail -50
args:
executable: /bin/bash
register: formatted_logs
ignore_errors: yes
failed_when: false
- name: Display formatted logs
debug:
msg: "{{ formatted_logs.stdout_lines }}"
- name: Show simple tail of logs (for quick view)
shell: |
cd ~/deployment/stacks/staging
echo "=== Quick View: Last 30 lines ==="
docker compose logs --tail=30 staging-app 2>&1
args:
executable: /bin/bash
register: quick_logs
ignore_errors: yes
failed_when: false
- name: Display quick logs
debug:
msg: "{{ quick_logs.stdout_lines }}"

View File

@@ -1,46 +0,0 @@
---
# Source path for application stack files on the control node
# Use playbook_dir as base, then go to ../stacks/application
# This assumes playbooks are in deployment/ansible/playbooks
application_stack_src: "{{ playbook_dir | default(role_path + '/..') }}/../stacks/application"
# Destination path on the target host (defaults to configured app_stack_path)
application_stack_dest: "{{ app_stack_path | default(stacks_base_path + '/application') }}"
# Template used to generate the application .env file
application_env_template: "{{ role_path }}/../../templates/application.env.j2"
# Optional vault file containing secrets (loaded if present)
application_vault_file: "{{ role_path }}/../../secrets/production.vault.yml"
# Whether to synchronize stack files from repository
application_sync_files: true
# Compose recreate strategy ("auto", "always", "never")
application_compose_recreate: "auto"
# Whether to remove orphaned containers during compose up
application_remove_orphans: false
# Whether to run database migrations after (re)deploying the stack
application_run_migrations: true
# Optional health check URL to verify after deployment
application_healthcheck_url: "{{ health_check_url | default('') }}"
# Timeout used for waits in this role
application_wait_timeout: "{{ wait_timeout | default(60) }}"
application_wait_interval: 5
# Command executed inside the app container to run migrations
application_migration_command: "php console.php db:migrate"
# Environment (production, staging, local)
# Determines which compose files to use and service names
application_environment: "{{ APP_ENV | default('production') }}"
# Compose file suffix based on environment
application_compose_suffix: "{{ 'staging.yml' if application_environment == 'staging' else 'production.yml' }}"
# Service names based on environment
application_service_name: "{{ 'staging-app' if application_environment == 'staging' else 'php' }}"

View File

@@ -1,69 +0,0 @@
---
- name: Deploy application stack
community.docker.docker_compose_v2:
project_src: "{{ application_stack_dest }}"
state: present
pull: always
recreate: "{{ application_compose_recreate }}"
remove_orphans: "{{ application_remove_orphans | bool }}"
register: application_compose_result
- name: Wait for application container to report Up
shell: |
docker compose -f {{ application_stack_dest }}/docker-compose.base.yml -f {{ application_stack_dest }}/docker-compose.{{ application_compose_suffix }} ps {{ application_service_name }} | grep -Eiq "Up|running"
register: application_app_running
changed_when: false
until: application_app_running.rc == 0
retries: "{{ ((application_wait_timeout | int) + (application_wait_interval | int) - 1) // (application_wait_interval | int) }}"
delay: "{{ application_wait_interval | int }}"
when: application_compose_result.changed
- name: Ensure app container is running before migrations
shell: |
docker compose -f {{ application_stack_dest }}/docker-compose.base.yml -f {{ application_stack_dest }}/docker-compose.{{ application_compose_suffix }} ps {{ application_service_name }} | grep -Eiq "Up|running"
args:
executable: /bin/bash
register: application_app_container_running
changed_when: false
failed_when: false
when: application_compose_result.changed
- name: Run database migrations
shell: |
docker compose -f {{ application_stack_dest }}/docker-compose.base.yml -f {{ application_stack_dest }}/docker-compose.{{ application_compose_suffix }} exec -T {{ application_service_name }} {{ application_migration_command }}
args:
executable: /bin/bash
register: application_migration_result
changed_when: true
failed_when: false
ignore_errors: yes
when:
- application_run_migrations
- application_compose_result.changed
- application_app_container_running.rc == 0
- name: Collect application container status
shell: docker compose -f {{ application_stack_dest }}/docker-compose.base.yml -f {{ application_stack_dest }}/docker-compose.{{ application_compose_suffix }} ps
register: application_ps
changed_when: false
ignore_errors: yes
- name: Perform application health check
uri:
url: "{{ application_healthcheck_url }}"
method: GET
validate_certs: no
status_code: [200, 404, 502, 503]
timeout: 10
register: application_healthcheck_result
ignore_errors: yes
when:
- application_healthcheck_url | length > 0
- application_compose_result.changed
- name: Set application role summary facts
set_fact:
application_stack_changed: "{{ application_compose_result.changed | default(false) }}"
application_health_output: "{{ application_ps.stdout | default('') }}"
application_healthcheck_status: "{{ application_healthcheck_result.status | default('unknown') }}"
application_migration_stdout: "{{ application_migration_result.stdout | default('') }}"

View File

@@ -1,160 +0,0 @@
---
- name: Ensure application stack destination directory exists
file:
path: "{{ application_stack_dest }}"
state: directory
mode: '0755'
- name: Check if vault file exists locally
stat:
path: "{{ application_vault_file }}"
delegate_to: localhost
register: application_vault_stat
become: no
- name: Optionally load application secrets from vault
include_vars:
file: "{{ application_vault_file }}"
when: application_vault_stat.stat.exists
no_log: yes
ignore_errors: yes
delegate_to: localhost
become: no
- name: Check if PostgreSQL .env exists on target host
stat:
path: "{{ stacks_base_path }}/postgresql/.env"
register: application_postgres_env_file
changed_when: false
- name: Extract PostgreSQL password from .env file
shell: "grep '^POSTGRES_PASSWORD=' {{ stacks_base_path }}/postgresql/.env 2>/dev/null | cut -d'=' -f2- || echo ''"
register: application_postgres_password
changed_when: false
failed_when: false
when: application_postgres_env_file.stat.exists
no_log: yes
- name: Determine application database password
set_fact:
application_db_password: >-
{{ (application_postgres_env_file.stat.exists and application_postgres_password.stdout != '') |
ternary(application_postgres_password.stdout,
vault_db_root_password | default(lookup('password', '/dev/null length=32 chars=ascii_letters,digits,punctuation'))) }}
no_log: yes
- name: Determine application redis password
set_fact:
application_redis_password: "{{ redis_password | default(vault_redis_password | default('')) }}"
no_log: yes
- name: Ensure redis password provided via vault
fail:
msg: >-
Redis credentials are missing. Define vault_redis_password in
{{ application_vault_file }} (encrypted with ansible-vault) or pass
redis_password via extra vars.
when: (application_redis_password | string | trim) == ''
- name: Determine application app key
set_fact:
application_app_key: "{{ app_key | default(vault_app_key | default('')) }}"
no_log: yes
- name: Ensure application app key provided via vault
fail:
msg: >-
Application key missing. Define vault_app_key in
{{ application_vault_file }} (ansible-vault) or pass app_key via extra vars.
when: (application_app_key | string | trim) == ''
- name: Determine encryption key (optional)
set_fact:
application_encryption_key: "{{ encryption_key | default(vault_encryption_key | default('')) }}"
no_log: yes
- name: Check if application docker-compose.base.yml source exists locally
stat:
path: "{{ application_stack_src }}/docker-compose.base.yml"
delegate_to: localhost
register: application_compose_base_src
become: no
- name: Check if application docker-compose override file exists locally (production or staging)
stat:
path: "{{ application_stack_src }}/../../../docker-compose.{{ application_compose_suffix }}"
delegate_to: localhost
register: application_compose_override_src
become: no
- name: Copy application docker-compose.base.yml to target host
copy:
src: "{{ application_stack_src }}/docker-compose.base.yml"
dest: "{{ application_stack_dest }}/docker-compose.base.yml"
owner: "{{ ansible_user }}"
group: "{{ ansible_user }}"
mode: '0644'
when: application_compose_base_src.stat.exists
- name: Copy application docker-compose override file to target host (production or staging)
copy:
src: "{{ application_stack_src }}/../../../docker-compose.{{ application_compose_suffix }}"
dest: "{{ application_stack_dest }}/docker-compose.{{ application_compose_suffix }}"
owner: "{{ ansible_user }}"
group: "{{ ansible_user }}"
mode: '0644'
when: application_compose_override_src.stat.exists
- name: Check if legacy docker-compose.yml exists (fallback)
stat:
path: "{{ application_stack_src }}/docker-compose.yml"
delegate_to: localhost
register: application_compose_src
become: no
when: not (application_compose_base_src.stat.exists | default(false))
- name: Copy application docker-compose.yml to target host (fallback for legacy)
copy:
src: "{{ application_stack_src }}/docker-compose.yml"
dest: "{{ application_stack_dest }}/docker-compose.yml"
owner: "{{ ansible_user }}"
group: "{{ ansible_user }}"
mode: '0644'
when:
- application_compose_src is defined
- application_compose_src.stat.exists | default(false)
- not (application_compose_base_src.stat.exists | default(false))
- name: Check if nginx configuration exists locally
stat:
path: "{{ application_stack_src }}/nginx"
delegate_to: localhost
register: application_nginx_src
become: no
- name: Synchronize nginx configuration
copy:
src: "{{ application_stack_src }}/nginx/"
dest: "{{ application_stack_dest }}/nginx/"
owner: "{{ ansible_user }}"
group: "{{ ansible_user }}"
mode: '0644'
when: application_nginx_src.stat.exists
- name: Expose secrets for template rendering
set_fact:
db_password: "{{ application_db_password }}"
redis_password: "{{ application_redis_password }}"
app_key: "{{ application_app_key }}"
encryption_key: "{{ application_encryption_key }}"
db_username: "{{ db_user | default(db_user_default) }}"
db_name: "{{ db_name | default(db_name_default) }}"
no_log: yes
- name: Render application environment file
template:
src: "{{ application_env_template }}"
dest: "{{ application_stack_dest }}/.env"
owner: "{{ ansible_user }}"
group: "{{ ansible_user }}"
mode: '0600'

Some files were not shown because too many files have changed in this diff Show More