Production Logging Configuration
Comprehensive logging configuration and best practices for production deployment of the custom PHP framework.
Logging Architecture Overview
Application Logs → Framework Logger → Log Handlers → Destinations
                         ↓                 ↓              ↓
                   Log Processors      Formatters    Files/Syslog/
                   (Metadata,          (JSON,        External
                   Context,            Line)         Services
                   Performance)
Log Levels
The framework uses PSR-3 log levels:
| Level | Severity | Production Use | Description |
|---|---|---|---|
| emergency | 0 | Always | System is unusable |
| alert | 1 | Always | Action must be taken immediately |
| critical | 2 | Always | Critical conditions |
| error | 3 | Always | Error conditions |
| warning | 4 | Recommended | Warning conditions |
| notice | 5 | Optional | Normal but significant condition |
| info | 6 | Minimal | Informational messages |
| debug | 7 | Never | Debug-level messages |
Production Recommendation: LOG_LEVEL=warning
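The numeric severities in the table above drive level filtering: a handler writes a record only when the record's severity number is at or below the configured threshold. A minimal, self-contained sketch of this check (illustration only, not framework code):

```php
<?php

// PSR-3 level names mapped to their numeric severities (lower = more severe).
const LEVEL_SEVERITY = [
    'emergency' => 0, 'alert' => 1, 'critical' => 2, 'error' => 3,
    'warning' => 4, 'notice' => 5, 'info' => 6, 'debug' => 7,
];

// A record passes the filter when it is at least as severe as the threshold.
function shouldLog(string $recordLevel, string $configuredLevel): bool
{
    return LEVEL_SEVERITY[$recordLevel] <= LEVEL_SEVERITY[$configuredLevel];
}

// With LOG_LEVEL=warning:
var_dump(shouldLog('error', 'warning')); // bool(true)
var_dump(shouldLog('debug', 'warning')); // bool(false)
```

This is why `LOG_LEVEL=warning` suppresses `notice`, `info`, and `debug` while still capturing everything from `warning` up to `emergency`.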
Framework Logger Configuration
Core Logger Setup
The framework uses a hierarchical logging system with multiple channels:
Available Channels:
- app - Application-level logs
- security - Security events (OWASP, authentication, authorization)
- performance - Performance metrics and slow queries
- database - Database queries and errors
- queue - Background job processing
- cache - Cache operations
- http - HTTP requests and responses
Log Configuration via Environment
# .env.production
# Primary log level (emergency, alert, critical, error, warning, notice, info, debug)
LOG_LEVEL=warning
# Channel-specific log levels (optional)
LOG_LEVEL_SECURITY=info
LOG_LEVEL_PERFORMANCE=warning
LOG_LEVEL_DATABASE=error
LOG_LEVEL_QUEUE=warning
# Log destination (file, syslog, stderr, or combination)
LOG_DESTINATION=file,syslog
# Log file path (relative to project root)
LOG_FILE_PATH=storage/logs/application.log
# Log rotation
LOG_ROTATION_ENABLED=true
LOG_ROTATION_MAX_FILES=14
LOG_ROTATION_MAX_SIZE=100M
# JSON logging (recommended for production)
LOG_FORMAT=json
# Include stack traces in error logs
LOG_INCLUDE_STACKTRACE=true
# Sanitize sensitive data in logs
LOG_SANITIZE_SENSITIVE=true
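A size value such as `100M` must be converted to a byte count somewhere during bootstrap. A hedged sketch of such a conversion (the helper name is illustrative, not a documented framework API):

```php
<?php

// Illustrative parser for size strings like "100M" or "1G".
// The function name is hypothetical; the framework may expose this differently.
function parseSize(string $size): int
{
    $units = ['K' => 1024, 'M' => 1024 ** 2, 'G' => 1024 ** 3];
    $suffix = strtoupper(substr($size, -1));

    if (isset($units[$suffix])) {
        return (int) substr($size, 0, -1) * $units[$suffix];
    }

    return (int) $size; // Plain byte count without a unit suffix
}

var_dump(parseSize('100M')); // int(104857600)
```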
Log Handlers
1. File Handler (Default)
Location: storage/logs/
Configuration:
// src/Framework/Logging/LoggerInitializer.php
use App\Framework\Logging\Handlers\JsonFileHandler;
$fileHandler = new JsonFileHandler(
    filename: $logPath,
    level: LogLevel::WARNING,
    maxFiles: 14,
    maxSize: 100 * 1024 * 1024 // 100MB
);
Log Files Structure:
storage/logs/
├── application.log # Current application log
├── application-2024-01-15.log
├── application-2024-01-14.log
├── security.log # Security events
├── performance.log # Performance metrics
├── database.log # Database queries
└── error.log # Error-only log
2. Syslog Handler
Integration with System Syslog:
use App\Framework\Logging\Handlers\SyslogHandler;
$syslogHandler = new SyslogHandler(
    ident: 'michaelschiemer-app',
    facility: LOG_LOCAL0, // Must match the local0.* selector in the rsyslog config below
    level: LogLevel::WARNING
);
Syslog Configuration (/etc/rsyslog.d/50-app.conf):
# Application logs
local0.* /var/log/michaelschiemer/app.log
# Security logs
local1.* /var/log/michaelschiemer/security.log
# Performance logs
local2.* /var/log/michaelschiemer/performance.log
3. External Services (Optional)
Sentry Integration
# .env.production
SENTRY_DSN=https://xxx@sentry.io/xxx
SENTRY_ENVIRONMENT=production
SENTRY_TRACES_SAMPLE_RATE=0.1
ELK Stack (Elasticsearch, Logstash, Kibana)
Logstash Configuration (/etc/logstash/conf.d/app.conf):
input {
  file {
    path => "/var/log/michaelschiemer/application.log"
    type => "app-logs"
    codec => "json"
  }
}

filter {
  if [type] == "app-logs" {
    json {
      source => "message"
    }
    date {
      match => ["timestamp", "ISO8601"]
    }
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "app-logs-%{+YYYY.MM.dd}"
  }
}
Log Processors
Log processors enrich log entries with additional context.
Available Processors
1. Performance Processor:
use App\Framework\Logging\Processors\PerformanceProcessor;
// Adds execution time, memory usage, peak memory
$performanceProcessor = new PerformanceProcessor();
2. Web Info Processor:
use App\Framework\Logging\Processors\WebInfoProcessor;
// Adds request ID, IP, user agent, URL
$webInfoProcessor = new WebInfoProcessor($request);
3. Exception Enrichment Processor:
use App\Framework\Logging\Processors\ExceptionEnrichmentProcessor;
// Converts Throwables to ExceptionContext with enriched metadata
// - Exception hash for pattern recognition
// - Severity categorization
// - Short stack trace for quick overview
// - Exception chain length
$exceptionEnrichmentProcessor = new ExceptionEnrichmentProcessor();
4. Interpolation Processor:
use App\Framework\Logging\Processors\InterpolationProcessor;
// Replaces placeholders in message with context values
$interpolationProcessor = new InterpolationProcessor();
Custom Processor Example
final readonly class TraceIdProcessor
{
    public function __invoke(LogRecord $record): LogRecord
    {
        $traceId = $this->getDistributedTraceId();

        return $record->withContext([
            'trace_id' => $traceId,
            'span_id' => $this->getCurrentSpanId(),
        ]);
    }
}
Log Formatters
1. JSON Formatter (Recommended for Production)
Output Format:
{
  "timestamp": "2024-01-15T14:32:45.123456Z",
  "level": "error",
  "channel": "app",
  "message": "Database connection failed",
  "context": {
    "exception": "PDOException",
    "file": "/app/src/Database/Connection.php",
    "line": 42,
    "trace": "..."
  },
  "extra": {
    "request_id": "req_abc123xyz",
    "ip": "203.0.113.42",
    "user_agent": "Mozilla/5.0...",
    "memory_usage": 12582912,
    "execution_time_ms": 234
  }
}
Benefits:
- Structured data for log aggregation tools
- Easy parsing and filtering
- Machine-readable format
- Standardized timestamps
Usage:
use App\Framework\Logging\Formatter\JsonFormatter;
$jsonFormatter = new JsonFormatter(
    prettyPrint: false, // Compact JSON for production
    includeContext: true,
    includeExtra: true
);
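For illustration, a compact single-line record in the format shown above can be produced with plain `json_encode`; this is a self-contained sketch, not the framework's `JsonFormatter` internals:

```php
<?php

// Sketch only: emit one compact JSON log line per record, matching the
// structure shown above. The real JsonFormatter may differ in detail.
$record = [
    'timestamp' => (new DateTimeImmutable('now', new DateTimeZone('UTC')))
        ->format('Y-m-d\TH:i:s.u\Z'),
    'level'   => 'error',
    'channel' => 'app',
    'message' => 'Database connection failed',
    'context' => ['exception' => 'PDOException', 'line' => 42],
];

// JSON_UNESCAPED_SLASHES keeps file paths readable; one line per record
// keeps the log greppable and stream-parseable by tools like jq.
echo json_encode($record, JSON_UNESCAPED_SLASHES) . PHP_EOL;
```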
2. Line Formatter (Human-Readable)
Output Format:
[2024-01-15 14:32:45] app.ERROR: Database connection failed {"exception":"PDOException","file":"/app/src/Database/Connection.php","line":42} {"request_id":"req_abc123xyz","ip":"203.0.113.42"}
Usage:
use App\Framework\Logging\Formatter\LineFormatter;
$lineFormatter = new LineFormatter(
    format: "[%datetime%] %channel%.%level_name%: %message% %context% %extra%\n",
    dateFormat: "Y-m-d H:i:s",
    allowInlineLineBreaks: false
);
Security Event Logging
OWASP Security Events
The framework uses OWASP application security logging for security events:
Available Event Types:
use App\Framework\Security\OWASPEventIdentifier;
// Authentication
OWASPEventIdentifier::AUTHN_LOGIN_SUCCESS
OWASPEventIdentifier::AUTHN_LOGIN_FAILURE
OWASPEventIdentifier::AUTHN_LOGOUT_SUCCESS
OWASPEventIdentifier::AUTHN_SESSION_EXPIRED
// Authorization
OWASPEventIdentifier::AUTHZ_PERMISSION_DENIED
OWASPEventIdentifier::AUTHZ_PRIVILEGE_ESCALATION
// Input Validation
OWASPEventIdentifier::INPUT_VALIDATION_FAILURE
OWASPEventIdentifier::INPUT_XSS_DETECTED
OWASPEventIdentifier::INPUT_SQL_INJECTION_DETECTED
// Security Events
OWASPEventIdentifier::SECURITY_INTRUSION_DETECTED
Usage:
use App\Framework\Security\OWASPSecurityLogger;
$this->owaspLogger->logSecurityEvent(
    new SecurityEventType(OWASPEventIdentifier::AUTHN_LOGIN_FAILURE),
    request: $request,
    context: [
        'username' => $credentials->username,
        'ip_address' => $request->server->getRemoteAddr(),
        'failure_reason' => 'Invalid credentials'
    ]
);
Security Log Format:
{
  "timestamp": "2024-01-15T14:32:45Z",
  "event_type": "authn_login_failure",
  "severity": "warning",
  "user": "john@example.com",
  "ip_address": "203.0.113.42",
  "user_agent": "Mozilla/5.0...",
  "request_id": "req_abc123xyz",
  "context": {
    "username": "john@example.com",
    "failure_reason": "Invalid credentials",
    "attempt_count": 3
  }
}
Performance Logging
Slow Query Logging
Database Slow Queries:
// Automatically logged via ProfilingConnection
// Threshold: 500ms (configurable)
// Log Entry:
{
  "timestamp": "2024-01-15T14:32:45Z",
  "channel": "database",
  "level": "warning",
  "message": "Slow query detected",
  "context": {
    "query": "SELECT * FROM users WHERE ...",
    "execution_time_ms": 1234,
    "affected_rows": 15000,
    "trace": "..."
  }
}
N+1 Query Detection:
// Automatically detected via N+1 detection system
{
  "timestamp": "2024-01-15T14:32:45Z",
  "channel": "performance",
  "level": "warning",
  "message": "N+1 query pattern detected",
  "context": {
    "parent_query": "SELECT * FROM posts",
    "repeated_query": "SELECT * FROM comments WHERE post_id = ?",
    "repetition_count": 50,
    "total_time_ms": 2500,
    "suggestion": "Use eager loading or JOIN"
  }
}
Request Performance Logging
// Automatically logged via PerformanceMiddleware
// Threshold: 500ms (configurable)
{
  "timestamp": "2024-01-15T14:32:45Z",
  "channel": "performance",
  "level": "warning",
  "message": "Slow request detected",
  "context": {
    "method": "GET",
    "path": "/api/users",
    "execution_time_ms": 1234,
    "memory_usage_mb": 45,
    "peak_memory_mb": 52,
    "db_queries": 23,
    "cache_hits": 15,
    "cache_misses": 8
  }
}
Log Rotation
File-Based Rotation
Configuration:
// Automatic rotation via JsonFileHandler
new JsonFileHandler(
    filename: 'application.log',
    maxFiles: 14, // Keep 14 days of logs
    maxSize: 100 * 1024 * 1024 // 100MB per file
);
Rotation Trigger:
- Daily at midnight (new date in filename)
- When file size exceeds maxSize
Filename Pattern: application-YYYY-MM-DD.log
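The two triggers can be sketched as a single decision the handler makes before each write (an illustration of the logic described above, not the actual `JsonFileHandler` internals):

```php
<?php

// Illustration of the two rotation triggers described above.
// Not the actual handler code; names are for explanation only.
function needsRotation(string $file, int $maxSize, string $lastOpenedDate): bool
{
    $today = date('Y-m-d');

    // Trigger 1: the date changed since the file was opened (daily rotation)
    if ($today !== $lastOpenedDate) {
        return true;
    }

    // Trigger 2: the file exceeded the configured size limit
    return is_file($file) && filesize($file) >= $maxSize;
}
```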
Logrotate Configuration (System-Level)
/etc/logrotate.d/michaelschiemer:
/var/log/michaelschiemer/*.log {
    daily
    rotate 14
    compress
    delaycompress
    missingok
    notifempty
    create 0640 www-data www-data
    sharedscripts
    postrotate
        # Reload application if needed
        # docker-compose -f /path/to/docker-compose.yml kill -s USR1 php
    endscript
}
Log Monitoring & Alerting
Real-Time Log Monitoring
1. Docker Logs:
# Follow all logs
docker-compose logs -f --tail=100
# Filter by service
docker-compose logs -f php
# Filter by log level
docker-compose logs -f | grep -E "(ERROR|CRITICAL|ALERT|EMERGENCY)"
# Search for specific pattern
docker-compose logs -f | grep "Database connection failed"
2. File-Based Monitoring (tail):
# Follow application log
tail -f storage/logs/application.log | jq '.'
# Filter errors only
tail -f storage/logs/application.log | jq 'select(.level == "error")'
# Filter security events
tail -f storage/logs/security.log | jq 'select(.event_type | startswith("authn"))'
Alerting Configuration
1. Log-Based Alerts (with Logstash/Elasticsearch):
Watcher Alert Example (Elasticsearch):
{
  "trigger": {
    "schedule": { "interval": "1m" }
  },
  "input": {
    "search": {
      "request": {
        "indices": ["app-logs-*"],
        "body": {
          "query": {
            "bool": {
              "must": [
                { "range": { "@timestamp": { "gte": "now-5m" } } },
                { "terms": { "level": ["error", "critical", "alert", "emergency"] } }
              ]
            }
          }
        }
      }
    }
  },
  "condition": {
    "compare": {
      "ctx.payload.hits.total": { "gt": 10 }
    }
  },
  "actions": {
    "send_email": {
      "email": {
        "to": "ops@example.com",
        "subject": "High Error Rate Detected",
        "body": "More than 10 errors in the last 5 minutes"
      }
    }
  }
}
2. Custom Alert Script:
#!/bin/bash
# /usr/local/bin/log-alert-check.sh
LOG_FILE="/var/log/michaelschiemer/application.log"
ALERT_EMAIL="ops@example.com"
ERROR_THRESHOLD=10
# Count errors among the most recent log lines (approximates the last few minutes)
ERROR_COUNT=$(tail -n 1000 "$LOG_FILE" | grep -c '"level":"error"')
if [ "$ERROR_COUNT" -gt "$ERROR_THRESHOLD" ]; then
echo "High error rate detected: $ERROR_COUNT errors" | \
mail -s "ALERT: High Error Rate" "$ALERT_EMAIL"
fi
Cron Job:
# Check every 5 minutes
*/5 * * * * /usr/local/bin/log-alert-check.sh
Log Sanitization
Sensitive Data Protection
The framework automatically sanitizes sensitive data in logs:
Automatically Redacted Keys:
- password, passwd, pwd
- token, access_token, refresh_token
- secret, api_key, api_secret
- authorization, auth
- credit_card, cc_number, cvv
- ssn, social_security
Example:
$logger->info('User authentication', [
    'username' => 'john@example.com',
    'password' => 'secret123', // Will be logged as '[REDACTED]'
    'api_key' => 'sk_live_xxx' // Will be logged as '[REDACTED]'
]);
Custom Sanitization:
use App\Framework\Logging\LogSanitizer;
$sanitizer = new LogSanitizer(
    sensitiveKeys: ['custom_secret', 'internal_token']
);
$sanitizedContext = $sanitizer->sanitize($context);
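Conceptually, sanitization is a recursive walk over the context array that replaces values whose keys match the sensitive list. A self-contained sketch (not the framework's `LogSanitizer`, which may also match keys case-insensitively or by substring):

```php
<?php

// Conceptual sketch of recursive context sanitization.
function sanitizeContext(array $context, array $sensitiveKeys): array
{
    foreach ($context as $key => $value) {
        if (is_array($value)) {
            // Recurse into nested context arrays
            $context[$key] = sanitizeContext($value, $sensitiveKeys);
        } elseif (in_array(strtolower((string) $key), $sensitiveKeys, true)) {
            $context[$key] = '[REDACTED]';
        }
    }

    return $context;
}

$clean = sanitizeContext(
    ['username' => 'john@example.com', 'password' => 'secret123'],
    ['password', 'api_key']
);
// $clean['password'] is now '[REDACTED]'; 'username' is untouched
```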
Production Logging Best Practices
1. Log Level Management
DO:
- Use warning level in production for operational awareness
- Use error for failures that need investigation
- Use critical/alert/emergency for conditions that need immediate action
DON'T:
- Never use debug level in production (performance impact)
- Avoid info level for high-frequency events
- Don't log sensitive data (passwords, tokens, PII)
2. Structured Logging
DO:
- Use JSON format for machine parsing
- Include consistent context (request_id, user_id, ip)
- Add relevant metadata (execution_time, memory_usage)
- Use consistent field names across all logs
DON'T:
- Avoid unstructured string concatenation
- Don't mix log formats (JSON vs plain text)
- Avoid dynamic field names
3. Performance Considerations
DO:
- Implement asynchronous logging for high-traffic systems
- Use log rotation to prevent disk space issues
- Monitor log volume and adjust thresholds
- Cache logger instances
DON'T:
- Never perform expensive operations in log statements
- Avoid logging in tight loops
- Don't log entire objects (use specific fields)
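Asynchronous logging can be approximated by buffering records in memory and writing them in one batch at the end of the request. A hedged sketch of that idea (class and method names are illustrative, not a framework API):

```php
<?php

// Sketch of a buffering handler: records are queued in memory and written
// in one batch at shutdown instead of one fwrite() per record.
final class BufferedHandler
{
    /** @var list<string> */
    private array $buffer = [];

    public function __construct(private string $file) {}

    public function handle(string $formattedLine): void
    {
        $this->buffer[] = $formattedLine;
    }

    public function flush(): void
    {
        if ($this->buffer !== []) {
            // LOCK_EX avoids interleaved writes from concurrent workers
            file_put_contents($this->file, implode('', $this->buffer), FILE_APPEND | LOCK_EX);
            $this->buffer = [];
        }
    }
}

// Flush once at the end of the request:
// register_shutdown_function([$handler, 'flush']);
```

The trade-off: a crash before `flush()` loses the buffered records, so critical-level entries may warrant an immediate write instead.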
4. Security
DO:
- Log all authentication and authorization events
- Log security exceptions (SQL injection, XSS attempts)
- Include request context for security analysis
- Implement log integrity checks
DON'T:
- Never log passwords, tokens, or sensitive PII
- Avoid logging entire request/response bodies
- Don't expose internal system paths in production logs
Troubleshooting
Problem: Logs not being written
# Check log directory permissions
ls -la storage/logs/
# Fix ownership and permissions (the web server user must be able to write)
chown -R www-data:www-data storage/logs/
chmod 755 storage/logs/
chmod 644 storage/logs/*.log
# Verify logger configuration
php console.php config:check --key=LOG_LEVEL
# Test logging
php console.php log:test --level=error
Problem: Log files growing too large
# Check log file sizes
du -sh storage/logs/*
# Manually rotate logs
mv storage/logs/application.log storage/logs/application-$(date +%Y%m%d).log
# Compress old logs
gzip storage/logs/application-2024*.log
# Force logrotate
logrotate -f /etc/logrotate.d/michaelschiemer
Problem: Performance degradation due to logging
# Increase log level to reduce volume
# .env: LOG_LEVEL=error
# Disable verbose processors
# Remove WebInfoProcessor, PerformanceProcessor in production
# Use asynchronous logging
# Implement QueuedLogHandler for high-traffic scenarios
See Also
- Security Patterns: docs/claude/security-patterns.md
- Performance Monitoring: docs/claude/performance-monitoring.md
- Error Handling: docs/claude/error-handling.md
- Deployment Guide: docs/deployment/production-prerequisites.md