# Performance Testing

Comprehensive performance testing infrastructure for the Custom PHP Framework.
## Overview

The performance testing suite includes:

- **Benchmarks** - Statistical performance measurement with P95/P99 percentiles
- **Load Tests** - Concurrent request simulation
- **Reports** - HTML, JSON, and Markdown report generation
## Directory Structure

```
tests/Performance/
├── Benchmarks/                      # Performance benchmarks
│   ├── RoutingBenchmark.php
│   ├── DatabaseBenchmark.php
│   └── CacheBenchmark.php
├── LoadTests/                       # Load testing utilities
│   ├── LoadTestRunner.php
│   └── LoadTestResult.php
├── Reports/                         # Generated reports
│   └── PerformanceReportGenerator.php
├── PerformanceTestCase.php          # Base class for benchmarks
└── PerformanceBenchmarkResult.php   # Result value object
```
## Running Performance Tests

### All Performance Tests

```bash
# Run all performance benchmarks
npm run test:perf

# Or directly with Pest
docker exec php ./vendor/bin/pest tests/Performance
```
### Specific Benchmarks

```bash
# Routing benchmarks
npm run test:perf:routing

# Database benchmarks
npm run test:perf:database

# Cache benchmarks
npm run test:perf:cache
```
### Generate Reports

```bash
# Generate performance report
npm run test:perf:report
```
## Benchmark Categories

### 1. Routing Performance

**File**: `tests/Performance/Benchmarks/RoutingBenchmark.php`

**Tests**:
- Static route matching
- Dynamic route matching (with parameters)
- Complex route matching (multiple parameters)
- Route matching with query parameters
- Route-not-found scenarios
- POST route matching

**Thresholds**:
- Static routes: <0.1ms average
- Dynamic routes: <0.5ms average
- Complex routes: <1.0ms average
### 2. Database Performance

**File**: `tests/Performance/Benchmarks/DatabaseBenchmark.php`

**Tests**:
- Simple SELECT queries
- SELECT with JOIN operations
- Bulk INSERT operations (100 records)
- Transaction performance
- Aggregate queries (COUNT, AVG, SUM)
- Complex WHERE conditions

**Thresholds**:
- Simple queries: <1.0ms average
- JOIN queries: <5.0ms average
- Bulk inserts: <50.0ms average (100 records)
- Transactions: <10.0ms average
### 3. Cache Performance

**File**: `tests/Performance/Benchmarks/CacheBenchmark.php`

**Tests**:
- Cache SET operations
- Cache GET operations (hit)
- Cache GET operations (miss)
- Cache remember pattern
- Batch SET operations (100 items)
- Large data caching (1MB)
- Cache DELETE operations
- Cache HAS operations

**Thresholds**:
- Cache SET: <0.5ms average
- Cache GET (hit): <0.2ms average
- Cache GET (miss): <0.3ms average
- Batch operations: <20.0ms average (100 items)
## Load Testing

### LoadTestRunner

Simulates concurrent users making requests to measure performance under load.

**Example Usage**:

```php
use Tests\Performance\LoadTests\LoadTestRunner;
use App\Framework\Core\ValueObjects\Duration;

$runner = new LoadTestRunner(baseUrl: 'https://localhost');

$result = $runner->run(
    endpoint: '/api/users',
    concurrentUsers: 50,
    requestsPerUser: 100,
    rampUpTime: Duration::fromSeconds(10)
);

echo $result->toString();
```
**Metrics**:
- Total requests
- Successful requests
- Error rate
- Response time (avg, min, max, median, P95, P99)
- Throughput (requests/second)
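These metrics reduce from raw per-request samples with simple arithmetic. The sketch below shows the reduction in illustrative Python (not the suite's PHP implementation; `summarize` and the field names are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class LoadTestSummary:
    total_requests: int
    successful_requests: int
    error_rate: float   # fraction of requests that failed
    throughput: float   # requests per second over the whole run

def summarize(statuses: list[int], duration_seconds: float) -> LoadTestSummary:
    """Aggregate raw per-request HTTP status codes into summary metrics."""
    total = len(statuses)
    ok = sum(1 for s in statuses if 200 <= s < 400)
    return LoadTestSummary(
        total_requests=total,
        successful_requests=ok,
        error_rate=(total - ok) / total,
        throughput=total / duration_seconds,
    )

# 98 successes and 2 server errors collected over a 2-second run
summary = summarize([200] * 98 + [500, 503], duration_seconds=2.0)
print(summary)
```

The response-time percentiles come from sorting the individual request timings, exactly as in the benchmark statistics described later in this document.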
## Report Generation

### HTML Reports

```php
use Tests\Performance\Reports\PerformanceReportGenerator;

$generator = new PerformanceReportGenerator();

$html = $generator->generateHtmlReport(
    benchmarkResults: $results,
    loadTestResult: $loadTestResult
);

$generator->saveReport($html, 'performance-report.html');
```

### JSON Reports

```php
$json = $generator->generateJsonReport(
    benchmarkResults: $results,
    loadTestResult: $loadTestResult
);

$generator->saveReport($json, 'performance-report.json');
```

### Markdown Reports

```php
$markdown = $generator->generateMarkdownReport(
    benchmarkResults: $results,
    loadTestResult: $loadTestResult
);

$generator->saveReport($markdown, 'performance-report.md');
```
## Creating Custom Benchmarks

### Step 1: Extend PerformanceTestCase

```php
<?php

use Tests\Performance\PerformanceTestCase;
use Tests\Performance\PerformanceBenchmarkResult;
use App\Framework\Performance\Contracts\PerformanceCollectorInterface;
use App\Framework\Performance\PerformanceCategory;

final readonly class MyCustomBenchmark extends PerformanceTestCase
{
    public function __construct(
        PerformanceCollectorInterface $collector,
        private MyService $service
    ) {
        parent::__construct($collector);
    }

    public function benchmarkMyOperation(): PerformanceBenchmarkResult
    {
        $result = $this->benchmark(
            operation: fn() => $this->service->performOperation(),
            iterations: 1000,
            name: 'My Custom Operation'
        );

        // Assert performance threshold
        $this->assertPerformanceThreshold(
            $result,
            maxAvgTimeMs: 10.0,
            maxMemoryBytes: 1024 * 1024 // 1MB
        );

        // Record metrics
        $this->recordBenchmark($result, PerformanceCategory::CUSTOM);

        return $result;
    }
}
```
### Step 2: Use the benchmark() Method

The `benchmark()` method provides:

- **Warmup run** - prevents cold-start effects from skewing results
- **High-resolution timing** - nanosecond precision via `hrtime()`
- **Statistical analysis** - avg, min, max, median, P95, P99
- **Memory tracking** - per-iteration memory usage
- **Throughput calculation** - operations per second
### Step 3: Set Performance Thresholds

```php
// Time threshold only
$this->assertPerformanceThreshold($result, maxAvgTimeMs: 5.0);

// Time and memory thresholds
$this->assertPerformanceThreshold(
    $result,
    maxAvgTimeMs: 10.0,
    maxMemoryBytes: 10 * 1024 * 1024 // 10MB
);
```
## Performance Metrics Explained

### Timing Metrics

- **Average Time**: Mean execution time across all iterations
- **Min Time**: Fastest single execution
- **Max Time**: Slowest single execution
- **Median Time**: Middle value of the sorted timings
- **P95 Time**: 95th percentile - 95% of executions are faster than this
- **P99 Time**: 99th percentile - 99% of executions are faster than this
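To see why percentiles matter more than the mean, consider a nearest-rank percentile over a sample with a couple of slow outliers. This is an illustrative Python sketch; real implementations differ in their rounding/interpolation convention:

```python
def percentile(sorted_samples: list[float], p: float) -> float:
    """Nearest-rank percentile over an already-sorted sample."""
    index = int(round(p / 100 * (len(sorted_samples) - 1)))
    return sorted_samples[index]

# 20 response times (ms): mostly fast, with two slow outliers.
samples = sorted([1.0] * 18 + [40.0, 50.0])
avg = sum(samples) / len(samples)

# The mean (5.4ms) hides that 1 in 10 requests took 40-50ms;
# P95 and P99 expose exactly that tail.
print(avg, percentile(samples, 50), percentile(samples, 95), percentile(samples, 99))
```

Here the average is 5.4ms while P95 is 40ms and P99 is 50ms: the tail that users actually feel is invisible in the mean.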
### Memory Metrics

- **Avg Memory**: Average memory used per iteration
- **Peak Memory**: Maximum memory increase during the benchmark

### Throughput Metrics

- **Operations/Second**: How many operations complete per second
- **Requests/Second**: For load tests, total requests served per second
## Best Practices

### 1. Benchmark Design

- **Sufficient Iterations**: Use enough iterations for statistical significance (at least 100; prefer 1000+)
- **Warmup**: The `benchmark()` method includes an automatic warmup run
- **Isolation**: Test one operation at a time
- **Realistic Data**: Use production-like data volumes
### 2. Performance Thresholds

- **Set Realistic Thresholds**: Base them on actual requirements, not arbitrary numbers
- **Consider Percentiles**: P95/P99 matter more than the average for user experience
- **Monitor Trends**: Track performance over time, not just absolute values
### 3. Load Testing

- **Ramp-Up**: Use a gradual ramp-up to avoid overwhelming the system
- **Realistic Concurrency**: Match expected production load
- **Error Handling**: Monitor error rates under load
- **Resource Monitoring**: Track CPU, memory, and database connections
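A gradual ramp-up simply spreads the users' start times across the ramp-up window instead of launching them all at once. A minimal sketch of that scheduling (illustrative Python; `ramp_up_schedule` is a hypothetical helper, not part of `LoadTestRunner`):

```python
def ramp_up_schedule(concurrent_users: int, ramp_up_seconds: float) -> list[float]:
    """Start-time offset (seconds) for each user, spread evenly over the window."""
    if concurrent_users <= 1:
        return [0.0]
    step = ramp_up_seconds / (concurrent_users - 1)
    return [round(i * step, 3) for i in range(concurrent_users)]

print(ramp_up_schedule(5, 10))  # users start at 0, 2.5, 5, 7.5, 10 seconds
```

With 50 users and a 10-second ramp-up, as in the earlier `LoadTestRunner` example, a new user would start roughly every 0.2 seconds.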
### 4. Reporting

- **Regular Reports**: Generate reports regularly to track trends
- **Multiple Formats**: Use HTML for humans, JSON for automation
- **Version Tracking**: Include version/commit info in reports
- **Historical Comparison**: Compare against previous runs
## Performance Targets

### Response Time Targets

- API Endpoints: <100ms P95
- Database Queries: <10ms P95
- Cache Operations: <1ms P95
- Route Matching: <0.5ms P95

### Throughput Targets

- API Throughput: >1000 req/sec
- Database Throughput: >500 queries/sec
- Cache Throughput: >10,000 ops/sec

### Resource Targets

- Memory: <512MB for typical requests
- CPU: <30% utilization at normal load
- Database Connections: <20 concurrent connections
## Troubleshooting

### Slow Benchmarks

**Issue**: Benchmarks take too long to run

**Solutions**:
- Reduce iterations for initial testing
- Check for N+1 query problems
- Profile with Xdebug or Blackfire
- Optimize database indexes

### High Memory Usage

**Issue**: Memory usage exceeds thresholds

**Solutions**:
- Use batch processing for large datasets
- Implement pagination
- Clear large arrays/objects when done
- Check for memory leaks

### Inconsistent Results

**Issue**: High variance in benchmark results

**Solutions**:
- Increase the iteration count
- Run on dedicated hardware
- Disable background processes
- Use P95/P99 instead of max
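A quick way to quantify "high variance" is the coefficient of variation (standard deviation divided by mean). This is a generic diagnostic sketch in Python, not a feature of the suite, and the ~10% cutoff below is a rule of thumb:

```python
import statistics

def coefficient_of_variation(samples_ms: list[float]) -> float:
    """Relative spread of timings: std dev / mean. High values mean noisy runs."""
    return statistics.stdev(samples_ms) / statistics.fmean(samples_ms)

stable = [10.0, 10.2, 9.9, 10.1, 10.0]   # ~1% spread: trustworthy average
noisy = [10.0, 25.0, 5.0, 40.0, 8.0]     # huge spread: rerun before concluding
print(coefficient_of_variation(stable), coefficient_of_variation(noisy))
```

If the coefficient stays above roughly 0.1 after increasing iterations, the environment (background load, CPU frequency scaling, shared hardware) is likely the culprit rather than the code under test.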
## Integration with PerformanceCollector

All benchmarks automatically integrate with the framework's PerformanceCollector:

```php
$this->recordBenchmark($result, PerformanceCategory::ROUTING);
```
This allows:
- Centralized performance tracking
- Real-time monitoring dashboards
- Historical performance analysis
- Alerting on performance degradation
## CI/CD Integration

### GitHub Actions Example

```yaml
name: Performance Tests

on: [push, pull_request]

jobs:
  performance:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Setup Docker
        run: make up
      - name: Run Performance Tests
        run: npm run test:perf
      - name: Generate Report
        run: npm run test:perf:report
      - name: Upload Report
        uses: actions/upload-artifact@v3
        with:
          name: performance-report
          path: tests/Performance/Reports/
```
## Resources

- `PerformanceTestCase`: Base class for all benchmarks
- `PerformanceBenchmarkResult`: Result value object
- `LoadTestRunner`: Concurrent load testing
- `PerformanceReportGenerator`: Report generation in multiple formats
- `PerformanceCollector`: Framework's performance tracking system
## Support

For performance testing questions or issues:

- Check this README
- Review existing benchmarks for examples
- Check the framework performance documentation
- Create a GitHub issue if the problem persists