# LiveComponents Performance Benchmarks

Comprehensive performance testing suite comparing Fragment-based rendering vs Full HTML rendering in LiveComponents.

## Overview

This benchmark suite measures and compares:

- **Rendering Speed**: Fragment updates vs Full HTML re-renders
- **Network Payload**: Data transfer size for different update strategies
- **DOM Manipulation Overhead**: Client-side update performance
- **Memory Consumption**: Memory footprint during updates
- **Cache Effectiveness**: Performance improvements from caching
- **Scalability**: Performance under different load scenarios

## Quick Start

### Prerequisites

```bash
# Ensure Playwright is installed
npm install

# Install browser binaries
npx playwright install chromium

# Ensure development server is running
make up
```

### Running Benchmarks

```bash
# Run all performance benchmarks
npx playwright test performance-benchmarks.spec.js

# Run specific benchmark
npx playwright test performance-benchmarks.spec.js --grep "Single small fragment"

# Run with visible browser (for debugging)
npx playwright test performance-benchmarks.spec.js --headed

# Run and generate report
npx playwright test performance-benchmarks.spec.js && node tests/e2e/livecomponents/generate-performance-report.js
```

### Generating Reports

```bash
# Generate both HTML and Markdown reports
node tests/e2e/livecomponents/generate-performance-report.js

# Generate only HTML report
node tests/e2e/livecomponents/generate-performance-report.js --format=html

# Generate only Markdown report
node tests/e2e/livecomponents/generate-performance-report.js --format=markdown
```

Reports are generated in `test-results/`:

- `performance-report.html` - Interactive HTML report with styling
- `performance-report.md` - Markdown report for documentation
- `benchmark-results.json` - Raw benchmark data

## Benchmark Scenarios

### 1. Single Small Fragment Update

**Tests:** Counter increment with single fragment vs full render

**Metrics:**
- Fragment update time (expected: <50ms)
- Full render time (expected: <150ms)
- Speedup percentage

**Use Case:** Small, frequent updates like notification badges, counters

### 2. Multiple Fragment Updates (5 fragments)

**Tests:** Updating 5 independent fragments simultaneously vs full render

**Metrics:**
- Fragment update time (expected: <100ms)
- Full render time (expected: <300ms)
- Speedup percentage

**Use Case:** Dashboard widgets, multi-section updates

### 3. Large Component Update (100 items)

**Tests:** Updating large list component with 100 items

**Metrics:**
- Fragment update time (expected: <200ms)
- Full render time (expected: <500ms)
- Speedup percentage

**Use Case:** Product lists, search results, data tables

### 4. Network Payload Size Comparison

**Tests:** Comparing data transfer sizes

**Metrics:**
- Fragment payload size (expected: <5KB)
- Full HTML payload size (expected: <50KB)
- Reduction percentage

**Use Case:** Bandwidth optimization, mobile performance

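How the payload comparison can be captured is easiest to see with Playwright's network events. The following is a minimal sketch, not the suite's actual implementation: the `/livecomponents/` URL filter, the selectors, and the 500ms settle wait are assumptions you would adapt to the real endpoint and markup.

```javascript
const { test, expect } = require('@playwright/test');

test('Benchmark: fragment payload size (sketch)', async ({ page }) => {
  let fragmentBytes = 0;

  // Record the size of the component-update response body as it arrives.
  page.on('response', async (response) => {
    if (response.url().includes('/livecomponents/')) {
      const body = await response.body();
      fragmentBytes = body.length;
    }
  });

  await page.goto('https://localhost/livecomponents/test/performance');
  await page.click('[data-component-id="counter:benchmark"] [data-action="increment"]');
  await page.waitForTimeout(500); // give the response listener time to fire

  console.log(`Fragment payload: ${fragmentBytes} bytes`);
  expect(fragmentBytes).toBeLessThan(5000); // THRESHOLDS.networkPayload.fragmentMax
});
```
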
### 5. Rapid Successive Updates (10 updates)

**Tests:** 10 consecutive updates as fast as possible

**Metrics:**
- Total fragment update time (expected: <500ms)
- Total full render time (expected: <1500ms)
- Average per-update time
- Speedup multiplier

**Use Case:** Real-time data updates, live feeds, typing indicators

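A back-to-back loop is enough to reproduce this scenario. The sketch below assumes the counter starts at 0 and that the `counter-value` fragment shows the running total; both assumptions come from the example test page later in this document, not from the spec file itself.

```javascript
const { test, expect } = require('@playwright/test');

test('Benchmark: rapid successive updates (sketch)', async ({ page }) => {
  await page.goto('https://localhost/livecomponents/test/performance');
  const counter = page.locator('[data-component-id="counter:benchmark"]');

  const start = Date.now();
  for (let i = 1; i <= 10; i++) {
    await counter.locator('[data-action="increment"]').click();
    // Wait for the fragment to reflect this update before firing the next one.
    await expect(counter.locator('[data-lc-fragment="counter-value"]')).toHaveText(String(i));
  }
  const total = Date.now() - start;

  console.log(`10 updates: ${total}ms total, ${(total / 10).toFixed(1)}ms average`);
  expect(total).toBeLessThan(500);
});
```
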
### 6. DOM Manipulation Overhead

**Tests:** Breaking down update time into network/server vs DOM manipulation

**Metrics:**
- Pure DOM update time (expected: <5ms)
- Network + server time (expected: <100ms)
- Total fragment update time (expected: <150ms)

**Use Case:** Understanding performance bottlenecks

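One way to isolate the pure DOM cost is to time a direct write to the fragment inside the browser, with no network or server involved. This is a sketch of the idea rather than the suite's exact breakdown; the selector and the replacement markup are illustrative.

```javascript
const { test } = require('@playwright/test');

test('Benchmark: pure DOM update cost (sketch)', async ({ page }) => {
  await page.goto('https://localhost/livecomponents/test/performance');

  // Time a direct DOM write on the fragment; subtracting this from the total
  // fragment update time approximates the network + server share.
  const domTime = await page.evaluate(() => {
    const fragment = document.querySelector('[data-lc-fragment="counter-value"]');
    const start = performance.now();
    fragment.innerHTML = '42';
    return performance.now() - start;
  });

  console.log(`Pure DOM update: ${domTime.toFixed(2)}ms`);
});
```
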
### 7. Memory Consumption Comparison

**Tests:** Memory usage over 50 updates (Chromium only - uses `performance.memory`)

**Metrics:**
- Fragment updates memory delta (expected: <1MB)
- Full renders memory delta (expected: <2MB)
- Memory reduction percentage

**Use Case:** Long-running applications, memory leak detection

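The measurement itself is a heap snapshot before and after the update loop. A minimal sketch, assuming the same counter component as above (the real benchmark may sample or aggregate differently):

```javascript
const { test } = require('@playwright/test');

test('Benchmark: memory delta over 50 updates (sketch)', async ({ page, browserName }) => {
  test.skip(browserName !== 'chromium', 'performance.memory is Chromium-only');

  await page.goto('https://localhost/livecomponents/test/performance');
  const heapUsed = () => page.evaluate(() => performance.memory.usedJSHeapSize);

  const before = await heapUsed();
  for (let i = 0; i < 50; i++) {
    await page.click('[data-component-id="counter:benchmark"] [data-action="increment"]');
  }
  const after = await heapUsed();

  const deltaKb = (after - before) / 1024;
  console.log(`Memory delta after 50 fragment updates: ${deltaKb.toFixed(0)} KB`);
});
```
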
### 8. Cache Effectiveness

**Tests:** Performance improvement from caching

**Metrics:**
- First update time (cold cache) (expected: <100ms)
- Average cached update time (expected: <80ms)
- Cache improvement percentage

**Use Case:** Repeated operations, frequently accessed data

## Performance Thresholds

```javascript
const THRESHOLDS = {
  fragmentRender: {
    small: 50,    // ms for single small fragment
    medium: 100,  // ms for 5-10 fragments
    large: 200    // ms for complex component
  },
  fullRender: {
    small: 150,   // ms for full render (small component)
    medium: 300,  // ms for full render (medium component)
    large: 500    // ms for full render (large component)
  },
  networkPayload: {
    fragmentMax: 5000, // bytes for fragment response
    fullMax: 50000     // bytes for full HTML response
  }
};
```

Thresholds are based on:

- **50ms**: Perceived as instant (Google Core Web Vitals)
- **100ms**: Feels responsive
- **200ms**: Noticeable but acceptable
- **500ms**: Maximum acceptable for interactive operations

## Understanding Results

### Benchmark Output

```
Fragment speedup: 67.3% faster than full render
Fragment: 32.45ms, Full: 98.76ms

✅ Benchmark: Single small fragment update
  Fragment Update Time: 32.45ms ≤ 50ms (threshold) ✅
  Full Render Time: 98.76ms ≤ 150ms (threshold) ✅
```

### Interpreting Metrics

**Speed Metrics (milliseconds)**:
- **<50ms**: Excellent - Perceived as instant
- **50-100ms**: Good - Feels responsive
- **100-200ms**: Acceptable - Noticeable but smooth
- **>200ms**: Needs improvement - User-noticeable delay

**Payload Size (bytes)**:
- **<1KB**: Excellent - Minimal network overhead
- **1-5KB**: Good - Acceptable for frequent updates
- **5-10KB**: Fair - Consider optimization
- **>10KB**: Large - May impact performance on slow connections

**Memory Delta (KB)**:
- **<100KB**: Excellent - Minimal memory footprint
- **100-500KB**: Good - Acceptable for normal operations
- **500KB-1MB**: Fair - Monitor for leaks
- **>1MB**: High - Investigate potential memory leaks

### Performance Report

The generated HTML report includes:

1. **Executive Summary**
   - Average performance improvement percentage
   - Best case scenario
   - Worst case scenario

2. **Detailed Results**
   - All benchmark metrics with pass/fail status
   - Grouped by scenario
   - Threshold comparisons

3. **Recommendations**
   - When to use fragments
   - When to use full render
   - Performance optimization tips

4. **Metrics Glossary**
   - Explanation of each metric
   - How to interpret results

## Customizing Benchmarks

### Adding New Benchmarks

```javascript
test('Benchmark: Your custom scenario', async ({ page }) => {
  // Measure fragment update
  const fragmentTime = await measureActionTime(
    page,
    'component:id',
    'actionName',
    { param: 'value' },
    { fragments: ['#fragment-id'] }
  );

  // Measure full render
  const fullTime = await measureActionTime(
    page,
    'component:id',
    'actionName',
    { param: 'value' }
  );

  // Store results
  storeBenchmarkResult(
    'Your Scenario',
    'Fragment Update Time',
    fragmentTime,
    100 // threshold in ms
  );

  storeBenchmarkResult(
    'Your Scenario',
    'Full Render Time',
    fullTime,
    300
  );

  // Assertions
  expect(fragmentTime).toBeLessThan(100);
  expect(fragmentTime).toBeLessThan(fullTime);
});
```

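`measureActionTime` is a helper defined in `performance-benchmarks.spec.js`; its implementation is not shown in this document. As a rough mental model, it does something like the following sketch - `window.LiveComponents.call` is a hypothetical stand-in for however the client actually triggers actions, and the assumption that it resolves once the DOM has been patched is exactly that, an assumption:

```javascript
// Hypothetical sketch of the helper used above, not its real implementation.
async function measureActionTime(page, componentId, actionName, params = {}, options = {}) {
  const start = Date.now();
  await page.evaluate(
    (args) =>
      // Placeholder for the project's real client API; assumed to return a
      // promise that resolves after the resulting DOM update has been applied.
      window.LiveComponents.call(args.componentId, args.actionName, args.params, args.options),
    { componentId, actionName, params, options }
  );
  return Date.now() - start;
}
```
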
### Modifying Thresholds

Edit the `THRESHOLDS` constant in `performance-benchmarks.spec.js`:

```javascript
const THRESHOLDS = {
  fragmentRender: {
    small: 30,   // Stricter threshold
    medium: 80,
    large: 150
  },
  // ...
};
```

### Adding Custom Metrics

```javascript
// Custom metric measurement
const customMetric = await page.evaluate(() => {
  // Your custom measurement logic
  const startTime = performance.now();
  // ... perform operation ...
  const endTime = performance.now();
  return endTime - startTime;
});

// Store custom metric
storeBenchmarkResult(
  'Custom Scenario',
  'Custom Metric Name',
  customMetric,
  threshold,
  'ms' // or 'bytes', or custom unit
);
```

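Like `measureActionTime`, `storeBenchmarkResult` lives in the spec file. Conceptually it only has to accumulate rows and write them to `test-results/benchmark-results.json` for the report generator; the field names in this sketch are assumptions, so check them against the real helper before relying on them:

```javascript
const fs = require('fs');
const path = require('path');

const results = [];

// Collect one row per metric; `passed` compares the measured value to its threshold.
function storeBenchmarkResult(scenario, metric, value, threshold, unit = 'ms') {
  results.push({ scenario, metric, value, threshold, unit, passed: value <= threshold });
}

// Flush everything once the suite is done (e.g. from test.afterAll in the spec file).
function flushResults() {
  fs.mkdirSync('test-results', { recursive: true });
  fs.writeFileSync(
    path.join('test-results', 'benchmark-results.json'),
    JSON.stringify(results, null, 2)
  );
}
```
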
## Test Page Requirements

Benchmarks assume the following test page exists:

**URL:** `https://localhost/livecomponents/test/performance`

**Required Components:**

1. **Counter Component** (`counter:benchmark`)
   - Actions: `increment`, `reset`
   - Fragments: `#counter-value`

2. **List Component** (`list:benchmark`)
   - Actions: `updateItems({ count: number })`
   - Fragments: `#item-1`, `#item-2`, etc.

3. **Product List Component** (`product-list:benchmark`)
   - Actions: `loadItems({ count: number })`
   - Fragments: `#item-list`

Example test page structure:

```html
<div data-component-id="counter:benchmark">
  <div data-lc-fragment="counter-value">0</div>
  <button data-action="increment">Increment</button>
  <button data-action="reset">Reset</button>
</div>

<div data-component-id="list:benchmark">
  <div data-lc-fragment="item-1">Item 1</div>
  <div data-lc-fragment="item-2">Item 2</div>
  <!-- ... more items ... -->
</div>

<div data-component-id="product-list:benchmark">
  <div data-lc-fragment="item-list">
    <!-- Product list items -->
  </div>
</div>
```

## Continuous Integration

### GitHub Actions Integration

```yaml
name: Performance Benchmarks

on:
  push:
    branches: [main, develop]
  pull_request:
    branches: [main]

jobs:
  benchmark:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3

      - uses: actions/setup-node@v3
        with:
          node-version: '18'

      - name: Install dependencies
        run: npm ci

      - name: Install Playwright
        run: npx playwright install --with-deps chromium

      - name: Start dev server
        run: make up

      - name: Run performance benchmarks
        run: npx playwright test performance-benchmarks.spec.js

      - name: Generate report
        if: always()
        run: node tests/e2e/livecomponents/generate-performance-report.js

      - name: Upload benchmark results
        if: always()
        uses: actions/upload-artifact@v3
        with:
          name: performance-benchmarks
          path: |
            test-results/benchmark-results.json
            test-results/performance-report.html
            test-results/performance-report.md

      - name: Comment PR with results
        if: github.event_name == 'pull_request'
        uses: actions/github-script@v6
        with:
          script: |
            const fs = require('fs');
            const report = fs.readFileSync('test-results/performance-report.md', 'utf8');
            github.rest.issues.createComment({
              issue_number: context.issue.number,
              owner: context.repo.owner,
              repo: context.repo.repo,
              body: report
            });
```

### Performance Regression Detection

Track benchmark results over time and fail CI if performance degrades:

```javascript
// In your CI script
const fs = require('fs');

const currentResults = JSON.parse(fs.readFileSync('test-results/benchmark-results.json'));
const baselineResults = JSON.parse(fs.readFileSync('baseline-results.json'));

const regressions = detectRegressions(currentResults, baselineResults, {
  threshold: 0.1 // 10% regression tolerance
});

if (regressions.length > 0) {
  console.error('Performance regressions detected:');
  regressions.forEach(r => console.error(`  ${r.scenario}: ${r.metric} - ${r.change}% slower`));
  process.exit(1);
}
```

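`detectRegressions` is not provided by the suite, so you would supply it yourself. A minimal sketch, assuming `benchmark-results.json` is an array of `{ scenario, metric, value }` rows (verify the actual shape before using this):

```javascript
// Flags every metric whose value grew by more than the relative tolerance.
function detectRegressions(current, baseline, { threshold = 0.1 } = {}) {
  const key = (row) => `${row.scenario}::${row.metric}`;
  const baselineByKey = new Map(baseline.map((row) => [key(row), row]));

  return current.flatMap((row) => {
    const base = baselineByKey.get(key(row));
    if (!base || base.value === 0) return []; // new metric or unusable baseline
    const change = (row.value - base.value) / base.value;
    if (change <= threshold) return [];
    return [{ scenario: row.scenario, metric: row.metric, change: (change * 100).toFixed(1) }];
  });
}
```
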
## Troubleshooting

### Benchmarks Timing Out

```javascript
// Increase timeout for specific test
test('slow benchmark', async ({ page }) => {
  test.setTimeout(60000); // 60 seconds
  // ...
});
```

### Inconsistent Results

**Causes:**
- Network latency variations
- Server load fluctuations
- Background processes
- Browser cache state

**Solutions:**
- Run multiple iterations and average the results (see the sketch below)
- Reset browser state between runs (e.g. `await page.context().clearCookies()`; a fresh browser context also starts with an empty cache)
- Use `--workers=1` for serial execution
- Run on dedicated test infrastructure

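A small helper makes the "average over several iterations" advice concrete. This is a sketch layered on top of the suite's `measureActionTime`; the iteration count and the example action are arbitrary:

```javascript
// Repeat a measurement and return the mean, smoothing out run-to-run noise.
async function averageOver(iterations, measureOnce) {
  const samples = [];
  for (let i = 0; i < iterations; i++) {
    samples.push(await measureOnce());
  }
  return samples.reduce((sum, value) => sum + value, 0) / samples.length;
}

// Usage inside a test:
// const avg = await averageOver(5, () =>
//   measureActionTime(page, 'counter:benchmark', 'increment', {}, { fragments: ['#counter-value'] })
// );
```
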
### Memory API Not Available

Firefox and Safari don't support `performance.memory`. Memory benchmarks will be skipped on these browsers with a console log:

```
Memory API not available (Firefox/Safari)
```

To test memory consumption, use Chromium:

```bash
npx playwright test performance-benchmarks.spec.js --project=chromium
```

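If you add your own memory measurements, the same skip behaviour is easy to reproduce with a runtime feature check. A minimal sketch (the existing spec may guard this differently):

```javascript
// Inside a Playwright test: bail out gracefully when the API is missing.
const memorySupported = await page.evaluate(() => typeof performance.memory !== 'undefined');
if (!memorySupported) {
  console.log('Memory API not available (Firefox/Safari)');
  test.skip();
}
```
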
## Best Practices

### Running Benchmarks

1. **Consistent Environment**
   - Run on the same hardware for comparability
   - Close unnecessary applications
   - Use a stable network connection
   - Avoid running during system updates

2. **Multiple Runs**
   - Run benchmarks 3-5 times
   - Average results for stability
   - Discard outliers (>2 standard deviations)

3. **Baseline Tracking**
   - Save baseline results for comparison (see the sketch below)
   - Track trends over time
   - Alert on significant regressions

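Saving a baseline is just keeping a copy of a known-good run under the filename the regression check above reads; for example:

```javascript
// Promote the current results to the new baseline after a verified good run.
const fs = require('fs');
fs.copyFileSync('test-results/benchmark-results.json', 'baseline-results.json');
```
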
### Analyzing Results

1. **Focus on Trends**
   - Single outliers may be noise
   - Consistent patterns indicate real issues
   - Compare relative improvements, not absolute numbers

2. **Context Matters**
   - Different devices have different capabilities
   - Network conditions affect results
   - Browser engines perform differently

3. **Actionable Insights**
   - Identify the biggest bottlenecks
   - Prioritize high-impact optimizations
   - Validate improvements with re-runs

## Resources

- [Playwright Performance Testing](https://playwright.dev/docs/test-advanced#measuring-performance)
- [Web Performance Metrics](https://web.dev/metrics/)
- [LiveComponents Performance Guide](../../../src/Framework/LiveComponents/docs/PERFORMANCE-GUIDE.md)
- [Chrome DevTools Performance](https://developer.chrome.com/docs/devtools/performance/)

## Support

For issues or questions:

1. Review this documentation
2. Check the test output for specific errors
3. Consult the LiveComponents performance guide
4. Create a GitHub issue with the benchmark results attached