What It Is
Every recommendation in Korvex carries two key metrics: confidence (how certain we are that this recommendation is correct, 0-100) and impact (how much improvement we predict if implemented, expressed as an ROI multiple). These aren't static — they're continuously updated as new evidence arrives and calibrated by real outcomes from previously implemented recommendations.
Why It Matters for Your SEO
Not all recommendations are equal. A high-confidence, high-impact recommendation should be prioritised over a speculative suggestion. Confidence and impact scoring helps you:
- Focus resources on recommendations most likely to succeed
- Avoid implementing changes based on thin evidence
- Build a track record that improves future predictions
- Calculate expected ROI before committing resources
How Korvex Measures It
Confidence Score (0-100)
Confidence starts at 50 (neutral) and is adjusted by evidence from six sources:
| Evidence Source | Potential Boost | Potential Penalty |
|---|---|---|
| Technical issues confirmed | Up to +20 | — |
| GSC data supports | Up to +15 | — |
| Historical success patterns | Up to +20 | Up to -15 |
| Competitor gap confirms | Up to +15 | — |
| Gap analysis severity | Up to +15 | — |
| Outcome track record | Up to +20 | Up to -15 |
Impact Prediction
Impact is expressed as an ROI multiple based on:
- Estimated traffic change from the improvement
- Historical outcome data from similar recommendations
- CTR-by-position curves for ranking change predictions
- Revenue attribution where GA4 conversion data is available
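One plausible way these inputs combine into an ROI multiple is predicted return divided by implementation cost. The formula below is an illustrative assumption, not Korvex's actual impact model, and all names are hypothetical:

```python
# Illustrative sketch only: one plausible way the impact inputs above
# combine into an ROI multiple. The formula and names are assumptions,
# not Korvex's actual impact model.

def roi_multiple(extra_clicks, revenue_per_click, implementation_cost):
    """Predicted return divided by the cost of implementing the change."""
    return (extra_clicks * revenue_per_click) / implementation_cost

# 90 extra monthly clicks at ~$4 revenue per click (from GA4 attribution)
# against a $120 implementation cost yields a 3.0x multiple.
```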
Winner Selection
Recommendations above 70% confidence are surfaced to the Strategy Actions page, sorted by confidence × ROI_multiple.
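The selection step can be sketched in a few lines: filter to recommendations above the 70-confidence threshold, then sort by confidence × ROI_multiple. The tuple shape below is an illustrative assumption:

```python
# Sketch of winner selection: keep recommendations above the
# 70-confidence threshold, sort by confidence x ROI multiple.
# The (name, confidence, roi_multiple) tuple shape is an assumption.

def select_winners(recs):
    """recs: iterable of (name, confidence, roi_multiple) tuples."""
    winners = [r for r in recs if r[1] > 70]
    return sorted(winners, key=lambda r: r[1] * r[2], reverse=True)

recs = [
    ("fix-titles", 85, 2.0),   # score: 85 x 2.0 = 170
    ("add-schema", 72, 3.0),   # score: 72 x 3.0 = 216
    ("new-blog", 60, 5.0),     # filtered out: confidence <= 70
]
```

Note that a lower-confidence recommendation can still rank first when its ROI multiple is high enough, as "add-schema" does here.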
How to Improve Your Score
- Implement high-confidence recommendations — each outcome (positive or negative) calibrates future predictions
- Provide revenue data — connecting GA4 conversion tracking improves ROI accuracy
- Track outcomes — mark recommendations as completed/failed to feed the calibration loop
- Review confidence trends — rising confidence on a recommendation means accumulating evidence
- Use the Simulator — test "what if" scenarios before committing to implementation
Confidence Delta Table
| Source | Condition | Delta |
|---|---|---|
| page_scores | Per technical issue found | +2.5 (max +20) |
| gsc | High impressions (>100), low CTR (<2%) | +10 |
| gsc | Cannibalisation confirmed | +15 |
| winning_patterns | Success rate ≥ 80% | +20 |
| winning_patterns | Success rate 60-79% | +10 |
| winning_patterns | Success rate 40-59% | +5 |
| competitors | Entity gap ratio ≥ 2.0x | +15 |
| competitors | Entity gap ratio 1.5-1.99x | +10 |
| competitors | Entity gap ratio 1.2-1.49x | +5 |
| gap_analysis | Severity = critical | +15 |
| gap_analysis | Severity = high | +10 |
| gap_analysis | Severity = medium | +5 |
| outcome_patterns | 10+ implementations, 80%+ success | +20 |
| outcome_patterns | 5+ implementations, 70%+ success | +15 |
| outcome_patterns | 3+ implementations, 60%+ success | +10 |
| outcome_patterns | 5+ implementations, <40% success | -15 |
| outcome_patterns | 3+ implementations, <50% success | -10 |
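The delta table can be applied mechanically: start from the neutral 50, add each matching delta, and clamp to 0-100. A minimal sketch of that pass (the evidence dictionary's field names are assumptions, not Korvex's schema):

```python
# Minimal sketch of the confidence delta logic described above.
# The evidence dict's field names are illustrative assumptions,
# not Korvex's actual schema.

def confidence_score(evidence: dict) -> float:
    """Start at the neutral 50, apply evidence deltas, clamp to 0-100."""
    score = 50.0

    # page_scores: +2.5 per confirmed technical issue, capped at +20
    score += min(evidence.get("technical_issues", 0) * 2.5, 20)

    # gsc: high impressions with low CTR, and confirmed cannibalisation
    if evidence.get("impressions", 0) > 100 and evidence.get("ctr", 1.0) < 0.02:
        score += 10
    if evidence.get("cannibalisation_confirmed", False):
        score += 15

    # winning_patterns: tiered boost by historical success rate
    rate = evidence.get("pattern_success_rate")
    if rate is not None:
        if rate >= 0.80:
            score += 20
        elif rate >= 0.60:
            score += 10
        elif rate >= 0.40:
            score += 5

    # competitors: tiered boost by entity gap ratio
    ratio = evidence.get("entity_gap_ratio", 0)
    if ratio >= 2.0:
        score += 15
    elif ratio >= 1.5:
        score += 10
    elif ratio >= 1.2:
        score += 5

    # gap_analysis: boost by severity
    score += {"critical": 15, "high": 10, "medium": 5}.get(
        evidence.get("gap_severity"), 0)

    # outcome_patterns: track-record boosts and penalties
    n = evidence.get("implementations", 0)
    success = evidence.get("outcome_success_rate", 0)
    if n >= 10 and success >= 0.80:
        score += 20
    elif n >= 5 and success >= 0.70:
        score += 15
    elif n >= 3 and success >= 0.60:
        score += 10
    elif n >= 5 and success < 0.40:
        score -= 15
    elif n >= 3 and success < 0.50:
        score -= 10

    return max(0.0, min(100.0, score))
```

With no evidence at all the score stays at the neutral 50; a poor track record (e.g. five implementations with a 30% success rate) pulls it down to 35.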
Calibration
Confidence calibration uses services.scis.confidence_calibrator:
- Compares predicted outcomes to actual outcomes for implemented recommendations
- Adjusts the confidence scoring model to reduce prediction error over time
- Minimum dataset: 10+ implemented recommendations for meaningful calibration
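A hedged sketch of the comparison step: average the predicted confidence across implemented recommendations and compare it with the observed success rate, refusing to calibrate below the 10-outcome minimum. The function and record shape are assumptions, not the actual services.scis.confidence_calibrator interface:

```python
# Hedged sketch of outcome calibration: compare mean predicted
# confidence with the observed success rate and derive a correction.
# Names and record shape are assumptions, not the actual
# services.scis.confidence_calibrator interface.

def calibration_offset(records, min_n=10):
    """records: list of (predicted_confidence, succeeded: bool) pairs.

    Returns a delta to subtract from future confidence scores,
    or None when there are too few implemented recommendations.
    """
    if len(records) < min_n:
        return None  # need 10+ outcomes for meaningful calibration
    mean_predicted = sum(c for c, _ in records) / len(records)
    actual_rate = 100 * sum(1 for _, ok in records if ok) / len(records)
    # A positive offset means the model is over-confident on average.
    return mean_predicted - actual_rate
```

For example, ten recommendations predicted at 80 confidence of which seven succeeded imply the model runs about 10 points over-confident.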
CTR by Position (for traffic impact estimation)
| Position | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|---|---|---|---|---|---|---|---|---|---|---|
| CTR | 28% | 15% | 11% | 8% | 6% | 5% | 4% | 3% | 2.5% | 2% |
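These CTR-by-position figures turn a predicted ranking move into a click estimate: multiply the query's impressions by the change in CTR. A minimal sketch (the function name, call shape, and the ~1% fallback for positions beyond 10 are assumptions):

```python
# Sketch of how the CTR-by-position curve above turns a predicted
# ranking move into a traffic estimate. The function name and the
# ~1% fallback beyond position 10 are illustrative assumptions.

CTR_BY_POSITION = {1: 0.28, 2: 0.15, 3: 0.11, 4: 0.08, 5: 0.06,
                   6: 0.05, 7: 0.04, 8: 0.03, 9: 0.025, 10: 0.02}

def estimated_click_change(impressions, current_pos, predicted_pos):
    """Predicted click change for a ranking move on one query."""
    current = CTR_BY_POSITION.get(current_pos, 0.01)     # beyond 10: ~1%
    predicted = CTR_BY_POSITION.get(predicted_pos, 0.01)
    return impressions * (predicted - current)

# Moving from position 5 to position 2 on a 1,000-impression query:
# 1000 * (0.15 - 0.06) = roughly +90 clicks
```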
Related Concepts
- SCIS — confidence scoring is Stage 3 of the SCIS pipeline
- The Closed Loop — outcomes feed calibration
- The Tier System — tier gates include completion rates that influence confidence