What It Is

Every recommendation in Korvex carries two key metrics: confidence (how certain we are that this recommendation is correct, 0-100) and impact (how much improvement we predict if implemented, expressed as an ROI multiple). These aren't static — they're continuously updated as new evidence arrives and calibrated by real outcomes from previously implemented recommendations.

Why It Matters for Your SEO

Not all recommendations are equal. A high-confidence, high-impact recommendation should be prioritised over a speculative suggestion. Confidence and impact scoring helps you:

  • Focus resources on recommendations most likely to succeed
  • Avoid implementing changes based on thin evidence
  • Build a track record that improves future predictions
  • Calculate expected ROI before committing resources

How Korvex Measures It

Confidence Score (0-100)

Confidence starts at 50 (neutral) and is adjusted by evidence from six sources:

| Evidence Source | Potential Boost | Potential Penalty |
| --- | --- | --- |
| Technical issues confirmed | Up to +20 | |
| GSC data supports | Up to +15 | |
| Historical success patterns | Up to +20 | Up to -15 |
| Competitor gap confirms | Up to +15 | |
| Gap analysis severity | Up to +15 | |
| Outcome track record | Up to +20 | Up to -15 |
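The aggregation described above can be sketched in a few lines. This is a minimal illustration, assuming each evidence source yields a signed delta; the function name and exact clamping behaviour are assumptions, not Korvex's actual implementation.

```python
BASE_CONFIDENCE = 50  # neutral starting point

def confidence_score(evidence_deltas: list[float]) -> float:
    """Sum signed evidence deltas onto the neutral base, clamped to 0-100."""
    score = BASE_CONFIDENCE + sum(evidence_deltas)
    return max(0.0, min(100.0, score))

# Example: technical issues confirmed (+20), GSC data supports (+15),
# weak outcome track record (-15)
print(confidence_score([20, 15, -15]))  # prints 70
```

Note that boosts cannot push a score past 100, and penalties cannot drop it below 0.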

Impact Prediction

Impact is expressed as an ROI multiple based on:

  • Estimated traffic change from the improvement
  • Historical outcome data from similar recommendations
  • CTR-by-position curves for ranking change predictions
  • Revenue attribution where GA4 conversion data is available

Winner Selection

Recommendations with confidence above 70 are surfaced to the Strategy Actions page, sorted by confidence × ROI_multiple (highest first).
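The filter-and-sort step can be sketched as follows; the record fields (`confidence`, `roi_multiple`) are illustrative, not Korvex's actual schema.

```python
def select_winners(recommendations: list[dict]) -> list[dict]:
    """Keep recommendations with confidence above 70, ranked by confidence * ROI."""
    eligible = [r for r in recommendations if r["confidence"] > 70]
    return sorted(eligible,
                  key=lambda r: r["confidence"] * r["roi_multiple"],
                  reverse=True)

recs = [
    {"id": "a", "confidence": 85, "roi_multiple": 2.0},
    {"id": "b", "confidence": 60, "roi_multiple": 9.0},  # below threshold, excluded
    {"id": "c", "confidence": 75, "roi_multiple": 3.0},
]
print([r["id"] for r in select_winners(recs)])  # prints ['c', 'a']
```

Note that a high ROI multiple cannot rescue a low-confidence recommendation: the confidence gate applies first, then the product determines ranking (here 75 × 3.0 = 225 outranks 85 × 2.0 = 170).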

How to Improve Your Score

  1. Implement high-confidence recommendations — each outcome (positive or negative) calibrates future predictions
  2. Provide revenue data — connecting GA4 conversion tracking improves ROI accuracy
  3. Track outcomes — mark recommendations as completed/failed to feed the calibration loop
  4. Review confidence trends — rising confidence on a recommendation means accumulating evidence
  5. Use the Simulator — test "what if" scenarios before committing to implementation
<details> <summary>Technical Deep Dive</summary>

Confidence Delta Table

| Source | Condition | Delta |
| --- | --- | --- |
| page_scores | Per technical issue found | +2.5 (max +20) |
| gsc | High impressions (>100), low CTR (<2%) | +10 |
| gsc | Cannibalisation confirmed | +15 |
| winning_patterns | Success rate ≥ 80% | +20 |
| winning_patterns | Success rate 60-79% | +10 |
| winning_patterns | Success rate 40-59% | +5 |
| competitors | Entity gap ratio ≥ 2.0x | +15 |
| competitors | Entity gap ratio 1.5-1.99x | +10 |
| competitors | Entity gap ratio 1.2-1.49x | +5 |
| gap_analysis | Severity = critical | +15 |
| gap_analysis | Severity = high | +10 |
| gap_analysis | Severity = medium | +5 |
| outcome_patterns | 10+ implementations, 80%+ success | +20 |
| outcome_patterns | 5+ implementations, 70%+ success | +15 |
| outcome_patterns | 3+ implementations, 60%+ success | +10 |
| outcome_patterns | 5+ implementations, <40% success | -15 |
| outcome_patterns | 3+ implementations, <50% success | -10 |
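To make the table concrete, here is an illustrative translation of the `gsc` and `gap_analysis` rows into code. The function signatures and input fields are assumptions for the sketch, not Korvex's actual internals.

```python
def gsc_delta(impressions: int, ctr: float, cannibalisation: bool) -> float:
    """Confidence delta from GSC evidence, per the delta table."""
    delta = 0.0
    if impressions > 100 and ctr < 0.02:  # high impressions, low CTR
        delta += 10
    if cannibalisation:                   # cannibalisation confirmed
        delta += 15
    return delta

def gap_analysis_delta(severity: str) -> float:
    """Confidence delta from gap-analysis severity; unknown severities add 0."""
    return {"critical": 15, "high": 10, "medium": 5}.get(severity, 0)

print(gsc_delta(500, 0.01, True))      # prints 25.0
print(gap_analysis_delta("critical"))  # prints 15
```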

Calibration

Confidence calibration uses `services.scis.confidence_calibrator`:

  • Compares predicted outcomes to actual outcomes for implemented recommendations
  • Adjusts the confidence scoring model to reduce prediction error over time
  • Minimum dataset: 10+ implemented recommendations for meaningful calibration
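A minimal sketch of the comparison step, in the spirit of `services.scis.confidence_calibrator`. The function name, the (prediction, outcome) record shape, and the simple offset rule are assumptions; the real calibrator's adjustment model may be more sophisticated.

```python
def calibration_offset(records: list[tuple[float, bool]]) -> float:
    """Gap between actual and predicted success, on the 0-100 scale.

    records: (predicted_confidence, succeeded) pairs for implemented
    recommendations. A positive result means predictions ran too
    pessimistic; negative means too optimistic. Returns 0.0 below the
    10-record minimum, where calibration is not meaningful.
    """
    if len(records) < 10:
        return 0.0
    actual_rate = 100 * sum(ok for _, ok in records) / len(records)
    mean_predicted = sum(conf for conf, _ in records) / len(records)
    return actual_rate - mean_predicted

# Ten implementations predicted at 80 confidence, 7 succeeded:
# 70% actual vs 80 predicted -> offset of -10 (predictions too optimistic)
```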

CTR by Position (for traffic impact estimation)

Position 1: 28%, 2: 15%, 3: 11%, 4: 8%, 5: 6%, 6: 5%, 7: 4%, 8: 3%, 9: 2.5%, 10: 2%
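Applying this curve to a predicted ranking change gives the traffic-impact estimate. The sketch below encodes the curve above; the function name and the ~1% fallback CTR for positions beyond 10 are assumptions.

```python
CTR_BY_POSITION = {1: 0.28, 2: 0.15, 3: 0.11, 4: 0.08, 5: 0.06,
                   6: 0.05, 7: 0.04, 8: 0.03, 9: 0.025, 10: 0.02}

def estimated_click_change(impressions: int,
                           current_pos: int,
                           target_pos: int) -> float:
    """Predicted change in clicks if a query moves between positions."""
    current_ctr = CTR_BY_POSITION.get(current_pos, 0.01)  # assumed floor beyond 10
    target_ctr = CTR_BY_POSITION.get(target_pos, 0.01)
    return impressions * (target_ctr - current_ctr)

# 10,000 monthly impressions moving from position 8 to position 3:
# (11% - 3%) * 10,000 -> roughly 800 additional clicks per month
```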

Related

  • SCIS — confidence scoring is Stage 3 of the SCIS pipeline
  • The Closed Loop — outcomes feed calibration
  • The Tier System — tier gates include completion rates that influence confidence
</details>
Last updated: 2026-03-20