Methodology

Every number traces back to data.

Transparent, evidence-based scoring. No black boxes. No single-model bias. Every score in Korvex is calculated from real data, validated by multiple LLMs, and verified against actual outcomes.

Scoring pipeline: 12 daily phases

1. Data Collection: GSC, GA4, DataForSEO, BrightLocal, NLP APIs

2. Processing: entity extraction, content analysis, technical audit, link graph

3. Scoring: 40+ factors, 3-LLM consensus, Koray methodology

4. Output: page scores, recommendations, predictions, reports

Most SEO scores are opinions. Ours are evidence.

If you cannot see the methodology, you cannot trust the score.

Black-box scores are not actionable

Most SEO tools give you a number without explaining how they got it. You cannot improve what you do not understand. If the methodology is opaque, you are optimising blindly.

Single-model bias is real

Tools that rely on a single AI model inherit that model's biases and blind spots. GPT-4 and Claude have different strengths. Using just one gives you a skewed perspective on content quality.

Predictions without accountability are guesses

Any tool can predict ranking improvements. Few measure whether those predictions were right. Without a feedback loop, prediction models never improve and you have no way to know which recommendations actually work.

Six data sources. One unified score.

Korvex pulls from official APIs, not scraped data or estimates. Every input is verifiable.

Google Search Console

Direct API connection to your verified GSC property. Korvex collects impressions, clicks, CTR, and average position across 12 dimension types including queries, pages, devices, and countries. This is official Google data, not estimates.

Google Analytics 4

Full GA4 integration across 11 dimension types. Sessions, users, engagement rate, conversions, and revenue broken down by landing page, traffic source, device, and geography. Batch API calls keep data collection efficient.

Multi-LLM consensus

Claude, GPT-4, and Gemini independently score content quality. The final score is a weighted consensus, not a single model's opinion. This eliminates the bias any individual LLM introduces and produces more stable, reliable scores.

Entity extraction pipeline

Google's Natural Language API extracts entities from every page. Claude and Gemini validate salience scores. Results are stored in a Neo4j knowledge graph with Qdrant vector embeddings for semantic search and gap analysis.
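The validation step can be pictured as a simple cross-check: each validator assigns a salience to a candidate entity, and only entities whose averaged salience clears a bar are kept. The sketch below is a minimal local illustration of that idea; the function name, the threshold, and the example values are assumptions, not Korvex's actual pipeline code.

```python
# Illustrative salience cross-validation: average the salience assigned
# by each validator (NLP API, Claude, Gemini) and keep entities that
# clear a threshold. Threshold and names are hypothetical.
def validated_entities(candidates: dict[str, list[float]],
                       threshold: float = 0.1) -> dict[str, float]:
    """candidates maps entity -> salience scores from each validator."""
    kept = {}
    for entity, saliences in candidates.items():
        mean_salience = sum(saliences) / len(saliences)
        if mean_salience >= threshold:
            kept[entity] = round(mean_salience, 3)
    return kept

ents = validated_entities({
    "change management": [0.42, 0.38, 0.45],  # agreed-on, high salience
    "misc footer text":  [0.02, 0.01, 0.03],  # noise, filtered out
})
print(ents)  # {'change management': 0.417}
```

Validated entities would then be written to the knowledge graph and embedded for semantic gap analysis, per the description above.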

Ranking predictions

LambdaMART and XGBoost models trained on your historical data predict future ranking movements. Every prediction includes a confidence score. Models are retrained as new data arrives, so accuracy improves over time.
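One common way to attach a confidence score to a prediction is to measure disagreement across an ensemble: the more the individual learners agree, the higher the confidence. The sketch below is a bagging-style heuristic chosen for illustration only; it is not Korvex's actual LambdaMART/XGBoost code, and every name in it is hypothetical.

```python
# Illustrative confidence heuristic: derive confidence from the spread
# of per-member predictions in an ensemble. Not Korvex's real model code.
from statistics import mean, pstdev

def predict_with_confidence(member_outputs: list[float]) -> tuple[float, float]:
    """Return (predicted position change, confidence in (0, 1]).
    Lower spread across ensemble members -> higher confidence."""
    prediction = mean(member_outputs)
    spread = pstdev(member_outputs)
    confidence = 1.0 / (1.0 + spread)  # simple monotone mapping
    return prediction, confidence

# Four members predict the page will gain roughly two positions.
pred, conf = predict_with_confidence([-2.1, -1.8, -2.4, -2.0])
```

Retraining on new outcome data, as described above, would shrink this disagreement over time and push confidence scores up.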

Outcome tracking

Every recommendation's impact is measured. Baseline metrics are captured at implementation, outcomes are tracked over time, and prediction models are recalibrated based on actual results. This is a true closed-loop system.

Koray Scoring

40+ factors. Four categories. Complete coverage.

Each page is scored against the ranking factors that matter most, weighted by their correlation with actual search performance.

Content Quality (30%): topic depth, entity coverage, readability, content freshness, word count, heading structure

Technical Health (20%): page speed, Core Web Vitals, mobile usability, crawlability, schema markup

Authority Signals (25%): referring domains, internal links, anchor text, domain authority, citation flow

E-E-A-T (25%): experience signals, expertise depth, author attribution, trust indicators, external citations
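The 30/20/25/25 weighting amounts to a weighted sum over the four category scores. A minimal sketch, assuming each category is scored on a 0-100 scale; the names here are illustrative, not Korvex's implementation:

```python
# Weighted sum of the four Koray category scores using the stated
# 30/20/25/25 weights. Category scores assumed to be 0-100.
CATEGORY_WEIGHTS = {
    "content_quality":   0.30,
    "technical_health":  0.20,
    "authority_signals": 0.25,
    "eeat":              0.25,
}

def koray_score(category_scores: dict[str, float]) -> float:
    return sum(CATEGORY_WEIGHTS[c] * s for c, s in category_scores.items())

page = {
    "content_quality":   80,
    "technical_health":  70,
    "authority_signals": 60,
    "eeat":              75,
}
print(koray_score(page))  # ~71.75
```

Because the weights sum to 1.0, the result stays on the same 0-100 scale as the inputs.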
Multi-LLM Consensus

Three models. One unbiased score.

Claude, GPT-4, and Gemini each score your content independently against the same criteria. None sees the others’ assessments. The final score is a weighted consensus that eliminates single-model bias.

Why does this matter? Each LLM has different strengths and blind spots. GPT-4 tends to be generous with technical content. Claude is stricter on E-E-A-T signals. Gemini evaluates entity coverage differently. By combining all three, Korvex produces scores that are more stable, more reliable, and more predictive of actual ranking performance.

Multi-LLM consensus scoring: content quality

Claude: 78
GPT-4: 74
Gemini: 81

Consensus score: 77 (weighted average — no single model dominates)
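For the example above, the consensus could be computed like this. The weights shown are hypothetical, chosen only so the arithmetic lands on 77; Korvex does not publish its actual model weights.

```python
# Weighted consensus across three independent LLM scores.
# The weights below are illustrative assumptions, not Korvex's real values.
def consensus(scores: dict[str, float], weights: dict[str, float]) -> int:
    total = sum(weights.values())
    return round(sum(weights[m] * scores[m] for m in scores) / total)

scores  = {"claude": 78, "gpt4": 74, "gemini": 81}
weights = {"claude": 0.40, "gpt4": 0.35, "gemini": 0.25}
print(consensus(scores, weights))  # 77
```

With equal weights the same inputs would round to 78, which is why the choice of weights matters for stability.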
Data sources: all connected

Source                | Type                         | Freshness
Google Search Console | Official API                 | Daily
Google Analytics 4    | Official API                 | Daily
DataForSEO            | Rankings API                 | Daily
BrightLocal           | Local SEO API                | Weekly
Google NLP API        | Entity extraction            | On demand
CMS Platforms         | WordPress, Shopify, Webflow  | Real-time
Daily Collection

12 phases. Every 24 hours.

Korvex runs a 12-phase data collection pipeline every day starting at 01:00 UTC. Search Console data is collected first, followed by analytics, rankings, site health, page scoring, competitor analysis, gap analysis, and ranking predictions.

By 10:00 UTC, your dashboard reflects yesterday’s complete data. No metric is ever more than 24 hours old. Daily emails summarise performance changes so you start every morning knowing exactly where things stand.
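The phase ordering described above can be pictured as a sequential runner: each phase completes before the next begins, so downstream phases (scoring, predictions) always see that day's fresh upstream data. A minimal sketch; the phase names below are inferred from the description (only eight of the twelve phases are named on this page) and the runner itself is not Korvex's actual scheduler.

```python
# Sequential daily-pipeline runner. Phase list inferred from the text;
# the real 12-phase scheduler and its names are not published.
PHASES = [
    "search_console", "analytics", "rankings", "site_health",
    "page_scoring", "competitor_analysis", "gap_analysis", "predictions",
]

def run_pipeline(handlers: dict) -> list[str]:
    completed = []
    for phase in PHASES:
        handlers[phase]()        # each handler runs one phase to completion
        completed.append(phase)  # strict ordering: no phase starts early
    return completed

done = run_pipeline({p: (lambda: None) for p in PHASES})
```

Strict sequencing is what guarantees the "dashboard by 10:00 UTC" property: the final phases cannot emit scores built on stale inputs.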

Closed-Loop Verification

Every prediction is tested against reality.

When you implement a recommendation, Korvex captures baseline metrics: Koray score, traffic, average position, and entity coverage. The outcome tracker then monitors these metrics daily and compares actual results against predicted improvements.

This is not just reporting. The feedback from measured outcomes is used to recalibrate prediction models and adjust future recommendation confidence scores. The system gets smarter with every recommendation you implement. That is what a closed loop means.

Outcome tracker — closed loop. Predictions calibrating.

Recommendation                                | Status   | Baseline | Current | Predicted
Add entity coverage for 'change management'   | Exceeded | 42       | 61      | 58
Improve page speed on /blog/guide             | Exceeded | 55       | 68      | 65
Add author bio to product pages               | Tracking | 67       | 72      | 75
Strengthen internal linking to /features      | On track | 38       | 51      | 52
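The statuses in the tracker follow from comparing current metrics against both the baseline and the prediction. A hedged sketch of one plausible classification rule; the thresholds, the 90-day window, and the function name are assumptions, not Korvex's actual logic:

```python
# Illustrative outcome classification mirroring the tracker statuses.
# Thresholds and window are hypothetical, not Korvex's real rules.
def outcome_status(baseline: float, current: float, predicted: float,
                   days_elapsed: int, window_days: int = 90) -> str:
    if current >= predicted:
        return "Exceeded"
    # Fraction of the predicted improvement achieved so far.
    progress = (current - baseline) / (predicted - baseline)
    if progress >= 0.8:
        return "On track"
    return "Tracking" if days_elapsed < window_days else "Missed"

print(outcome_status(42, 61, 58, days_elapsed=60))  # Exceeded
print(outcome_status(38, 51, 52, days_elapsed=60))  # On track
print(outcome_status(67, 72, 75, days_elapsed=30))  # Tracking
```

Whatever the exact rule, the key point from the text stands: each classified outcome feeds back into model recalibration, closing the loop.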

40+ ranking factors

12 daily phases

3 LLMs in consensus

24hr data freshness

See the data behind every score.

14-day free trial. Your first scores delivered by tomorrow morning. Every number is transparent and verifiable.