Every number traces back to data.
Transparent, evidence-based scoring. No black boxes. No single-model bias. Every score in Korvex is calculated from real data, validated by multiple LLMs, and verified against actual outcomes.
Data Collection
Processing
Scoring
Output
Most SEO scores are opinions. Ours are evidence.
If you cannot see the methodology, you cannot trust the score.
Black-box scores are not actionable
Most SEO tools give you a number without explaining how they got it. You cannot improve what you do not understand. If the methodology is opaque, you are optimising blindly.
Single-model bias is real
Tools that rely on a single AI model inherit that model's biases and blind spots. GPT-4 and Claude have different strengths. Using just one gives you a skewed perspective on content quality.
Predictions without accountability are guesses
Any tool can predict ranking improvements. Few measure whether those predictions were right. Without a feedback loop, prediction models never improve and you have no way to know which recommendations actually work.
Six data sources. One unified score.
Korvex pulls from official APIs, not scraped data or estimates. Every input is verifiable.
Google Search Console
Direct API connection to your verified GSC property. Korvex collects impressions, clicks, CTR, and average position across 12 dimension types including queries, pages, devices, and countries. This is official Google data, not estimates.
Google Analytics 4
Full GA4 integration across 11 dimension types. Sessions, users, engagement rate, conversions, and revenue broken down by landing page, traffic source, device, and geography. Batch API calls keep data collection efficient.
Multi-LLM consensus
Claude, GPT-4, and Gemini independently score content quality. The final score is a weighted consensus, not a single model's opinion. This eliminates the bias any individual LLM introduces and produces more stable, reliable scores.
Entity extraction pipeline
Google's Natural Language API extracts entities from every page. Claude and Gemini validate salience scores. Results are stored in a Neo4j knowledge graph with Qdrant vector embeddings for semantic search and gap analysis.
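The gap analysis step described above can be sketched as a simple set comparison: entities that competitors cover with meaningful salience but your page never mentions. This is an illustrative sketch only; the function name, data shapes, and salience threshold are assumptions, not Korvex's actual implementation.

```python
# Illustrative sketch of entity gap analysis. Names and the salience
# threshold are hypothetical, not Korvex's production logic.

def entity_gaps(page_entities, competitor_entities, min_salience=0.1):
    """Return entities competitors cover with meaningful salience
    that the page does not mention at all."""
    covered = {name for name, _ in page_entities}
    gaps = {
        name
        for name, salience in competitor_entities
        if salience >= min_salience and name not in covered
    }
    return sorted(gaps)

page = [("SEO", 0.6), ("content marketing", 0.2)]
competitors = [("SEO", 0.5), ("topical authority", 0.3), ("E-E-A-T", 0.15)]

print(entity_gaps(page, competitors))  # ['E-E-A-T', 'topical authority']
```

In practice the vector embeddings let this comparison work on semantically similar entities, not just exact name matches.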
Ranking predictions
LambdaMART and XGBoost models trained on your historical data predict future ranking movements. Every prediction includes a confidence score. Models are retrained as new data arrives, so accuracy improves over time.
Outcome tracking
Every recommendation's impact is measured. Baseline metrics are captured at implementation, outcomes are tracked over time, and prediction models are recalibrated based on actual results. This is a true closed-loop system.
40+ factors. Four categories. Complete coverage.
Each page is scored against the ranking factors that matter most, weighted by their correlation with actual search performance.
Content Quality
Technical Health
Authority Signals
E-E-A-T
Three models. One unbiased score.
Claude, GPT-4, and Gemini each score your content independently against the same criteria. None sees the others’ assessments. The final score is a weighted consensus that eliminates single-model bias.
Why does this matter? Each LLM has different strengths and blind spots. GPT-4 tends to be generous with technical content. Claude is stricter on E-E-A-T signals. Gemini evaluates entity coverage differently. By combining all three, Korvex produces scores that are more stable, more reliable, and more predictive of actual ranking performance.
Consensus score
Weighted average — no single model dominates
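The consensus calculation reduces to a weighted average across the three models. A minimal sketch, assuming illustrative weights (the actual model weighting is not disclosed in this document):

```python
# Minimal sketch of a weighted consensus score. The weights below are
# illustrative assumptions, not Korvex's actual model weighting.

def consensus_score(model_scores, weights):
    """Weighted average of per-model scores, normalised so the
    weights need not sum to exactly 1."""
    total_weight = sum(weights[m] for m in model_scores)
    return sum(model_scores[m] * weights[m] for m in model_scores) / total_weight

scores = {"claude": 72, "gpt4": 80, "gemini": 76}     # independent assessments
weights = {"claude": 0.4, "gpt4": 0.35, "gemini": 0.25}

print(round(consensus_score(scores, weights), 1))  # 75.8
```

Because no weight approaches 1.0, no single model can dominate the result, which is what keeps an individual model's bias from leaking into the final score.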
| Source | Type | Freshness |
|---|---|---|
| Google Search Console | Official API | Daily |
| Google Analytics 4 | Official API | Daily |
| DataForSEO | Rankings API | Daily |
| BrightLocal | Local SEO API | Weekly |
| Google NLP API | Entity extraction | On demand |
| CMS Platforms | Content APIs (WordPress, Shopify, Webflow) | Real-time |
12 phases. Every 24 hours.
Korvex runs a 12-phase data collection pipeline every day starting at 01:00 UTC. Search Console data is collected first, followed by analytics, rankings, site health, page scoring, competitor analysis, gap analysis, and ranking predictions.
By 10:00 UTC, your dashboard reflects yesterday’s complete data. No metric is ever more than 24 hours old. Daily emails summarise performance changes so you start every morning knowing exactly where things stand.
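The ordering matters because later phases consume earlier results. A sketch of a sequential runner, using only the eight phases named above (the remaining four phases, and all function names here, are assumptions for illustration):

```python
# Sketch of a sequential daily pipeline runner. Only the eight phases
# named in the text are listed; the names are assumptions.

PHASES = [
    "search_console",
    "analytics",
    "rankings",
    "site_health",
    "page_scoring",
    "competitor_analysis",
    "gap_analysis",
    "ranking_predictions",
]

def run_pipeline(collectors):
    """Run each phase in order; each phase can read all earlier results."""
    results = {}
    for phase in PHASES:
        results[phase] = collectors[phase](results)
    return results

# Usage: each collector receives everything gathered so far.
demo = run_pipeline({p: (lambda done, p=p: f"{p} ok") for p in PHASES})
print(demo["ranking_predictions"])  # ranking_predictions ok
```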
Every prediction is tested against reality.
When you implement a recommendation, Korvex captures baseline metrics: Koray score, traffic, average position, and entity coverage. The outcome tracker then monitors these metrics daily and compares actual results against predicted improvements.
This is not just reporting. The feedback from measured outcomes is used to recalibrate prediction models and adjust future recommendation confidence scores. The system gets smarter with every recommendation you implement. That is what a closed loop means.
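The closed-loop idea can be sketched as: compare the measured lift against the predicted lift, then nudge future confidence toward the observed hit ratio. The update rule below (a simple exponential moving average) is an assumption for illustration, not Korvex's actual recalibration model.

```python
# Hedged sketch of closed-loop recalibration. The EMA update rule and
# parameter names are assumptions, not Korvex's production model.

def recalibrate(confidence, predicted_lift, actual_lift, alpha=0.2):
    """Blend the old confidence toward the observed hit ratio."""
    hit_ratio = min(actual_lift / predicted_lift, 1.0) if predicted_lift else 0.0
    return (1 - alpha) * confidence + alpha * hit_ratio

# A recommendation predicted a +16 point lift; the page actually gained +19.
conf = recalibrate(confidence=0.7, predicted_lift=16, actual_lift=19)
print(round(conf, 2))  # 0.76
```

Over many recommendations, confidence drifts up for recommendation types that consistently deliver and down for those that do not.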
| Recommendation | Status | Baseline | Current | Predicted |
|---|---|---|---|---|
| Add entity coverage for 'change management' | Exceeded | 42 | 61 | 58 |
| Improve page speed on /blog/guide | Exceeded | 55 | 68 | 65 |
| Add author bio to product pages | Tracking | 67 | 72 | 75 |
| Strengthen internal linking to /features | On track | 38 | 51 | 52 |
40+ ranking factors
12 daily phases
3 LLMs in consensus
24h data freshness
See the data behind every score.
14-day free trial. Your first scores delivered by tomorrow morning. Every number is transparent and verifiable.