Engineering · Ascent Team

How Ascent Score Works

A behind-the-scenes look at how Ascent Score estimates player impact using win probability, role-based baselines, and minute-by-minute game states.

Why We Built This

Most player scores are built from stats: KDA, damage, farm, gold, and so on. This makes sense at first. But if you think about it, a player can end a game with great-looking stats and still not have consistently moved their team closer to winning.

We wanted to do something different: measure how much a player actually changed their team's chances of winning.

The basic idea is simple: take the real game, replace you with a typical player in your role, and see how much the team's win probability changes. Do that minute by minute across the whole game, combine the per-minute differences, then compare that result against other players in the same role. That becomes your Ascent Score.

How It Works

First: Estimate Win Probability

So how do we estimate win probability?

To make any of this work, we first needed a model that could look at a game state and estimate a team's chance to win from there.

We trained a League win probability model on per-minute game snapshots. Each snapshot includes information about the current state of the game, including:

  • the game minute
  • objective state for both teams, like towers, inhibitors, dragons, barons, heralds, horde, and Atakhan
  • economy and combat state for every player, like gold, XP, CS, damage, and KDA
  • player positions on the map
  • a rank feature

So instead of only looking at the final scoreboard, the model sees the state of the game at minute 5, minute 12, minute 21, and so on, and learns to predict how likely that game state is to eventually lead to a win.
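To make that concrete, here is a minimal sketch of what training a model like this could look like. The file name, column names, and the choice of a gradient-boosted classifier are all illustrative, not our exact production pipeline:

```python
import pandas as pd
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.model_selection import GroupShuffleSplit

# Each row is one minute-level snapshot from one game; `blue_win` is the final outcome.
snapshots = pd.read_parquet("minute_snapshots.parquet")  # hypothetical file

feature_cols = [
    "minute", "rank_tier",
    "blue_towers", "red_towers", "blue_dragons", "red_dragons",
    "blue_gold", "red_gold", "blue_xp", "red_xp",
    # ...plus per-player economy, combat, and position features
]
X = snapshots[feature_cols]
y = snapshots["blue_win"]

# Split by game so snapshots from the same match never land in both train and test.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=42)
train_idx, test_idx = next(splitter.split(X, y, groups=snapshots["game_id"]))
X_train, X_test = X.iloc[train_idx], X.iloc[test_idx]
y_train, y_test = y.iloc[train_idx], y.iloc[test_idx]

model = HistGradientBoostingClassifier()
model.fit(X_train, y_train)

# Probability that the blue team eventually wins, given each minute's game state.
win_prob = model.predict_proba(X_test)[:, 1]
```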

How good was the model?

We trained this on roughly 300,000 games, and on our held-out test set of 889,733 minute snapshots, we got:

  • AUC: 0.8241. This is a ranking metric. In plain English: when the model looks at two game states, one that ends in a win and one that ends in a loss, how often does it rank the winning state above the losing one? 0.5 is random. 1.0 is perfect.
  • Accuracy at a 0.5 threshold: 0.7279. If we draw a hard line and say "above 50% means predict win, below 50% means predict loss," the model is right about 72.8% of the time.
  • Log loss: 0.5025. This measures how good the probability estimates are, not just whether the final yes or no prediction was right. It punishes being confidently wrong.
  • Brier score: 0.1702. This is another probability-quality metric. Lower is better. It looks at how far the predicted win probability is from what actually happened.
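For reference, all four of those numbers can be computed directly with scikit-learn, using the held-out outcomes and predicted probabilities from the sketch above:

```python
from sklearn.metrics import roc_auc_score, accuracy_score, log_loss, brier_score_loss

# y_test: actual 0/1 outcomes; win_prob: the model's predicted win probabilities.
auc = roc_auc_score(y_test, win_prob)           # how well wins rank above losses
acc = accuracy_score(y_test, win_prob >= 0.5)   # hard yes/no at the 50% line
ll = log_loss(y_test, win_prob)                 # punishes confident wrong answers
brier = brier_score_loss(y_test, win_prob)      # squared gap between probability and outcome

print(f"AUC={auc:.4f}  accuracy={acc:.4f}  log loss={ll:.4f}  Brier={brier:.4f}")
```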

We also checked calibration, which matters a lot here. If the model says a team has a 70% chance to win, teams in that situation should actually win about 70% of the time.

On calibration, the model looked pretty good:

  • Average calibration error: 0.0103
  • Maximum calibration error across bins: 0.0213
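Those two numbers come from a standard reliability-curve style check: bin the predictions, then compare each bin's mean predicted probability to its observed win rate. A minimal version, glossing over details like whether the average is weighted by bin size:

```python
import numpy as np
from sklearn.calibration import calibration_curve

# prob_pred: mean predicted probability per bin; prob_true: observed win rate per bin.
prob_true, prob_pred = calibration_curve(y_test, win_prob, n_bins=10)
per_bin_error = np.abs(prob_true - prob_pred)

print("average calibration error:", per_bin_error.mean())
print("maximum calibration error:", per_bin_error.max())
```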

That gave us more confidence using win probability as the foundation for the score.

Then: Build the Typical Player Baseline

Now we want to replace you with the "typical player."

So we built role-specific, minute-specific baselines from a large set of historical games. For each role and each minute, we estimate what a typical player's stats look like.

A couple things are important here:

  • the baseline is by role
  • the baseline is by minute
  • the baseline is built from historical game data

So a typical support at 8 minutes is not being compared to the same baseline as a typical ADC at 8 minutes.

One caveat: rank still matters in the overall win probability model, but the replacement player baseline is not sliced out separately by rank.

Also, the current baseline is built using the median player for each role and minute, so "typical player" is probably better wording than "average player."
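In code terms, the baseline table is roughly a groupby-and-median over per-player, per-minute rows. The file and column names below are illustrative:

```python
import pandas as pd

# One row per player per minute across a large set of historical games.
player_minutes = pd.read_parquet("player_minute_stats.parquet")  # hypothetical file

stat_cols = ["gold", "xp", "cs", "damage", "kills", "deaths", "assists"]

# For every (role, minute) pair, the median player's stats: the "typical player."
baselines = (
    player_minutes
    .groupby(["role", "minute"])[stat_cols]
    .median()
    .reset_index()
)

# e.g. what the typical support looks like at 8 minutes
typical_support_at_8 = baselines[(baselines.role == "SUPPORT") & (baselines.minute == 8)]
```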

Then: Calculate Win Contribution

Now that we have:

  • a model that takes a game snapshot and returns win probability
  • a collection of typical role baselines we can replace you with

we can calculate your win contribution.

For each minute:

  1. we run the real game state through the model
  2. we replace your stats with the typical player in your role at that minute
  3. we run that baseline game state through the model
  4. we compare the two win probabilities

That difference is your contribution for that minute.

Then we add those differences up over the course of the game and normalize by game length.
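Put together, the per-minute loop looks something like the sketch below. It assumes the `model` and `baselines` from the earlier sketches, plus a `col_map` that says which snapshot columns belong to the player being scored; all of those names are illustrative rather than our exact implementation.

```python
def minute_contribution(model, snapshot, baselines, role, minute, col_map):
    """Win probability with the real player, minus win probability with the
    typical player in that role swapped into the same game state.

    snapshot: one-row DataFrame holding the model's feature columns for this minute.
    col_map:  maps this player's snapshot columns (e.g. "p3_gold") to baseline
              stat names (e.g. "gold"). Illustrative naming.
    """
    real_prob = model.predict_proba(snapshot)[:, 1][0]

    # Build the counterfactual: same game state, but this player's stats are
    # replaced with the role/minute medians.
    typical = baselines[(baselines.role == role) & (baselines.minute == minute)].iloc[0]
    counterfactual = snapshot.copy()
    for snap_col, stat in col_map.items():
        counterfactual[snap_col] = typical[stat]

    baseline_prob = model.predict_proba(counterfactual)[:, 1][0]
    return real_prob - baseline_prob

# Sum the per-minute differences across the game and normalize by game length.
deltas = [
    minute_contribution(model, snap, baselines, role, minute, col_map)
    for minute, snap in enumerate(game_minutes, start=1)
]
raw_contribution = sum(deltas) / len(deltas)
```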

Then: Turn Raw Contribution into Ascent Score

But a raw win probability delta is not very meaningful on its own. A number like +0.018 or -0.026 does not tell most players much.

So we took those contribution numbers across a large set of games, grouped them by role, and built a distribution for each role. Then we place an individual player's result inside that distribution and convert it into a percentile.

That percentile becomes the score.

Internally, this is a percentile-style 0-100 score. If we display it on a 0-10 scale on the frontend, that is just a presentation layer on top of the same number.
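Concretely, the conversion is just an empirical percentile within the player's role. A minimal sketch, where `role_contributions` is assumed to be the historical array of per-game contributions for that role:

```python
from scipy import stats

def ascent_score(raw_contribution, role_contributions):
    # Percentage of historical contributions in this role at or below this one.
    return stats.percentileofscore(role_contributions, raw_contribution, kind="mean")

score_0_to_100 = ascent_score(raw_contribution, role_contributions)
score_0_to_10 = score_0_to_100 / 10  # purely a frontend presentation choice
```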

One Thing We Found Interesting

Those role distributions did not come out as clean normal curves.

That might sound scary at first, but honestly it makes sense. These are single-game impact estimates, and different roles affect games differently.

From the analysis:

  • support had a much tighter distribution
  • jungle and ADC had wider positive tails
  • the distributions failed standard statistical normality tests

That does not mean the data is broken. If anything, it would have been weirder if every role came out looking like a clean bell curve.

It also reinforced the idea that we should use the actual empirical role distributions rather than force a normal-distribution assumption.
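For the curious, this is the kind of check involved. The specific test we ran is not the point; the D'Agostino-Pearson test in SciPy is one standard option, and `contributions_by_role` is assumed to map each role to its array of historical per-game contributions:

```python
from scipy import stats

for role, values in contributions_by_role.items():
    statistic, p_value = stats.normaltest(values)  # tests departure from a normal distribution
    print(f"{role}: normaltest p-value = {p_value:.4g}")
```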

Why We Liked This Approach

This approach is definitely more of a black box than just saying "kills are worth X, vision is worth Y, CS is worth Z." But we still liked it more, because it is closer to the actual question we cared about.

We did not want to build a prettier stat average. We wanted to estimate impact on winning.

That also means the score can feel disconnected from your final stat line sometimes. Since it is measuring minute-by-minute impact on win probability, you can imagine games where:

  • your end-of-game stats looked great, but you were not consistently increasing your team's chances of winning throughout the match
  • your final stat line looked less impressive, but you spent the game quietly nudging win probability in your team's favor

That disconnect is not necessarily a bug. It is part of the point.

Next Steps

We know that Ascent Score is not perfect.

Over time, we want to:

  • train on more games
  • update the model more often across patches
  • keep recalibrating the score
  • make the role baselines stronger

We also know there are a lot of judgment calls in a system like this. If you have ideas, edge cases, criticisms, or better ways to think about the problem, we'd genuinely love to hear them.
