League of Legends: Worlds 2019

For League of Legends fans it is that time of the year again – Worlds 2019 is here. This year Berlin, Madrid and Paris are hosting the event, where the best LoL teams from around the world contest the most valuable trophy in the game. The Group Stage is already behind us, with 8 teams having qualified for the quarterfinals, so it is the right time to look at how the tournament has gone so far, who the big winners and losers were, and how my LoL Elo ratings are holding up.

League of Legends Worlds: 2019 World Championship


A few words on the biggest LoL tournament for those unfamiliar with it. Worlds takes place to determine the best LoL team in the world. The five major regions, which send most of their top teams to the tournament, are the LCK (South Korea), LEC (Europe), LPL (China), LCS (North America) and LMS (Hong Kong, Taiwan & Macau).

The Group Stage consists of 4 groups of 4 teams each. The top 2 teams from each group make it to the quarterfinals.

This year China were granted 3 direct spots based on their strong performance the previous year. The other main regions were granted direct qualification for their top 2 teams, and Vietnam also sent its champion directly. The remaining 4 spots were contested among the lower-ranked teams of the main regions and the champions of the minor regions in a playoff phase preceding the Group Stage.

Regions ranking

This year, after the playoffs, only teams of the five major regions plus the directly qualified champion of Vietnam remained in the competition. This indicates that the minor regions are still far from the level of the main ones, as they did not manage to get a single team through the playoff phase.

If my ratings are anything to go by, out of the five main regions South Korea takes first place, followed by Europe, closely followed by China, with the remaining two at the back. Judging by the outcome of the group phase, this ranking is not far off. With the group phase having ended last weekend, the quarterfinals feature 3 Korean teams (all winners of their respective groups), 3 European ones (all having finished second) and 2 teams from China (one first, one second). The fact that not a single North American team made it to the quarterfinals was deemed a disappointment, even though it is known that the region lags behind Korea, Europe and China.

The model gets it (mostly) right

In my Elo ranking, the 8 teams that qualified for the quarterfinals occupy places 1–6, 10 and 11. The 3 teams in places 7 to 9 (J Team, Royal Never Give Up and Team Liquid) finished 3rd in their groups and thus did not qualify for the quarterfinals. The team in 12th place (Top Esports) did not secure a qualifying spot in the internal Chinese championship.

This is already an encouraging result, since the model seems able to predict reasonably well which are the strongest teams in the world. This is not a given: even though teams play each other a lot in their internal leagues, international games are relatively few in number, so it is hard to be sure that regional differences are reflected in the ratings sufficiently. That seems to be the case, which is good. A note I should make here is that I weight the international games with a higher K-factor, which improves the predictive power of the model and probably contributes to this result.
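To make the mechanics concrete, a single rating update with an inflated K-factor for international games looks roughly like this (a sketch – the base K and the multiplier here are illustrative values, not my tuned ones):

```python
def elo_update(r_a, r_b, score_a, k_base=20.0, intl_mult=2.0, international=False):
    """One Elo update; score_a is 1.0 for a win by team A, 0.0 for a loss.
    International games get a larger K-factor (intl_mult is illustrative)."""
    k = k_base * intl_mult if international else k_base
    expected_a = 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))
    delta = k * (score_a - expected_a)
    return r_a + delta, r_b - delta
```

The larger K simply means an international upset moves both teams' ratings twice as far as a domestic one, which is how the scarce cross-region information gets amplified.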

League of Legends Worlds 2019: The best of the best

So how does my model rank those top teams? Luckily, there is (at least) one other place on the Internet (GosuGamers) where LoL teams are Elo-ranked, which can serve as a benchmark for my own ratings. Let’s see how it looks:

Team                 Rank   Elo   Gosu Rank   Gosu Elo
SK Telecom T1           1  1283           2       1331
FunPlus Phoenix         2  1282           1       1343
Griffin                 3  1276           4       1298
G2 Esports              4  1271           5       1280
Fnatic                  5  1256           6       1272
DAMWON Gaming           6  1235           3       1324
J Team                  7  1189          12       1182
Royal Never Give Up     8  1167           8       1220
Team Liquid             9  1157          11       1200
Splyce                 10  1139          10       1204
Invictus Gaming        11  1136           7       1239
Top Esports            12  1134           9       1204


Both rankings see the same teams as the top 12, but in a different order. Furthermore, my ratings are a lot more evened out – which has been the case since I implemented a gold component into the rating. Still, in general the models resemble each other pretty closely.

However, pretty closely is not as close as you might think. Compiling odds for games based on these ratings will give you different results depending on which set of ratings you use, and that difference matters for your betting success if you take either set at face value. The devil is in the details.

Rating applicability and the Benter Boost 

In fact, it is well known that you usually shouldn’t take any ratings at face value. Rather, it is beneficial to combine them with the market estimate (the odds) to arrive at a price somewhere in the middle. This is an idea famously published by Bill Benter. It is also covered in some contemporary gambling books, such as “Statistical Sports Models in Excel” by Andrew Mack.

Such a combined rating will generally have higher predictive power than either rating on its own. Mind you, this doesn’t even say the market prices are better than the model prices (though 99.9% of the time they will be – efficient market hypothesis). It merely says that the combined judgement of two experts will be better than the judgement of either of them in isolation – even if one is better than the other. This has wide implications for modeling – not just for combining model results with market prices, but in other areas as well, as we will see later.
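In its simplest form, such a combination can be sketched as a weighted average in log-odds space (the function name and the fixed default weight are my own illustration – in practice the weight would be fitted on past results, as in Benter's approach):

```python
import math

def blend(p_model, p_market, w=0.5):
    """Combine two win-probability estimates in log-odds space.
    w is the weight on the model's estimate; (1 - w) goes to the market."""
    logit = lambda p: math.log(p / (1.0 - p))
    z = w * logit(p_model) + (1.0 - w) * logit(p_market)
    return 1.0 / (1.0 + math.exp(-z))
```

If the model says 70% and the market implies 50%, the blended price lands somewhere in between – which is exactly the "price somewhere in the middle" idea above.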

The evolution of my LoL Elo-ratings

Let us see how my ratings have changed since I last reported on them, and how I measure my progress.

In my last article, I calculated Elo ratings based on game results only, using the last ~4 years of available data. To arrive at these ratings I optimized three factors – the K-factor, the home-advantage factor and the factor determining the importance of the current rating for future results. The top-ranked teams according to those ratings were also published in that article.

Since, as I noted before, no odds data was available at the time of writing that article, I determined the factors with the highest predictive power by minimizing two metrics – Mean Squared Error and the Log-Loss function. These remain my preferred metrics (with the Log-Loss function taking priority if the two are in conflict – which happens rarely).
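For reference, both metrics are straightforward to compute from a list of predicted win probabilities and the actual 0/1 outcomes (a minimal sketch):

```python
import math

def mean_squared_error(probs, outcomes):
    """Brier score: mean squared gap between forecast and 0/1 outcome."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

def log_loss(probs, outcomes, eps=1e-12):
    """Average negative log-likelihood; punishes confident misses hard."""
    total = 0.0
    for p, o in zip(probs, outcomes):
        p = min(max(p, eps), 1.0 - eps)  # clip to avoid log(0)
        total -= o * math.log(p) + (1 - o) * math.log(1 - p)
    return total / len(probs)
```

The two usually agree on which factor set is better; log-loss is just harsher on overconfident predictions, which is why it takes priority when they conflict.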

Lucky accident

Now, regarding the lack of odds, the picture has changed somewhat. A kind reader (who wishes not to be named) shared some odds data with me: a full set of opening and closing odds for most high-profile games of the 2018 season. Thank you, kind reader!

I immediately got to work, and the first thing I realized was that combining data sets can be a pain in the ass. Different team names, different starting sides (remember, all things equal, blue has a higher chance of winning than red), and the fact that the odds data set is organized by game while my original data set is organized by map, meant there was quite a bit to be done before I could use data from both sets in a meaningful way.

After taking care of the rest of the problems, adjusting the game odds to the map ratings turned out to be the hardest task. Games can be of the 2 out of 3 or 3 out of 5 format, which means the odds you see in the data set cannot be applied to the maps. In general, if a favourite plays an outsider, the odds on the favourite winning will be lower in a 2 out of 3 format compared to a single map and yet lower in a 3 out of 5 format. This means you need to even out the game odds a bit before applying them to a single map.
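Assuming maps within a series are independent with a constant win probability – a simplification, but a workable starting point – the map-to-game conversion can be written down directly and inverted numerically, which is essentially what I had Solver do for me:

```python
def series_prob(p_map, best_of=3):
    """P(team wins the series) from its per-map win probability,
    assuming maps are independent and identically distributed."""
    q = 1.0 - p_map
    if best_of == 3:
        return p_map ** 2 * (1 + 2 * q)              # win 2-0 or 2-1
    if best_of == 5:
        return p_map ** 3 * (1 + 3 * q + 6 * q * q)  # win 3-0, 3-1 or 3-2
    raise ValueError("only Bo3/Bo5 supported")

def map_prob(p_series, best_of=3, tol=1e-10):
    """Invert series_prob by bisection to recover the per-map probability."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if series_prob(mid, best_of) < p_series:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

Note that this ignores side selection and any in-series momentum, which is exactly why further adjustments were still needed.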

Blue/Red Issues

This is easier said than done. In converting game odds to map odds, an important factor is which team will play which side (blue/red) in each game of the series, BEFORE the series has started. This is straightforward in the LEC and LCS, where teams simply switch sides every game. However, in the other leagues and in the international formats there are plenty of different rule sets – ranging from winner picks side on the next map, through loser picks side on the next map, to the higher-seeded team picking side on the odd-numbered maps and the other team on the even ones. It is a complete mess! The fact that the rules for the LPL and LCK have (to my knowledge) only been published in Korean/Chinese didn’t improve matters at all.

In the end I was forced to make quite a few assumptions in order to adjust the odds somewhat reasonably. Since I couldn’t arrive at a closed formula to convert game odds to map odds, I resorted to using Solver, which did the rest of the job for me.

My Elo-Ratings vs Pinny

So, I assigned the odds to the games, and the moment of truth arrived where I could compare my model’s tips with the market. Of the few thousand games I had odds for, my model identified value in only about 200, which was a good sign. Of course, metrics such as the p-value don’t make much sense on a sample of only 200 bets. Clearly, I had to use Closing Line Value to see if my ratings are any good – and so I did.


In my sample, my ratings had some closing line value – around 2% at unit stakes. The odds of the selections my model thought had value tended to drop as game start approached. This encouraged me to develop the model further.
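For reference, CLV at unit stakes is just the average edge of the odds you took over the odds at close (the function and argument names here are my own illustration):

```python
def closing_line_value(taken_odds, closing_odds):
    """Average % edge of the decimal odds taken over the closing odds,
    at unit stakes: 0.02 means you beat the close by 2% on average."""
    edges = [t / c - 1.0 for t, c in zip(taken_odds, closing_odds)]
    return sum(edges) / len(edges)
```

Beating the close consistently is widely taken as evidence the model sees something the early market doesn't, even on samples too small for significance tests.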

While having CLV with such a basic Elo model does sound impressive, it also looks a bit suspicious. I did some transformations on the original odds, and mapping the games between the two data sets wasn’t easy, so I cannot exclude the possibility that I somehow “damaged” the data. Therefore, I continue to use Mean Squared Error and the Log-Loss function as the main indicators of my model’s success, with CLV as a “support” indicator.

I still don’t have a method to collect opening and closing odds, so my sample of ~200 bets is unlikely to get bigger. I certainly won’t collect odds manually – I tried at the beginning, and it takes too much time. More likely, at some point I will hire a freelancer to write a scraping script for me. Until then I will work with what I have.

Model improvements

By the time I released the last article I had already optimized the variables of a general Elo model. However, the input data those ratings were based on was results alone (win/lose, or 1/0). I wasn’t fully using the resources at my disposal.

League of Legends: The importance of gold

I have already written that I intuitively feel gold should be a very important metric in this game. Everything you do that matters earns you extra gold. It is also widely regarded as one of the most important metrics by the community as a whole.

However, there was one problem. When I built a model on gold difference as opposed to game results I got much lower predictive power. The ratings were just terrible. So at least for the moment, I dropped gold from the calculation.

Big mistake

Even though gold difference was less telling than the result of the game, it still contained information. It could tell me how convincingly a team won a game. Was it a close call, or was the losing team thrashed beyond expectation? Looking at the binary result data, you are in the dark. Adding gold data gives you a whole different perspective.

Furthermore, there were some reasons my early gold model didn’t work, which I identified later. For the most part, I did not convert winning probabilities (derived from the pre-game Elo ratings) to expected gold earned in the cleverest way. Some smoothing was necessary, since an 80% favourite would never win an 80% gold share in a game. Even in the most uneven games, the winning team’s share would be something like 55%.
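One simple way to do that smoothing is to compress the win probability linearly toward 50% before turning it into a gold share; the ~55% cap comes from the observation above, while the linear form is just an assumption for illustration:

```python
def expected_gold_share(p_win, max_share=0.55):
    """Expected gold share for a team with win probability p_win.
    Linearly compressed so even a 100% favourite only projects max_share,
    and a coin-flip game projects an even 50/50 split."""
    return 0.5 + (max_share - 0.5) * (2.0 * p_win - 1.0)
```

An 80% favourite then projects roughly a 53% gold share rather than 80%, which is far closer to what actually happens on the Rift.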

Taking that into account, and playing a bit with the weightings of early/late gold leads, the results started improving significantly. Now I had a gold component that didn’t quite have the predictive power of my result component, but was getting close. And here comes the cracker.

The Benter Boost

Combining two expert judgements (or two models) seems to be one of the most powerful tools in rating modeling. This is what Andrew Mack, in his book “Statistical Sports Models in Excel”, refers to as ensemble models*. In my case, combining the two ratings delivered the performance boost I was hoping for. The combined model was better than either of the single models and noticeably more accurate overall.
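In its simplest form, the ensemble is a weighted average of the two components' win probabilities, with the weight picked to minimize log-loss on past games (a sketch – my actual weighting scheme differs in the details):

```python
import math

def ensemble(p_result, p_gold, w):
    """Blend the result-based and gold-based win probabilities."""
    return [w * a + (1.0 - w) * b for a, b in zip(p_result, p_gold)]

def best_weight(p_result, p_gold, outcomes, steps=101):
    """Grid-search the blend weight that minimizes log-loss on past games."""
    def log_loss(probs):
        eps = 1e-12
        return -sum(o * math.log(max(p, eps)) + (1 - o) * math.log(max(1 - p, eps))
                    for p, o in zip(probs, outcomes)) / len(outcomes)
    weights = [i / (steps - 1) for i in range(steps)]
    return min(weights, key=lambda w: log_loss(ensemble(p_result, p_gold, w)))
```

If one component carried no information at all, the fitted weight would drift toward the other; the interesting (and common) case is an interior weight, where both contribute.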

Am I not overfitting?

I started to worry that I was overfitting my model factors to past data. The number of factors has increased quite a bit – results, early and late gold leads, the K-factor, home advantage and many weighting coefficients.

What gave me quite a bit of hope though was the fact that adding new data to the model (as more games are being played) seems, more often than not, to improve the predictive power of the ratings! I can only hope for this trend to continue, but it is indeed a good sign.

Where to go next?

I doubt I will add many more factors to the model, as I am still concerned I might start to overfit and I do believe gold lead captures most of the relevant developments in-game. At most I might add a third factor accounting for territorial advantage (map control) – based on wards placed/destroyed, towers taken down, etc.

I plan to invest some time in further tuning the coefficients I use. Using league-specific factors didn’t seem to improve results significantly, which was certainly a disappointment. This is probably something to look into more deeply.

What gives me a real headache is the fact that I calculate the ratings using VBA instead of Excel formulas. While this makes the whole process much faster and more stable, it also means I cannot use Solver to find the optimal model factors – I have to do that manually.

Since this is becoming the main task in front of my model, I have two options:

  • Transfer everything to an Excel sheet – which might be a disaster with the data volume I am processing
  • Try to replicate the Solver logic in VBA – sounds fancy, but I have no clue how to do that

And of course I still have the option to

  • Try out different factors manually

…which is getting less and less acceptable.
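For what it’s worth, the Solver logic I’d need to replicate isn’t magic. A crude coordinate search like the sketch below (in Python for readability – porting the idea to VBA is straightforward) already covers the "tweak one factor at a time, keep improvements" workflow:

```python
def coordinate_descent(loss, params, step=0.5, shrink=0.5, sweeps=60):
    """Crude Solver stand-in: nudge one factor at a time, keep any
    improvement, and shrink the step size once nothing helps."""
    best = loss(params)
    for _ in range(sweeps):
        improved = False
        for i in range(len(params)):
            for delta in (step, -step):
                trial = list(params)
                trial[i] += delta
                value = loss(trial)
                if value < best:
                    best, params, improved = value, trial, True
        if not improved:
            step *= shrink  # converged at this resolution, refine
    return params, best
```

On the real model, `loss` would be the log-loss of the ratings over past games as a function of the K-factor, home advantage and the weighting coefficients. It is slower and dumber than Solver’s gradient methods, but it needs nothing beyond loops and comparisons.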

Don’t hold your breath, as I don’t think I will change much in the near future. There is betting to be done and articles to be written, so I don’t have that much time left for my favourite hobby project. But sooner or later I will get there.

Now, let us all get back to…

League of Legends: Worlds 2019

What was this article actually about? Right, League of Legends.

So, we have the 8 teams in the quarterfinals and the draw has been made. Let us see what we have:

League of Legends: Worlds 2019 Quarterfinals

SK Telecom T1 v Splyce

So, we have 3 pairs of relatively evenly matched teams and 1 that stands out as particularly uneven. I am of course talking about the match-up between the Korean champions SK Telecom T1 and the third-seeded team from Europe, Splyce. Splyce made it through the group phase with an incredible 3-0 run on the last day and certainly deserved their spot in the quarterfinals. However, they don’t stand much of a chance against arguably the strongest LoL team at this point.

However, I must say the market is overestimating SK Telecom a bit at the moment. Or underestimating Splyce. Odds of 13.43 at Pinnacle (I actually took them at 16.67 at the beginning of the week) cannot be justified by ratings alone. In fact, my Elo says the real odds for Splyce should be in the neighbourhood of 5.0. So this is certainly one quarterfinal where I see value.
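To make that number concrete: plugging the table ratings above (SKT 1283, Splyce 1139) into the standard Elo logistic formula and a best-of-five conversion gives Splyce fair odds in the mid-single digits – the same neighbourhood, under the simplifying assumptions of independent maps and ratings alone (my actual figure also reflects the gold component and other adjustments):

```python
def elo_win_prob(r_a, r_b):
    """Per-map win probability for team A from the Elo gap (logistic form)."""
    return 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))

def bo5_fair_odds(r_a, r_b):
    """Fair decimal odds on team A taking a best-of-five, i.i.d. maps."""
    p = elo_win_prob(r_a, r_b)
    q = 1.0 - p
    return 1.0 / (p ** 3 * (1 + 3 * q + 6 * q * q))  # win 3-0, 3-1 or 3-2
```

Either way, the gap to the quoted 13.43 is far too wide to be explained by the ratings.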

Griffin v Invictus Gaming

The second-seeded Korean team Griffin is facing the current title holder, the Chinese side Invictus Gaming. Invictus Gaming is a respectable brand with a lot of past success, including the 2018 Worlds title. That perhaps explains why the market (in my opinion) tends to overrate them. They have dropped quite a bit in form since the previous year, and I currently have them as the lowest-rated team in the quarterfinals. Also, they only barely qualified out of their group – for me the easiest one (most people say that was FunPlus Phoenix’s group, with which I disagree) – thanks to Team Liquid performing below expectation.

Griffin, on the other hand, finished first in a tough group (beating the MSI 2019 winners and currently 4th-ranked team by Elo, G2) and are generally among the favourites to win the tournament, or at least reach the finals. So even though Griffin are priced as only a slight favourite at 1.588, I do believe there is quite a bit of value there, as my model prices them considerably lower – around 1.30.

FunPlus Phoenix v Fnatic

It is my belief that FunPlus Phoenix (the top Chinese team at the moment) are constantly underestimated by the market. I currently have them as the second-best team by Elo; GosuGamers has them as #1. This is perhaps due to the fact that they are a relatively new team with a smaller fan base, but their results in the Chinese championship speak for themselves.

Funnily enough, something similar could be said about Fnatic, who are constantly undervalued by the market even though they are by all measures a top team and not too far off the strongest EU org, G2.

As a result of the above, I do believe this game is being fairly priced with FunPlus Phoenix being the logical favourite (priced 1.606).

Damwon Gaming v G2

Finally, we have Damwon Gaming v G2. This is the only game where a team that finished first in its group (Damwon) is the outsider against a team that didn’t win its group (G2). This is hardly surprising – according to the Elo ratings at least, G2 is the strongest of the second-placed teams, while Damwon is the weakest of the group winners.

What’s more, G2 managed to win the MSI (something of a mid-season Worlds) this year, which was a great surprise at the time but helped establish them as a top team in the eyes of the community. Damwon, on the other hand, entered the tournament through the play-offs as the third-seeded Korean team and finished first in arguably the easiest group of the group stage. That doesn’t take away from the fact that they won their group and are by all means a very dangerous team.

All in all I think the market is pricing this pair correctly with G2 being a slight favourite at 1.84. 

League of Legends: What’s next

So, what comes next for my model and LoL in general?

The quarterfinals are being played this weekend with Griffin and Invictus Gaming opening on Saturday at 10 CET. The game promises to be quite exciting for all the fans. I will have some money on Griffin so I will be following that for sure.

All games will be broadcast live on riotgames’ channel on Twitch. All in all, Worlds 2019 will certainly be a thrilling competition till the very end.

Next model steps

As far as my model is concerned, there are a few main directions I need to work on:

  • Odds scraping
  • Factor optimization problem (find a way to automatically optimize model factors)
  • Implement time decay & player value – the latter would be big, both in terms of working hours required and potential improvement to the model

Once I am done with these, I will have more confidence that my model can consistently beat the market. Right now I am probably not quite there yet.

Anyway, I will keep you posted on the progress and might even drop some picks on my Twitter account. Stay tuned if interested!

Thanks for reading, enjoy the remaining Worlds games and see you soon!

* I am quoting Andrew Mack’s book for the second time and must say I really liked it. I don’t know if I’d find the time to review it, but I can surely recommend it.