These graphics are intended to show when each team’s shots were taken during a match and to provide a rough measure of how good the chances they created were. The latter is measured using Expected Goals, where each type of shot is given a value between 0 and 1 goals, based on how many shots of that type are required, on average, to score a goal.
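As a minimal sketch of the idea, a per-shot-type Expected Goals value is just that shot type’s historical conversion rate. The shot types and counts below are invented for illustration; a real model would use far more categories and much larger samples.

```python
# Hypothetical historical data: shot type -> (goals scored, shots taken).
# All numbers here are invented for illustration.
historical_shots = {
    "header_in_box": (120, 1000),
    "shot_in_box": (300, 2000),
    "shot_outside_box": (90, 3000),
}

def expected_goals(shot_type):
    """Average goals per shot of this type: a value between 0 and 1."""
    goals, shots = historical_shots[shot_type]
    return goals / shots

print(expected_goals("shot_outside_box"))  # 0.03, i.e. ~33 such shots per goal
```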
There’s a really nice summary of the concept of expected goals here:
The point of doing this is to identify not only which team created the better chances overall, but also to show how dominance of the match varied over time. The rationale for looking at shots in this way is that, while shot conversion and save percentage are highly volatile, the quality and number of chances created have been shown to be a much more reliable indicator of long-term performance.
A simple way to think about the Expected Goals value is as the number of goals a team “deserved” to score, if we lived in a world where all players were equally skilful and we had perfect information about the shot.
These are obviously both massive over-simplifications. While skill doesn’t vary massively between players in the same division (particularly in the lower leagues, where the better players are likely to be promoted or poached), there isn’t any data on defensive positioning or how the ball was delivered at this level. However, I have to work with what I can get, and to me this still feels intuitively superior to merely counting shots.
The differences between the actual and expected goals scored and conceded by a club in a given match can be explained by three non-exclusive reasons:
- They were lucky (or unlucky)
- They were playing against an unusually weak (or strong) opponent
- They genuinely performed more (or less) effectively than the average club
Therefore I wouldn’t advise trying to draw any conclusions about a team’s overall attributes from a single match, but I suspect that for some clubs we’ll see the same patterns or features repeating themselves in multiple matches.
It’s much easier to explain these using an example, so let’s look at one of the matches from the opening weekend of 2015/16:
So here we have a 1-1 draw between Mansfield and Carlisle. At the top are the teams with their scores and, in brackets, the Expected Goals values of all the shots they took. In this case, both created 1.3 goals’ worth of chances, based on the type and location of each shot. These measures aren’t perfect because the data is sparse at this level, but this is fairer than treating all shots the same, and over the long run it should be fairly accurate. In this case it’s suggesting that the draw – and the number of goals scored – are relatively fair.
The match time (in minutes) runs along the bottom (with a break at half time) and the cumulative quality of each team’s shots (i.e. adding together the Expected Goal values of all their shots) up the side. Every vertical “jump” in the line represents an individual shot*, with the size of the jump proportional to how likely that type of shot is, on average, to result in a goal.
The goals themselves are represented as blobs on the line which appear at the top of the “jump” that represents the shot they were scored from. So here Mansfield take the lead just before the half hour mark with a relatively unpromising effort (taken from outside the box) but Carlisle then peg them back on the stroke of half time.
At half time, Carlisle’s chances have been roughly twice as likely to result in a goal as Mansfield’s – there are four relatively big jumps in the blue line, indicating shots from relatively good positions – but they create little else after the break, while Mansfield catch up over the course of the second half but can’t find a winner.
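The lines themselves are simple to construct: take one team’s shots in match order and keep a running sum of their Expected Goals values. A sketch with invented shot data:

```python
from itertools import accumulate

# Hypothetical shots for one team: (minute, Expected Goals value of the shot).
shots = [(12, 0.05), (29, 0.04), (44, 0.31), (67, 0.08), (85, 0.45)]

minutes = [m for m, _ in shots]
cumulative_xg = list(accumulate(xg for _, xg in shots))

# Each (minute, running total) pair marks the top of one vertical "jump";
# drawing these points as a step function recreates one team's line.
for minute, total in zip(minutes, cumulative_xg):
    print(f"{minute}'  ->  {total:.2f}")
```

The final running total (0.93 here) is the figure that would appear in brackets next to the team’s score.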
Let’s take a more eventful and imbalanced match for a second example:
You can see that while the size of the graphic is the same, the vertical axis now goes up to three Expected Goals because the shots in this game were much more dangerous. Bury’s shots in this match – which were worthy of almost three goals – represent roughly double the goalscoring potential of Doncaster’s, so they can consider themselves unlucky to only leave with a point.
However, up until half time it was Doncaster who had more right to feel aggrieved: they’d created over a goal’s worth of shots – around double Bury’s tally – but the match was still goalless. They barely troubled Bury’s defence after the interval (their line is almost flat), while Bury created seven good chances (counting the larger leaps in the blue line).
* To keep things simple, I’ve bucketed the data into minutes and rolled injury time into the 45th and 90th minutes. Given that it’s perfectly possible for a team to have more than one shot in the same minute, some jumps consist of more than one shot.
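The bucketing in that footnote can be sketched as below. The `"MM+X"` string format for stoppage-time shots is an assumption about how the raw data records the match clock, not something stated in the post.

```python
from collections import defaultdict

def bucket_minute(raw):
    """Roll stoppage time into the 45th and 90th minutes:
    "45+2" -> 45, "90+3" -> 90, "29" -> 29.
    (The "MM+X" string format is an assumed raw-data convention.)"""
    return int(raw.split("+")[0])

# Shots falling into the same bucketed minute merge into a single jump.
# Shot data here is invented: (raw minute, Expected Goals value).
shots = [("29", 0.04), ("45+1", 0.30), ("45+3", 0.10), ("72", 0.08)]
jumps = defaultdict(float)
for raw_minute, xg in shots:
    jumps[bucket_minute(raw_minute)] += xg
```

With this data the two stoppage-time shots combine into one 0.40 jump at minute 45, which is why a single vertical step in the chart can represent more than one shot.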
There’s obviously more information I could have put on these (e.g. substitutions, cards) that would add explanatory power, but I wanted to keep them compact and simple. I don’t pretend to be an expert on every club – what little knowledge I have is gleaned from highlight reels – but I’ve found that you can tell quite a bit by using a little bit of data in the right way.
Credits: There are several bloggers out there already using Expected Goals to review individual matches (including Michael Caley and Sander IJtsma) who were an inspiration for these. As he often does, Ben Huxley helped me not to make an utter mess of the design.