Model trends: League 1, 2016/17
With the regular League 1 season over, I thought that I should update the graphics which track my model predictions to see how each team performed relative to its predictions.
The way it works is pretty simple: for pre-season and after every round of games it shows the results of my simulations for a specific club in terms of both:
- How their average predicted final league position has changed (the solid line in the top chart)
- How their predicted probability of ending up in each section of the table has changed (the coloured columns in the bottom chart)
The point of this is to show how the model’s assessment of a club’s prospects has changed after each round of games, but I also wanted some idea of how good a predictor it is. I’ve therefore added (as a dashed line) each club’s actual league position and briefly assessed (under each graphic) how well it’s predicted this so far.
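The mechanics described above can be sketched in miniature. This is purely an illustration, not the actual model: the team names, strength numbers, league size (12 teams rather than League 1's 24) and the crude points-plus-noise match engine are all invented here. It shows how repeated season simulations yield both an average finishing position and a probability for each section of the table:

```python
import random
from collections import Counter

# Hypothetical strength ratings for a reduced 12-team league (invented values).
TEAMS = {f"Team {i}": s for i, s in enumerate(
    [1.8, 1.6, 1.5, 1.4, 1.3, 1.2, 1.1, 1.0, 0.9, 0.8, 0.7, 0.6], start=1)}

def simulate_final_table():
    """One simulated season: points are strength scaled up plus random noise."""
    points = {t: s * 46 + random.gauss(0, 8) for t, s in TEAMS.items()}
    # Rank by points; position 1 is the champion.
    ranked = sorted(points, key=points.get, reverse=True)
    return {team: pos for pos, team in enumerate(ranked, start=1)}

def summarise(club, n_runs=5000):
    """Average finishing position and probability of each table section."""
    positions = [simulate_final_table()[club] for _ in range(n_runs)]
    avg = sum(positions) / len(positions)
    sections = Counter(
        "top two" if p <= 2 else
        "play-offs" if p <= 6 else
        "mid-table" if p <= len(TEAMS) - 4 else
        "relegation"  # bottom four go down, as in League 1
        for p in positions)
    probs = {s: c / n_runs for s, c in sections.items()}
    return avg, probs

random.seed(42)
avg, probs = summarise("Team 1")
```

Tracking `avg` after each round of games gives the solid line in the top chart, and `probs` gives the coloured columns in the bottom one.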
I’m not expecting a 100% accuracy rate for a variety of reasons, including:
- As Leicester recently reminded us, football is notoriously difficult to predict and the strongest team doesn’t always win
- The rating system that drives the predictions can take a little while to adjust to sudden changes (e.g. a big tactical shift, replacing the manager or a lot of transfer activity)
- While I’m pretty happy with the rating system and model, there’s limited data available in the lower leagues and therefore it may miss some subtleties in the way certain teams perform
Anyway, onto the graphics. There’s one for every team along with a brief summary and my view of how well the model’s predicted their fortunes.
Based on their disappointing finish to last season, the model thought that the play-offs were a more realistic target for Bolton than the top two even after their strong start. This looked to have been borne out when the Trotters faded, but they found another gear as the season wore on and the model needed until early March to adjust.
Model performance: Not far off, but perhaps a shade too pessimistic
Bradford impressed last season and were therefore expected to challenge for automatic promotion this time around. That looked to be a correct call as their attack improved to match their already strong defence, but a mid-season dip in results saw the model cool on their prospects. However they should be a force to be reckoned with in the play-offs.
Model performance: Adjusted quickly after pre-season optimism
I was surprised that the model was so positive about Rovers in pre-season, given that they’d been promoted twice in succession, but they’ve pushed on and established themselves as a top half team just as predicted. For a while it even looked like they might break into the play-offs but this proved a step too far.
Model performance: Correctly ignored early struggles
Bury have a habit of starting strongly but fading and the model was suspicious of their early good fortune this time around. Even when they were sitting in the top two it expected them to eventually return to the bottom half. A vulnerable defence has undermined a respectable attack and, while they've improved since January, they've ended up exactly fulfilling the pre-season prediction.
Model performance: Justifiably suspicious of their good start
Given their poor performances in the Championship, the model expected that mid-table was more likely than an immediate promotion push and so it has proved. Charlton have been hovering around the middle of the division for most of the season and also ended up exactly where the model expected beforehand.
Model performance: Correct not to expect an immediate promotion challenge
Chesterfield were expected to struggle against relegation and that’s unfortunately proved to be the case. The model’s assessment was unaffected by their bright start and since mid-October the Spireites have been sat mostly inside the bottom four with little hope of defying gravity. Performances have continued to spiral and a major rethink will be needed over the summer.
Model performance: Unfortunately spot on
Coventry’s disappointing end to last season meant that they were expected to loiter in lower mid-table and this still looked likely after a poor start was overcome in November. However the wheels soon came off again and since mid-January the Sky Blues have looked destined for League 2. In the model’s defence, a lot has been happening off the field including multiple managerial changes, and they’ve started to look a lot more capable – unfortunately too late – under Mark Robins.
Model performance: Too optimistic, albeit with mitigating circumstances
The model thought that Fleetwood were far better than their lowly finish last season and tipped them for a top half finish this time around, but underestimated just how much would go right for the Cod Army. To the model’s credit, it thought they were better than their league position in the first half of the season and had them entering the play-offs in November, but their surge in the second half of the season came a bit out of the blue.
Model performance: Correct to be optimistic, but perhaps didn’t go far enough
The Gills were assessed as being set for a lower mid-table finish in pre-season and that expectation has barely changed since then. A bright start quickly evaporated and they had fallen in line with the model’s prediction by mid-October, although their managerial change looks to have backfired and saw them skimming the drop zone. While they were challenging for promotion for much of last season, their underlying performances had been ebbing for some time and that’s continued this term.
Model performance: Close for most of the season but didn’t predict the poor finish
If it wasn’t for some inexplicably leaky defending early in the season I’m convinced that Millwall would be sitting a few places higher. They’ve looked consistently solid since the back end of 2015/16 but have underachieved on some impressive performances, so a play-off triumph wouldn’t go amiss.
Model performance: Called a play-off finish correctly, but a few places off
It’s been a frustrating season for MK Dons, who were expected by the model to mount a play-off challenge ahead of the season. They’ve tended to dominate matches without getting the breaks and have consequently become mired in mid-table, although they at least managed to secure a top-half finish on the final day. Expectations were quickly adjusted but I expect a better showing next time around.
Model performance: Adjusted relatively quickly to a poor start
Northampton defied the model – and probably most other predictions – in storming to promotion last season but were far easier to assess this time around. Lower mid-table was expected in pre-season and when a promising (and draw-laden) start fizzled out that’s where they’ve ended up.
Model performance: Correctly predicted a lower mid-table finish
Oldham’s attacking output has been worryingly low all season but their defensive stubbornness has compensated sufficiently to keep them afloat. The model expected a relegation scrap and they ended up just four points from the drop zone, although it was a touch too pessimistic in mid-season.
Model performance: Foresaw a struggle against relegation, but not the late escape
The model was sufficiently impressed with Oxford’s strong showing during last season’s promotion campaign to tip them for a play-off spot this time around, although it was quick to apply a slight downgrade. It maintained its optimism despite their inconsistent start and began to look prescient as they gradually climbed the table. They ended up leaving it too late but finished just four points outside the top six.
Model performance: Optimism was justified after a tricky start was overcome
Peterborough look to have had “mid-table” stamped across them right from the beginning this season. They’re still searching for a balance between attack and defence with often chaotic results and their play-off incursion was consequently short-lived. They were another side to end up exactly where the model expected in pre-season.
Model performance: Looks to have been pretty accurate throughout
Despite Port Vale sitting in the play-offs in mid-October, they were creating so few chances that their success looked unsustainable and the model’s pre-season expectation of a relegation battle had barely budged. Reality eventually caught up with them, although they could still have survived on the final day.
Model performance: Correctly anticipated a relegation battle despite early success
Rochdale endured a horrible start to the season but their underlying performances remained relatively stable, so the model’s pre-season prediction again looks pretty prescient. While they flirted heavily with the play-offs after an impressive mid-season run, a big dip in both form and performances left them with too much to do.
Model performance: Only briefly swayed by a mid-season surge
While the Iron were sufficiently impressive last season to be tipped as likely play-off contenders, some insanely – and ultimately unsustainably – clinical finishing propelled them into a title challenge. Despite leading the division for a long time the model remained slightly cool on their automatic promotion chances and the pre-season prediction proved correct.
Model performance: Correctly flagged that the title challenge was unlikely to last
The model saw plenty that it liked in Sheffield United even before the season began and consequently didn’t alter its prediction of automatic promotion when the Blades took one point from the first 12 on offer. By the start of November it was 90% certain that they’d finish in the top two, and the Blades never looked back.
Model performance: Called their title challenge early
Shrewsbury went into this season as the division’s worst-rated team and there was little to suggest that this would change until about a month into Paul Hurst’s tenure. Since early December their steady improvement has propelled them upwards to unexpected – but deserved – survival.
Model performance: Didn’t see their impressive turnaround coming
The model didn’t think much of Southend last season and assumed a lower mid-table finish would be the likeliest outcome this time around. However – while it remained optimistic during their poor start – the scale of their improvement from mid-October onwards caught it by surprise. While it’s since tracked their results closely, the Shrimpers have massively exceeded expectations overall.
Model performance: Underestimated the extent of their recovery
The model’s pre-season prediction was that Swindon would have a relegation battle on their hands and that’s proven to be the case. The Robins have been at the wrong end of the table since early October and a belated improvement in their performances didn’t translate into the required recovery.
Model performance: Foresaw their relegation struggle
In the early part of the season, Walsall’s ratings were in free fall as the players brought in to replace those poached over the summer struggled to adapt. Some uncertain displays depressed their predicted finish further into the “lower mid-table” range but things eventually began to click and the Saddlers’ second-half surge ended up making the pre-season prediction look pretty accurate.
Model performance: Ended up looking prescient, but over-reacted to early troubles
Wimbledon looked sufficiently impressive in gaining promotion that the model pegged them as being capable of a mid-table finish. Despite an uncertain start – which didn’t perturb it all that much – that prediction has proved pretty accurate, with even a brief cameo in the top six along the way.
Model performance: Pre-season expectation of mid-table was justified