Model trends: League 2, 2016/17

With the regular League 2 season over, I thought I should update the graphics which track my model’s predictions, to see how each team performed relative to what was expected of them.

Explanation

The way it works is pretty simple: for pre-season and then after every round of games, each graphic shows the results of my simulations for a specific club in terms of both:

  • How their average predicted final league position has changed (the solid line in the top chart)
  • How their predicted probability of ending up in each section of the table has changed (the coloured columns in the bottom chart)

The point of this is to show how the model’s assessment of a club’s prospects has changed after each round of games, but I also wanted some idea of how good a predictor it is. I’ve therefore added (as a dashed line) each club’s actual league position and briefly assessed (under each graphic) how well it’s predicted this so far.
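
To make those two summary statistics a bit more concrete, here’s a minimal sketch of how a batch of simulated finishing positions for one club could be reduced to an average predicted position and a set of section probabilities. This isn’t the actual model code; the function name, the section boundaries and the random stand-in data are purely illustrative:

```python
import numpy as np

def summarise_simulations(final_positions, sections):
    """Reduce one club's simulated finishing positions to the two summary stats."""
    final_positions = np.asarray(final_positions)
    # Average predicted finishing position (the solid line in the top chart)
    avg_position = final_positions.mean()
    # Probability of finishing in each section of the table
    # (the coloured columns in the bottom chart)
    section_probs = {
        label: float(np.mean((final_positions >= lo) & (final_positions <= hi)))
        for label, (lo, hi) in sections.items()
    }
    return avg_position, section_probs

# Illustrative section boundaries for a 24-team division
sections = {
    "Automatic promotion": (1, 3),
    "Play-offs": (4, 7),
    "Mid-table": (8, 22),
    "Relegation": (23, 24),
}

# Stand-in for real simulation output: 10,000 simulated finishing positions
rng = np.random.default_rng(0)
simulated_positions = rng.integers(1, 25, size=10_000)

avg, probs = summarise_simulations(simulated_positions, sections)
print(f"Average predicted finish: {avg:.1f}")
print(probs)
```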

I’m not expecting a 100% accuracy rate for a variety of reasons, including:

  • As Leicester recently reminded us, football is notoriously difficult to predict and the strongest team doesn’t always win
  • The rating system that drives the predictions can take a little while to adjust to sudden changes (e.g. a big tactical shift, replacing the manager or a lot of transfer activity) – there’s a toy illustration of this lag after the list
  • While I’m pretty happy with the rating system and model, there’s limited data available in the lower leagues and therefore it may miss some subtleties in the way certain teams perform
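
As mentioned in the second point above, the ratings take a while to respond to genuine step changes in a team’s level. Here’s a toy illustration of that lag; it’s not the rating system the model actually uses, just a simple exponentially-smoothed update with made-up numbers:

```python
def update_rating(rating, observed, weight=0.15):
    """Nudge a rating towards the latest observed performance level.

    A small weight keeps the rating stable against one-off results, but it
    also means a genuine step change takes several matches to feed through.
    """
    return rating + weight * (observed - rating)

# A team rated at 1.0 (say, expected goals per game) suddenly starts
# performing at a 1.8 level, e.g. after a change of manager.
rating = 1.0
for match in range(1, 11):
    rating = update_rating(rating, observed=1.8)
    print(f"After match {match:2d}: rating = {rating:.2f}")
```

Even after ten matches the toy rating is still well short of the team’s new true level, which is the sort of lag that can leave predictions looking sluggish after a big squad or managerial change.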

Anyway, onto the graphics. There’s one for every team along with a brief summary and my view of how well the model’s predicted their fortunes.

After their impressive showing last season the model had high hopes for Accrington. While it didn’t factor in the loss of key players over the summer, it would have adjusted pretty quickly if they had begun performing at a lower level; strangely, though, they picked up pretty much where they left off performance-wise. Unfortunately a horrible run from November torpedoed any hope of another promotion challenge and their recovery came too late.

Model performance: Far too optimistic and didn’t see their horrible run coming

The model predicted a lower mid-table finish for Barnet to begin with and that’s where they’ve ended up. It maintained its optimism despite the Bees’ poor start, and the extent of their recovery even provided hope of a play-off spot before a poor run-in made that unlikely.

Model performance: Pre-season prediction proved to be pretty accurate

The model expected Blackpool to challenge for a play-off place this season and got their final position spot on. It even kept the faith during their indifferent start, although it was caught on the hop a bit by the suddenness of their late improvement.

Model performance: Kept the faith early on and ended up spot on

Cambridge’s disastrous start didn’t dampen the model’s expectation of a mid-table finish too much, so their subsequent recovery wasn’t too surprising. After their hot streak faded in late January they plodded along quite happily in upper mid-table with the occasional play-off flirtation.

Model performance: Sustained optimism was justified

In pre-season the model expected a play-off finish from Carlisle and it maintained a degree of restraint when the Cumbrians were riding high in the top three as their defence was nowhere near as strong as their attack. When their bubble burst they went into free-fall, but recovered just in time to salvage a play-off spot with a dramatic final day win, ending up exactly fulfilling the pre-season prediction.

Model performance: Pre-season caution ended up being the correct call

Cheltenham’s convincing promotion campaign led the model to predict a safe mid-table finish, but they don’t appear to have adjusted as well as expected. There’s been a lot more flirting with relegation than anticipated and their ratings have steadily declined, although a respectable record during the run-in was enough to keep them safe.

Model performance: Even accounting for a bit of bad luck, far too optimistic

Colchester were predicted to finish comfortably inside the top half and despite a poor run from mid-September they ended up doing just that. An impressive recovery – and something of a hot streak – took them to the fringes of the play-offs but performances never looked sufficient to propel them over the line. A late surge in a heavily-congested upper mid-table saw them register a respectable finish.

Model performance: More or less bang on and wasn’t overly swayed by a sticky patch

The model didn’t have high hopes for Crawley’s season and remained sceptical despite their good start. They eventually fell away, though not as far as expected in pre-season, and never really looked like being relegated, but it wasn’t a surprise to see a managerial change given that overall performances remained stodgy.

Model performance: Not that far off in the end, but a shade too pessimistic

Despite a promising first half of the season, spent mostly in the top half of the table, the model continued to predict that Crewe would finish their campaign in the lower half. Since mid-December that’s looked increasingly prescient and despite a strong finish they’ve ended up more or less where they were expected to.

Model performance: Pre-season prediction – and ongoing scepticism – was pretty accurate

Doncaster looked unlucky to have been relegated last season and the model’s pre-season prediction took that into account, pegging them for a top three finish. With automatic promotion having looked all but certain since the end of January, a return to the third tier always seemed to be on the cards.

Model performance: Correctly predicted a title challenge

In pre-season the model assessed Exeter as being capable of mounting a play-off challenge and wasn’t unduly swayed by their poor start, which looked relatively unlucky. Their subsequent recovery saw them first fall back in line with the model’s prediction and then go on to surpass it.

Model performance: Was right to be optimistic, but underestimated them slightly

Grimsby’s strong performance in gaining promotion was sufficient for the model to predict a safe mid-table finish and that always looked to be a likely outcome. While the Mariners spent some time in the play-off zone earlier in the campaign, their performances have been more consistent with the mid-table area they ended up occupying.

Model performance: Prediction was pretty accurate throughout

The model suspected that Hartlepool would have a relegation battle on their hands this season and maintained that stance even when some sharp finishing was keeping some distance between them and the bottom two. However they eventually ran out of steam which, combined with Newport’s revival, saw them finally dragged into the drop zone late on.

Model performance: Unfortunately the pre-season assessment ended up being accurate

My model was more pessimistic about Orient than most in pre-season and a lower mid-table finish looked to be a reasonable prediction in October, but the subsequent chaos surrounding the club saw their fortunes worsen considerably. Relegation looked increasingly likely from the beginning of February and had crystallised into near-certainty by mid-March.

Model performance: Too optimistic, although with mitigating circumstances

Luton were expected to challenge for promotion and they haven’t disappointed, although with early performances cooling, a play-off finish was always the likelier outcome. They ended up more or less where they were predicted to, with any hopes of a top three finish extinguished before the end of March.

Model performance: Correctly called a convincing play-off finish

Mansfield were expected to finish in upper mid-table back in pre-season and despite some ups and downs that proved pretty accurate. Performances improved after their January recruitment, but a relatively modest goal difference looked likely to count against them in the race for a play-off spot.

Model performance: Ended up pretty close to their pre-season predicted finish

The model’s assessment of Morecambe – particularly their defence – has been pretty negative since last season. It quickly adjusted after their strong early start and correctly flagged it as unsustainable, although the Shrimps continued to grind out enough points to remain comfortably above the relegation fray.

Model performance: Adjusted quickly after their surprisingly good start

Newport were expected to end up near the foot of the table, although to flirt with relegation rather than dramatically leave it at the altar. The model wasn’t fooled by a decent start and soon revised its expectation downwards, but like most people had them pegged as doomed until their great escape.

Model performance: Predicted a less harrowing battle against relegation, but quickly adjusted

When Notts County sat in the play-off zone in late October the model was still forecasting a relegation struggle based on their poor underlying performances. By the turn of the year things were looking bleak as predicted, but the appointment of Kevin Nolan turned things around impressively.

Model performance: Correctly predicted a relegation battle, although not the Nolan revolution

Plymouth were expected to challenge for automatic promotion in pre-season and recovered from a slow start to do just that. They’ve looked likelier than not to finish in the top three since late September, although they’ve never been predicted to claim top spot even when occupying it for long periods. Performances look to have ebbed slightly this season, although results have remained sufficient to keep them comfortably in the top three.

Model performance: Correctly predicted an automatic promotion challenge

Portsmouth were the model’s pre-season title favourites and fulfilled their promise dramatically on the final day after sitting mostly outside the top three until mid-March. Their strong ratings – driven by consistently dominant performances – kept the model optimistic of automatic promotion all season, with the probability estimate only grazing 50% at their lowest ebb.

Model performance: Eventually proved correct to label them title favourites

While Stevenage made some steady improvements this season, their late surge into play-off contention was surprising – at least to anyone looking at their data. On the underlying numbers Boro look like a solid mid-table side, which matched their league position as recently as February. The bubble eventually burst, but still left them sitting in a top half position that I suspect most fans would have gladly taken at the start of the campaign.

Model performance: Didn’t see the late “hot streak” coming

Wycombe were expected to finish in mid-table and that prediction held despite a poor start, with their impressive run of results feeling like an overdue correction. Performances then ebbed as sharply as they had risen before improving yet again, making them a tough team to anticipate. Even though they fell back towards their pre-season prediction, a top half finish was more than the model expected.

Model performance: A bit too pessimistic, but it was a rollercoaster season

The model expected a lower mid-table finish from Yeovil and that proved to be the case. Despite a brief incursion into the top half of the table – which didn’t move the needle on their anticipated finish all that much – they’ve mostly hovered close to where they were expected to end up.

Model performance: Correctly maintained a lower mid-table expectation