Event simulation vs real events



I looked at the recent Worlds 2005 results, and at the variation in each competitor's race placings around their overall average placing.  For example, Craig Smith won the event with an average placing of 7.83 from actual placings of 5, 12, 13, 1, and so on, hence his placing variations around his average were -2.83, 4.17, 5.17, -6.83, and so on.  The resulting graph for all competitors shows the classic Gaussian 'normal' curve.  The key statistic is that the standard deviation of these placing variations is 11.08, and this is a value that the simulation should approach.
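The placing variations quoted above can be reproduced in a few lines of Python.  The 7.83 average and the placings 5, 12, 13, 1 are from the text; the fleet-wide 11.08 figure would need every competitor's placings, so the standard deviation below is computed only over this illustrative handful:

```python
import statistics

# Craig Smith's average placing and his first few race placings, as quoted.
average = 7.83
placings = [5, 12, 13, 1]

# Variation of each placing around his average.
variations = [round(p - average, 2) for p in placings]
print(variations)  # [-2.83, 4.17, 5.17, -6.83]

# For the event as a whole, the key statistic is the standard deviation
# of these variations pooled across all competitors and races (11.08 for
# Worlds 2005).  statistics.pstdev computes it for any such pooled list.
print(round(statistics.pstdev(variations), 2))
```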

I then ran a simulation of the event, and compared the results.  The simulation gave a very similar result, but with not quite as much variation: a standard deviation of placing variations of 9.02.  To bring the simulation into better alignment, either the probability of incidents needs to be greater than the 0.15 used, or the variability in ability greater than the 25% used.  Since reports of the event indicate (with one or two major exceptions!) that incidents were not a major feature of the results, it seems that average race-by-race variation is larger 'in real life' than the 25% modelled in the simulation.
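The simulation itself is not listed here, but the comparison can be sketched along the following lines.  Only the two quoted parameters (incident probability 0.15, ability variability 25%) come from the text; the fleet size, the ability scale, scoring by raw performance order, and the size of an incident penalty are illustrative assumptions:

```python
import random
import statistics

def simulate_race(abilities, p_incident=0.15, variability=0.25, rng=random):
    """Race once: perturb each sailor's ability by race-to-race variability,
    maybe apply an incident, and convert performances to placings (1 = first).
    The 0.15 and 0.25 defaults are the article's parameters; the rest is an
    assumed model for illustration."""
    perfs = []
    for ability in abilities:
        perf = ability * (1 + rng.gauss(0, variability))
        if rng.random() < p_incident:
            perf -= rng.uniform(0.2, 0.8)  # assumed incident penalty
        perfs.append(perf)
    order = sorted(range(len(perfs)), key=lambda i: -perfs[i])
    placings = [0] * len(perfs)
    for place, i in enumerate(order, start=1):
        placings[i] = place
    return placings

# Simulate a 20-race event for an assumed 40-boat fleet with evenly
# spread abilities.
rng = random.Random(1)
abilities = [1 - i / 40 for i in range(40)]
races = [simulate_race(abilities, rng=rng) for _ in range(20)]

# Standard deviation of each sailor's placings around their own mean,
# pooled across the fleet -- the statistic compared with 11.08 above.
deviations = []
for i in range(40):
    mine = [race[i] for race in races]
    mean = statistics.mean(mine)
    deviations.extend(p - mean for p in mine)
print(round(statistics.pstdev(deviations), 2))
```

Raising `p_incident` or `variability` widens the spread of placings, which is exactly the adjustment the comparison above suggests.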

A second interesting analysis was to look at the pattern of final points scores, to see whether their distribution matched the cosine 'S' model that can be used in the simulation.  The graphs from three recent events show that an 'S' curve is visible in 2003 and 2004, but the statistics indicate that a straight line is a better fit to the 2005 results.  The simulation results are shown below.
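One plausible form for the cosine 'S' curve is half a cosine cycle mapped onto the range of scores; the exact formula is not given above, so the expression below is an assumption.  The 'S-versus-linear' question can then be settled by comparing sum-of-squared residuals against each model:

```python
import math

def cosine_s(rank, n, lo=0.0, hi=100.0):
    """Assumed 'S' model: score for rank 1..n follows half a cosine cycle
    from hi down to lo, flat at the head and tail, steep in the middle."""
    t = (rank - 1) / (n - 1)          # 0 at the top of the fleet, 1 at the bottom
    return lo + (hi - lo) * 0.5 * (1 + math.cos(math.pi * t))

def better_fit(scores):
    """Compare residuals of descending-sorted scores against the cosine 'S'
    curve and a straight line through the same endpoints; return the name
    of the better-fitting model."""
    n = len(scores)
    lo, hi = min(scores), max(scores)
    sse_cos = sum((s - cosine_s(r, n, lo, hi)) ** 2
                  for r, s in enumerate(scores, 1))
    linear = [hi - (hi - lo) * (r - 1) / (n - 1) for r in range(1, n + 1)]
    sse_lin = sum((s - l) ** 2 for s, l in zip(scores, linear))
    return 'cosine' if sse_cos < sse_lin else 'linear'

# Scores generated from the S model are, of course, classified as such.
scores = [cosine_s(r, 20) for r in range(1, 21)]
print(better_fit(scores))  # cosine
```

Applied to real final points lists, this is the kind of test that would flag 2003 and 2004 as 'S'-shaped but 2005 as closer to linear.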

The simulation shows its cosine 'S' ability curve quite clearly.  It also shows an interesting artefact, also visible in the results from real events: a low-frequency oscillation around the line of best fit.


©2024 Lester Gilbert