In one of the five hundred and twenty five thousand online forum threads about why Chris Froome is or is not a doper, one of the questions raised was about whether a coach could detect if an athlete was on the juice based on their performance (power) data.
That led to a comment about typical changes in a rider's power over the course of a season.
As to the question of a coach's ability to detect doping from performance data: performance changes are multifactorial, which makes attributing them to any single cause nigh on impossible.
It's relatively easy to measure the performance change (power meters enable that), far more difficult to parse out the specific reasons why it occurs.
Of course, if you have known an athlete for a long time, know their training and performance history, and have a reasonable understanding of their potential, then a sudden large boost when nothing else in particular has changed might naturally make you begin to wonder.
Consider that I have seen athletes attain Functional Threshold Power improvements of between 5% and 100% in 6 months of training, and you can immediately see the problem, especially given that doping provides performance advantages well within the range attainable by completely legitimate means.
Better training, better diet, better sleep, better psychology, better aero, better planning and support, better race skills and race craft, better equipment and tools, and of course, doping. These are not mutually exclusive means to improve performance.
This is the problem that makes up much of the discussion about Froome and others. Lots of the Clinic's focus is on his "transformation". The problem is that there are plenty of legitimate as well as illegitimate means by which such performance changes can be explained.
Balance that against the fact that in the past 30 years half the riders standing on podiums at major Euro pro races, or finishing top 20 in Grand Tours, are known dopers (let alone the ones that slipped through the net). Objective assessment therefore needs to consider all such possibilities.
However that still doesn't mean one can immediately infer from performance data or even physiological testing data such as lactate threshold or VO2max the reasons for one's performance, or more to the point, their change in performance.
I think the only way an ethical coach is likely to spot or suspect doping is if they are in frequent eyeball contact with the athlete, and it's not so much going to come from their on-bike performance as from observing off-the-bike behaviour.
As much as coaches might like that level of contact so they can do a better job, they are often not in such close quarters with their clients. Riders travel, and a coach can't be with all their clients all the time. The exceptions are squad/institute coaches who interact with their athletes multiple times per week, travel with them, and typically attend the same races.
More usually the contact is via phone/Skype/chat/email and other social and electronic media interactions, as well as the athlete's diary notes that accompany their power meter files. For the most part this works pretty well (athlete results demonstrate that all the time), but of course there are some things for which seeing the athlete is preferable, and some personalities require more eyeball contact than others.
Anyway, on one of the forums I made a comment about the typical variability in FTP for an active racing cyclist. An often-quoted value is about a 10% variance from out-of-form/off-season to peak fitness. That was questioned as being quite a large variance. I really had nothing other than my years of coaching and personal experience to suggest whether or not it was realistic.
So I thought about attempting to answer the question with some data.
Fire up WKO4 and create a report using the following expression:
max(ftp(meanmax(power),90)) / min(ftp(meanmax(power),90))
and apply it to ranges covering entire years of data (with power data for >>90% of rides).
That expression calculates the modelled FTP for the date range selected, locates the maximum and minimum values for FTP that are calculated during that date range, and calculates the ratio of the maximum to the minimum FTP.
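Outside WKO4, the same max-to-min calculation can be sketched in a few lines of Python. This is purely illustrative: the `season_mftp` values below are made up, and in practice the modelled FTP numbers would come from WKO4's power-duration model rather than being typed in by hand.

```python
# Sketch of the seasonal FTP variance ratio, assuming we already
# have a series of modelled FTP values (e.g. one per week) for the
# date range of interest. All names and values here are illustrative.

def ftp_variance_ratio(modelled_ftp):
    """Ratio of the peak to the minimum modelled FTP over a season."""
    return max(modelled_ftp) / min(modelled_ftp)

# Hypothetical rider whose mFTP ranged from 250 W (off season)
# to 300 W (peak form) over the season:
season_mftp = [250, 262, 275, 290, 300, 285, 270]
print(round(ftp_variance_ratio(season_mftp), 2))  # 300/250 = 1.2
```

A ratio of 1.2 here would mean that rider's peak modelled FTP was 20% higher than their seasonal minimum.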
I did that for a selection of 10 athletes over 2 seasons. These athletes are mostly competitive amateur through to elite level (but no full time pros), and have power data for >> 90% of their rides.
This is the summary:
What I find interesting is that the variance as measured by the modelled FTP in WKO4 is larger than I would have expected.
Over 10 riders for 2 seasons each, we have an average maximum to minimum modelled FTP ratio of 1.23, meaning the peak modelled FTP for a season was, on average, 23% higher than the minimum modelled FTP for that same season.
Good luck trying to pick out one specific reason for performance changes when models are showing this sort of variance in FTP.
Do I think their FTP really varies that much? Well, possibly not quite, but with time I am finding mFTP to be quite a reliable indicator, provided the quality of the input data is good. One erroneous power spike can mess with the power-duration data and the mFTP value. Indeed, when there are large changes in the modelled power-duration metrics, it's more often due to input data error than anything else.
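To illustrate how a single bad sample can distort things, here is a minimal spike-filter sketch. The 2000 W cap and the carry-forward strategy are my assumptions for the example, not WKO4's actual data-cleaning logic, which I can't speak to.

```python
def clean_power(samples, max_watts=2000):
    """Replace physiologically implausible power spikes with the
    previous sample's value. The 2000 W cap is an arbitrary
    illustrative threshold; real cleaning would be more nuanced
    (e.g. rate-of-change checks, rider-specific limits)."""
    cleaned = []
    prev = 0
    for w in samples:
        if w > max_watts:
            w = prev  # treat the spike as a dropout and carry forward
        cleaned.append(w)
        prev = w
    return cleaned

# 65535 is a classic glitch value from some power meter head units:
ride = [250, 260, 65535, 255]
print(clean_power(ride))  # [250, 260, 260, 255]
```

Left unfiltered, that one 65535 W sample would become the ride's "best" short-duration power and pull the whole power-duration curve, and hence mFTP, out of shape.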
For reference I also provided an indication of their annual TSS (~27,000) and average CTL (~77 TSS/day) for this selection, just to show that these are riders who, on average, have quite decent training volume. I would not rely totally on those TSS values, though, as they probably need an audit of the FTP history applied in WKO4 to generate them, so I consider them indicative for now.
I also looked at my own data for 2009 and 2010, and my annual mFTP variance was 15% each year, so a bit lower than the average reported above.
Now of course with all such things one needs to consider context, and quality of the input data. For now that's a study beyond what I have time for.