Can the excessive use of digital technology and electronic feedback actually hinder your performance? SPB looks at the recent evidence
The relentless march of digital technology and the rapid progress of artificial intelligence – more commonly known as AI – appears, for better or for worse, unstoppable. Unless you’re someone who deliberately tries to avoid these technologies, there are few aspects of daily life that now seem to be without them. Nowhere is this truer than in training, where athletes now have access to a plethora of accurate and affordable electronic monitoring, testing and feedback devices. From simple bike computers to GPS devices and power-metering systems, it’s never been easier to monitor your performance and collect data to help you plan your future training sessions.
With so much information so readily available, it’s natural to assume that something like pacing a time trial or tailoring a strength-training session without the use of digital monitoring and feedback information will put athletes at a distinct disadvantage. Surprisingly however, it’s by no means as clear cut as you might assume.
For example, some research on cyclists and electronic feedback carried out in 2016 (which we reported at the time) found that providing this feedback didn’t necessarily improve performance(1). In this study researchers decided to investigate how experienced cyclists performed a time trial with and without the aid of commonly used feedback from electronic devices – eg speed, heart rate, power output, cadence, elapsed time, and elapsed distance. The researchers wondered whether without this digital feedback, the cyclists would find it more difficult to regulate, distribute, and manage their effort, resulting in poorer performances.
Their curiosity was piqued by the fact that previous research had found that an athlete’s prior experience and accurate knowledge of the task demands are what’s really crucial to success(2), and also that in cyclists performing cycling time trials, performance did not seem to differ when accurate or inaccurate split-time feedback was given during the trial(3). Importantly, prior to 2016, no endurance exercise studies had demonstrated that the instantaneous, in-race, task-related feedback now commonly available from external devices (eg bike computers, running watches, power meters) is actually necessary.
To put this to the test, 20 performance-matched cyclists were randomly divided into two groups and asked to perform a 20km time trial as fast as they could: one group rode with the commonly used electronic feedback (speed, heart rate, power output, cadence, elapsed time and elapsed distance), while the other rode with no feedback at all.
After the time trial, the researchers analysed the results to see what differences there were in terms of performance and other parameters.
The completion times of the cyclists using feedback were not significantly better than those of the no-feedback group, nor was there any real difference between the average power outputs per kilo of body mass of the two groups (see figure 1). It was true that the cyclists with access to feedback did put a spurt in at the end of the time trial (presumably because they knew exactly how much distance remained and were able to time this spurt appropriately), but this didn’t enhance the overall performance. Moreover, the perceived ratings of exertion for the time trial were the same regardless of whether the cyclists had feedback or not.
What was interesting about this piece of research is that it suggested experienced cyclists were able to use simple bodily and environmental information – eg “How am I feeling and how hard am I breathing?” – to control and adjust their effort levels and achieve comparable time trial performances to when feedback was available. Indeed, the researchers concluded by questioning “the necessity of the presence of in-race instantaneous task-related feedback via electronic devices for maximising performance”.
Since the 2016 study above, further research has been carried out showing that in experienced endurance athletes at least, the availability of electronic feedback may not offer any performance gains compared to no feedback. In a 2020 study on 30 trained cyclists (competing at club level), researchers examined the influence of the availability of task-specific feedback on 20km time trial cycling performance and test-retest reliability (in other words whether having electronic feedback enabled more consistent performances)(4).
The club-level cyclists completed two 20km time trials on different days, one with feedback (power output, cadence, gear and heart rate) and one without feedback. Elapsed distance was provided in both time trials – on the basis that many time trials consist of laps of a known distance, or clearly marked routes with distances between junctions marked, so even cyclists with no feedback will have a good idea of elapsed distance as they compete. During the two trials, the cyclists’ feedback data, times and heart rates were continuously recorded, and a rating of perceived exertion (RPE) was collected every 2km.
The results showed that neither time-trial performance nor pacing behaviour differed statistically between the feedback and no-feedback trials. Furthermore, the perceived exertion of the cyclists was no different between their feedback and no-feedback trials. And when analysed on a ‘per cyclist’ basis rather than as a group, performance and pacing behaviour remained much the same regardless of whether the cyclist in question used feedback or not!
In fact, such were the findings that the researchers concluded that ‘except for elapsed distance, electronic feedback should be withheld from trained cyclists when testing out new interventions or strategies that may affect performance’ – the concern being that cyclists could resort to following the feedback rather than monitoring their own internal and subjective sensations.
The evidence above suggests that in experienced endurance athletes, the provision of electronic feedback doesn’t seem to confer an advantage. A further question is whether there can be such a thing as feedback overload – in other words, could flooding an athlete with data during an endurance task actually worsen performance? There’s very little data on this topic, but another 2020 study, published by British and Qatari scientists, came up with some fascinating findings(5).
In this study, the researchers compared different modes of feedback (multiple vs. single) on 30-minute cycling time-trial performance in cyclists and triathletes. Twenty participants, 10 non-cyclists (controls) and 10 experienced cyclists/triathletes, performed two 30-minute self-paced cycling time-trials separated by 5-7 days, either using a single feedback (elapsed time) or multiple feedback modes (power output, elapsed distance, elapsed time, cadence, speed, and heart rate – the typical data displayed on bike computers/smart fitness watches).
The cyclists’/triathletes’ information acquisition was also monitored during the multiple-feedback trial via an eye tracker. The subjects’ perception of motivation and ratings of perceived exertion (RPE) were collected every five minutes. In both trials, performance variables (power output, cadence, distance and speed) and heart rate were recorded continuously.
The results showed that average power outputs were significantly greater in the experienced cyclists/triathletes compared to the non-cyclists (as you might expect), both with multiple feedback (227 watts vs. 137 watts) and single feedback (287 watts vs. 131 watts). What was fascinating however was that the non-cyclists’ performances did not differ between multiple and single feedback trials, whereas the cyclists/triathletes’ time-trial performances were very significantly impaired with multiple feedback (227 watts) compared to single feedback (287 watts), despite adopting and reporting a similar pacing strategy and experiencing similar perceptual responses (see figure 2).
How could having access to a large amount of information feedback impair time trial performance? One theory is that this could result from an interference effect. Prior research shows that in physically and mentally demanding dual-tasks such as endurance cycling time-trials, when complex cognitive tasks are also given, mental fatigue can occur more rapidly, causing a reduction in exercise intensity(6,7). As a consequence, the study authors concluded that “overloading athletes with feedback is not recommended for cycling performance”.
Although electronic feedback has become very popular in endurance training and competition, it’s also finding its way into strength training, particularly when it comes to helping athletes to optimize their movement patterns and form during exercise. Since the human eye may be a limited tool for detecting minor errors in movement patterns, the idea is that technological methods providing real-time feedback on movement velocity, balance, body positioning, or force distribution can help better assess and improve the technical execution of body movements. Indeed, research has shown that in the absence of a trainer, real-time feedback on movement velocity during each repetition can improve back-squat performance(8).
One method of providing this kind of feedback is in the form of, for example, lights or sounds, which are used to illustrate the outcome of the performance or execution for the athlete – so-called ‘open-ended’ feedback. The interactions between expectations, outcome, and feedback can then be used by the athlete to generate a relationship between the ‘feeling’ of a movement and the respective outcome(9). But how does this kind of electronic feedback stack up against the traditional verbal feedback and encouragement given in person by a coach, instructor or training partner? Does the all-seeing eye of electronic monitoring offer an advantage? Or does the lack of social interaction and communication with another human make it less effective?
A 2022 study by Norwegian scientists at Trondheim University tried to answer this exact question(10). Published in the journal ‘BMC Sports Science, Medicine and Rehabilitation’, the aim of this study was to assess the changes in performance and movement quality when executing the back squat following a five-week resistance-training program with either technological open-ended feedback or traditional, verbal feedback from an experienced trainer.
Twenty-two healthy, untrained females without a history of regular strength training in the previous eighteen months were recruited, and were randomly allocated to either a traditional feedback group (coached by an instructor), or an open-ended electronic feedback group (where laser dots were projected onto a screen in front of the participant, showing and guiding them to the optimum positioning). Untrained participants were essential to the study so that the effectiveness (or otherwise) of these two methods in producing correct movement patterns during the back squat could be properly assessed.
Testing to determine the correct initial loading for the back-squat training consisted of three sets of ten repetitions of the back squat, using 1) only the bar (20kg), 2) 50% of bodyweight, and 3) a load that allowed ten repetitions to be completed with approximately three repetitions in reserve (RIR). These submaximal loads were chosen due to the participants’ low training experience.
The participants were prescribed ten supervised training sessions over the course of five weeks (two weekly sessions), including three sets of ten repetitions. At least 80% attendance at training sessions was required to be included in the analyses, and an average attendance of 96% was reached. Each session was supervised by the same instructor and lasted around 20 minutes, including the warm-up. To maintain a standardized training condition between sessions and across all participants, a set of cues to be used during the training was developed (table 1 below).
TRADITIONAL FEEDBACK | ELECTRONIC FEEDBACK
Try to push equally hard with both feet | Keep the dots horizontally aligned
Strive to press using the whole foot | Try to keep the dots within the vertical lines
Remember to engage the core muscles | –
Maintain a slight outward knee rotation | –
The training load during the 5-week intervention was self-selected. The participants were encouraged to increase their training loads throughout the intervention, but to prioritize selecting a load that they could confidently lift ten times with proper technique and with approximately two reps in reserve. This was not just to encourage and ensure good form, but also to minimize any risk of injury, which is an important consideration when performing back squats.
At the start and end of the training period, all the women were assessed for performance in three ways: 1) back-squat strength; 2) isometric mid-thigh pull strength; and 3) lifting technique (evaluated by experienced instructors).
How did the two feedback methods compare? The findings were as follows:
· Both traditional and electronic feedback groups similarly increased their training resistances throughout the 10-session intervention.
· Both traditional and electronic feedback groups similarly increased their strength in the back squat.
· When it came to the mid-thigh pull strength however, only the traditional feedback group experienced gains.
· The assessed lifting technique (reviewed by the highly experienced instructors) improved in the traditionally coached participants, but NOT in those receiving electronic feedback.
In summing up, the authors concluded that whereas squat performance improved with either electronic or traditional feedback, traditional verbal feedback and encouragement was superior in developing good technique, which could make it preferable for novices and less experienced lifters. Why was the electronic feedback less effective at generating good technique, despite being more precise and continuous in nature? The researchers speculated that electronic feedback can make the user dependent on it to continuously correct the movement. In fact, several participants reported missing the feedback from the laser pointers when lifting without it!
It’s tempting to assume that the abundance of hi-tech electronic feedback makes for better training and competition, but hopefully you can see that while it can be a great aid, this assumption is not always true. Indeed, for experienced athletes, too much feedback could actually be detrimental.
Does this mean that endurance athletes should bin their smart watches, bike computers, GPS devices and power meters? Absolutely not, because there’s no denying that they can play an important role in ensuring the correct training intensity, and providing valuable information about whether the training performed is producing the required fitness gains. What the recent research does suggest however is that if you’re an experienced cyclist, runner or triathlete, you shouldn’t feel continuously wedded to these devices and be a slave to data.
In particular, there’s good evidence that electronic feedback is not a complete substitute for monitoring how you feel (perception of effort, breathing rate, sensations of discomfort etc); in this respect, your brain (trained by months or years of experience) can do an equally good or better job! For experienced athletes racing at a distance they are familiar with, too much electronic feedback seems detrimental so should be discouraged. More generally, the key to data collection is to know what data really matters, when to collect it, how to interpret that data and importantly, understanding how to use your findings to modify your training sessions going forward.
Finally, when it comes to developing good technique or honing skills using real-time feedback, while electronic feedback may help, the presence of an experienced coach or trainer to deliver that feedback verbally, along with words of encouragement, might be more effective. This is particularly true when athletes have to execute those skills in competition without electronic aids. In summary then, if you sometimes suffer from data overload, try some electronics-free training. Not only could it be a rather liberating experience, it could even improve your performance!
References
1. Front Physiol 2016; 7:348
2. Sports Med 2013; 43:1-8
3. Eur J Appl Physiol 2012; 112:231-236
4. J Sci Med Sport 2020; 23(8):758-763
5. Front Psychol 2020; 11:608426
6. McCann RS, Johnston JC (1989). The locus of processing bottlenecks in the overlapping tasks paradigm. Paper presented at the Annual Meeting of the Psychonomic Society, Atlanta, GA
7. J Exp Psychol Hum Percept Perform 1984; 10:358
8. Proc Hum Factors Ergon Soc Annu Meet 2017; 61:1546-1550
9. Exp Brain Res 2008; 185(3):359-381
10. BMC Sports Sci Med Rehabil 2022; 14:163