Several research groups have applied systems theory approaches to quantify and describe responses to physical training (Busso and Thomas, 2006; Morton, 1997; Mujika et al., 1996). The Banister impulse-response model estimates performance at a given time as the difference between the 'fitness' and 'fatigue' effects of prior training loads (Banister et al., 1975). The five adjustable parameters within the model (initial performance level; two time constants that describe fitness and fatigue decay rates; and two gain parameters that describe how daily training impulses determine the amplitude of the fitness and fatigue effects) are calibrated against measured performance data to provide individualised training response information (Banister et al., 1975). Based on these relatively simple assumptions, the Banister impulse-response model can explain a substantial proportion (over 90% in some cases) of the variance in performance data (Busso, 2003; Morton, 1997; Wood et al., 2005). However, a major limitation of the Banister model and its extensions is the requirement for frequent maximal performance tests to accurately determine model parameters (Jobson et al., 2009). Regular maximal performance testing is especially difficult in team-sport settings with weekly competitive fixtures, given its potential to cause additional fatigue (Nedelec et al., 2013). To date, the need for regular performance testing has limited the broader use of the Banister model in team sports.
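The fitness-fatigue structure of the Banister model can be sketched numerically as follows. The parameter names (p0, k1, k2, tau1, tau2) and values used here are illustrative placeholders, not calibrated estimates from any study:

```python
import math

def banister_performance(loads, p0, k1, k2, tau1, tau2):
    """Banister impulse-response model (sketch): performance on day t is the
    initial level plus the gain-weighted difference between exponentially
    decaying fitness and fatigue responses to all prior training impulses.
    p0: initial performance level; k1, k2: fitness/fatigue gain parameters;
    tau1, tau2: fitness/fatigue decay time constants (days)."""
    perf = []
    for t in range(len(loads)):
        fitness = sum(loads[s] * math.exp(-(t - s) / tau1) for s in range(t))
        fatigue = sum(loads[s] * math.exp(-(t - s) / tau2) for s in range(t))
        perf.append(p0 + k1 * fitness - k2 * fatigue)
    return perf
```

Because the fatigue time constant is typically much shorter than the fitness time constant, a single training impulse first depresses modelled performance (fatigue dominates) and later raises it above baseline as fatigue decays away faster than fitness.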
Heart rate variability (HRV) is a popular tool for monitoring wellness and training adaptation in athletes (Bellenger et al., 2016). In particular, the parasympathetic activity of the autonomic nervous system, typically represented by the root mean square of successive differences between adjacent R-R intervals (rMSSD), has been shown to correlate well with variations in performance within both cross-sectional (Kenney, 1985), and longitudinal studies (Chalencon et al., 2015) across multiple sports. In addition, HRV measures are associated with overuse injury risk (Williams et al., 2017) and, more broadly, markers of global health (Adamson et al., 2004; Kiviniemi et al., 2007). As such, rMSSD may be an appropriate representative parameter to describe an athlete's stress-recovery status (Chalencon et al., 2012), especially given the ease and non-intrusive nature of its collection. Indeed, the emergence of smartphone applications and technologies has dramatically increased the accessibility of HRV measurement, such that it can now be recorded accurately using only a smartphone device (Plews et al., 2017). Recent work in competitive swimmers has demonstrated that HRV measures may be used as a viable substitute for performance measurements for the mathematical modelling of training effects (Chalencon et al., 2012), and could therefore be used to optimally plan and monitor training strategies in an individualised manner (Chalencon et al., 2015). However, this is yet to be applied and evaluated in a team sport context where, as stated previously, regularly monitoring changes in performance is inherently more complex than in individual, endurance-based sports.
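The rMSSD statistic named above is straightforward to compute from a sequence of R-R intervals; a minimal sketch, with interval values in milliseconds chosen purely for illustration:

```python
import math

def rmssd(rr_ms):
    """Root mean square of successive differences between adjacent
    R-R intervals, in milliseconds."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Example: successive differences of 10, -10, -10 ms give rMSSD = 10 ms.
value = rmssd([800.0, 810.0, 800.0, 790.0])
```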
Rugby Sevens is a format of Rugby Union that has grown in popularity in recent years, and is now included in the Summer Olympic Games. The contact and collision events that are inherent to Rugby Sevens, alongside the high physiological demands (Higham et al., 2014), mean that the risk of injury associated with the sport is relatively high (Fuller et al., 2010). In particular, the injury incidence rate associated with elite Rugby Sevens training is substantially higher than that of the 15-a-side game (West et al., 2017), which is likely a result of the high training loads that are necessary to meet the physiological demands of competition. Therefore, the careful monitoring and management of player workloads on an individual basis is of critical importance in order to protect players from the negative consequences of training whilst increasing their performance capacity and resilience (Gabbett, 2016). Moreover, a consideration of the most appropriate load measures (e.g., internal versus external) for this setting is also required. Accordingly, the aim of the current study was to assess whether chronic HRV responses, as a representative marker of training adaptation, could be predicted from the training loads undertaken by elite Rugby Sevens players. In addition, we sought to compare the effectiveness of internal (session rating of perceived exertion [sRPE]) versus external (total high speed running distance [HSD]) load measures for this purpose.
Eight male international Rugby Sevens players (mean ± SD; age: 27 ± 4 y, height: 1.86 ± 0.07 m, body mass: 93.2 ± 8.6 kg) were followed prospectively throughout an eight-week pre-season period that was undertaken in preparation for the 2016-17 World Rugby Sevens Series. The priority during this phase was to develop central adaptations through the use of extensive intervals, with a linear increase in intensity. The average weekly sRPE and HSD loads across this period were 2947 ± 941 AU and 3389 ± 892 m, respectively. This eight-week pre-season period was chosen as each parameter in the model was likely to be emphasised across this preparation phase (Clarke and Skiba, 2013), and periods of 60-90 days are recommended for the mathematical modelling of training and performance, after which parameters should be reset (Banister, 1991). The study was conducted in accordance with the principles of the Declaration of Helsinki (World Medical Association, 2013) and a local university research ethics committee provided ethical approval.
Heart rate variability
Athletes were instructed to perform a 90 second HRV measurement each morning upon waking whilst breathing spontaneously in a seated position (Esco and Flatt, 2014). A Polar H7 Bluetooth heart rate strap (Polar Electro, Kempele, Finland) paired with a freely available smartphone application (Elite HRV, Asheville, North Carolina, USA) was used for daily HRV acquisition. The rMSSD was the HRV measure used for analysis, as this has been demonstrated to have greater reliability than other spectral indices (Al Haddad et al., 2011). The rMSSD data were log-transformed (Ln) to reduce non-uniformity of error (Plews et al., 2012). The 42-day exponentially-weighted average of this variable (Ln rMSSD42-exp) was then calculated and used in further analyses, as a representative parameter of chronic training adaptation (Chalencon et al., 2015). The Ln rMSSD42-exp calculation was initiated with the mean Ln rMSSD value observed across the first seven days of the monitoring period.
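The 42-day exponentially-weighted average described above can be sketched as follows. The smoothing coefficient alpha = 2/(42 + 1) is a conventional EWMA choice and is an assumption here, since the exact coefficient is not stated in this excerpt; the seven-day seed follows the text:

```python
def ln_rmssd_42exp(daily_ln_rmssd, n_days=42):
    """42-day exponentially-weighted average of daily Ln rMSSD values,
    initiated with the mean of the first seven days of monitoring.
    alpha = 2 / (n_days + 1) is an assumed (conventional) smoothing factor."""
    alpha = 2.0 / (n_days + 1)
    ewma = sum(daily_ln_rmssd[:7]) / 7  # seed: mean of first 7 days
    series = []
    for x in daily_ln_rmssd:
        ewma = alpha * x + (1 - alpha) * ewma
        series.append(ewma)
    return series
```

An exponentially-weighted average of this kind responds smoothly to day-to-day fluctuations while weighting recent days most heavily, which is why it serves as a chronic (rather than acute) adaptation marker.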
The validity of the Elite HRV application for computing Ln rMSSD was established by comparing simultaneous 60 s recordings of the same tools used in this study (i.e., Polar H7 Bluetooth heart rate strap and application) with an electrocardiograph (Biopac MP100, Goleta, California, USA) among 10 collegiate athletes. Procedures and comparison methods from a previous study were replicated (Esco et al., 2017). Measures of Ln rMSSD were acquired in the supine, seated and standing position for each individual. Differences between supine (Elite HRV = 3.70 ± 0.43 ms, ECG = 3.70 ± 0.43 ms), seated (Elite HRV = 3.44 ± 0.62 ms, ECG = 3.43 ± 0.59 ms) and standing (Elite HRV = 2.84 ± 0.52 ms, ECG = 2.85 ± 0.52 ms) measures were not significant (p = 0.80...