Research Article

The multiphase optimization strategy (MOST) and the sequential multiple assignment randomized trial (SMART): two novel evaluation methods for developing optimal training programs

Matt C. Howard (1,2)* and Rick R. Jacobs (2)
(1) Department of Management, Mitchell College of Business, University of South Alabama, Mobile, AL, U.S.A.
(2) Department of Psychology, Pennsylvania State University, University Park, PA, U.S.A.

Journal of Organizational Behavior, 37, 1246-1270 (2016). Issue date: 01 November 2016.
Published online 29 March 2016 in Wiley Online Library (wileyonlinelibrary.com). DOI: 10.1002/job.2102
Received 24 February 2015; Revised 01 February 2016; Accepted 15 February 2016
Summary: Current methodologies in training evaluation studies largely employ a single method entitled random confirmatory trials, prompting several concerns. First, practitioners and researchers often analyze the effectiveness of their entire omnibus training, rather than the individual elements or identifiable components of the training program. This slows the testing of theory and development of optimal training programs. Second, a common training is typically administered to all employees within an organization or workgroup; however, certain factors may cause individualized training to be more effective. Given these concerns, the current paper presents two training evaluation methodologies to overcome these problems: the multiphase optimization strategy and sequential multiple assignment randomized trials. The multiphase optimization strategy is a method to evaluate a standard training, which emphasizes the importance of a multi-stage training evaluation process to analyze individual training elements. In contrast, the sequential multiple assignment randomized trial is used to evaluate an adaptive training that varies over time and/or trainees. These methodologies jointly overcome the problems noted earlier, and they can be integrated to address several of the key challenges facing training researchers and practitioners. Copyright © 2016 John Wiley & Sons, Ltd.

Keywords: training; methodology; statistics
Despite the widespread study of organizational training, certain methodological issues systematically appear in training scholarship. Currently, the most used method to evaluate a training program is called random confirmatory trials (RCTs; Campbell, 1988; Tannenbaum & Yukl, 1992). While this method provides many benefits, RCTs also have many drawbacks. Notably, although a training program may consist of several individual elements,[1] RCTs only analyze the effectiveness of the overall training (Bass & Avolio, 1990; Burke & Day, 1986; Smith & Smith, 2007). For example, Barling, Weber, and Kelloway (1996) investigated the effectiveness of a training program to improve managers' transformational leadership. While the training consisted of four clearly identifiable elements to achieve this goal, their training evaluation methodology only analyzed the effectiveness of all the elements together. Although RCTs have provided insightful evidence on general training, little is known about the effectiveness of the individual training elements. It is possible that a single element entirely drives employee improvement, or some elements may even detract from the overall effect. Unfortunately, through only analyzing an entire regimen,
the successful or unsuccessful elements cannot be identified (Isler et al., 2009; Lesch, 2008). Organizations may be spending excessive amounts on extraneous elements, and researchers are unable to further theory through the analysis of particular training elements.

*Correspondence to: Matt C. Howard, 5811 USA Drive S., Rm. 346, Mitchell College of Business, University of South Alabama, Mobile, AL 36688, U.S.A. E-mail: mhoward@southalabama.edu.
[1] The term "training" refers to an intervention created to improve employee attributes and/or performance, with the assumption that most training programs involve several elements. The term "element" refers to an individual module or aspect of the training. The term "module" refers to a section of instructional material that provides direct information about certain knowledge, skills, or abilities (e.g., presentation, hand-out, etc.). The term "aspect" refers to an attribute of the training that is primarily meant to influence trainee motivation and/or reactions (e.g., paying trainees, method of delivery, etc.). All training programs consist of one or more modules, but aspects are optional.
Additionally, although many individual differences that cause individuals to experience differential training effects have been discovered (Bauer et al., 2012; Driskell et al., 1994; Martocchio & Judge, 1997), authors have noted that only occasional efforts have been made to scientifically harness and integrate these differences into adaptive training programs (Gully & Chen, 2010; Tannenbaum & Yukl, 1992). The dearth of adaptive training programs may be due to RCTs' poor ability to evaluate the interaction between training elements and individual differences and/or the cost of individualizing training. While the second issue is likely to remain, the first issue should be evaluated. Given these recurring issues in the training literature, the current study reports on two unique methods to better understand the effectiveness of training programs and their elements. Both of these methods are likely unfamiliar to many organizational researchers, as these methods were developed in other research areas (Engineering and Public Health).
The first is the multiphase optimization strategy (MOST; Collins et al., 2005; Collins, Murphy, & Strecher, 2007). Before the omnibus training evaluation, additional steps are taken to determine individual training elements' effectiveness, interaction effects, and optimal treatment levels. These steps are reliant on experimental designs, and the current article explores the feasibility of four experimental designs that could be used in organizational contexts. The successful implementation of MOST removes ineffective training elements, resulting in an optimized training. Also, compared with RCTs, MOST provides richer information about a training program and included elements, benefiting the investigation of theory. Many research questions can be answered with MOST that would otherwise be impossible to explore.
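To make this screening step concrete, the sketch below illustrates one common way such an element-level experiment can be set up and analyzed: a full factorial design in which each candidate element is crossed on/off, trainees are randomly assigned to the resulting conditions, and element main effects and pairwise interactions are estimated from the outcomes. The three element names, the cell size, and the outcome data are hypothetical placeholders (the outcomes are simulated), so the code sketches the general logic rather than any specific design examined in the article.

# Illustrative sketch only: a 2^3 full-factorial screening experiment of the kind
# that component-screening strategies such as MOST draw on. The element names,
# sample size, and outcome data below are hypothetical and simulated.
import itertools
import numpy as np

rng = np.random.default_rng(42)
elements = ["lecture", "role_play", "incentive"]   # hypothetical training elements
n_per_cell = 30                                    # trainees randomized to each condition

rows, outcomes = [], []
for condition in itertools.product([0, 1], repeat=len(elements)):  # 8 on/off combinations
    lecture, role_play, incentive = condition
    for _ in range(n_per_cell):
        # Synthetic data-generating model: two elements and one interaction matter,
        # while the third element contributes nothing.
        y = (0.5 * lecture + 0.3 * role_play + 0.0 * incentive
             + 0.2 * lecture * role_play + rng.normal(0.0, 1.0))
        rows.append(condition)
        outcomes.append(y)

# Effect-coded (-1/+1) design matrix with all two-way interaction columns.
X_main = 2 * np.array(rows) - 1
pairs = list(itertools.combinations(range(len(elements)), 2))
X = np.column_stack([np.ones(len(outcomes)), X_main]
                    + [X_main[:, i] * X_main[:, j] for i, j in pairs])
labels = ["intercept"] + elements + [f"{elements[i]} x {elements[j]}" for i, j in pairs]

# Least-squares estimates of the effect-coded main effects and two-way interactions;
# with -1/+1 coding, a main-effect coefficient is half the average on-vs-off
# outcome difference for that element.
coefs, *_ = np.linalg.lstsq(X, np.asarray(outcomes), rcond=None)
for label, b in zip(labels, coefs):
    print(f"{label:>22}: {b: .3f}")

In a MOST-style evaluation, elements whose estimated effects are negligible or negative would be candidates for removal before the optimized package is confirmed in a conventional omnibus trial.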
The second methodology is the sequential multiple assignment randomized trial (SMART; Murphy, 2005;
Murphy et al., 2007). SMART enables researchers to investigate the main effects and interactions of individual
time-varying adaptive training elements. This method involves the random assignment of employees to alternative
training conditions within an experiment, followed by further random assignment to conditions in follow-up
experiments based on tailoring variables and decision rules. From the results of these experiments, practitioners
and researchers can determine the best training considering the individual characteristics of a trainee, and thus
provide adaptive or customized interventions. Compared with RCTs, SMART provides more detailed information
about the unfolding dynamics of individuals, training programs, and their interactions. Once again, many research
questions can be answered with SMART that would otherwise be impossible to explore.
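As a concrete illustration of this sequential logic, the following sketch randomizes synthetic trainees to an initial training, observes a hypothetical tailoring variable (whether the trainee reached an intermediate performance benchmark), and applies a simple decision rule that maintains, augments, or switches the training at a second randomization. The training names, the responder definition, and the decision rule are invented for illustration and are not the adaptive designs discussed in the article.

# Illustrative sketch only: the two-stage assignment logic of a SMART. The
# training names, responder definition, and decision rule are invented
# placeholders, not the adaptive trainings discussed in the article.
import random

random.seed(7)

def stage_one() -> str:
    # Stage 1: each trainee is randomized between two initial trainings.
    return random.choice(["coaching_module", "e_learning_module"])

def stage_two(first_training: str, responded: bool) -> str:
    # Decision rule on the tailoring variable (here, whether the trainee reached
    # an intermediate performance benchmark after stage 1): responders keep a
    # maintenance version of their training, non-responders are re-randomized
    # to an augmented version or to a switched training.
    if responded:
        return f"maintain_{first_training}"
    return random.choice([f"augment_{first_training}", "switch_to_blended_training"])

# Walk a few synthetic trainees through both randomizations.
for trainee_id in range(6):
    first = stage_one()
    responded = random.random() < 0.5      # synthetic intermediate outcome
    second = stage_two(first, responded)
    print(f"trainee {trainee_id}: stage 1 = {first}, responder = {responded}, stage 2 = {second}")

Comparing outcomes across the adaptive strategies embedded in such a design is what would allow practitioners to identify which sequence of training decisions works best for which trainees.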
Both of these methods have been repeatedly applied in other fields to streamline the evaluation of costly interventions (Collins et al., 2005; Rivera et al., 2007). The intention of this article is to discuss when and under what
conditions MOST and SMART might be most useful to organizational practitioners and researchers. The following
sections (1) provide an overview of RCTs, MOST, and SMART; (2) discuss examples of organizational training
situations that might benefit from using MOST and SMART; and (3) denote the implications of these methods
and propose directions for future research.
Training Background
In recent decades, training research has become increasingly nuanced and theoretically driven, demonstrating that
any successful training involves a careful consideration of a complex series of necessary factors (Grossman & Salas,
2011; Kirkpatrick, 1994; Rouiller & Goldstein, 1993). Despite great advancements in this area, authors have
continuously noted the limited use of methodologies to evaluate a training program, which hampers the investigation
of new theories and important training elements. In Campbell's (1988) review, he noted that "by far" the most popular training evaluation methodology was the comparison of a single desired training program (whether newly created or existing) to a control condition, which may be an alternative training program or none at all. This sentiment was echoed by Tannenbaum and Yukl (1992), who added that this research type has "only marginal utility for improving our understanding of training" (p. 407). They further noted the concerns with this design and the
importance of new methods by stating,