Employee Engagement Using the Federal Employee Viewpoint Survey

Zinta S. Byrne (Colorado State University, Fort Collins, CO, USA), Theodore L. Hayes (Federal Bureau of Investigation, Washington, DC, USA), and Kyla J. Holcombe (Colorado State University, Fort Collins, CO, USA)

Public Personnel Management, 2017, Vol. 46(4), 368–390
DOI: 10.1177/0091026017717242

Corresponding Author: Zinta S. Byrne, Department of Psychology, Colorado State University, 1876 Campus Delivery, Fort Collins, CO 80523-1876, USA. Email: Zinta.Byrne@colostate.edu

Abstract
To determine the consistency of the practice-oriented Federal Employee Viewpoint
Survey–Employee Engagement Index (FEVS-EEI) with growing academic consensus
around engagement as a motivational state, we examined the fit of the FEVS-EEI within
two major theoretical frameworks relative to an academically derived engagement
scale. Using a government sample (n = 241,465), we first examined the factor
structure of the FEVS-EEI (Leaders Lead, Intrinsic Work Experience, Supervisors).
Using a second field sample (n = 206), we conducted dominance and relative weights analyses, which showed that only one factor of the instrument significantly predicted worker engagement, as assessed with a scale validated to measure engagement itself rather than its antecedents. With the same field sample, we used structural
equation modeling to examine the fit of the practice-oriented FEVS-EEI to science-
oriented theoretical frameworks of engagement and found that the FEVS-EEI acts like an indicator of job resources, which are themselves a predictor of engagement.
Keywords
engagement, FEVS, turnover, workplace attitudes and behaviors, organizational
behavior
The U.S. federal government employs over 2 million people in over 80 agencies out-
side of the uniformed military and the Intelligence community. Seeking to address
employee engagement among this massive workforce, the government’s human
resources agency, the U.S. Office of Personnel Management (OPM), has routinely
surveyed government employees on their level of engagement, which it defines using employee characteristics such as increasing revenue and demonstrating high energy, attachment, and commitment (U.S. Office of Personnel
Management [USOPM], 2014).
Among organizational leaders, engagement is also typically defined by the charac-
teristics of engaged employees, as opposed to defining the construct itself, which
introduces a lack of clarity and precision (Podsakoff, MacKenzie, & Podsakoff, 2016).
For example, Mathews (2010) described engagement as employees who
take pride in their organization and work; take ownership of projects; talk positively
about themselves, their employer and the goods and services they help deliver; view
working for their organization as a career, not just a job; and, above all, perform better.
(p. 29)
Although these characteristics sound beneficial, they say nothing about what exactly
engagement is. In contrast, scholars within academic circles define engagement as the
“harnessing of organization members’ selves to their work roles” (Kahn, 1990, p.
694), manifesting in affective, cognitive, and physical activities. The academic orien-
tation is toward defining the construct rather than its outcomes, in an effort to promote
precision and delineation from other similar but distinct constructs (Podsakoff et al.,
2016).
OPM’s definition of engagement serves as an example of a common divergence
between how engagement is defined in practice versus science. Though discrepancies
between science and practice are to be expected given their different objectives (Macey
& Schneider, 2008), as interest in employee engagement increases, scholars of the
construct combat growing criticism of inconsistent definitions and inadequate mea-
surement within and across the science–practice divide (e.g., Masson, Royal, Agnew,
& Fine, 2008; Newman & Harrison, 2008). As noted by Macey and Schneider (2008),
practitioners and academicians conceptualize engagement differently, yet similarly
aim to identify levers and outcomes of engagement.
Given the number of federal employees who complete OPM’s engagement survey
(n = 392,752 in 2014) annually, and the hundreds of thousands of beneficiaries of
government services across the United States, OPM has far-reaching capacity to dis-
seminate its view of engagement. To advance recent efforts to bring science and prac-
tice a little closer together within the engagement literature (Van Rooy, Whitman,
Hart, & Caleo, 2011), we chose to examine OPM’s practice-oriented conceptualiza-
tion of engagement relative to a prominent research-oriented conceptualization (i.e.,
Kahn, 1990).
Our work departs in several ways from that of previous researchers using and examin-
ing the Federal Employee Viewpoint Survey (FEVS), OPM’s annual employee survey
that includes an index of engagement (FEVS-EEI) made up of three subfactors
(Leaders Lead, Supervisors, Intrinsic Work Experience). First, we focus on the engage-
ment index only, which heretofore has not been done (Fernandez, Resh, Moldogaziev,
& Oberfield, 2015). Using the 2014 FEVS data made public by OPM, we first confirm
the three-factor structure of the engagement index. We chose the 2014 data because the government was relatively stable that year, compared with 2013, which involved a government sequestration, and 2015, the year before a presidential election. Second, we collected
a new sample of field data assessing the FEVS-EEI in addition to theoretically pro-
posed antecedents and consequences of engagement, such as transformational leader-
ship, psychological meaningfulness, and turnover intentions. We applied research
methods for minimizing common method bias, such as collecting data at two different
time points separated by a little over a month (Podsakoff, MacKenzie, Lee, &
Podsakoff, 2003), because method bias can artificially inflate or deflate correlations (Doty &
Glick, 1998). Such methods are not possible under typical circumstances encountered
by OPM (Lee, 2015), but minimizing method bias allows for more accurate evalua-
tions of how the FEVS-EEI relates to organizational constructs. Third, as part of this
additional sample, we evaluated the FEVS-EEI relative to an academically derived
measure of engagement, the job engagement scale (JES), to determine whether the
measures are similar to or different from one another in their contribution toward
understanding employee engagement. Using structural equation modeling (SEM), we
examined how the FEVS-EEI functioned within the two most popular theoretical
frameworks of engagement used in academic research. This comparison provides
insight into improvements and interpretations from existing scholarly literature on
engagement, and into the efficacy of translating science into practice.
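
To make the question of each measure's "contribution" concrete, the sketch below shows one common way of apportioning explained variance in an engagement criterion among correlated predictors, a relative weights analysis. It is a minimal Python illustration on synthetic data, with hypothetical stand-ins for the three FEVS-EEI factor scores and a JES composite; it is not the analysis code used in this study.

    import numpy as np

    def relative_weights(X, y):
        """Apportion the model R^2 among correlated predictors (relative weights)."""
        X = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)   # standardize predictors
        y = (y - y.mean()) / y.std(ddof=1)                  # standardize criterion
        n = X.shape[0]
        Rxx = np.corrcoef(X, rowvar=False)                  # predictor intercorrelations
        rxy = X.T @ y / (n - 1)                             # predictor-criterion correlations
        vals, vecs = np.linalg.eigh(Rxx)                    # eigendecomposition of Rxx
        Lam = vecs @ np.diag(np.sqrt(vals)) @ vecs.T        # Rxx^(1/2): loadings on orthogonal variables
        beta = np.linalg.solve(Lam, rxy)                    # regress criterion on the orthogonal variables
        raw = (Lam ** 2) @ (beta ** 2)                      # raw weights; they sum to the model R^2
        return raw, raw / raw.sum()                         # raw weights and proportions of R^2

    # Hypothetical factor scores standing in for Leaders Lead, Supervisors, and
    # Intrinsic Work Experience, predicting a hypothetical JES composite.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 3))
    y = 0.1 * X[:, 0] + 0.2 * X[:, 1] + 0.6 * X[:, 2] + rng.normal(size=500)
    raw, rescaled = relative_weights(X, y)
    print("R^2:", round(raw.sum(), 3), "proportional weights:", rescaled.round(3))

Dominance analysis addresses the same question by averaging each predictor's incremental R-squared across all possible subsets of the other predictors; both techniques are useful when predictors are intercorrelated, as the EEI factors are.
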
Background
OPM administers the FEVS to determine whether managers of U.S. government agen-
cies foster a work environment conducive to job satisfaction, commitment, and reten-
tion (Fernandez et al., 2015). Since 2010, iterations of the FEVS have included the
engagement index. The U.S. Congress views the engagement scores as an indicator of
progress toward fulfillment of federal statutes on government management modern-
ization, which may otherwise be difficult to quantify and are required by the
Government Performance and Results Act of 2010. Congress mandated an annual
assessment of management aspects related to agency performance and employee sat-
isfaction with leadership, working environment, rewards, and opportunity for growth
and contribution (see Public Law 108-136, Division A, Title XI, Section 1128, https://
www.gpo.gov/fdsys/pkg/PLAW-108publ136/pdf/PLAW-108publ136.pdf), and the
FEVS fulfills this requirement as well as incorporating questions about engagement
(as allowed by law). Thus, through its investigative arm, the U.S. Government
Accountability Office (2015), Congress evaluates the progress that agencies make
toward achieving target engagement index scores, resulting in routine hearings when
agencies struggle to engage their workers (e.g., Rein, 2016).
According to OPM’s technical reports, the FEVS has not changed for several years
and comprises 84 questions grouped to measure seven topic areas: personal work
experience, work unit (e.g., recruitment, quality, cooperation), agency (e.g., perfor-
mance, fairness, personal empowerment), supervisor (e.g., perceptions of supervisor
support and leadership skills), leadership (e.g., ability to motivate employees, ethical
practices), satisfaction (e.g., pay, job, training, promotion), and work/life (e.g., bene-
fits, telework). The FEVS-EEI was developed using 15 of the 84 items hypothesized
to drive engagement (USOPM, 2014, see the appendix). Developers of the FEVS
report that the EEI items factor into three dimensions, labeled Leaders Lead, Supervisors, and
Intrinsic Work Experience (USOPM, 2014). As the goal of OPM in using the FEVS is
to understand employee retention and determine what work factors and conditions
under management’s control inhibit or foster employee retention, we focused on reten-
tion factors in our theoretical models of antecedents and consequences.
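
As a concrete illustration of what confirming this three-factor grouping entails, the sketch below specifies a three-factor confirmatory model in lavaan-style syntax. It assumes the Python semopy package and uses synthetic data with hypothetical item names (q1 through q15) and a hypothetical item-to-factor assignment; the actual 15 items and their mapping appear in the USOPM (2014) technical report and in the appendix.

    import numpy as np
    import pandas as pd
    import semopy

    # Synthetic stand-in data: one row per respondent, one column per EEI item.
    # The item-to-factor assignment below is hypothetical, not OPM's published key.
    rng = np.random.default_rng(1)
    fevs = pd.DataFrame(rng.normal(size=(300, 15)),
                        columns=[f"q{i}" for i in range(1, 16)])

    model_desc = """
    LeadersLead   =~ q1 + q2 + q3 + q4 + q5
    Supervisors   =~ q6 + q7 + q8 + q9 + q10
    IntrinsicWork =~ q11 + q12 + q13 + q14 + q15
    """

    model = semopy.Model(model_desc)
    model.fit(fevs)                  # estimate loadings and factor covariances
    print(model.inspect())           # parameter estimates
    print(semopy.calc_stats(model))  # fit indices such as CFI and RMSEA

Fit statistics from such a specification indicate whether the three-dimension grouping adequately reproduces the observed item covariances.
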
In contrast to the...
