Do Inmate Survey Data Reflect Prison Conditions? Using Surveys to Assess Prison Conditions of Confinement

Date: 1 June 1999
DOI: 10.1177/0032885599079002007
SCOTT D. CAMP
Federal Bureau of Prisons
This study examines whether survey data collected from inmates can be used to create group-level measures of prison conditions. Inmates often carry the stigma that they are never to be trusted. A subset of a national survey of inmates was used to examine how inmates incarcerated in prisons operated by the Federal Bureau of Prisons answered questions about safety, noise, and job assignments at their prisons. In particular, this report demonstrates that inmate answers to the questions vary in a systematic fashion that lends credence to using survey data from inmates to obtain information about the prisons in which they are incarcerated. However, proper techniques for using survey data have not been practiced in existing evaluation studies comparing public and private prisons.

Correctional experts claim that there are clear differences between well-run and poorly run prisons. Even when there is agreement about what constitutes a well-run prison, and often there is no such agreement (for contrasting opinions, see DiIulio, 1987, and Wright, 1994), capturing these differences in a systematic and defensible fashion has proven challenging for experts both inside and outside corrections. Comparing the relative performance of prisons within a given prison system has always been an interest of public-sector prison administrators, but a newer interest is comparing prisons under the legal authority of one prison system that are run by different entities. This, of course, is an allusion to the legal stipulations that often exist when an existing public prison system is charged with contracting the
operations of a prison to the private sector. There is often language that the private-sector contract can be awarded or renewed only if costs are lower and the quality of services is comparable at the private prison or, alternatively, if costs are comparable and services are superior at the private prison. But how do researchers or policy makers evaluate the quality of services at the respective public and private prisons? (Others can examine the cost issues.)

AUTHOR'S NOTE: The author would like to thank Dr. Christopher A. Innes, Chief of the Statistical Reporting Section in the Office of Research and Evaluation at the Federal Bureau of Prisons, for providing the data analyzed here. The author also would like to thank William G. Saylor, assistant director in the Office of Research and Evaluation at the Federal Bureau of Prisons, for his helpful comments on an earlier version of the article.

THE PRISON JOURNAL, Vol. 79 No. 2, June 1999, 250-268
© 1999 Sage Publications, Inc.
One well-established approach to comparing prisons (at least in the
United States) is to conduct prison audits. This was the approach used in the
evaluation of a private prison in Tennessee (Tennessee Select Oversight
Committee on Corrections, 1995). Audits, though, tend to have several shortcomings in distinguishing well-functioning from poorly functioning prisons, even when the prisons are governed by identical policy. First, audit procedures are predisposed toward being paper exercises that document adherence to policy. Although a properly functioning prison does adhere to good policy, adherence to policy is probably a necessary but not sufficient condition for operating a well-run prison. The American Correctional Association (ACA), for example, makes no claim that ACA accreditation is necessarily an indication of superior performance. Second, when auditors attempt to generalize their findings into an overall rating, there is much room for subjectivity, and this may inhibit auditors from making critical remarks that cannot be directly supported by the procedural evidence. This may encourage "grade inflation" in the evaluations of prisons. Third, audits tend to be costly, especially in their use of human resources. This last point about cost is especially problematic when the comparison prisons are operated under different policy, which is the case at least in the federal sector, where performance-based contracting is being used.1
A complementary approach to audits is to survey the opinions of those most intimately involved with prison operations. Whereas this can involve gathering feedback from local prison administrators, it can also involve soliciting the opinions of line staff and inmates. Although line staff and inmates generally do not have detailed knowledge about certain aspects of prison operations (matters such as the prison budget, personnel administration, or technical details about the physical plant), there are many relevant aspects of prison operations about which staff and inmates are informed by their normal day-to-day activities. It seems reasonable to assume that line staff and inmate evaluations of these aspects (factors such as inmate safety, staff safety, quality of programs, inmate idleness, the accessibility and quality of medical care, and the quality of food operations) are partly influenced by specific practices and resources at the prison.
Managers, though, are suspicious of relying on the evaluations of those
working under them to provide feedback about their effectiveness. Wardens are no exception to this pattern, and if wardens are suspicious of anything
more than staff evaluations, it is inmate evaluations. After all, everyone
wants evaluations (whether of themselves or the work setting) to be produced
by trustworthy evaluators. But are staff and inmates trustworthy in providing
evaluations of those things about which they have direct knowledge? Surprisingly, little research has been done on this topic, especially for data provided
by inmates. The gut instinct of wardens seems to be that evaluations of prison
operations provided by staff, and certainly by inmates, can be viewed as little
more than individual and collective whining. That is, a warden does not want
to receive a poor overall evaluation simply because she or he had malcontents
providing the evaluations or had an antagonistic union leader who influenced
the evaluations provided by line staff.2
Of course, having an antagonistic union relationship typically says something about prison operations, but what is really needed to determine whether staff and inmate data can be used to produce prison-level measures is empirical demonstration of two points: first, that the average responses provided by staff and inmates at different prisons actually differ and, second, that those differences are independent of the individual characteristics of the inmates and staff providing the evaluations. This demonstration is not so different from the problem of evaluating the ability of school administrators and teachers to influence the test performance of students. When comparing average test scores for students at different schools, one wants to know how much of the average score reflects knowledge added by the school, not just whether the school had good (or bad) students to start with. As such, education researchers have been very active in popularizing multilevel methods for untangling the effect that schools produce on student achievement (Bryk & Raudenbush, 1992; Raudenbush & Willms, 1991), although the methods can be traced in sociology to suggestions by Lincoln and Zeitz (1980) and the exposition by Mason, Wong, and Entwisle (1983). Multilevel models are also becoming more common in criminological research, where they are used to separate individual and community effects on crime (Horney, Osgood, & Marshall, 1995; Rountree, Land, & Miethe, 1994).
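The first of the two empirical points above amounts to asking how much of the variance in survey responses lies between prisons rather than between individuals within them. As a rough illustration (not drawn from the article's data), the intraclass correlation from a one-way random-effects ANOVA estimates that between-group share; the prisons and ratings below are entirely simulated.

```python
import numpy as np

def intraclass_correlation(scores_by_group):
    """ICC(1) from a one-way random-effects ANOVA: the share of total
    response variance that lies between groups (prisons) rather than
    between the individuals within them."""
    groups = [np.asarray(g, dtype=float) for g in scores_by_group]
    k = len(groups)
    n_i = np.array([len(g) for g in groups])
    N = n_i.sum()
    grand_mean = np.concatenate(groups).mean()
    # Between-group and within-group sums of squares
    ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (N - k)
    # Mean group size, adjusted for unbalanced group sizes
    n_bar = (N - (n_i ** 2).sum() / N) / (k - 1)
    var_between = max(0.0, (ms_between - ms_within) / n_bar)
    return var_between / (var_between + ms_within)

# Simulated illustration: 20 hypothetical prisons, each shifting the
# safety ratings of its 50 surveyed inmates by a prison-specific amount.
rng = np.random.default_rng(0)
prison_effects = rng.normal(0.0, 1.0, size=20)
ratings = [mu + rng.normal(0.0, 2.0, size=50) for mu in prison_effects]
print(f"estimated between-prison share of variance: "
      f"{intraclass_correlation(ratings):.2f}")
```

With these simulated parameters the between-prison share is modest by construction (the true value is 1/(1+4) = 0.2); an estimate near zero would mean prison averages reflect little beyond which individuals happened to be surveyed.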
Camp, Saylor, and Harer (1997; Camp, Saylor, & Wright, 1999) have researched the use of staff survey data to create prison-level measures employing multilevel techniques. Using data collected at the Federal Bureau of Prisons, they demonstrated that the prison-level "average" for some measures (such as a job satisfaction scale) cannot properly be used because those measures are not sensitive to differences between prisons. They report other measures (such as an organizational commitment scale) for which prison-level measures are appropriate, but for which substantive differences arise when prison-level scores are computed by different methods. In the most extreme case of incongruence, an institution that ranked 59th out of 80 federal prisons on the "naïve" average level of commitment actually ranked 8th when appropriate controls were introduced.
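The kind of ranking reversal reported by Camp and colleagues can be reproduced in miniature. The sketch below uses entirely fabricated data (five hypothetical prisons whose staff differ in average tenure) and a simple ordinary least-squares adjustment, a stand-in for the multilevel models discussed here, to show how a naive prison average can confound institutional performance with staff composition.

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical setup: commitment rises with staff tenure, and the tenure
# mix differs across prisons, so a naive prison average partly reflects
# who works there, not how the prison is run.
true_prison_effect = np.array([0.0, 0.5, -0.5, 1.0, -1.0])
rows = []
for p, effect in enumerate(true_prison_effect):
    tenure = rng.normal(5 + 2 * p, 1.0, 100)  # tenure mix varies by prison
    commitment = 0.6 * tenure + effect + rng.normal(0, 1.0, 100)
    rows.append((tenure, commitment))

# Naive prison means confound the prison effect with staff composition
naive = np.array([c.mean() for _, c in rows])

# Adjusted means: regress commitment on tenure across all staff, then
# average the residuals within each prison
tenure_all = np.concatenate([t for t, _ in rows])
commit_all = np.concatenate([c for _, c in rows])
X = np.column_stack([np.ones_like(tenure_all), tenure_all])
beta, *_ = np.linalg.lstsq(X, commit_all, rcond=None)
resid = commit_all - X @ beta
adjusted = np.array([resid[i * 100:(i + 1) * 100].mean() for i in range(5)])

print("naive ranking (best to worst):   ", np.argsort(-naive))
print("adjusted ranking (best to worst):", np.argsort(-adjusted))
```

In this fabricated example the prison with the lowest-tenure staff looks worst on the naive average even though its underlying effect is middling, and the rankings shift once the individual-level covariate is controlled, which is the pattern the 59th-to-8th reversal illustrates at full scale.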
The research by Camp and his colleagues suggests that staff survey data can be used to create some types of prison-level measures. This article tries to extend that research by examining whether survey data collected from inmates also demonstrate prison-level sources of variation. The data analyzed here are from the 1997 Survey of Inmates of Federal Correctional Facilities (referred to hereafter as the 1997 Inmate Survey). This survey focused primarily on the criminal history of inmates confined in federal prisons, but there is a section of the survey on the conditions of confinement. In that section, there are several candidate questions for creating prison-level measures: questions about prison safety, noise conditions in living units, and job opportunities. Unfortunately, there were no appropriate questions on...
