Comparing student achievement across experimental and lecture-oriented sections of a principles of microeconomics course.

Author: Emerson, Tisha L.N.
Position: Targeting Teaching
  1. Introduction

    The 2002 Nobel Memorial Prize in Economic Sciences was awarded to Vernon Smith "for having established laboratory experiments as a tool in empirical economic analysis." (1) Professor Smith's contributions have had profound effects far beyond the theoretical and empirical advances of experimental economics for which he was honored. The use of experiments as pedagogical tools is also rapidly increasing. As evidence, the first principles of microeconomics textbook exclusively devoted to the use of experiments (Bergstrom and Miller 2000) is currently in its second edition and gaining wider acceptance. (2)

    Rather than simply relying on the instructor-centered lecture that has historically been the preferred method of instruction to describe the market mechanism, the use of experiments in the principles classroom provides students with an experiential learning opportunity: the chance to participate in a controlled market environment and to observe market forces that are normally only talked about and described as movements on a graph. Experiential learning, followed by instructor-led discussion that places the experimental results into their theoretical contexts, affords the student a hands-on learning experience that is quite different from the traditional "chalk-and-talk" approach. Bergstrom and Miller (2000, p. iv) summarize the student experience in the preface of their increasingly popular text:

    You will be studying the behavior and interactions of people in economic situations. And as one of these interacting economic agents, you will be able to experience first-hand the problems faced by such an agent. We suspect that you will learn nearly as much about economic principles from your experience as a participant as from your analysis as an observer.

    While there is evidence that the experimental approach to teaching economics is being adopted by more instructors, there is relatively little empirical evidence regarding its efficacy as a pedagogical tool. Siegfried and Fels (1979) report that early uses of games, simulation models, and demonstration routines in teaching economics were far from promising, with either no significant impact or a negative effect on student achievement. Further, early studies often focus on simply describing experiments and provide only anecdotal evidence as to their effectiveness and impact on student outcomes (see, e.g., DeYoung 1993; Williams and Walker 1993). On this note, Fels (1993, p. 365) comments that "it is ironic that those who use controlled experiments in their research on economics do not use controlled experiments to evaluate their teaching."

    Recent studies take a more formal approach to estimating the value of experiments in the classroom. Gremmen and Potters (1997), for example, study the effectiveness on student learning of an international economic relations simulation game. They find that students who participated in the game performed better on objective tests than those students exposed to the material only through lecture. However, the Gremmen and Potters study, and others like it, focus only on the efficacy of a single experiment or a small set of experiments (see also Siegfried and Fels 1979; Frank 1997). Additionally, Fels (1993) questions the value of a single experiment in the context of comprehensive student achievement and suggests that, like a single classroom session, a single experiment is unlikely to make any important difference. Fels instead argues that studying the effectiveness of several experiments over the course of a semester is likely to provide a better indication of the value of experiments as a pedagogical tool.

    Following this advice, two recent studies assess the effectiveness of multiple experiments in a principles of microeconomics course, but these studies find mixed results. Cardell et al. (1996) study a sample of 1800 students, with slightly less than one half assigned to experimental sections and the remainder assigned to traditional lecture-oriented sections. After controlling for a variety of student- and instructor-level characteristics, Cardell et al. find no statistically significant impact of the experimental approach on student achievement, where achievement is measured by differences in post- and precourse performance on the Test of Understanding in College Economics (TUCE). Dickie (2000), on the other hand, finds that, in his sample of 142 students, those exposed to experiments in their principles classes achieved a significantly greater improvement on the TUCE than those exposed to traditional pedagogy. Although students in the treatment group experienced greater TUCE improvement overall, Dickie also finds differential effects across students depending on their ability level. Specifically, Dickie finds that better students (as measured by grade point average [GPA]) experienced larger benefits from experiments, while lower ability students may have experienced reduced achievement relative to what they might have attained under the traditional approach.

    Becker (1997) notes that, although limited anecdotal evidence confirms the value of teaching with active approaches, these approaches have not been empirically demonstrated (at least not consistently) as superior to chalk-and-talk methods. Becker suggests that the failure in the literature to find consistent support for active teaching methodologies may lie in the testing methods employed rather than in the absence of any effect (e.g., inappropriate statistical methods and the use of assessment instruments with problematic measurement error). Further, Becker et al. (1991) call for the use of multiple measures of student outcomes as indicators of the efficacy of various teaching approaches, and they encourage the replication of earlier studies to investigate the robustness of previous findings.

    The present study addresses the concerns and suggestions mentioned above and contributes to the small literature on the effectiveness of experimental methods by examining the potential differences in student achievement between students exposed to a comprehensive, multiexperiment approach to teaching principles of microeconomics and students participating in lecture-oriented classes that used no experiments. Our data, collected in the spring of 2002 at Baylor University, include a rich set of information on student achievement and other student- and section-level characteristics from a sample of 300 students. Specifically, two out of nine total sections of the core microeconomics course (59 students) used 11 experiments from the Bergstrom and Miller (2000) textbook to supplement the curriculum. The remaining seven sections (241 students) used the traditional lecture-oriented approach. After controlling for student- and section-level characteristics, our results indicate that students in the experimental sections improved their TUCE scores by an average of 2.42-2.99 points over the control group, or using a slightly different measure, by 11.1-12.3 percentage points of the possible percent increase in scores. Additionally, these differences are present across various cognitive, content, and difficulty levels. We find few differences between experimental and non-experimental students, however, in other outcomes, such as performance on a departmental final exam, student evaluations, or class attrition rates. Finally, we find that certain student characteristics can affect the likelihood of achievement in an experimental course, including a student's gender, GPA, and major. These results are robust to potential issues of positive selection bias, endogeneity of precourse ability, and potential censoring of the dependent variable.
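The "percentage points of the possible percent increase" metric reads as a gap-closing (normalized gain) measure: improvement expressed as a share of the improvement the pretest score left available. The article excerpt does not give the exact formula, so the sketch below shows one standard construction under that assumption; the 33-item maximum is illustrative, not taken from the article.

```python
def gap_closing_score(pre: float, post: float, max_score: float = 33.0) -> float:
    """Share of the possible improvement actually achieved.

    Assumes the gap-closing form (post - pre) / (max - pre); both the
    formula and the 33-point maximum are illustrative assumptions,
    not details stated in the article.
    """
    possible = max_score - pre
    if possible <= 0:
        raise ValueError("pretest score already at or above the maximum")
    return (post - pre) / possible

# Example: a student scoring 10 on the pretest and 20 on the posttest
# closed (20 - 10) / (33 - 10), i.e. roughly 43% of the available gap.
print(round(gap_closing_score(10, 20), 3))
```

Under this construction, a reported 11.1-12.3 percentage-point treatment effect would mean experimental-section students closed that much more of their available gap than control-section students, on average.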

    The remainder of this article is organized as follows. Section 2 describes the data and presents our empirical methodology. Section 3 discusses the empirical results on various qualitative and quantitative student outcomes and presents differential effects across student characteristics that can provide a road map for academic advisors or course coordinators wishing to advise students as to their likely benefit from participating in an experimental section. Finally, section 4 summarizes our results and discusses possible extensions of this research.

  2. Data and Empirical Methodology

    Students in our study were enrolled in one of nine sections of the core course in microeconomics principles at Baylor University during the 2002 spring semester. (3) Two of these sections (the treatment, or experimental, group consisting of 59 students) supplemented the standard curriculum using 11 in-class experiments taken from the Bergstrom and Miller (2000) textbook. The remaining seven sections (the control group consisting of 241 students) used the traditional lecture-oriented methodology.

    Aside from the treatment group's use of experiments, considerable effort was made to maintain as much homogeneity as possible, both between and within the control and treatment groups. The number of total contact hours between students and instructors was equal across all sections, though the allocation of these hours differed across control and experimental sections, with experimental sections substituting experiments for lecture time and other activities. (4) All students in the sample used the same required textbook (a commonly used microeconomics principles text) and covered the same major topics. (5) Assignments across all sections were similar, including a mixture of homework, two or three midterm exams, and a comprehensive final exam. Each section of the course employed exams that consisted of some combination of multiple-choice and essay questions. Finally, the class size of all sections fell within the range of 23-35 students.

    Both sections within the experimental group were organized in the same manner. Students in this group participated in one experiment per week (usually taking one full class period), while the remaining class time was...
