You have great gestures: an analysis of ballot commentary to pedagogical outcomes

Author: Denise Elmer

The most important thing for a judge is--curiously enough--judgment.
--Lord Patrick Devlin, Judge of the High Court (Simpson, 1988)

Competitive contests require a judge to declare a winner. This verdict is reached through the process of judgment, that is, comparing substance to standards, the process used in Individual Events tournaments (Verlinden, 2002). Individual Events (IE) are, by convention and by experiential learning theory, considered educational activities. The Association for Experiential Education (AEE; 2002) defines experiential learning as "a process through which a learner constructs knowledge, skill, and value from direct experiences" (para. 37-38). As an educator, a judge should write a verdict (or ballot) that encourages "reflection, critical analysis, and synthesis," as well as "initiative, decision making, and accountability" (AEE, para. 37-38). Dickmeyer (1994) explains the experiential progression in forensics and the resulting educator responsibility in judging:

Participants in forensics generally agree that education is the foremost concern of the forensics community .... Students learn to research, narrow topics, assess quality of evidence, improve their writing styles, etc., through the activity of preparing presentations. Additionally students learn presentational and interpersonal skills through interactions with the coaching staff. Finally, students may be educated through the ballots they receive from judges in regional and national competitions. (p. 2)

Thus, oral interpretation judges are both adjudicators and educators. Yet writing insightful educational ballots can be a challenge. As Verlinden (2002) points out, "the judge does not know what the student has been told about oral interpretation" or "if he/she is following the principles ... taught" (p. 14).

Judgment is further complicated by the fact that there is no definitive standard for event descriptions or judging requirements. The American Forensic Association's (AFA) master ballot lacks any preprinted judging criteria. The only information printed on the AFA ballot is the contestant's name, round, rating, and ranking; the judge circles or fills in the appropriate blank and includes observations under the section titled "comments." Other ballots, such as the 1999/2001 National Catholic League (1997, para. 3) revised ballot or the Whitman Speech and Debate (1998, para. 6) college ballot, provide specific, yet differing, criteria, rules, and directions for judges to use when composing their judgments. Interestingly, rating and ranking are common to all ballots, suggesting the most important verdict is contestant placement, not educational enlightenment.

REVIEW OF LITERATURE

I have but one lamp by which my feet are guided, and that is the lamp of experience. I know no way of judging of the future but by the past.
--Patrick Henry (Columbia, 1996)

The ballot is the communication medium used by the judge to advise competitors of the competitive decision. As a rhetorical text, the ballot can be examined to determine whether IE judges are communicating educationally with competitors. The literature reviewed for this study suggests that a common research mechanism is content analysis. Content analysis is a scientific examination of the "syntactic and semantic dimensions of language" using an "objective, systematic, and quantitative" process (Berelson, 1952, pp. 15, 18). As a communication research methodology, content analysis includes describing trends, auditing content against objectives, constructing and applying communication standards, identifying the intentions of communicators, and discovering categories for identification (pp. 29, 43, 46, 72). The ballot research discussed in this section reflects all of these processes.
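
As a concrete illustration of the counting step behind these studies, the short sketch below codes ballot comments against a keyword scheme and tallies category frequencies. It is only a toy: the categories, cue words, and sample comments are hypothetical, and the studies reviewed here relied on trained human coders rather than keyword matching.

    from collections import Counter

    # Hypothetical coding scheme: category -> cue words a coder might look for.
    CODEBOOK = {
        "vocal delivery": ["rate", "pause", "volume", "articulation"],
        "physical delivery": ["gesture", "posture", "eye contact"],
        "literature choice": ["selection", "cutting", "text"],
        "encouragement": ["great", "nice", "well done"],
    }

    def classify(comment):
        """Return every category whose cue words appear in the comment."""
        lowered = comment.lower()
        return [category for category, cues in CODEBOOK.items()
                if any(cue in lowered for cue in cues)]

    # Hypothetical ballots, each a list of judge comments.
    ballots = [
        ["You have great gestures.", "Vary your rate in the climax."],
        ["The cutting lost the narrative arc.", "Nice eye contact."],
    ]

    tally = Counter()
    for comments in ballots:
        for comment in comments:
            tally.update(classify(comment))

    total = sum(len(comments) for comments in ballots)
    for category, count in tally.most_common():
        print(f"{category}: {count} ({count / total:.0%} of comments)")

Note that in the studies reviewed below the direction of analysis is reversed: categories were not fixed in advance but emerged from the comments themselves, as in Jensen (1997) and Klosa and DuBois (2001).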

Trimble's (1994) nine guidelines for writing a ballot served as the framework for Dickmeyer's (1994) audit analysis of 79 IE ballots randomly selected from the 211 ballots received during the 1994 forensic season. Dickmeyer's goal was to determine whether judges write what they should write on ballots. Trimble's nine guidelines were: (1) write a ballot, (2) divulge a philosophy of interpretation, (3) suspend evaluation of "past experience," (4) flow the performance, (5) comment on technical aspects of presentation, (6) comment on emotional and intellectual portrayal of characters, (7) avoid jargon, (8) do not ignore primary issues, and (9) include constructive criticism. For the purposes of Dickmeyer's study, comments were not treated as separate units; instead, each whole ballot was evaluated to determine whether it met each guideline. Findings indicated that while judges do address Trimble's suggested guidelines, not one ballot met all nine criteria.

Jensen (1997) included Dickmeyer's (1994) study in her review of published literature employing content analysis. Her own study analyzed 304 ballots (prose, poetry, program, duo, and dramatic), identified 1,737 comments, sorted them into 25 classifications, and concluded that judges used the ballot as an educational tool. Jensen's method was based on Preston's definition of a comment: "any sentence, phrase, or single word that provides some critique of the speaker's performance or advice for improvement" (Preston, 1983, as cited in Jensen, 1997, p. 9). Jensen found that 47% of the ballots commented on vocal delivery, while encouragements, i.e., "uplifting comments," appeared on 5% of the ballots.

Jensen (1998) applied the findings from her 1997 study to construct a judge's taxonomy for oral interpretation consisting of literature, physical delivery, characterization, vocalization, and technique. Jensen argued that it is the judge's responsibility to determine whether the text selected for interpretation is quality literature and whether the physical delivery is made believable through natural gestures, facial expressions, and posture. In addition, Jensen claimed that judges must evaluate whether characters were "alive," or believable, and expressed appropriate levels of emotion; whether the vocal range used considered rate, pauses, and distinct character voices; and whether the technique used included an appropriate teaser, introduction, cutting, memorization, and binder handling. She concluded by recommending that judges include specific examples to help competitors understand the ballot remarks.

Klosa and DuBois's (2001) findings were similar to Jensen's. Analyzing 425 ballots (prose, poetry, and drama), they recorded 2,743 comments using Krippendorf's (1980) definition of syntactical units. As in Jensen's study, the types of comments determined the classifications created, generating four common categories: (1) literature choice, (2) vocal and physical distinctions of characters within and between genres, (3) logical cuttings and juxtapositions of multiple texts, and (4) vocal variation (e.g., rate and pauses). The authors pointed out that most ballot comments are open-ended; consequently, determining a judge's intent and assigning specific categories was difficult and subject to interpretation. They also noted that judges comment less frequently on performance measures that are met and more frequently on poorly used or missing performance techniques.

While multiple studies exist that compare ballot content either to existing guidelines or to suggested guidelines, few studies have examined the "unwritten rules" of oral interpretation. Cronn-Mills and Golden (1997) identified the evolution of "unwritten rules" and 10 implicit "un-rules" that are regularly (re)enforced by competitors, coaches, and judges. The "un-rules" ranged from teasers being mandatory, to multiple sub-un-rules on manuscript techniques, to texts "fitting" the interpreter. The authors suggested that competitors learn the "unwritten rules" by observing new or different approaches that seem to win consistently, with new judges then assuming that particular approach is the "norm." Hence, recorded comments reflect this absence of codified "norms," that is, "un-rule" use (Cook & Cronn-Mills, 1995; Cronn-Mills & Golden, 1997). Rice and Mummert's (2001) research confirmed that competitors perceive that "unwritten rules" exist. They found no significant difference between novice and veteran IE competitors' perceptions of whether such rules exist (p = .175), whether teasers are mandatory (means over 4), or whether first-person narration is the preferred prose style (means over 4).
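
Rice and Mummert's study, as summarized here, reports only the resulting p-value, not the underlying test or raw data. A comparison of this kind between two groups' Likert-scale ratings is commonly run as an independent-samples t test; the sketch below illustrates that statistic with invented responses, not the study's actual data or method.

    # Hypothetical 5-point Likert ratings of "unwritten rules exist,"
    # one list per group; the values are invented for illustration.
    from scipy import stats

    novice_ratings = [4, 5, 4, 3, 4, 5, 4, 4]
    veteran_ratings = [5, 4, 4, 4, 5, 5, 3, 4]

    t_stat, p_value = stats.ttest_ind(novice_ratings, veteran_ratings)
    print(f"t = {t_stat:.3f}, p = {p_value:.3f}")

    # A p-value above the conventional .05 threshold (as with the reported
    # p = .175) indicates no significant difference between group means.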

While all of the literature reviewed above provided suggestions for educational ballot writing, the studies lacked a succinct, didactic format. A more concise source that new judges may find helpful is Littlefield's (1987) Judging Oral Interpretation Events. Littlefield identified the characteristics of a "good judge," provided a technique for ranking and rating IE round participants, and presented suggestions for writing constructive ballots. Among the characteristics of a "good judge" were familiarity with the local rules, flexibility in accepting the choice/interpretation of the literature, and the ability to recognize good literature. Littlefield contended that "good judges" write constructive ballots that include a balance of positive and negative comments and an explanation for the decision or ranking.

New judges could also benefit from reviewing Littlefield, Canevello, Egersdorf, Saur, Stark, and Wynia's (2001) criteria for "oral interpretation performers meeting the expectations of their critics" (p. 45). Taking a pedagogical approach, the authors identified five oral interpretation "cognitive outcomes" with corresponding "assessment items" for...
