Assessment of learning has received much attention in recent years. Parents, students, and policy makers are interested in determining the extent to which colleges and universities are meeting expectations. However, measuring what students have learned is a complex, controversial task. Several kinds of national assessments of learning are currently in use, including standardized tests (Perez-Pena, 2012). Yet while such assessment tools may have value in that they indicate what students know upon graduating, they do not tell how much students have improved along the way. Indirect metrics, such as the number of hours students spend studying and how much they interact with professors, may be more difficult to determine than scores on a standardized test, but they may also be important indicators of achievement in such competencies as critical thinking and problem solving.
In addition to standardized tests, student GPA is often considered an easily computed, valid indicator of student learning. For example, potential employers often use GPA as a screen when deciding whether to interview a student applicant. Employers also tend to believe that a student who successfully completes a course has mastered its topics, seeing the final grade as a valid descriptor of the student's learning (Hynes & Sigmar, 2009). However, a final grade indicates neither whether the student perceives the relevance or importance of the course material nor the extent to which the student agrees with the instructor's assessment of his or her achievement.
If standardized test scores, course grades, and GPA are insufficient assessments of learning, one might ask, what else should be measured? The concept that learning "must be measured by institutions on a 'value added' basis that takes into account students' academic baseline" gained prominence in 2006, when a commission of the U.S. Department of Education issued its report on higher education (U.S. Department of Education, 2006). This concept provides the theoretical framework for the current study, in that it attempts to assess student learning by considering the distance from the baseline to the finish line. Sanchez and Hynes (2001) found in their online communication skill study that students' perceptions of their entering and exiting skill levels provided much more detail on the nature of the learning that actually took place. This theoretical framework also assumes that students are the best determinants of what and how much they learned.
As can be seen in Table 1, various methods have been used to evaluate student achievement.
In our College of Business Administration, students are asked to complete a course evaluation form at the end of every course. The Individual Development and Education Assessment (IDEA) form was developed at Kansas State University and has been used in our College since 2005. The form includes a section where students are asked to rate their progress on the acquisition of knowledge, skills, and competencies (www.idea.ksu.edu). One item asks the extent to which students perceive progress on "developing specific skills, competencies, and points of view needed by professionals in the field most closely related to this course" (IDEA item #24). This item stimulated our thinking about students' ability to critically evaluate their own competencies. We hypothesized that, when presented with a list of skills addressed in a managerial communication course, students could analyze their level of competency more accurately after completing the course than at its onset. That is, after taking the course, they would have a better understanding of what they knew and did not know, and of what they could and could not do well.
This study attempts to capture students' self-assessments of their learning in a graduate managerial communication course. As managerial communication professors, we wanted to know how business graduate students would rate their own level of competency at the onset of a required managerial communication course and at its conclusion. These students, from two public universities, were asked to rate themselves on 35 communication skills addressed in the course, including interpersonal relations, listening, speaking, asking and answering questions, team communication, interviewing, meeting management, and writing routine documents, reports, and proposals. The assessment instrument consisted of 5-point Likert-type scales.
English, Manton, and Walker (2007) surveyed 200 of the largest firms in Dallas and found that the most highly rated trait these managers looked for in business college graduates was "integrity and recognition of appropriate confidentiality in communication" (p. 414). The next most highly rated traits were "the ability to produce neat and well organized documents that use correct grammar, punctuation, and spelling" and "the ability to proofread documents and understand the principle of effective communication" (p. 414). In light of the recent corporate scandals, it is understandable that the human resource managers would rate...