With more classes being taught to more students and average class sizes on the rise, academic libraries have taken a closer look at instruction-related data to help determine areas of success and to identify which areas of instructional programming need attention and adjustment. Specifically, quantitative and qualitative responses gathered from student feedback can assist with analyzing the "big picture" of what students are experiencing and communicating to librarians about their experiences in information literacy classes. The goal is to better understand what students are learning, and struggling with, during and after their library instruction sessions in order to meet their research needs. With universities placing greater emphasis on research-based undergraduate curricula and lifelong learning skills, libraries have a strategic opportunity to join forces with faculty in this effort. This study examines an academic library's instructional sessions from 2007/2008 through 2011/2012. This period captured the growth of the library instruction program and coincided with a shift in institutional strategies for the undergraduate student population. Six refined core learning outcomes were implemented to instill in students critical thinking skills, communication skills, empirical and quantitative skills, teamwork, personal responsibility, and social responsibility (Undergraduate Studies, 2015). Student feedback forms from undergraduate and graduate students were analyzed to reveal overall themes from comments and to understand participant satisfaction with regard to pace, content of instructional sessions, and overall experience. The authors used ATLAS.ti, qualitative data analysis software, to evaluate the comments in the feedback forms.
Assessment of information literacy efforts has contributed to the improvement and evolution of the teaching component of librarianship. To show the value and effectiveness of library instruction sessions, it has become necessary to gather data on existing efforts and to develop insights and strategies for the growth and future planning of such programs (Larsen, 2010). Assessment has become useful in measuring the learning outcomes established in information literacy instruction classes, whether those outcomes are set by the instructing librarian or the visiting faculty member. Librarians have been asked, and continue to be asked, to collect and provide data on the effectiveness of their instruction classes, and they are expected to make sense of the pre-tests, post-tests, surveys, minute papers, student feedback, and faculty feedback that have been dutifully collected over the years (Vance, Kirk, & Gardner, 2012; Oakleaf, 2008). The purpose of gathering this feedback is to assess the efficacy of the information literacy sessions and the effectiveness of the instructors.
Assessment can be an effective process for shedding light on the positive and negative outcomes of library instruction (Wiliam, 2011). In 2013, Jimaa reported that obtaining feedback that requires students to reflect on their individual learning process and critical thinking activities during class allows instructors to assess and monitor their teaching progress. This enables instructors to improve areas flagged in student feedback or to explore curriculum enhancements that boost student learning. Similarly, Kavanagh conducted a three-year study in 2011 to assess the evolution and development of an embedded information literacy course. Through student feedback and focus groups, Kavanagh found that reviewing and gathering student feedback is "worthwhile for librarians" (p. 15). The feedback gathered helped modify and tailor the class from year to year to meet students' needs, such as moving the information literacy classes closer to the project deadline for a "just-in-time" approach. In terms of retaining what was learned, Vance, Kirk, and Gardner (2012) conducted an analysis to determine whether attending a library instruction session had an impact on first-year retention; they found a small but measurable correlation for students exposed to library instruction early in their educational careers.
Employing qualitative analysis methods to review student feedback can be beneficial to uncovering deeper meaning within the students' comments. Qualitative analysis is a form of research in which the investigators adopt a flexible and open design to collect and interpret data by exploring the data through comprehensive analysis, discovery, and holistic meaning (Corbin & Strauss, 2015). Grounded theory has been a complement to qualitative research as it allows the researcher the opportunity to interpret, predict, explain, and apply the data through emerging theory based on the data at hand (Glaser & Strauss, 1999). Grounded theory starts with inductive data, positioning the researcher to invoke comparative analysis strategies to develop ongoing interaction and cultivate emerging analysis (Charmaz, 2014).
Qualitative analysis allows librarians to explore their student feedback beyond the numbers. Comments ranging from the simple to the complex can be interpreted as meaningful data that can change, and ultimately improve or enhance, library instruction classes. Qualitative analysis is designed to evaluate open-ended responses; this text-rich data can play an important role in providing authentic, student-centered assessment. Qualitative research sources such as text, interviews, comments, and other "unstructured data" can be essential to determining the meaning, pedagogy, preferences, and thought processes of student learners (Scales, 2013).
Coding is a practice researchers use to categorize qualitative data. Generally, coding and grounded theory are used together to generate questions, fracture data, and develop relationships or categories to integrate into the conceptualized analysis (Strauss, 1987; Glaser, 1978). Coding encourages the researcher to discover categories based on the themes that appear throughout the initial analysis. Coding in the initial phase of analysis is known as "open coding." Open coding is accomplished by scrutinizing the raw data to derive concepts leading to questions and answers pertaining to "conditions, strategies, interactions, and consequences" (Strauss, 1987, p. 28). Open coding breaks the data down in the beginning phase of examination so that it can be compared and grouped into categories based on similarities (Boeije, 2010).
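The first pass of open coding described above can be sketched in code. The following is a minimal illustration only, not the authors' actual procedure: the student comments, keywords, and code labels are all invented for the example, and real open coding is an interpretive rather than keyword-matching exercise.

```python
from collections import defaultdict

# Hypothetical student comments (invented for illustration)
comments = [
    "The class moved too fast for me to follow.",
    "I liked learning how to search the databases.",
    "Pace was rushed near the end.",
    "Database searching tips were very helpful.",
]

# Provisional open codes: keyword -> code label (an assumed
# mini-codebook, not one drawn from the study)
provisional_codes = {
    "fast": "pace",
    "rushed": "pace",
    "search": "content-databases",
    "database": "content-databases",
}

# Scrutinize each comment and attach every matching provisional code,
# grouping similar comments under shared categories
coded = defaultdict(list)
for comment in comments:
    text = comment.lower()
    matched = {code for kw, code in provisional_codes.items() if kw in text}
    for code in matched:
        coded[code].append(comment)

for code, grouped in coded.items():
    print(code, len(grouped))
```

In this toy run, two comments fall under "pace" and two under "content-databases"; in practice the researcher would refine, split, and merge such categories as themes emerge from the data.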
Using qualitative content analysis software to engage with multiple sets of data has become a useful way to determine what users are thinking about the library. Libraries collect data but do not always use it to further their missions and make changes that could positively affect student learning and research. Before content analysis software products were available, conducting an analysis meant coding and categorizing all user comments manually. These newer tools allow for relatively easy organization and cataloging of large amounts of content, leading to the emergence of themes (Dennis & Bower, 2008). Although categorizing content can be time consuming, a tool such as ATLAS.ti, Dedoose, or NVivo can make the process much easier. Engagement with the data allows for the development of codes and, subsequently, of code groupings or families. Networks and themes can then emerge that allow researchers to theorize their findings and draw conclusions about the most prevalent patterns (Passonneau & Coffey, 2011).
Coding has been used in several library studies to gather and decipher the inclinations and thought processes of student users. Most library instruction classes begin or end with some kind of assessment of what the students know or have learned over the course of the class. In 2006, Lebbin transcribed and coded taped qualitative responses after conducting focus groups with students who enrolled in a LIS 100 course while simultaneously taking ENG 100; the skills the students learned and absorbed proved useful throughout their college careers. In 2008, Dennis and Bower used ATLAS.ti to open code and make sense of over 750 comments received from LibQUAL+[R] data. LibQUAL+[R] is a suite of services used by libraries to gather qualitative and quantitative feedback from the library user community (LibQUAL+[R], 2015). They were able to efficiently separate the comments by college department, giving individual librarians the ability to view only the text pertinent to their assigned areas. The authors created preliminary codes before combining, deleting, and collapsing similar codes. Frequencies of the final codes were compiled and analyzed for reporting back to colleagues and administrators.
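The last step above, collapsing similar preliminary codes into final codes and compiling their frequencies, can be sketched as follows. This is a hedged illustration under invented data: the code labels and merge mapping are hypothetical and do not reproduce the Dennis and Bower codebook.

```python
from collections import Counter

# Hypothetical preliminary code assignments, one per coded comment
# (labels invented for illustration)
preliminary = [
    "pace-too-fast", "speed", "database-help",
    "db-searching", "pace-too-fast", "citation-help",
]

# Combine and collapse similar preliminary codes into final codes
merge_map = {
    "pace-too-fast": "pace",
    "speed": "pace",
    "database-help": "databases",
    "db-searching": "databases",
    "citation-help": "citations",
}

# Compile frequencies of the final codes for reporting
final_counts = Counter(merge_map[code] for code in preliminary)
print(final_counts.most_common())
# [('pace', 3), ('databases', 2), ('citations', 1)]
```

A frequency table like this is the kind of summary that can be reported back to colleagues and administrators, with the most prevalent themes surfacing at the top.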
Passonneau and Coffey used ATLAS.ti in 2011 to analyze and code Meebo chat transcripts through grounded theory. Their coding analysis allowed for the creation of code families, super families, and networks. The authors were able to analyze chat responses and reveal technology issues, queries about the locations of resources, and user error with the chat system. Scales (2013) used grounded theory and open coding to analyze student reviews of WSU Libraries' online Google Scholar tutorial. She found that eleven comments were based on personal experiences...