Evaluating research administration: methods and utility.

Author: Marina, Sarah
 
Introduction

Metrics are "a means of representing a quantitative or qualitative measurable [emphasis in original] aspect of an issue in a condensed form" (Horvath, 2003, as cited in Kreimeyer & Lindemann, 2011, p. 75). Consequently, performance metrics represent "[m]easures used to evaluate and improve the efficiency and effectiveness of business process" (Cole, 2010, p. 14). Examples of quantitative metrics used in the field of research administration include success rate (the proportion of submitted proposals that receive funding), dollar amount of funding applied for and received, and number of applications submitted. Customer feedback on research administration services is an example of a qualitative metric. The benefits of developing and implementing metrics for research administration offices include defining and monitoring business processes and their impact, defining responsibilities, managing expectations, improving decision making and prioritization, motivating teams, and evaluating staff performance (Haines, 2012). These benefits can be condensed to three areas: changing behavior, driving performance, and supporting investments in research administration (Taylor, Lee, & Smith, 2014, slide 5).
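
To make these quantitative metrics concrete, the short Python sketch below shows one way such figures might be computed from a list of proposal records; the field names, dollar amounts, and treatment of pending proposals are hypothetical illustrations and are not drawn from the sources cited above.

    # Illustrative sketch: computing common quantitative metrics from proposal
    # records. Field names ("status", "requested", "awarded") are hypothetical.
    proposals = [
        {"status": "funded",   "requested": 250_000, "awarded": 225_000},
        {"status": "declined", "requested": 400_000, "awarded": 0},
        {"status": "funded",   "requested": 150_000, "awarded": 150_000},
        {"status": "pending",  "requested": 300_000, "awarded": 0},
    ]

    decided = [p for p in proposals if p["status"] in ("funded", "declined")]
    funded = [p for p in decided if p["status"] == "funded"]

    # Success rate: funded proposals as a proportion of submissions with a decision.
    success_rate = len(funded) / len(decided) if decided else 0.0
    dollars_requested = sum(p["requested"] for p in proposals)
    dollars_awarded = sum(p["awarded"] for p in funded)

    print(f"Applications submitted: {len(proposals)}")
    print(f"Success rate: {success_rate:.0%}")          # 67% in this example
    print(f"Requested: ${dollars_requested:,}; awarded: ${dollars_awarded:,}")

Note that this sketch excludes pending proposals from the success-rate denominator; conventions differ across offices, so the chosen definition should accompany any reported figure.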

Current use of metrics in evaluating research administration

Analyzing metrics in relation to sponsored funding and measuring research productivity is a well-established business practice among academic institutions with a research mission or focus. The University of Minnesota, for example, tracks data related to expenditures; publications and indicators of faculty reputations; proposals and grant awards; invitations and collaborations; indirect cost recovery; student engagement in research; space allocations; and other "common research metrics" (University of Minnesota, 2008, p. 10). Some institutions have incorporated metrics into their daily operations. The University of Iowa posts weekly "Homepage Metrics" on its Division of Sponsored Programs' website (http://dsp.research.uiowa.edu). These metrics report the numbers of routing forms received; proposals submitted; contracts completed; non-monetary agreements and subawards; and awards processed, tallied both for the current week and for the fiscal year to date.
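
As a simple illustration of this style of operational reporting, the Python sketch below rolls hypothetical weekly transaction counts into fiscal-year-to-date totals; the category names and figures are invented for illustration and do not reproduce Iowa's actual reporting system.

    # Illustrative sketch: weekly counts accumulated into fiscal-year-to-date totals.
    # Categories and numbers are hypothetical.
    from collections import Counter

    fiscal_year_to_date = Counter()

    def record_week(weekly_counts):
        """Report this week's counts and add them to the running FY-to-date totals."""
        fiscal_year_to_date.update(weekly_counts)
        return weekly_counts, dict(fiscal_year_to_date)

    week, ytd = record_week({"routing_forms_received": 42,
                             "proposals_submitted": 17,
                             "contracts_completed": 5,
                             "awards_processed": 12})
    print("This week:", week)
    print("Fiscal year to date:", ytd)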

Those institutions that do not already use metrics to guide and evaluate their work are now outside the norm. A recent informal survey of research administrators for the Society of Research Administrators (SRA) International's electronic newsletter, Catalyst, found that most research administration offices (78% of those who responded) conduct some kind of evaluation of their services (Davis-Hamilton, 2014). The most commonly used evaluation methods reported include collection of informal feedback from customers, examination of existing management reports and data, and comparison of current internal operational data to those from prior periods.

Pitfalls of current metrics used to evaluate research administration

While the metrics discussed above can be useful and informative assessment tools, some scholars argue that metrics based on financial or other quantitative measures "do not sufficiently capture the quality of the level of service demands" placed on research administration (Cole, 2010, p. viii). By "reducing the complexity of the representation of an issue," quantitative metrics "tend to oversimplify or omit dependencies of an issue, thus making the representation incomplete" (Kaplan & Norton, 1992, as cited in Kreimeyer & Lindemann, 2011, p. 87).

Furthermore, the external environment influences traditional quantitative metrics, such as success rates, making it difficult to evaluate the merit of activities internal to the institution. This can be illustrated by looking at success rates from the perspective of the PESTEL framework, a tool used to identify the external opportunities and threats that may affect an institution's operations. The PESTEL framework organizes these external "forces" into six major categories: Political, Economic, Socio-cultural, Technological, Ecological, and Legal (Rothaermel, 2013, pp. 56-57). These forces can drive the amount of funding available to support research, which in turn influences success rates in both the public and private sectors.

Research programs outside of a sponsoring agency's priority areas face increased challenges in securing funds. An example is the recent emphasis placed on obesity research by the United States Department of Agriculture and the National Institutes of Health, linked to the obesity epidemic in the US and the current administration's personal interest in tackling this public health challenge (socio-cultural and political forces). This presents an opportunity, and a competitive advantage, for organizations with active obesity research programs, increasing their success rates. Conversely, it serves as a threat, and a competitive disadvantage, to other health sciences organizations, lowering their success rates. Similarly, the more recent economic downturn has culminated in sequestration and fewer research dollars, lowering success rates nationwide and threatening the breadth and longevity of many research programs (an economic force). The influence of these external "forces" must, therefore, also be considered when using quantitative metrics to evaluate an institution's research administration enterprise. While many research administration offices currently use metrics in evaluating their work, the need for an effective, evidence-based metric standard that captures the complexity of the field remains unmet. Adoption of a mixed methods approach, utilizing both quantitative and qualitative measures, may allow research administrators to garner more comprehensive evaluations of their services, either individually or collectively as representative of the research administration enterprise.

The search for effective metrics to evaluate research administration: complexity metrics & satisfaction surveys

The current lack of standard performance metrics for research administration services has far-reaching consequences. According to a recent SRA Catalyst survey (Davis-Hamilton, 2014), 15% of respondents who conduct evaluations of their offices have doubts about the validity of those evaluations. As noted in the same Catalyst article, some common evaluation platforms were in use, but no clear standards emerged from the results of the informal survey. This lack of standard metrics not only raises validity concerns, but also makes comparisons across offices more difficult.

The ability to assess the quality of a research administration enterprise is extremely important. It is critical to ensure that available research administration resources adequately support investigators. Such assessment can also inform decisions on allocating additional resources to meet the changing needs of faculty and to drive competitive advantage. To quote Janice Besch, Managing Director of the National Institute of Complementary Medicine at the University of Western Sydney, "[r]esearchers require robust management systems to support their activities in a funding environment that is highly competitive and carrying a significant compliance burden. If they are not well supported, they are likely to scale down, or fail in, their grant seeking activities; funding will diminish; and there is a risk that whole research programs could be shut down due to compliance breaches" (Besch, 2014, p. 1). Management systems that support research activities can include the various software and other technological tools research administrators use, but, more fundamentally, can be viewed as the research administrators themselves and research administration as a whole. By Besch's reasoning, properly assessing the quality of these instrumental systems, as part of a comprehensive effort to optimize them, helps diminish the threats to which research activities are vulnerable. Another study (Cole, 2010) further posits that the success of research administration offices should be measured by performance metrics grounded in the needs and preferences of both faculty and department administrators. Below, we assess two solutions to meet this requirement: complexity metrics and satisfaction surveys.

Complexity metrics. In the quest to develop meaningful metrics of research administration, one must take into consideration the complexities of the tasks performed by research administrators. In this context, complexity is judged in relation to the "more complex grant awards, which require more time and resources to manage due to the nature of sponsor requirements and/or collaborative activities with multiple researchers and institutions" (Taylor, Lee & Lee, 2014, slide 30). One means of measuring complexity was compiled by Chris Thomson of Moderas (moderas.org). His Proposal Complexity Scoring Matrix aims to judge the complexity of a proposal by the workload that goes into its review, which includes factors related to staffing, budget, human or animal subjects, subcontracts, international collaborations, and others.
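
A scoring matrix of this kind can be thought of as a weighted checklist, as in the hypothetical Python sketch below; the factors and point values shown are illustrative assumptions and are not taken from Thomson's actual matrix.

    # Hypothetical complexity scoring: the factors and point values below are
    # illustrative only and do NOT reproduce the Proposal Complexity Scoring Matrix.
    COMPLEXITY_POINTS = {
        "multiple_investigators": 2,
        "multiple_institutions": 3,
        "subcontracts": 3,
        "human_subjects": 2,
        "animal_subjects": 2,
        "international_collaboration": 3,
        "budget_with_cost_sharing": 2,
    }

    def complexity_score(proposal_factors):
        """Sum the points for each complexity factor present in a proposal."""
        return sum(points for factor, points in COMPLEXITY_POINTS.items()
                   if factor in proposal_factors)

    # Example: a multi-PI, international proposal with subcontracts.
    score = complexity_score({"multiple_investigators", "subcontracts",
                              "international_collaboration"})
    print(f"Complexity score: {score}")  # 8 under these illustrative weights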

Duke University offers another approach, in which management measures the complexity of a department's sponsored research portfolio and ties it to the compensation of research administrators. The information on complexity and types of grants is supplemented "with information about the department's practices with training, hiring, and procurement. Each assessment received a score, and the overall score is averaged" (Melin-Rogovin, 2012). A lower score is better, as it indicates that the department manages its portfolio well, has appropriate skills and training, is hiring adequately, and uses existing systems to maintain compliance.
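
The arithmetic behind such an averaged assessment can be sketched as follows; the assessment areas and scoring scale are assumed for illustration and do not represent Duke's actual instrument.

    # Illustrative sketch of an averaged departmental assessment score
    # (lower is better); the areas and scores are hypothetical.
    assessments = {
        "portfolio_complexity": 2,
        "training_practices": 1,
        "hiring_practices": 3,
        "procurement_practices": 2,
        "use_of_compliance_systems": 1,
    }

    overall = sum(assessments.values()) / len(assessments)
    print(f"Overall assessment score: {overall:.1f} (lower is better)")  # 1.8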

Despite painting a more comprehensive picture of the research administration enterprise, complexity metrics take significant time and effort to develop and may be cumbersome and time-consuming for end-users. Additionally, it is highly...
