During a project, how do you know whether you, or the research team responsible for the work, are on track to succeed? In a long project, this can be a difficult question to answer.
One approach is to identify key “milestones” (originally, these were stones set at every mile to show how far you had travelled from the town you had left and how far remained to the one you were going to – hence the name). These are things that need to be done by a certain time, and they are developed at the planning stage of the project. Funding agencies or the University Research Office may require researchers to report against them.
Project planning tools are available to help you develop milestones (e.g., Microsoft Project).
How do you know you have succeeded?
At the end of your research, how does anyone know whether the project is successful or not? The only real way is to define the expected outputs, outcomes and impacts beforehand, and also develop ways to measure them.
Outputs are relatively easy to define – reports, workshops and scientific papers are all outputs. Outcomes are such things as a policy change, a group of farmers adopting a different way of growing a crop, or an altered teaching method. Again, these can be measured.
Impacts are more difficult to assess. These are consequences that arise from the outcomes. For example, better health because of the policy changes, improved nutrition among people because there is more food, or more students getting better jobs because they got better grades in school examinations.
Sometimes the impact can be due to a number of causes and this has to be acknowledged. As long as one can show that the research led to part of the impact, this is often as far as one can go.
A DRUSSA approach
In the DRUSSA programme, we worked with all partner universities in the establishment of what we called “Institutional Action Plans” – two-year planning documents to help research management offices to establish research uptake objectives and to specify the outputs and outcomes that they would measure and evaluate (whilst recognising that impacts would be a longer-term process, beyond the scope of any two-year plan).
It should be noted that the purpose of these Plans was not to measure the outputs and outcomes of research in society, but rather the outputs and outcomes of research uptake systems that the research management offices were designing. That is to say, what difference did it make to coordinate the public relations and research management functions through quarterly meetings? Did this result in more media briefings? Or, what difference did it make to establish a reward and recognition scheme for academics who dedicate time to research uptake? Did this improve the quality and frequency of research uptake activity?
The Action Plans followed a set template, with research management offices across our partner universities setting tasks under the following themes:
Actions to ensure engagement and support of senior university staff and governing bodies
Actions to establish university-wide research uptake teams (made up of administrative staff, academics, or both)
Actions to develop, change, ratify, implement and publicise policies relevant to research uptake
Actions to address recording of (and access to) university research
Actions to identify stakeholders and end users
Specific research-uptake orientated projects
Actions to engage with media
DRUSSA research management leaders would identify a series of objectives under each of the above headings, then establish timelines, budgets, staff allocations and indicators of success. We worked with each institution to help determine indicators that would be:
Specific (i.e., is there a clear link between the output and the outcome?)
Measurable (i.e., can you quantify and assess degrees of success in achieving this output and outcome?)
Achievable (i.e., can this objective be met with existing resources, time and skills, and are the systems you need already largely in place?)
Responsible (i.e., who exactly will be delivering these objectives, who owns the supervisory processes, and who do they report to?)
Time-bound (i.e., by which point do you expect your objective to be met? Aim short- and medium-term)
By measuring the outputs and outcomes of the research uptake management process, research offices can gain critical insight into how to improve and drive research engagement with external stakeholders. From there, the longer-term outcomes and impacts of the research itself may begin to emerge.
Things to think about
How do you know whether or not you have achieved the aims of the project?
What is the difference between outputs, outcomes and impacts, and how can you measure each of them?
Things to do
Using the DRUSSA Institutional Action Plan template, chart out some objectives for you and your office to build up a more robust institutional research uptake management system. What initiatives could you introduce that you feel would make a difference – and what indicators will you establish to measure whether you have achieved the outputs and outcomes you desired? Are your indicators specific, measurable, achievable, responsible and time-bound?
This summary report of the third and final DRUSSA Benchmarking Survey (2016) tracks both the type and the degree of institutional change for research uptake observed across all 22 DRUSSA universities. Based on the direct testimony of the universities themselves, through survey data and through the 2016 Benchmarking and Leadership Conference (April 2016, Mauritius), this report captures the innovations, successes, challenges and lessons of the five years of the programme. It does this in dialogue with the previous iterations of the survey and conferences, in 2012 and 2014, in order to establish trends in institutional change by geographic region and by thematic area of focus.
Uganda National Council for Science and Technology (UNCST)
This report from the third DRUSSA Policy Symposium at the Ministry of Education, Science, Technology and Sports (Uganda) outlines the commitments, plans for action and opportunities for collaboration identified between higher education leaders, academics and policymakers looking for ways to incorporate research evidence into policymaking.
Institute of Statistical, Social and Economic Research (ISSER)
This short module explains the process of evaluating research for policy applicability and how research quality can be ascertained and communicated to non-academic audiences, as well as providing useful resources for further reading.
This detailed and instructive framework outlines approaches to evaluating and measuring the impact of university-community interventions and of university research in the public sphere. As one of a cohort of engagement “beacons” (leading institutions) under the auspices of the UK’s National Co-ordinating Centre for Public Engagement, UCL here provides key insights into methodological approaches to evaluation, ascertaining the audience of the evaluation, building research uptake capacity, and cascading an engagement culture across the institution.
The step-by-step guide takes you through each stage of the impact strategy development process, including setting objectives, developing key messages, identifying your audience, getting them involved, and measuring success.