Measuring research culture: How was your journey?


Cat Davies, Dean for Research Culture, appeals for REF2028 to follow universities' lead in measuring research culture rather than imposing counterproductive metric regimes.

A month has passed since the initial decisions on REF2028’s methods were made public. The UK Higher Education sector is digesting the rebalanced, rebranded picture. Although questions are being mulled over for all three elements, it’s the expanded People, Culture, and Environment component that presents the most unknowns. The £471 million question: how exactly will it be measured?

What we do know is that research culture will be assessed at both institutional and disciplinary level. We know that outcomes will need to be quantitatively and qualitatively evidenced. We know that the format-in-waiting will use a snappier, structured template for greater consistency across submissions (and arguably to safeguard against it being the ‘exercise in storytelling’ that James Wilsdon has already seen off). And that it will be flexible enough to allow HEIs to tailor submissions to their own circumstances.

So far, so OK.

A key unknown is the measures themselves. What kind of cultural data is available and will stand up to scrutiny? How are we going to robustly and responsibly evidence our progress towards an inclusive, collaborative, and supportive research system?

Although metric-setting is a tricky brief, we are not short of ideas. The Future Research Assessment Programme recommends using the SCOPE framework to develop the details of evaluation methods. Several universities have generated longlists of measurables and are whittling them down using the same framework.

It’s unclear how Research England will use these ideas. Although the summer consultation survey is asking for our input on specific policy aspects, the PC&E component is not (yet) open for debate.

Why measure?

Instead of rushing into a shopping trip to fill our basket with measures, we must first consider why we are measuring research culture.

Measuring to find a winner is absurd. We assess culture not to see who's best at it, but to evaluate how we're tackling the root causes of obstructive practices. We should forget the superlatives for a moment and ask not who is best or how much they have done, but how we can make useful contributions to the sector through adaptable achievements within and between our institutions.

Recalibrating our measurement approach from product to process presents an amazing opportunity: if REF helps us share meaningful and diverse narratives on culture, we can tackle together the many threats to our research quality, without having to do Everything Everywhere All at Once.

We should measure for analysis and accountability rather than acclaim. Rather than being an end in itself, a REF that proves a useful investment by facilitating HEIs' strategic work would be a win for us all.

Bringing REF along for the ride

So what might this look like in practice? REF should drop its hubris and instead piggyback on the cultural work that universities need to do, REF or no REF. In July's webinar on the emerging shape of REF2028, Steven Hill acknowledged that universities are developing their own metrics and, crucially, that REF should go with the grain and learn from what we're doing. Research England should commit to that.

As well as identifying workable metrics, HEIs are devising their research culture action plans[1][2][3]. If REF provided a template to publicly host these plans, milestones, outcomes, and key indicators over the submission period, as well as providing space for lessons learned when initiatives take unexpected turns, it would serve the institutions compiling their plans/submissions, help their peers, and constitute a neat submission in itself. The template could be adapted for institutional and disciplinary levels: the former including policies and strategies, the latter focusing on areas of disciplinary relevance.

Content could be themed for obligatory and optional areas of research culture, e.g. open research, EDI, reward and recognition, and researcher development, helping us to clear basic expectations or hygiene measures and to identify centres of specific expertise from which to learn. The approach could accommodate both input and output measures. The action plans would naturally be bespoke to individual institutions, while reflecting the macro-culture in which our individual research cultures exist. Demonstrating the distance travelled by individual HEIs would straightforwardly acknowledge different levels of opportunity, making the exercise equitable.

… and open, flexible, considered, honest, practical, evidenced, efficient, AND useful beyond the REF.

So what of metrics?

An action/progress template will need metrics, but let's make them the right ones. Ones that already exist. Ones that align with our goals and values. For every metric used, research leaders need to ask themselves "will this enrich the research culture of our institutions?" rather than "will it make the boat go faster? Will it bring us cash?"

Let’s not undo all the progress we’ve made on the responsible use of metrics. Institutions are already pushing back against expectations to produce data that’s elusive (some types of EDI data), gameable (e.g. number of people attending development programmes), not sufficiently established (positive action initiatives), or hard to interpret (number of harassment cases reported).

REF should require only data already collated for other purposes, e.g. institutional KPIs, HESA returns, the proportion of researchers on short-term contracts, and pay gap data. As well as giving us baseline data for existing measures (can we really talk about trajectory over a window of only 2-3 years?), this would help control the weirdness of data that's known to be for REF eyes only, e.g. how people reply in surveys.

Much to gain

The REF is sorely in need of supporters, and it will gain them by being radical, minimising burden, and improving the daily practice of institutions and researchers. If REF co-opts work that's happening anyway, rather than introducing more busywork (as I've written elsewhere), buy-in will be substantial. By sharing effective practice, and the route to getting there, the sector stands to win big. Maybe that exercise in storytelling is not such a bad thing.

With thanks to Elizabeth Adams, Amanda Bretman, Lizzie Garcha, and Nick Plant for their helpful comments on an earlier draft of this piece.