The Scholarly Communication in Africa Programme (SCAP) recently hosted Cameron Neylon on his first visit to South Africa for a week of activity and discussion around alternative metrics and research evaluation.
Based at the UK Science and Technology Facilities Council, Neylon is a leading thinker in open science, open access and open data. He is one of the original authors of the Altmetrics manifesto, co-author of the Panton Principles for open data in science, and founding Editor-in-Chief of the journal Open Research Computation.
He visited UCT in his capacity as a member of the SCAP Advisory Panel and to participate in discussions around defining and measuring the impact of academic research – a core strand central to all SCAP activity. SCAP Research Lead Catherine Kell interviewed him briefly.
CK: What are alternative metrics?
CN: When we think of the term alternative metrics we have to ask what the word alternative means – what it is an alternative to. There are many differences within this emerging field, but for most of us working in this area it means alternative ways of measuring scholarly output. The current system involves simple citation counting, and this runs into all sorts of problems: you are really just judging a book by its cover, that is, by what journal it appears in.
There are two strands to the alternative metrics movement. One takes the traditional journal article as the scholarly output and tries to measure how that article is used on the web. This is ‘article-level metrics’, the approach that the Public Library of Science (PLoS) is taking: providing detailed information on article usage gathered through internet tracking, such as numbers of downloads and page views, along with information on who is bookmarking articles and who is talking about them online. With regard to wider forms of communication, there is a range of different measures of usage which can be tracked through what happens on the web, like the number of tweets, Facebook activity and blog posts. It could also include other references, like policy papers and news outlets, which link back electronically to the original research articles.
So, in the first strand of the alternative metrics movement, new measures are applied to traditional scholarly outputs. The second strand involves working out how to apply the traditional idea of measurement to new scholarly approaches which are emerging online. Examples include data citation (where the web enables data itself to be cited independently of the article in which it is embedded), and equally academics’ production of video and audio presentations, and websites in general. The question is how we apply traditional notions of referencing, using citations, to track this variety of output types.
CK: Why did you come to be involved with this work in alternative metrics? You are a very successful scientist – does this not just take you away from your scientific work?
CN: I came to be interested in alternative assessment measures because of what I saw in my own experience: the traditional measures and impact factors currently in use seemed to be pushing academic work in a direction that was not in line with my values. I wanted to make sure that my research was useful and was being used, and I felt that the incentives were pushing me in a direction which made my research output less useful. I was writing articles and putting them into journals where people couldn’t read them, and leaving data in a state where it was not useful for others. So I wanted to find procedures for managing and describing my data and my work that would make it reusable, so that my research could really make a difference.
CK: So what steps did you take to start engaging with the idea of alternative metrics?
CN: Well, when I started to meet up with others in this area, lots of us had the same experience. So our thoughts were around how to encourage people to ensure that their research is communicated in a way that it reaches the right people. So this raises a policy issue. Funders and universities just tend to tell people what to do, and that is not effective. So, for us, the question became how do you shape the incentive system so people are really encouraged to do the right thing. We felt it was important to make it a plausible adjustment and not an impossible leap, so it was: how do we take what already exists in terms of measurement and adjust and expand that?
My own thinking has been: if we want to measure use of research, then we need to measure its reuse. We need to track different ways that this happens and this has got a lot easier with the way the web works now. In the ‘traditional’ academic world this is already happening with measures of citations, and people’s work being measured through journal impact factors, like how many times is it cited? So this idea of tracking reuse fits quite naturally.
CK: What potential is there for this approach in Southern Africa?
CN: Well, I’d like to say that this is my first visit to Africa and to South Africa, so I would be very cautious about saying anything. But for me, having had a few days with SCAP now, what is most striking is that researchers are really aware of and motivated to find ways to engage with the community; they see that their research has value for development and education, and for improving the social situation and the economy. I get the strong sense that that is what researchers and academics want to do. So, in some ways, it will already be easier here than it might be in the UK or more widely in the North. There’s already an acceptance of such an approach.
CK: How would you tell if this work around alternative metrics was making a difference?
CN: Well, you would ask: is your research reaching local people, and are they engaging with it? For example, you would look at the people who are talking about a paper on social media such as Twitter, and then ask: is it local people tweeting about this paper, or a wider community, or an international community? Is it people on the ground who are directly connecting with it? People are constantly leaving traces in public places on the web, and you can do all of this with ease, actually – the techniques are just that good. So this is a tremendously powerful new phenomenon, and the results can be very meaningful, because you are really following through on the question of how your research is relating to people and how they are relating to it.
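The kind of analysis Neylon describes – sorting the people who mention a paper into local and wider audiences – can be sketched in a few lines. The mention records and location markers below are hypothetical stand-ins for the data an altmetrics service or social media API might return; this is a minimal illustration, not a real tracking implementation.

```python
# Minimal sketch: classify mentions of a paper as local, international or
# unknown, based on the mentioning account's declared location.
# All data here is invented for illustration.

from collections import Counter

# Each record is one public mention of the paper (e.g. a tweet or blog post).
mentions = [
    {"source": "twitter", "location": "Cape Town, South Africa"},
    {"source": "twitter", "location": "London, UK"},
    {"source": "blog",    "location": "Johannesburg, South Africa"},
    {"source": "twitter", "location": ""},  # no location declared
]

# Hypothetical markers for what counts as "local" in this example.
LOCAL_MARKERS = ("south africa", "cape town", "johannesburg")

def classify(mention):
    """Label a mention by whether its declared location looks local."""
    loc = mention["location"].lower()
    if not loc:
        return "unknown"
    return "local" if any(m in loc for m in LOCAL_MARKERS) else "international"

counts = Counter(classify(m) for m in mentions)
print(counts)  # Counter({'local': 2, 'international': 1, 'unknown': 1})
```

In practice the mention records would come from tracking services rather than a hand-written list, and location matching would need to be far more robust, but the underlying question is the one Neylon poses: who, exactly, is engaging with the work?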
In this way researchers can become more motivated to address local issues and challenges. By getting this feedback they can tell if they are doing the right thing. They can ask what worked and what did not.
On the other hand, because there is a sense of a wish to engage together with a desire to make a difference, this can enable communication to work in the other direction – you, as a researcher, can get direct feedback on what people think. So it short-circuits normal lengthy feedback routes, which, in many cases, do not exist at all.
Otherwise, what researchers are doing is meaningless; research always has to be in context. You have to find a balance between the right breadth of research and the right areas to be looking into. More broadly, our political systems are based on mid-19th century technologies. The evolution to a more democratic political system has to involve an engaged community. If you don’t talk to the community it can’t work, and the community needs to know what’s going on in order to participate. We have the mechanisms now, and we have these channels that can demonstrate engagement.
CK: Is it not possible that this approach could be co-opted, for example by big business wanting to use it for instrumental ends? Could we therefore end up with narrowed research agendas?
CN: There is potential for that, but I think it is possible to mitigate the risk of co-option by being mindful of it and by creating value in a range of outputs with different levels of access. We know that a single number (the traditional measure) doesn’t work, but with appropriate methods we can now get that responsiveness and know who is the target of our efforts – and these methods would involve tracking and getting feedback on all kinds of outputs, from traditional papers to technical reports, policy papers, visual and audio presentations and even oral workshops. So the range is very wide now. And the web makes all of this possible and, in some ways, fairly effortless.
*** Postscript: See Cameron Neylon’s reflections on his recent visit and the dynamics around open access in the developing world in his blog post ‘Open Access for the other 85%’.