Publish or perish: is there an alternative?

The findings of the Nuffield Council report on the culture of science in the UK were discussed at the BioMed Central Editors’ Conference, which took place in London on Tuesday 19 May 2015. Editors from all over the world, who are also researchers, authors and reviewers, explored how to relieve the pressure to publish.


In December 2014 the Nuffield Council on Bioethics published a report on the culture of scientific research in the UK. The main findings were: first, that intense competition for jobs, promotion and funding creates incentives to produce poor quality research and discourages collaboration; second, that funding shortages are driving a trend towards short-term projects and ‘safe’ research, which stifles creativity and innovation; and third, that publication in high impact factor journals is perceived to be the most important factor in assessments for jobs, funding and promotion.

Findings from the Nuffield Council report

The findings of the report reflected the views of researchers, funders, institutions, publishers and editors, and were gathered via an online survey, discussion events and evidence-gathering meetings. The report makes a number of recommendations and, as reported by Catherine Joynson of the Nuffield Council, the response to the report has been very positive. However, she points out that:

“It is extremely difficult to change the culture of an organisation, let alone multiple organisations…”

“…but an acceptance of a collective obligation to enact change is certainly the first step.”

Recommendations for publishers

It is reassuring to note that a number of the recommendations for publishers are already being addressed by the publishing industry as a whole. Initiatives to widen the range of research published, increase openness and sharing, improve peer review and tackle ethical issues are all underway.

Perhaps the most worrying finding of the report is that, “58% of survey respondents are aware of scientists feeling tempted or under pressure to compromise on research integrity and standards.”

Publishers and editors often have to deal with the consequences of compromised research integrity and standards, so they may well feel more motivated than most to assume their share of the obligation to make change happen. This, together with the likelihood that the same issues prevail at a global level, prompted a set of discussions amongst editors and publishers during the recent BioMed Central Editors’ Conference in London, UK.

What did our editors think?

Editors were asked to focus on two recommendations of the Nuffield report: the recognition of the wider activities of researchers, and the use of a broad range of criteria in research assessments.

Editors from Australia, the USA, China and all over Europe shared similar experiences of the pressures they are under to maintain and advance their careers.

A recurrent theme throughout the discussions was the importance institutions place on how much grant money a researcher can bring in, and how this depends on their publications in high impact factor journals.

Interestingly, in 2010 the National Health and Medical Research Council in Australia ruled out the use of journal impact factors in peer review of individual research grant and fellowship applications. The view from our editors, however, was that while other factors such as impact on the broader community, media coverage, engagement with consumer groups and invitations to speak are all considered, the ruling is difficult to administer and suspicions remain that journal impact factors and the h-index are still taken into account.
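For reference, the h-index mentioned here is the largest number h such that a researcher has h papers each cited at least h times. A minimal illustrative sketch in Python (the citation counts below are made up):

def h_index(citations):
    """Largest h such that h papers each have at least h citations."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank  # this paper still supports an h of this size
        else:
            break
    return h

# Example: five papers cited 10, 8, 5, 4 and 3 times give h = 4,
# because four papers each have at least four citations.
print(h_index([10, 8, 5, 4, 3]))  # prints 4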


The limitations of journal impact factors and the use of alternative metrics have featured heavily in discussions of research assessment elsewhere. More about calls for better ways to evaluate research can be found in the report by Catherine Joynson, a blog about transparency metrics, and the San Francisco Declaration on Research Assessment.

Our editors also had ideas about alternative metrics. They did not feel it is possible to move away from scores and numbers entirely, especially in situations where there are hundreds of grant applications to assess. It was suggested that we need a metric that takes into account a broader range of information, such as the number of first-author papers, last-author papers, patents and successful grant applications.
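Purely as an illustration of what such a composite metric could look like (the weights below are hypothetical placeholders, not values proposed at the conference or in the report), a simple weighted score might combine these counts:

def composite_score(first_author_papers, last_author_papers,
                    patents, successful_grants):
    """Hypothetical composite metric: a weighted sum of several outputs."""
    # All weights are arbitrary illustrations, not proposed values.
    return (1.0 * first_author_papers
            + 1.5 * last_author_papers   # e.g. weight senior-author work more
            + 2.0 * patents
            + 2.5 * successful_grants)

# Example: 6 first-author papers, 3 last-author papers, 1 patent and
# 2 successful grants give 6.0 + 4.5 + 2.0 + 5.0 = 17.5
print(composite_score(6, 3, 1, 2))  # prints 17.5

Any real scheme would, of course, have to settle harder questions such as how to normalise across fields and career stages; the sketch only shows that a single score can fold in more than citations.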

The role of social media was also discussed, but it was felt this would only be relevant to younger researchers. There were concerns about the permanence of certain social networking sites and the possibility of ‘gaming’ social media activity.

The importance of recognising peer review activities

Of all the ideas discussed, the need to recognise researchers’ peer review and editorial work was raised by every group of editors. Suggestions ranged from a journal simply providing an email acknowledging an individual’s activities, to certificates and awards for high quality peer review, to linking peer review activities to an individual’s ORCID record. These are ideas that publishers can address.

The movement to change the culture of science has only just begun. Publishers and editors can take a leading role. The question remains how far funders and institutions are willing to recognise peer review activities, or any of the other activities and measures discussed, in their research assessments.

Thank you to fellow members of the Research Integrity Group at BioMed Central for facilitating these discussions and to all the editors who took part.


One Comment

Michaël Bon

There is a readily available solution that BMC could immediately encourage. It consists of depositing articles (pre-print or print) into the “Self Journal of Science” (www.sjscience.org), a novel repository that provides an additional layer of certification (through open debate and discussion) as well as a novel numerical evaluation of articles that is much sounder than the impact factor: individual, community-based and very hard to falsify. The benefits of SJS for authors, reviewers and journals stand by themselves and are a sufficient reason for everybody to use it. Hence, the complementary value its processes naturally create will have a real impact. It will then be much easier to turn this real impact into a “legal” one too, as all the necessary evidence will already exist. Changing the whole culture of an (international) organisation from scratch, or on the basis of ethical arguments alone, is very unlikely to succeed.
