01/21/2021

Citing Software in Scholarly Publishing to Improve Reproducibility, Reuse and Credit

While software is essential to research, practices for its formal citation have not kept up

[Editor’s Note: Today’s post is by Daniel S. Katz and Hollydawn Murray. Dan leads the FORCE11 Software Citation Implementation Working Group’s journals task force, and is the Chief Scientist at NCSA and Research Associate Professor in Computer Science, Electrical and Computer Engineering, and the School of Information Sciences at the University of Illinois. Until recently, Holly was the Head of Data and Software Publishing at F1000Research where she led on data and software sharing policy and strategy. She currently works as Research Manager at Health Data Research UK. The work discussed here is supported by representatives and editors from: AAS Journals, AGU Journals, American Meteorological Society, Crossref, DataCite, eLife, Elsevier, F1000Research, GigaScience Press, Hindawi, IEEE Publications, Journal of Open Research Software (Ubiquity Press), Journal of Open Source Software (Open Journals), Oxford University Press, PLOS, Science Magazine, Springer Nature, Taylor & Francis and Wiley]

Software is essential to research and is regularly part of the work described in scholarly articles. However, these articles often don’t properly cite the software, leading to problems finding and accessing it, which in turn leads to problems with reproducibility, reuse, and proper credit for the software’s developers. In response, the FORCE11 Software Citation Implementation Working Group, composed of scholarly communications researchers and representatives of nineteen major journals, publishers, and scholarly infrastructures (Crossref, DataCite), has proposed a set of customizable guidelines to clearly identify the software and credit its developers and maintainers. This follows the earlier development of a set of Software Citation Principles. To realize the guidelines’ full benefit, we are now urging publishers to adapt and adopt them to implement the principles and to meet their communities’ particular needs.
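To make the idea of a formal software citation concrete, the sketch below (in Python, using entirely hypothetical names and a placeholder identifier) assembles a reference-list entry from the kinds of metadata the Software Citation Principles recommend recording: the developers, the software name, a version, the year, and a persistent identifier or URL. This is an illustration only, not the working group’s prescribed format.

# Minimal sketch of the metadata a formal software citation typically carries.
# All names and identifiers below are placeholders for illustration.

def format_software_citation(authors, title, version, year, doi=None, url=None):
    """Assemble a human-readable reference-list entry for a piece of software."""
    entry = f"{', '.join(authors)} ({year}). {title} (version {version}) [software]."
    if doi:
        entry += f" https://doi.org/{doi}"
    elif url:
        entry += f" {url}"
    return entry

print(format_software_citation(
    authors=["Doe, J.", "Smith, A."],   # hypothetical developers
    title="ExampleAnalysisTool",        # hypothetical software name
    version="2.1.0",
    year=2021,
    doi="10.5281/zenodo.0000000",       # placeholder identifier
))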

Reproducibility
In March 2020, Ferguson et al. published a report that used software to model the impact of non-pharmaceutical interventions in reducing COVID-19 mortality and healthcare demand. This report, and others like it, was widely discussed and appeared to have a rapid impact on government policies. But almost as quickly, both political and scientific criticism started. The scientific criticism included questions about the modeling, such as the parameters and methodology used, which could not be easily addressed because the code was not shared alongside the publication.

Read the complete article at The Scholarly Kitchen.
