Scientists are increasingly overwhelmed by the volume of articles being published. The total number of articles indexed in Scopus and Web of Science has grown exponentially in recent years; in 2022 the article total was ∼47% higher than in 2016, which has outpaced the limited growth—if any—in the number of practicing scientists. Thus, publication workload per scientist has increased dramatically. We define this problem as “the strain on scientific publishing.” To analyze this strain, we present five data-driven metrics showing publisher growth, processing times, and citation behaviors. We draw these data from web scrapes, and from publishers through their websites or upon request. Specific groups have disproportionately grown in their articles published per year, contributing to this strain. Some publishers enabled this growth by hosting “special issues” with reduced turnaround times. Given pressures on researchers to “publish or perish” to compete for funding, this strain was likely amplified by these offers to publish more articles. We also observed widespread year-over-year inflation of journal impact factors coinciding with this strain, which risks confusing quality signals. Such exponential growth cannot be sustained. The metrics we define here should enable this evolving conversation to reach actionable solutions to address the strain on scientific publishing.
Here is an interesting paper that suggests possible practical ways to tackle this strain. I recommend reading the introduction too, as it sets out the problem really well.
Here are some suggestions from the paper:
First, we can make it easier to track scientific progress and reduce overpublishing by moving to open-ended, stackable publications instead of publishing multiple papers for each research direction. For example, instead of ten papers published on one line of research, a scientist can prepare a single study where each piece (‘chapter’) can be stacked onto or inserted into the previous piece. A similar approach is implemented on GitHub, where code can be updated and expanded, or in Jupyter, where the data, analysis and text can be published on a single page (with more chapters added as the study develops further). Importantly, Jupyter notebooks are free and do not charge for open access the way most publishers do, pointing towards a possible route to reduced publishing fees. Existing examples include a Jupyter paper published by Ziatdinov et al.8 and the interactive book Deep Learning for Molecules & Materials created by Andrew White9. The book has a companion repository on GitHub and incorporates contributions and feedback from community members.
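To make the ‘stackable chapters’ idea a bit more concrete, here is a minimal sketch of how a new chapter could be stacked onto an existing study notebook with the nbformat library; the filenames and the chapter title are hypothetical placeholders of mine, not something from the paper:

```python
# Minimal sketch: "stacking" a new chapter onto an open-ended study notebook.
# The filenames and the chapter title are hypothetical placeholders.
import nbformat

# Load the existing study and the new chapter to be appended.
study = nbformat.read("study.ipynb", as_version=4)
chapter = nbformat.read("chapter_02.ipynb", as_version=4)

# Mark where the new chapter starts, then stack its cells onto the study.
study.cells.append(nbformat.v4.new_markdown_cell("## Chapter 2: follow-up analysis"))
study.cells.extend(chapter.cells)

# Write the updated, single-document study back to disk.
nbformat.write(study, "study.ipynb")
```

The combined notebook can then be versioned on GitHub, so the whole history of the study stays in one place.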
As well as the following (@Rothbardian_fanatic, this one relates to your observation regarding the weakness of the peer-review process, as opposed to the scientific method itself):
Second, we should move to community-based reviewing. For each open-ended study, the authors may choose the referees for the initial review and criticism. However, the work will also remain open to community members for constructive criticism and feedback. Changes can be implemented by the authors directly in the published study, with the referees (community members) confirming that the issues have been resolved. Importantly, clearly visible professional feedback from the community will make it easy for readers to evaluate the study and learn about open questions. Some initial steps in this direction have already been made by eLife10. Furthermore, as such open-ended studies can be published without initial peer review, the careers of authors will not depend on reviewing time, journal selection, or the preparation of rebuttals. Recently, the European Molecular Biology Organization (EMBO) adopted new criteria for postdoc evaluation that discourage the accumulation of publications, whereby refereed preprints will be viewed as publications in the assessment process11. This signifies an important paradigm shift in publishing that is likely to become widespread in the future.
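This is basically a workflow, and I find it easier to picture as a data structure. A rough sketch of how an open review thread attached to a study might be tracked (entirely my own illustration, not something the paper specifies; the field names and workflow details are made up):

```python
# Rough sketch of tracking community review threads on an open-ended study.
# Entirely illustrative; field names and workflow details are assumptions.
from dataclasses import dataclass, field


@dataclass
class ReviewComment:
    reviewer: str           # ORCID or username of the community reviewer
    chapter: str            # which part of the study the comment targets
    text: str               # the constructive criticism itself
    resolved: bool = False  # set by the reviewer once the fix is confirmed


@dataclass
class OpenStudy:
    title: str
    comments: list[ReviewComment] = field(default_factory=list)

    def open_questions(self) -> list[ReviewComment]:
        """Unresolved feedback, visible to every reader of the study."""
        return [c for c in self.comments if not c.resolved]


# A reviewer raises an issue, the authors revise the published study,
# and the reviewer then confirms that the issue has been resolved.
study = OpenStudy(title="An open-ended study")
issue = ReviewComment(reviewer="reviewer_42", chapter="Chapter 2",
                      text="Error bars are missing in the second figure.")
study.comments.append(issue)
issue.resolved = True
print(len(study.open_questions()))  # 0 open questions remain
```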
As well as
Third, community-based reviewing will also help to properly document and recognize reviewing activities. A recent study suggests that researchers spend more than 130 million hours reviewing papers each year, with a monetary value of US$1.5 billion in 2020 in the United States alone12. This represents a substantial contribution to science that remains overlooked13. Online open-ended publishing systems can track reviewing activities and provide relevant information (or metrics) for contribution assessment (ORCID has already implemented a similar system).
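If reviews live on the same platform as the studies, such metrics become a simple aggregation over the platform's review log. A toy sketch, where the log format and the hour figures are made-up illustrations of mine rather than data from the paper:

```python
# Toy sketch: aggregating reviewing activity into per-reviewer metrics.
# The log format and the hour figures are made-up illustrations.
from collections import defaultdict

# Hypothetical platform log of completed reviews: (reviewer, hours spent).
review_log = [
    ("reviewer_a", 6.0),
    ("reviewer_b", 4.5),
    ("reviewer_a", 5.0),
]

hours = defaultdict(float)  # total hours per reviewer
counts = defaultdict(int)   # number of reviews per reviewer

for reviewer, h in review_log:
    hours[reviewer] += h
    counts[reviewer] += 1

for reviewer in sorted(hours):
    print(f"{reviewer}: {counts[reviewer]} reviews, {hours[reviewer]:.1f} h on record")
```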
I wonder if this could relate to paying referees for their time; see #805976.
Two more suggestions:
Fourth, the specific contribution from each author can be made clearly visible when needed. Online open-ended publishing will allow authors to precisely assign different parts of the study as contributions from specific people.
and
Fifth, advanced machine learning algorithms will be able to compile ‘scientific reviews’ for any question asked by the user. This is likely to eliminate papers and reviews as we know them now. Writing and perfecting the text of a manuscript will no longer be needed. Computer algorithms will be able to do it for us, delivering easy-to-understand stacks of information with assigned ‘trustworthiness/reproducibility scores’ and providing a clear, time-stamped overview of the progress within any field of science.
I need to process this last one a bit more. The writer of the piece is into AI, if I'm not mistaken, so he might be a bit biased on that point.