The true percentage of scientific articles that will never be cited

Every so often I see a message being passed around on social media in which someone claims that ‘90% of scientific articles will never be cited’. While I do not agree with the notion that a publication’s value is solely determined by the number of citations it receives, such statements can be damaging to science, because they suggest to the public that most research is useless, with the underlying implication that many researchers therefore waste public money.

Although the statement has been traced back to an overeager editor of a non-scientific journal, it is actually fairly easy to examine the validity of this claim with a website called Scimago Journal and Country Rank (SJR). This excellent website provides citation information categorized by country and subject area, based on data taken from Elsevier’s Scopus. Because the information is organized this way, a country has to be selected to obtain the numbers needed to examine the claim. For the first analysis, I have selected the United States, but using other countries yields similar outcomes.

It is unclear how ‘never’ and ‘articles’ in the statement are defined. ‘Never’ is a very long time. Although SJR gives citation information for the period between 1996 and 2015, I have taken articles that were published in 2005 as the reference point for this analysis. It seems unlikely that many articles that have not been cited at all in 10 years will suddenly be cited.

Besides defining ‘never’, it is similarly important to define ‘articles’. Some scientific documents, such as editorials, errata and lists of reviewers, are not intended to be cited and should therefore not be included in the analysis. SJR helpfully distinguishes citable from non-citable documents. The website informs us that, for the year 2005, researchers from the United States published 443188 citable and 42704 non-citable documents (i.e., 8.79% of all documents were non-citable).

Furthermore, the website informs us that, of the documents published in 2005, 379366 had been cited and 106526 remained uncited after 10 years. However, the number of uncited documents includes the non-citable documents. Without those non-citable documents, only 63822 of the documents published in 2005 remained uncited after 10 years. Dividing the number of uncited citable documents (63822) by the number of citable documents (443188) shows that only 14.4% had not been cited.
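To make the arithmetic explicit, here is a minimal Python sketch that reproduces these figures from the SJR numbers quoted above; it makes the same simplifying assumption as the text, namely that all non-citable documents are among the uncited ones.

```python
# Minimal sketch reproducing the arithmetic from the SJR figures quoted above.
# Assumption (as in the text): all non-citable documents are among the uncited ones.

citable = 443188        # citable US documents published in 2005
non_citable = 42704     # non-citable US documents published in 2005
uncited_total = 106526  # all documents uncited after 10 years (citable + non-citable)

share_non_citable = non_citable / (citable + non_citable)
uncited_citable = uncited_total - non_citable
share_uncited = uncited_citable / citable

print(f"Non-citable share of all documents: {share_non_citable:.2%}")  # ~8.79%
print(f"Citable documents uncited after 10 years: {uncited_citable}")  # 63822
print(f"Uncited share of citable documents: {share_uncited:.1%}")      # ~14.4%
```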

The analysis shows that the statement that ‘90% of articles will never be cited’ is simply not true. According to information taken from Scimago Journal and Country Rank, only 14.4% of citable documents (written in 2005 by researchers from the United States and published in journals indexed by Scopus) had not been cited after 10 years.

This analysis can be conducted for every other country. Most countries have rates similar to that of the United States (UK: 9.0%; Germany: 19.8%; France: 18.0%; Canada: 11.6%; Italy: 14.1%; India: 17.3%; Spain: 14.0%). Some countries have higher proportions (China: 31.9%; Japan: 23.2%), but none comes close to the rate mentioned in the claim. Taken together, these 10 countries account for about 69% of all documents published in 2005, and only 17.7% of their citable articles had not been cited after 10 years.

It is important to note that these analyses only include articles published in journals that are indexed by Scopus. Publications in journals that are not indexed by Scopus are probably less likely to be cited.

While it is fair to assess the impact of research, unfounded statements implying that many academics do not conduct valuable research can damage science. Politicians may be swayed by public opinion to decrease research funding, and they may feel comfortable ignoring the opinions of experts when making policy decisions. It is therefore important that myths, such as this claim about the proportion of articles that are never cited, are dispelled.


Do participants perform better at the beginning of the semester?

Because journals mostly publish positive findings, researchers who are unable to replicate an effect might feel that there was something wrong with their own study. The preliminary results might have gone in the right direction, but when data collection was completed the effect was no longer there.

These researchers might feel that the disappearance of the effect during the course of the experiment was caused by unmotivated participants in the latter half of the study. These suspicions may be supported by some anecdotal evidence. Everyone who has conducted a couple of studies has at least one story about a participant taking a telephone call during the experiment. A good friend once told me about a participant who thought it was appropriate to bring her pet snake to the study. Moreover, due to the recency effect, researchers are more likely to remember instances of unmotivated participants from the second half of the semester.

Such end-of-semester findings, in combination with casual observations of unmotivated participants, might lead to the feeling that participants perform better at the beginning of the semester than at the end of the semester. However, is this really the case?

Rolf Zwaan argues on his excellent blog (link) that this feeling might be based on a fallacy. The observation that participants at the beginning of the semester perform better than participants at the end of the semester might only arise because the researcher examined the results halfway through the experiment.

If the effect is not found after the initial data collection, researchers might decide to terminate the study. This failure to replicate is then unlikely to be attributed to the quality of the participants. However, if the effect has been found or if the preliminary results are going in the right direction, the researchers might decide to continue and collect more results. If the effect is then found consistently throughout the semester, the researchers are unlikely to report in their manuscript when the results were collected. However, if the effect disappears in the second part of the semester, the researchers might attribute this failure to replicate to changes in the quality of the participants.

This fallacy highlights the need to base the decision about the sample size on a power analysis before data collection starts and to pre-register the experiment and its goals. These recommendations would prevent decisions to terminate studies prematurely and post hoc reports of end-of-semester effects.
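As an illustration of the first recommendation, an a priori power analysis for a simple two-group comparison might look like the sketch below; the effect size, alpha and power are placeholder values, not those of any study discussed here.

```python
# A minimal a priori power analysis for a two-group comparison, using statsmodels.
# The effect size, alpha and power are illustrative placeholders.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5,   # assumed medium effect (Cohen's d)
                                   alpha=0.05,
                                   power=0.80,
                                   alternative='two-sided')
print(f"Required participants per group: {n_per_group:.0f}")  # ~64 per group
```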

One of the studies that set out to examine the possible influence of the time of semester is that of Nicholls et al. (2015).* They examined two factors: time of semester (beginning vs. end) and reward (course credit vs. payment). There were 80 participants: half participated at the beginning of the semester and the other half at the end. Furthermore, within each group, half received course credit, whereas the other half were paid for their participation.

Participants completed the Sustained Attention to Response Task. For 360 trials, they had to decide whether a quickly masked digit was the digit ‘3’. If the digit was not ‘3’ (about 89% of the trials), they had to press a button as quickly as possible. If the digit was ‘3’ (about 11% of the trials), they had to withhold their response altogether. After the sustained-attention task, which took about 7 minutes, participants’ intrinsic and extrinsic motivation was measured with the Student Work Preference Inventory.
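To give a concrete picture of the task, the sketch below simulates the trial structure as described (360 trials, roughly 11% of which present the no-go digit ‘3’); it is an illustration of the design, not the authors’ actual implementation.

```python
# Illustrative simulation of the trial structure of the Sustained Attention to
# Response Task as described above. This is a sketch of the design, not the
# original experiment code.
import random

N_TRIALS = 360
P_TARGET = 0.11  # approximate proportion of no-go ('3') trials

# Build a random trial sequence: '3' is the no-go target, any other digit is a go trial.
trials = ['3' if random.random() < P_TARGET else random.choice('12456789')
          for _ in range(N_TRIALS)]

# The required response on each trial: withhold on '3', press as fast as possible otherwise.
responses = ['withhold' if digit == '3' else 'press' for digit in trials]

print(f"No-go trials: {trials.count('3')} of {N_TRIALS} "
      f"({trials.count('3') / N_TRIALS:.0%})")
```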

Despite the sustained-attention task only taking 7 minutes, credit and paid participants’ performance had different trajectories across the semester. Whereas the performance of paid participants slightly improved between the beginning and the end of the semester, the performance of credit participants slightly worsened. Although neither effect was significant, the interaction between time of semester and reward was. Furthermore, whereas paid participants did not differ in motivation throughout the semester, credit participants at the beginning of the semester reported more intrinsic and extrinsic motivation than credit participants at the end.

Another study was recently conducted by the Many Labs project organized through the Open Science Framework (link). In the study, 20 different laboratories participated. In total, there were 2696 participants, who were presented with a series of 6 questionnaires and 10 tests measuring data quality, individual differences and known experimental effects. The entire study took less than 30 minutes to complete. Whereas the project found no end-of-semester effects on the experimental results, it found weak effects on data quality measures and several individual differences. The results of the project suggest that performance does not decrease during the semester, but motivation can.

However, the study of the Many Labs project did not really seem to put participants’ motivation to the test. The consequences of poor motivation may only become apparent when the task is long and repetitive. The study of Nicholls et al. (2015) used a relatively short but quite repetitive task (i.e., 7 minutes of the same decision), whereas the Many Labs project offered much variation (i.e., 16 questionnaires and tests in under 30 minutes). Taken together, the results of these studies seem to suggest that, if your experiment is short or varied, motivation might not really be an issue, but, if your experiment is long and repetitive, motivation might still become an issue later in the semester.

References

Nicholls, M. E. R., Loveless, K. M., Thomas, N. A., Loetscher, T., & Churches, O. (2015). Some participants may be better than others: Sustained attention and motivation are higher early in semester. Quarterly Journal of Experimental Psychology, 68, 10-18. doi: 10.1080/17470218.2014.925481

*Disclaimer: Mike Nicholls and his co-authors are colleagues. They work, like me, at the School of Psychology of Flinders University.


Cash Rules Everything Around Me

I have recently been thinking extensively about submitting my work to open access journals, partly for selfish reasons (i.e., more readers could lead to more citations) and partly for ethical reasons. Although academics conduct, write, review and edit scientific articles, each university library pays millions of dollars to allow its academics to read the research that they themselves conducted.

Many researchers and several governments have recognized this issue and started to support open access. In response to this movement, large publishing companies, such as Elsevier, Routledge, Sage, Springer and Wiley-Blackwell, now offer the possibility of making articles in regular journals open access. The costs of doing so are, however, generally much higher than the costs of publishing in open access journals.

Large publishers have defended the higher costs by arguing that, besides peer-review management systems, typesetting and copy editing, and archiving and hosting, they guarantee higher quality and offer more prestige. Although there are examples of excellent studies in open access journals and examples of horrendous studies in regular journals, I think that regular journals are, at least in my specific area (i.e., autobiographical memory), still more highly regarded than most open access journals.

While I have nothing against better quality, I have some concerns about the ability of large publishers to lend prestige to journals and to the articles that are published in these journals. It might be convenient to use journals’ reputations as a proxy for the quality of individual papers, but there might be long-term consequences.

One of my concerns is that this ability grants these large publishers too much influence on the research agenda. In an attempt to maximize their profits, they can decide, with minor changes to their policies, which fields or topics receive more attention. I do not think that large publishers have a nefarious research agenda that they would like to see implemented, but they do seem to have the means to implement one if they wanted to.

One way that large publishers can influence the research agenda is by setting up new journals. They can support those new journals in many ways. They can help promote the new journal at conferences, pay prominent researchers to be part of the editorial board, pay for editorial assistants who help to process the submissions, ensure the journal is included in important databases, etc. This kind of support will offer even more legitimacy and prestige to the new journals and to the researchers who publish in these new journals regardless of the actual quality of the articles.

Moreover, many publishers offer only packages of journals to libraries. If a library wants access to one particular journal, it also has to subscribe to several journals to which it may not want access. By packaging a new journal with established journals, publishers can ensure that the new journal immediately has many institutional subscriptions.

Such new journals will boost an entire field by providing researchers in that field with an additional potential outlet, and thus potentially more publications, more citations, and more editorial positions. These opportunities will help researchers in this field to obtain faculty positions and research grants.

Besides setting up and supporting new journals, large publishers can influence the research agenda in other ways. They often have some sort of say regarding the choice of the editor of a journal. If there are two competing approaches in a certain field (e.g., basic vs. applied), then the choice of a person who advocates one of those two approaches (e.g., basic) could make it more difficult for researchers who follow the other approach (e.g., applied) to publish in that journal. Excellent research that is conducted with the second approach might still find its way into the journal, but decent research might not receive the benefit of the doubt from the editor.

Large publishers may also influence the research agenda by their choices regarding which articles to promote in popular media. Publishers could decide to write press releases about studies in a certain field or with a certain approach. Stories about these studies in popular media, such as newspapers, can influence the perception of the entire field. They can make research in the field seem innovative and worthwhile.

Although I do not think that large publishers have specific ideas that they would like to advance, I do think that they influence the research agenda in subtle and indirect ways, towards fields in which there are larger profit margins. With all these policy decisions, large publishers influence what is read. By influencing what is read, they affect what is cited. And by affecting what is cited, they influence what gets funded.

It seems problematic to me that the decision about which research will be conducted appears to be partly determined by the commercial interests of large publishing companies. However, I do not have a simple solution for this problem. One could argue that senior researchers, who already have a large number of publications, should start publishing in open access journals, but by doing so they could do the junior researchers on the paper a disservice. Hiring committees and grant reviewers, who do not have the time to do their due diligence and read the studies, continue to use the reputation of journals to make quick decisions about the quality of individual papers and thus the quality of the researchers.


More references, more citations?

A good friend of mine recently complained online that no one would read her new paper. Friends immediately responded that they were highly interested in her work. As online posts are wont to do, the comments quickly escalated, with one commenter suggesting that they cite each other’s work. While this last comment was made in jest and the friend did not seriously suggest setting up a citation ring, it made me wonder whether the number of citations is indeed related to the number of references. Do some researchers cite other researchers just because those researchers had cited them?

Like so many things, this question has already been examined. Webster, Jonason and Schember (2009) took 562 articles published in Ethology and Sociobiology (1979-1996) and its successor Evolution and Human Behavior (1997-2002) and compared the number of references in each article to the number of citations it received. Because the distributions were skewed (i.e., the medians differed from the means), they applied log transformations. Webster et al. (2009) found a surprisingly strong correlation of .44, which suggests that articles with more references indeed receive more citations.

The findings of Webster et al. (2009) bothered me more than they should have. Could it really be that some groups of researchers cite each other frequently? There must be another explanation. One issue that was unclear to me is the extent to which the analysis of Webster et al. (2009) included editorials, errata, commentaries, replies, letters to the editor, and book reviews. These kinds of publications tend to have few references and are seldom cited. As such, they represent outliers and their inclusion could have increased the correlation considerably.

To exclude the possible influence of those editorials and such, I quickly downloaded from Thomson’s Web of Science the articles that were published in Memory, Applied Cognitive Psychology and Memory & Cognition in 2004, 2005 and 2006. These years were selected because any citation that is made for purely reciprocal reasons is likely to have been made within 10 years or so. Editorials and such were omitted from the subsequent analyses.
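For readers who want to repeat this step, the filtering might look like the sketch below; the file name and the field tags (PY for publication year, DT for document type, NR for reference count, TC for times cited) are assumptions about the export format rather than a prescription.

```python
# A minimal sketch of the filtering step, assuming a tab-delimited Web of Science
# export with the field tags PY (publication year), DT (document type),
# NR (cited reference count) and TC (times cited). File name, field tags and
# document-type labels are assumptions about the export, not a prescription.
import pandas as pd

records = pd.read_csv('wos_export.txt', sep='\t')  # hypothetical export file

keep_years = [2004, 2005, 2006]
drop_types = ['Editorial Material', 'Correction', 'Letter', 'Book Review']

articles = records[records['PY'].isin(keep_years) & ~records['DT'].isin(drop_types)]
print(articles[['PY', 'NR', 'TC']].head())
```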

For each data set, I calculated one correlation between the number of references and the number of citations. Like Webster et al. (2009), I applied log transformations to account for the skewed nature of the data. The nine correlations are: r(63) = .275, p = .029; r(75) = .109, p = .351; r(76) = .305, p = .007; r(75) = .191, p = .101; r(73) = .479, p < .001; r(84) = .256, p = .019; r(119) = .142, p = .125; r(125) = .265, p = .003; and r(149) = .239, p = .003, respectively. The average of the nine correlations (M = .246) appears to be less strong than the correlation of Webster et al. (2009). Furthermore, for each journal, one of the three correlations was not significant, suggesting that the effect is not very robust.
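For concreteness, the per-data-set computation might look like the sketch below; the example counts are made up to illustrate the input format, and Fisher’s z-transformation is included as one common way to combine correlations (the text above simply averages the nine values).

```python
# Sketch of the analysis applied to each journal-year data set: log-transform
# the reference and citation counts and compute a Pearson correlation. The
# example arrays illustrate the input format; they are not the actual data.
import numpy as np
from scipy import stats

def log_correlation(references, citations):
    """Pearson r (and p) between log-transformed counts; log(x + 1) handles zeros."""
    refs = np.log(np.asarray(references, dtype=float) + 1)
    cites = np.log(np.asarray(citations, dtype=float) + 1)
    return stats.pearsonr(refs, cites)

def combine_correlations(rs):
    """Average several r values via Fisher's z-transformation (one common choice)."""
    return float(np.tanh(np.mean(np.arctanh(rs))))

# Illustrative usage with made-up counts for a single data set.
example_refs = [22, 35, 41, 18, 60, 27, 33]
example_cites = [5, 12, 20, 3, 25, 9, 11]
r, p = log_correlation(example_refs, example_cites)
print(f"r = {r:.3f}, p = {p:.3f}")
```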

Although most of these correlations are lower than the correlation found by Webster et al. (2009), their average seems to suggest that it might indeed be worthwhile to add a few references. However, supplemental regression analyses indicate that, for every 4 additional references, a study receives only about 1 extra citation over a 10-year period (B = 0.282). In other words, the effect seems to be there, but it does not seem to be large or robust. Furthermore, there are at least three other explanations that might account for the relation between references and citations beyond purely reciprocal reasons.

First, whereas I omitted editorials and such from the analyses, I did not account for brief reports and reviews. Brief reports (or rapid communications), regardless of whether the journal has such a category, tend to be smaller in scope and to report preliminary results. If the results are promising, a larger study is sure to follow. Brief reports therefore tend to have fewer references and to receive fewer citations too. Reviews, on the other hand, are supposed to provide an overview of the literature and therefore include many references. They are also known to receive many citations.

Second, it is possible that the relation between references and citations reflects differences in the interest in the topics. There are few previous studies to which an article about a niche topic can refer. Similarly, there will be few subsequent studies that can cite the article. However, when the topic is popular, there are many previous studies to which an article can refer and there will be many studies which can cite the article.

Third, it is also possible that the relation between references and citations reflects the quality of the articles. A high quality study has a complete literature review, offers support for its assumptions, makes informed decisions about the design of the study, and puts its results into context. A study which addresses these issues is likely to have more references than a study which ignores these issues. It is also likely to receive more citations.

Oddly enough, I find the size of the relation between references and citations reassuring. The effect is sufficiently small to exclude the existence of extensive citation rings or a widespread culture of reciprocal citations, at least in cognitive psychology. Moreover, the relation can be explained by benign factors, such as the type, the topic and the quality of articles. As a struggling academic, I find it strangely comforting that there does not appear to be a shortcut to success.

References
Webster, G. D., Jonason, P. K., & Schember, T. O. (2009). Hot topics and popular papers in evolutionary psychology: Analyses of title words and citation counts. Evolutionary Psychology, 7, 348-362.
