A smarter accountability: combining ‘traditional’ and social impact metrics in #OpenScience

10 October 2014

 

Wikipedia defines Open Science as:

“… the movement to make scientific research, data and dissemination accessible to all levels of an inquiring society, amateur or professional. It encompasses practices such as publishing open research, campaigning for open access, encouraging scientists to practice open notebook science, and generally making it easier to publish and communicate scientific knowledge.”

Of all these transformative aspects, this post addresses the ones that make scientific research more transparent, more collaborative and more efficient. It compiles some of the current discussions about how to reach a more comprehensive understanding of scientific impact.

CURRENT SCIENTIFIC LANDSCAPE

Since the 1960s, citation counts have been the standard for judging scholarly contributions and status, but growing awareness of the strategy’s limitations is driving acceptance of alternative metrics (Buschman and Michalek, 2013). The highly popular journal impact factor (JIF), calculated by the scientific division of Thomson Reuters and published in the Journal Citation Reports (JCR), is the average number of citations that a journal’s items from the two preceding years receive in a given year. It is widely used as a proxy for a journal’s quality and scientific prestige, influencing decision-making in research grant allocation and in the hiring and promotion of academic staff (Bornmann et al., 2012).
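To make that definition concrete, here is a minimal sketch of the JIF calculation with made-up numbers (the real figures come from Thomson Reuters’ citation database, not from anything computed by hand):

```python
# Minimal sketch of the Journal Impact Factor calculation.
# All numbers are invented for illustration; real JIF values are
# computed from Thomson Reuters' citation database.

def journal_impact_factor(citations_in_year, citable_items):
    """JIF for year Y = citations received in Y to items published in
    Y-1 and Y-2, divided by citable items published in Y-1 and Y-2."""
    return citations_in_year / citable_items

# Example: a journal published 100 citable items in 2012 and 120 in 2013;
# those items were cited 550 times during 2014.
jif_2014 = journal_impact_factor(citations_in_year=550, citable_items=100 + 120)
print(f"2014 JIF: {jif_2014:.2f}")  # 2014 JIF: 2.50
```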

However, as Shema et al. (2014) argue, the age of the web has given rise to new venues for the discussion and dissemination of scholarly information. As a consequence, relying on the JCR as the sole assessment of journal impact has raised questions about the validity of the Institute for Scientific Information’s Impact Factor (ISI IF) as the unique standard by which to judge the impact of a given journal (Bollen et al., 2005). Buschman and Michalek (2013) add that just because a paper is cited does not mean it is cited positively; yet citation counts make no distinction between positive and negative references.

As scholarly communication migrated to the web, so did citations. However, the meaning of a web citation remained rather vague, because the web is made of much more than formal research discourse, and citations can appear everywhere (Shema et al., 2014).

In an analysis by PLOS, citation counts represent only a small fraction of how a paper is used; in fact, citations account for less than 1% of an article’s usage (Buschman and Michalek, 2013).

 

NEED FOR A CHANGE

Currently, both citations and peer review are regarded as only partial indicators of “scientific impact”, and no single metric can sufficiently reveal the full impact of research. Given these limitations, combining peer review with a “multi-metric approach” has been proposed as necessary (Zahedi et al., 2014).
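What a “multi-metric approach” might look like in practice can be sketched with a toy example. The metric names, counts and field baselines below are all invented for illustration; no particular scoring scheme from the literature cited here is implied:

```python
# Illustrative sketch of a multi-metric view of one article.
# Metric names, counts and baselines are assumptions for demonstration
# only, not a published or endorsed scoring scheme.

article_metrics = {
    "citations": 14,         # traditional citation count
    "downloads": 1250,       # usage data
    "mendeley_readers": 87,  # reference-manager bookmarks
    "tweets": 42,            # social web mentions
}

# Normalize each metric against a (made-up) field baseline so that
# metrics on very different scales can be compared side by side.
field_baselines = {
    "citations": 10,
    "downloads": 900,
    "mendeley_readers": 60,
    "tweets": 25,
}

normalized = {m: article_metrics[m] / field_baselines[m] for m in article_metrics}
for metric, score in normalized.items():
    print(f"{metric:>16}: {score:.2f}x field baseline")
```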

The fast growth of facilities and new tools provided by the internet has led to the development of alternative metrics (Thomaz and Martens, 2009). Alternative metrics refer to more “unconventional” measures for the evaluation of research, including usage data analysis (download and view counts), web citation and link analyses, and social web analysis (Zahedi et al., 2014).

Today the so-called ‘alternative indicators’ for assessing scientific impact have entered the scientific debate, and these new metrics are expected not only to overcome some of the limitations of previous approaches but also to provide new insights into research evaluation.

Chart published by Buschman and Michalek in “Are alternative metrics still alternative?”, Bulletin of the American Society for Information Science and Technology 39(4) (2013): 35-39.

Here are some ‘real-time’ transactions that can also be considered and tracked: likes, comments, reviews, discussions, bookmarks, saves, tweets and mentions of scientific publications and sources in social media (Zahedi et al., 2014), as well as mentions in blogs and other nontraditional formats, open review forums, electronic book downloads, library circulation counts and more (Buschman and Michalek, 2013).

EXAMPLES OF NEW TOOLS

The more traditional metrics based on citations, although widely used in research evaluation, are unable to measure the online impact of scientific literature (for example via Facebook, Twitter, reference managers, blogs or wikis). They also cannot measure the impact of scholarly outputs other than journal articles or conference proceedings, ignoring outputs such as datasets, software, slides and blog posts (Zahedi et al., 2014).

Although altmetrics is still in its infancy, here are some interesting examples:

Among these tools we find F1000 (http://f1000.com), PLOS Article-Level Metrics (ALM) (http://article-level-metrics.plos.org/), Altmetric.com (www.altmetric.com/), Plum Analytics (www.plumanalytics.com/), Impact Story (www.impactstory.org/), CiteULike (www.citeulike.org/), and Mendeley (www.mendeley.com/) (Zahedi et al., 2014).

As well as some references and research of their efficacy:

  • Blog citations (e.g. ResearchBlogging.org) are worth pursuing as an altmetrics source; …[they] take a great deal more time and thought than microblogging, bookmarking, or downloading, even if the latter activities are not automated (Shema et al., 2014).
  • Impact Story shows the impact of ‘artifacts’ according to a variety of metrics, such as the number of readers, bookmarks, tweets, mentions, shares, views, downloads, blog posts and citations across Mendeley, CiteULike, Twitter, Wikipedia, Figshare, Dryad, ScienceSeeker, PubMed and Scopus. Impact Story is an interesting open-source tool for collecting altmetrics; however, it also has some important limitations, particularly regarding the speed and capacity of its data collection (Zahedi et al., 2014).
  • Mendeley is the major and most useful source of altmetrics data (Zahedi et al., 2014).
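To give a flavour of how such tools expose their data programmatically, here is a minimal sketch that queries Altmetric.com’s public v1 REST API for a single DOI. The endpoint and response field names follow Altmetric’s public documentation as I understand it and may have changed; the DOI below is a placeholder, not a real article:

```python
# Minimal sketch: look up one article's altmetrics by DOI via
# Altmetric.com's public v1 API. Endpoint and field names are based
# on Altmetric's public documentation and may change over time.
import requests

doi = "10.1234/example.doi"  # placeholder; substitute a real DOI
resp = requests.get(f"https://api.altmetric.com/v1/doi/{doi}", timeout=10)

if resp.status_code == 200:
    data = resp.json()
    print("Title: ", data.get("title"))
    print("Tweets:", data.get("cited_by_tweeters_count", 0))
    print("Blogs: ", data.get("cited_by_feeds_count", 0))
    print("Score: ", data.get("score"))
else:
    # Altmetric returns 404 when it has no record for the DOI
    print(f"No altmetric data found (HTTP {resp.status_code})")
```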

Eysenbach (2011) found a correlation between the number of tweets about Journal of Medical Internet Research [JMIR] articles and future citation counts.
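The kind of analysis behind such a finding can be sketched in a few lines. The numbers below are invented for illustration (Eysenbach worked with real JMIR tweet and citation records); the sketch simply checks whether articles tweeted about more often also tend to be cited more often:

```python
# Sketch of correlating early tweet counts with later citation counts,
# in the spirit of Eysenbach (2011). All numbers are invented.
from scipy.stats import spearmanr

tweets_in_first_week = [3, 15, 0, 42, 7, 1, 28, 5, 11, 0]
citations_after_two_years = [4, 18, 1, 35, 9, 2, 22, 6, 10, 3]

# Spearman's rank correlation is robust to the skewed distributions
# typical of both tweet and citation counts.
rho, p_value = spearmanr(tweets_in_first_week, citations_after_two_years)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.4f})")
```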

 

FUTURE SCENE

Some final ideas for an area that is now getting more and more attention:

  • The greatest opportunity for applying these new metrics comes when we move beyond tracking article-level metrics for a particular artifact and start associating all research outputs with the person who created them. We can then underlay the metrics with the social graph of who is influencing whom, and in what ways. Even before the system as a whole changes, new metrics are already available (Buschman and Michalek, 2013); see the sketch after this list.
  • These new metrics may see accelerated, widespread use by authors, editors and librarians as alternatives to the traditional ISI IF, which held a monopoly until recently (Thomaz and Martens, 2009).
  • The proper use and acceptance of these new tools might experience a time lag (Thomaz and Martens, 2009).
  • As Gunther Eysenbach (2011) concludes, these new measures should be treated as a complement to, rather than a replacement for, citation metrics: they are in some cases only weakly correlated with citations and fundamentally measure something different.
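As a rough illustration of the “social graph” idea in the first bullet above, here is a minimal sketch of associating research outputs with the people who created them, so that metrics can be aggregated per researcher rather than per artifact. The data structure, names and numbers are hypothetical, invented for this example; they do not reflect any particular tool’s data model:

```python
# Hypothetical sketch: link research outputs to their creators and
# aggregate metrics per person instead of per artifact. All names,
# identifiers and numbers are invented for illustration.
from collections import defaultdict

outputs = [
    {"id": "doi:10.1234/a1", "type": "article", "creator": "A. Researcher",
     "tweets": 42, "citations": 14},
    {"id": "doi:10.5678/d1", "type": "dataset", "creator": "A. Researcher",
     "downloads": 310, "citations": 2},
    {"id": "slides:xyz", "type": "slides", "creator": "B. Scholar",
     "views": 1900, "citations": 0},
]

# Sum every numeric metric across all outputs belonging to one person.
per_person = defaultdict(lambda: defaultdict(int))
for output in outputs:
    for key, value in output.items():
        if isinstance(value, int):
            per_person[output["creator"]][key] += value

for person, metrics in per_person.items():
    print(person, dict(metrics))
# A. Researcher {'tweets': 42, 'citations': 16, 'downloads': 310}
# B. Scholar {'views': 1900, 'citations': 0}
```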

* The title is inspired by an article in Nature.

References

Bornmann, L., et al. (2012). Diversity, value and limitations of the journal impact factor and alternative metrics. Rheumatology International, 32(7), 1861-1867.

Zahedi, Z., Costas, R., & Wouters, P. (2014). How well developed are altmetrics? A cross-disciplinary analysis of the presence of ‘alternative metrics’ in scientific publications. Scientometrics, 1-23.

Buschman, M., & Michalek, A. (2013). Are alternative metrics still alternative? Bulletin of the American Society for Information Science and Technology, 39(4), 35-39.

Bollen, J., Van de Sompel, H., Smith, J. A., & Luce, R. (2005). Toward alternative metrics of journal impact: A comparison of download and citation data. Information Processing & Management, 41(6), 1419-1440. doi:10.1016/j.ipm.2005.03.024

Shema, H., Bar-Ilan, J., & Thelwall, M. (2014). Do blog citations correlate with a higher number of future citations? Research blogs as a potential source for alternative metrics. Journal of the Association for Information Science and Technology, 65(5), 1018-1027. doi:10.1002/asi.23037

Eysenbach, G. (2011). Can tweets predict citations? Metrics of social impact based on Twitter and correlation with traditional metrics of scientific impact. Journal of Medical Internet Research, 13(4), e123.

Thomaz, S. M., & Martens, K. (2009). Alternative metrics to measure journal impacts: Entering in a “free market” era. Hydrobiologia, 636(1), 7-10.
