{"id":14083,"date":"2022-07-15T14:21:33","date_gmt":"2022-07-15T14:21:33","guid":{"rendered":"https:\/\/www.dimensions.ai\/?post_type=resource&#038;p=14083"},"modified":"2024-10-24T19:15:14","modified_gmt":"2024-10-24T17:15:14","slug":"university-of-jyvaskyla-developing-a-new-metric","status":"publish","type":"resource","link":"https:\/\/www.dimensions.ai\/resources\/university-of-jyvaskyla-developing-a-new-metric\/","title":{"rendered":"University of Jyv\u00e4skyl\u00e4: Developing a New Metric"},"content":{"rendered":"\n<div class=\"wp-block-dimensions-one-column\"><div class=\"has-background-image-13298 dimensions-section\" style=\"background-color:#304E9D;background-size:cover;background-position:center center;background-image:url('https:\/\/dg.test\/wp-content\/uploads\/2022\/02\/dimensions-print-title-image-09-scaled.jpg')\"><div class=\"dimensions-blocks-section-inner container mx-auto md:w-3\/4 pl-8 pr-8 pt-12 pb-4\">\n<h2 class=\"wp-block-heading has-grey-color has-text-color\">\u201cWorking with a database as comprehensive as Dimensions to create a scientifically justified and transparent metric.&#8221;<\/h2>\n\n\n\n<p class=\"has-white-color has-text-color\"><strong>Janne Sepp\u00e4nen, Open Science Centre, University of Jyv\u00e4skyl\u00e4&nbsp;<\/strong><\/p>\n<\/div><\/div><\/div>\n\n\n\n<div style=\"height:4rem\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<h2 class=\"wp-block-heading\">The challenge: Journal prestige and how bias has damaged research<\/h2>\n\n\n\n<p>For a long time, scientific research has been guided by journal publication. But <em>\u2018journal prestige\u2019<\/em>, as Janne Sepp\u00e4nen calls it, means that good, accurate science misses out on the attention it deserves.&nbsp;<\/p>\n\n\n\n<p><em>\u201cWe&#8217;ve maintained a system that selects bad science because we reward each other for getting into high prestige journals. 
Many scientists acknowledge this is a problem, but there\u2019s no consensus on how to address it. I would like science to be fair and as good as possible.\u201d&nbsp;<\/em><\/p>\n\n\n\n<p>Although prestigious science journals struggle to reach even average reliability, and are often considered <a href=\"https:\/\/www.forbes.com\/sites\/madhukarpai\/2020\/11\/30\/how-prestige-journals-remain-elite-exclusive-and-exclusionary\/\">\u2018elite, exclusive and exclusionary\u2019<\/a>, an unconscious bias favours the research they feature and the academics behind it. This skews which research gets read, biases hiring and headhunting processes, and in some cases drives scientists out of the field when their potentially ground-breaking work becomes effectively invisible to the community.<\/p>\n\n\n\n<p>Janne Sepp\u00e4nen at the Open Science Centre, University of Jyv\u00e4skyl\u00e4, Finland, wanted to find a new way to evaluate the influence of a piece of research &#8211; one that allowed everyone to discover good, accurate science, regardless of who wrote it and where it was published.&nbsp;<\/p>\n\n\n\n<p><strong>The need for new metrics<\/strong><\/p>\n\n\n\n<p>Janne is not alone. 21,829 individuals and organisations in 158 countries have now signed <a href=\"https:\/\/sfdora.org\/\">DORA<\/a>, the Declaration on Research Assessment, pledging to improve how the output of scientific research is evaluated.&nbsp;<\/p>\n\n\n\n<p>At one time the preferred method was the <a href=\"https:\/\/journals.plos.org\/plosbiology\/article?id=10.1371\/journal.pbio.1002541\">Relative Citation Ratio (RCR)<\/a>. 
Developed in 2016, its algorithm uses citation rates to measure influence at the article level, comparing the number of citations a target paper receives to a comparison group formed by papers that are cited alongside that target paper.&nbsp;<\/p>\n\n\n\n<p>For Janne, the foundational idea of RCR was great, but its implementation was flawed in several ways. First, distributions of citation rates are highly skewed &#8211; only a small number of papers get cited extremely often and thus have a disproportionate impact on the mean &#8211; yet RCR compared the article&#8217;s citation rate to the arithmetic mean of the comparison group. Worse, RCR did not even use the actual article-level citation counts, but the average citation rates of the journals represented in the comparison group. Citation rates were calculated in terms of citations per calendar year, lumping papers published in January together with papers published in December. Finally, the implementation available at iCite drew on citation data from PubMed only.<\/p>\n\n\n\n<p>Janne says: <em>\u201cIn most fields of research, PubMed, which is used as the source data for RCR in its available implementations, is very sparse. So it doesn&#8217;t index most of the publications. And even if it indexes your target publication, it certainly doesn&#8217;t index all of those that cite it, which means that it&#8217;s missing a lot of data\u2026 I wanted to build a new metric that used this basic idea but drew on article-level citation data that was as inclusive as possible, and calculated the rates at the best available resolution rather than in terms of citations per calendar year. 
Working with a database as comprehensive as Dimensions, I could see how many times a piece had been cited per day since it appeared and then calculate a percentile rank by comparing it to a fair peer group.\u201d<\/em><\/p>\n\n\n\n<p>To find good, relevant sources, the algorithm needed a far greater data set and metrics that covered more than just citations.&nbsp;<\/p>\n\n\n\n<p><em>\u201cSome people think the answer is to do away with metrics altogether, simply letting research speak for itself. But there just isn\u2019t enough time to read everything out there even if you are an expert, so scientists need some direction in what to pay attention to. And there are lots of non-experts &#8211; science journalists, policy-makers, teachers, for example &#8211; who also need some way to judge what science to pay attention to.<\/em><\/p>\n\n\n\n<p><em>We need more metrics, but they need to be better, more diverse metrics. Scientifically justified and collected using transparent methods that show academic impact, but also societal impact and educational impact and so on.\u201d<\/em><\/p>\n\n\n\n<p>This thought process led to a new collaboration with Dimensions.<\/p>\n\n\n\n<p><strong>Bringing missing data to life<\/strong><\/p>\n\n\n\n<p>Using the powerful API and <a href=\"https:\/\/docs.dimensions.ai\/dsl\/\">search language<\/a> from <a href=\"https:\/\/www.dimensions.ai\/sector\/academic-institutions\">Dimensions<\/a>, Janne developed the Co-Citation Percentile Rank (CPR), which uses Dimensions\u2019 extensive database, the largest of its kind in the world, to compare the citation rate of any indexed article with a DOI against the actual citation rates of the articles co-cited alongside it.\u00a0<\/p>\n\n\n\n<p>From here, the CPR system analyses the comparative set of peer articles and produces a percentile ranking that is fair as well as comprehensive. 
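<p>The calculation described above has two ingredients: a citation rate computed at day-level resolution, and a percentile rank of that rate within the article\u2019s co-cited peer group. The following is a minimal sketch in Python, assuming the common mid-rank percentile definition; all function names and numbers are illustrative and are not taken from JYUcite\u2019s actual source code.<\/p>

```python
from datetime import date

# Hypothetical sketch of the two CPR ingredients described above; names
# and numbers are illustrative, not drawn from JYUcite's source code.

def citations_per_day(citations, published, today):
    # Citation rate at day-level resolution, rather than per calendar year.
    days = max((today - published).days, 1)
    return citations / days

def percentile_rank(target_rate, peer_rates):
    # Percentile rank of the target article's rate within the actual,
    # article-level rates of its co-cited peers (mid-rank definition).
    # Because rank depends only on ordering, a few extremely highly cited
    # papers cannot dominate it the way they dominate an arithmetic mean.
    below = sum(r < target_rate for r in peer_rates)
    equal = sum(r == target_rate for r in peer_rates)
    return 100.0 * (below + 0.5 * equal) / len(peer_rates)

# Illustrative only: an article cited 68 times over the 200 days since it
# appeared, compared against a small co-cited peer group.
target = citations_per_day(68, date(2022, 1, 1), date(2022, 7, 20))  # 0.34
peers = [0.05, 0.10, 0.20, 0.34, 0.51]
print(percentile_rank(target, peers))  # prints 70.0
```

<p>The choice of a rank-based statistic over a mean-based one is exactly what addresses the skewed citation distributions noted above.<\/p>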
In Janne\u2019s words this&nbsp;<em>\u201cmakes intuitive sense and allows for a truly journal-independent, research-field normalisation for quantitative comparison of academic impact.\u201d<\/em><\/p>\n\n\n\n<p>For users, the results are exciting. Not only can they find data that was previously missed, they can also gauge its importance quickly.&nbsp;<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"924\" height=\"507\" src=\"https:\/\/www.dimensions.ai\/wp-content\/uploads\/2022\/07\/icite.png\" alt=\"\" class=\"wp-image-14084\" srcset=\"https:\/\/www.dimensions.ai\/wp-content\/uploads\/2022\/07\/icite.png 924w, https:\/\/www.dimensions.ai\/wp-content\/uploads\/2022\/07\/icite-300x165.png 300w, https:\/\/www.dimensions.ai\/wp-content\/uploads\/2022\/07\/icite-768x421.png 768w, https:\/\/www.dimensions.ai\/wp-content\/uploads\/2022\/07\/icite-547x300.png 547w\" sizes=\"auto, (max-width: 924px) 100vw, 924px\" \/><figcaption class=\"wp-element-caption\"><em>Dimensions found 68 citations for a recent sports psychology article, a piece of writing that was cited 88% more often than similar articles on the topic. In iCite, the article does not even appear.<\/em><\/figcaption><\/figure>\n\n\n\n<p><strong>Science for everyone<\/strong><\/p>\n\n\n\n<p>For Janne, the creation of a fairer, more equal landscape for scientific research went hand in hand with a publicly accessible platform that would allow everyone to benefit.&nbsp;<\/p>\n\n\n\n<p><em>\u201cI wanted people to be able to use CPR freely, and we also published an explanation of the metric. The source code is openly available so other universities and funders could build their own versions too.\u201d<\/em><\/p>\n\n\n\n<p>Dimensions agreed. 
Working collaboratively, they created <a href=\"https:\/\/oscsolutions.cc.jyu.fi\/jyucite\/about\/\">JYUcite<\/a>, a publicly available demonstration that supports fairer evaluation of scientific research.&nbsp;<\/p>\n\n\n\n<p>Janne says: <em>\u201cThe team at Dimensions has always been eager to help. You get the feeling that they want academics to use their data to build things and they want to make it possible.\u201d&nbsp;<\/em><\/p>\n\n\n\n<p>Going forward, there is scope to use CPR within universities, in research funding and recruitment decisions, and by science journalists or anyone else seeking focus in the large body of relevant scientific literature on any topic.<\/p>\n\n\n\n<p><strong>Find out what Dimensions can do for you<\/strong><\/p>\n\n\n\n<p>Would you like to learn how <a href=\"https:\/\/www.dimensions.ai\/sector\/academic-institutions\">Dimensions<\/a> can support research within your organisation? <a href=\"https:\/\/www.dimensions.ai\/request-a-demo-or-quote\/\">Get in touch<\/a> and one of our experts will be happy to speak to you.<\/p>\n\n\n\n<div class=\"wp-block-dimensions-cta dimensions-cta flex justify-center dimensions-cta-blue-light\"><a href=\"#\">Request a demo<\/a><\/div>\n","protected":false},"excerpt":{"rendered":"<p>Hear how the University of Jyv\u00e4skyl\u00e4, Finland, has used Dimensions to develop a new metric to support fairer evaluation of scientific 
research.<\/p>\n","protected":false},"featured_media":14089,"menu_order":0,"template":"","resource_type":[4],"resource_audience_segment":[21,24],"class_list":["post-14083","resource","type-resource","status-publish","has-post-thumbnail","hentry","resource_type-case-studies","resource_audience_segment-academic-institutions","resource_audience_segment-researchers"],"acf":{"sidebar_title":"","sidebar_images":[{"image":false,"caption":""},{"image":false,"caption":""}],"sidebar_text":"","sidebar_links":false,"header_image":"","cta":""},"_links":{"self":[{"href":"https:\/\/www.dimensions.ai\/wp-json\/wp\/v2\/resource\/14083","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.dimensions.ai\/wp-json\/wp\/v2\/resource"}],"about":[{"href":"https:\/\/www.dimensions.ai\/wp-json\/wp\/v2\/types\/resource"}],"version-history":[{"count":0,"href":"https:\/\/www.dimensions.ai\/wp-json\/wp\/v2\/resource\/14083\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.dimensions.ai\/wp-json\/wp\/v2\/media\/14089"}],"wp:attachment":[{"href":"https:\/\/www.dimensions.ai\/wp-json\/wp\/v2\/media?parent=14083"}],"wp:term":[{"taxonomy":"resource_type","embeddable":true,"href":"https:\/\/www.dimensions.ai\/wp-json\/wp\/v2\/resource_type?post=14083"},{"taxonomy":"resource_audience_segment","embeddable":true,"href":"https:\/\/www.dimensions.ai\/wp-json\/wp\/v2\/resource_audience_segment?post=14083"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}