Bibliometrics and Research Evaluation

Uses and Abuses

Categories: Nonfiction, Reference & Language, Education & Teaching, Educational Theory, Evaluation, Language Arts, Library & Information Services
Author: Yves Gingras
ISBN: 9780262337663
Publisher: The MIT Press
Publication: September 30, 2016
Imprint: The MIT Press
Language: English

Why bibliometrics is useful for understanding the global dynamics of science but generates perverse effects when applied inappropriately in research evaluation and university rankings.

The research evaluation market is booming. “Ranking,” “metrics,” “h-index,” and “impact factors” are reigning buzzwords. Governments and research administrators want to evaluate everything—teachers, professors, training programs, universities—using quantitative indicators. Among the tools used to measure “research excellence,” bibliometrics—aggregate data on publications and citations—has become dominant. Bibliometrics is hailed as an “objective” measure of research quality, a quantitative measure more useful than “subjective” and intuitive evaluation methods such as peer review, which has been used since scientific papers were first published in the seventeenth century. In this book, Yves Gingras offers a spirited argument against an unquestioning reliance on bibliometrics as an indicator of research quality. Gingras shows that bibliometric rankings have no real scientific validity, rarely measuring what they purport to measure.

Although the study of publication and citation patterns, at the proper scales, can yield insights into the global dynamics of science over time, ill-defined quantitative indicators often generate perverse and unintended effects on the direction of research. Moreover, abuse of bibliometrics occurs when data is manipulated to boost rankings. Gingras looks at the politics of evaluation and argues that using numbers can be a way to control scientists and diminish their autonomy in the evaluation process. Proposing precise criteria for establishing the validity of indicators at a given scale of analysis, Gingras questions why universities are so eager to let invalid indicators influence their research strategies.

More books from The MIT Press

Food Justice
Things That Keep Us Busy
Infectious Behavior
Families at Play
Framing Internet Safety
10 PRINT CHR$(205.5+RND(1)); : GOTO 10
In 100 Years
Modeling and Simulating Software Architectures
The Illusion of Conscious Will
Persuasive Games
Global Carbon Pricing
Paradox
Titans of the Climate
The Hidden Sense
What We Know About Climate Change