
Research misconduct: Why doesn't it end?

Writer: Hirokazu Kobayashi

Updated: Jul 10, 2024

Hirokazu Kobayashi

CEO, Green Insight Japan, Inc.

Professor Emeritus and Visiting Professor, University of Shizuoka

 

The "STAP cell case" involving Haruko Obokata (1983-), which came to light ten years ago, was a sad case that even resulted in the suicide of a related scientist. Researchers are subjected to the screening process of research performance when seeking employment and obtaining research funding. Research institutions also depend on their research performance for their survival. Therefore, they try to improve their research performance, and this is where the difficulty arises. Initially, the fascination with scientific research lies in the thrill of unraveling a mechanism that no one else knows. A newborn child is interested in information that comes to them through sight and sound. As they grow, they try to touch the object. They also want to know what goes on inside. This curiosity drives scientists to unravel the mysteries hidden in natural phenomena. It was a different story when it became a profession: until about 1800, scientists were limited to those who could obtain patronage and the clergy. Later, when researchers became professionals, they were subject to competition, as were artists and literary figures. The recipient evaluates art and literature. On the other hand, it isn't easy to prove the accuracy with which a study reaches its results. This leaves room for fraud.


The scale of research has recently expanded, and it is not unusual to find academic papers with more than 100 co-authors. Even then, experimental results are not always double- or triple-checked. In a co-authored paper, each author is responsible for their own part, and the whole is formed as a collection of these parts. It is therefore difficult to know in advance whether there are irregularities in the part for which any individual is responsible.

Research misconduct includes plagiarism, fabrication, and falsification. Image fraud is easy to identify when the same image is reused as the result of a different experiment. In a survey of forty journals, the incidence of this type of fraud was nearly zero percent in 1995, rose to 5.5 percent in 2006, and has since remained in the 4 percent range. How, then, is misconduct detected when image analysis cannot catch it? The more attractive a research finding, the more researchers in related fields try to unravel the mechanism behind it, and the first step is to reproduce the experimental results. If misconduct is involved, the results cannot be reproduced. Hundreds of cases of research misconduct uncovered in this way can be found on the Internet. Seen from the other side, fraud in research that does not interest other researchers goes unnoticed.

Quantitative analysis requires statistical processing of multiple results, and comprehensive analysis of gene expression likewise involves repeated sampling of material and repeated measurement. Suppose an experiment is repeated ten times and the result is reproduced in eight of them. Do we judge the two non-reproducing runs to be experimentally inadequate and use only the eight for statistical processing? What if the result can be reproduced only six times out of ten? This is where subjectivity enters, creating a gray area called "optimization." Moreover, even when the principal investigator prepares the research plan and methods while laboratory members carry out the experiments, it is not easy to detect such optimization at the level of the individual members. Thus, besides outright misconduct, research leaves room for error.
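To make this gray area concrete, here is a minimal Python sketch with invented numbers, purely for illustration and not drawn from any real study. It shows how declaring two "non-reproducible" replicates inadequate and dropping them before a standard t-test changes the apparent strength of a result:

```python
import numpy as np
from scipy import stats

# Hypothetical replicate measurements (values invented for illustration).
control   = np.array([1.00, 1.10, 0.90, 1.00, 1.20, 0.80, 1.10, 0.90, 1.00, 1.10])
treatment = np.array([1.40, 1.50, 1.30, 1.60, 1.40, 0.90, 1.50, 1.00, 1.40, 1.50])

# Honest analysis: keep all ten replicates.
_, p_all = stats.ttest_ind(treatment, control)

# "Optimized" analysis: the two low replicates (indices 5 and 7) are
# declared "experimentally inadequate" and excluded before testing.
trimmed = np.delete(treatment, [5, 7])
_, p_trimmed = stats.ttest_ind(trimmed, control)

print(f"all 10 replicates:     p = {p_all:.4f}")
print(f"8 'reproducible' only: p = {p_trimmed:.4f}")
# Whether this exclusion is sound quality control or falsification
# depends entirely on the subjective justification for dropping runs.
```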
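Image reuse, noted above as the form of fraud easiest to identify, can also be screened semi-automatically by comparing perceptual fingerprints of published figures. The following is a generic average-hash sketch in Python, assuming the Pillow library and hypothetical file names; it is not the method used in the forty-journal survey:

```python
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Shrink the image to size x size grayscale and threshold each
    pixel at the mean, giving a compact 64-bit fingerprint."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    return sum(1 << i for i, p in enumerate(pixels) if p > mean)

def hamming_distance(h1: int, h2: int) -> int:
    """Number of differing bits between two fingerprints."""
    return bin(h1 ^ h2).count("1")

# Fingerprints that differ by only a few bits suggest the same image
# reused, even after resizing or recompression.
h1 = average_hash("paper1_fig2.png")  # hypothetical file names
h2 = average_hash("paper2_fig4.png")
if hamming_distance(h1, h2) <= 4:
    print("possible duplicated image - flag for manual inspection")
```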

 

Research publications are subject to peer review, usually conducted anonymously by several researchers in related fields who understand the work well. If a reviewer competes with the authors, or conversely is close to them, there is a high probability that they cannot judge fairly. For this reason, authors may name reviewers they wish to exclude or include at the time of submission, but the choice of reviewers, including researchers other than those named, rests with the editor. Research performance is rated highly when work appears in top-ranked journals, but the sheer number of published papers can also serve as an indicator of performance. This has given rise to academic journals published without peer review, the so-called "predatory journals"; more than 1,000 such journals exist, and a list is available online.

During the COVID-19 pandemic, on the other hand, there was an urgent need to deliver results on a life-threatening disease to researchers in related fields as quickly as possible. Peer review usually takes several weeks, so preprints, which are not peer-reviewed, are increasingly being posted online. This type of preprint can be traced back to 1991 and "arXiv" (pronounced "archive"), launched at Los Alamos National Laboratory and now operated by Cornell University. There is also a growing movement, especially in instrumental analysis, to share large amounts of analytical data among related researchers without first organizing it; in Japan, the Ministry of Education, Culture, Sports, Science and Technology (MEXT) promotes this through the "Advanced Research Infrastructure for Materials and Nanotechnology (ARIM)." These activities are called "open science," and researchers are now expected to pick out what they need from this jumble of information.

 

To prevent research misconduct, it is desirable to strengthen the competence of reviewers involved in hiring and promotion, so that they can judge researchers' achievements and abilities through interviews and other means. As for research funding, the total budget of MEXT's "Grants-in-Aid for Scientific Research" should be increased, rather than the number of large individual grants; according to Cabinet Office statistics, Grants-in-Aid for Scientific Research are the most cost-effective form of research funding. Here, too, the reviewers are anonymous researchers in related fields. The review criteria and their weighting should be reconsidered so that evaluation emphasizes the content of the proposal rather than past research performance.




 
 
 
