Posted on: June 16, 2022
Image manipulation is becoming more prevalent, and editors and authors are increasingly aware of the issue. Studies estimate that up to a fifth of published papers in the life sciences contain digitally manipulated images1. Researchers may make minor alterations for relatively benign reasons, such as boosting contrast or color balance to emphasize a vital point. However, they can also employ image-editing software to create entirely fabricated results.
Linear contrast, brightness, and/or color adjustments are generally permissible as long as they reveal aspects already present in the data and are applied to the entire image rather than to selected parts. Removing or deleting elements from images, as well as masking, duplicating (copying and pasting), adding, selectively enhancing, or shifting elements within images, are all considered unacceptable adjustments.
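This distinction can be made concrete: an acceptable linear adjustment applies one gain and one offset to every pixel of the image, preserving the relative intensities already present in the data. A minimal sketch of such a uniform adjustment (the function name is illustrative):

```python
import numpy as np

def linear_adjust(image, gain=1.0, offset=0.0):
    """Apply the same linear contrast/brightness change to every pixel.

    Because the transform is uniform, relative intensities within the
    image are preserved -- the kind of whole-image adjustment generally
    considered acceptable, as opposed to edits applied only to selected
    regions.
    """
    out = image.astype(np.float64) * gain + offset
    return np.clip(out, 0, 255).astype(np.uint8)

# A uniform adjustment keeps the rank order of pixel intensities intact.
img = np.array([[10, 50], [100, 200]], dtype=np.uint8)
adjusted = linear_adjust(img, gain=1.2, offset=5)
```

Selective edits, by contrast, would apply different transforms to different regions, changing the relationship between parts of the data.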
It is the obligation of editors to provide authors with guidance on how to properly manage image data. While some image alteration is common practice (e.g., image cropping or minor brightness and contrast adjustments), clear rules are essential. Authors must understand the borderline between acceptable and inappropriate manipulation.
Image manipulation is the process of changing the appearance of an image to achieve a desired result. While researchers frequently need to manipulate images to generate publication-quality figures, improper manipulation can result in paper rejection and cast doubt on the legitimacy of the research. While deliberate fraud is unusual, inadvertent manipulation caused by ignorance of the rules is frequently observed.
The use of electronic submissions has become increasingly popular among journals and publishers since 2002. This development, in addition to the widespread distribution and use of image editing software, laid the groundwork for a significant increase in the manipulation of scientific images.
Hwang Woo-Suk, a researcher from South Korea, was found guilty in 2009 of research misconduct, which included malfeasance and the unethical procurement of human eggs. The manipulation of images to show negative staining for a cell-surface marker was one of his less-publicized ethical violations. Min-Jean Yin5, a cancer researcher at Pfizer, was fired in 2016 for duplicating Western blot images. Likewise, Sonia Melo6, a Portuguese scientist, was denied grant funding for the same reason.
Despite heightened scrutiny following such scandals, many image integrity violations are still revealed by readers only after publication. This is partly because identifying unethical image processing can be extremely difficult.
Due to the sensitive nature of the issue, it is critical that all parties involved exercise caution and due diligence in detecting and communicating possible inappropriate or fraudulent manipulation of image data. Authors must understand which image data manipulations are acceptable and must not engage in unacceptable or fraudulent manipulation. Peer reviewers must have the experience needed to evaluate, critically and constructively, the quality and originality of the image data in the manuscripts they review. Journal editors should also question authors and seek additional information as necessary, based on their own analysis and that provided by the peer reviewers.
The Council of Science Editors (CSE) has published guidelines established by Rockefeller University Press that define what constitutes acceptable and unacceptable image manipulation. According to Rockefeller University Press, digital image-related misconduct falls into two types: inappropriate manipulation and fraudulent manipulation. Inappropriate manipulation refers to altering image data in a way that contravenes established guidelines but has no effect on the data's interpretation. Examples include blending images from various microscope fields to generate a composite that appears to be a single field. Fraudulent manipulation is altering image data in such a way that the data can no longer be interpreted correctly. Examples include removing a band from a gel to "fix" a failed negative control, or adding a band to a gel to show the existence of a product that was not actually present.
Additionally, the CSE has developed a procedure for handling guideline violations, which is intended to guide reviewers and journal editors. Editors, according to the CSE, are responsible for establishing guidelines for authors on how to properly handle image data. Because some image manipulation is common practice (image cropping or minor brightness and contrast adjustments), clear guidelines are essential. Authors must be able to distinguish between acceptable and unacceptable manipulation.
Whatever the rationale for submitting altered or duplicated images to a journal, it is important that these be recognized early in the review process so that appropriate action can be taken before publication and, ideally, before peer review. The STM Standards and Technology Committee (STEC) formed a working group to address questions about the automatic detection of image alteration and/or duplication. In December 2021, the working group, which now operates as part of the STM Integrity Hub initiative, released recommendations for handling image integrity issues.
The best-practice recommendations7 provide a systematic method to aid editors and others conducting image integrity checks, whether as pre-publication quality control or as post-publication investigation of image integrity issues in scholarly publications. They also give journal editors recommendations for safeguarding the scholarly record. The recommendations cover figures in research publications and preprints, as well as the raw data underpinning those figures wherever available. They exclude, however, the reanalysis and forensic examination of raw data and large datasets.
The proposed measures are based on the cumulative experience of members of an STM working group of publishing and image integrity specialists, with some policies having been previously published in the members' publications. The guidelines apply to a variety of image discrepancies and are congruent with and supplement those of the Committee on Publication Ethics (COPE). This guidance is meant to support journal editorial policies and aid editors in making these complex judgments, but it does not supersede any existing journal policies.
The Working Group evaluates and updates these recommendations when new challenges arise and best practices in image integrity evolve.
The Office of Research Integrity (ORI) conducts oversight reviews of investigations into claims of research misconduct involving research supported, at least in part, by US Public Health Service organizations. According to Dr. John Dahlberg, Deputy Director of the ORI, the rate at which they are asked to assess research is increasing substantially. Over the years, ORI's Division of Investigative Oversight (DIO) has developed a number of computer-aided methods for reviewing data and other research records. DIO is responsible for establishing whether or not there has been research misconduct in cases where the findings appear to be legitimate. Dr. Dahlberg walks through some of these tools and processes in an article published by Elsevier.
Forensic droplets: Droplets are small desktop applications, included with Adobe Photoshop, that automatically process files dragged onto their icons. They can be downloaded from ORI's website and allow reviewers to quickly inspect the details of a scientific image in Photoshop while reading the article in HTML or PDF format in a web browser.
Adobe Bridge: This software allows you to create image libraries for rapid screening. Images can be sorted by date or file size, and the large thumbnail view allows a detailed examination of each image. This is especially useful when looking for sequential copies of an edited file, as these are likely to be very similar in size and to carry closely spaced time-date stamps.
ImageJ: This application is extremely versatile for making quantitative scans of gel bands, and the DIO finds it particularly useful. It is available as a free download from the National Institutes of Health (NIH) website.
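Gel-band quantification of this kind works by collapsing each lane into a one-dimensional intensity profile and integrating the area under each band's peak. A minimal sketch of that densitometry idea in Python (the data is hypothetical, and real workflows also subtract a background baseline):

```python
import numpy as np

def lane_profile(lane):
    """Collapse a 2-D gel lane (rows x columns) into a 1-D intensity
    profile by summing across the lane width -- the idea behind
    lane-plotting in gel-analysis tools."""
    return lane.sum(axis=1)

def band_volume(profile, start, stop):
    """Integrate the profile over a band's extent (background
    subtraction omitted for simplicity)."""
    return int(profile[start:stop].sum())

# Hypothetical 6x4 lane: a dark band occupies rows 2-3.
lane = np.array([
    [0, 0, 0, 0],
    [1, 1, 1, 1],
    [9, 9, 9, 9],
    [8, 9, 9, 8],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
])
profile = lane_profile(lane)
volume = band_volume(profile, 2, 4)
```

Comparing integrated band volumes across lanes is what makes such scans quantitative rather than purely visual.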
DIO has also detected research misconduct in PowerPoint images by using the 'Reset Picture' feature, which on multiple occasions has uncovered underlying images, several of which had been imported from unrelated published articles. Images in some PDF files viewed in Adobe Acrobat can be reset in the same way.
Despite numerous efforts, no suitable method for the automated detection of image alteration exists today. As a result, visual scrutiny remains the standard for analyzing images in scientific papers and is likely to remain so for the foreseeable future.
Straive is developing an AI/ML-based application to determine whether or not an image has been modified. The application is still undergoing user acceptance testing, but preliminary findings are positive. It examines input images for noise variance and then classifies them as manipulated or non-manipulated based on a confidence score derived by analyzing the error level and evaluating the image attributes.
To train the model, new sample images are added on a continual basis. The application's present accuracy varies between 50 and 60 percent, and increasing this accuracy is an iterative process. Straive is improving the tool's prediction rate by adding more images to its dataset, handling corner-case images, and enhancing the processing of sub-images with suspected problems (e.g., cloning, stretching, partial copying, and rotation). Additionally, the application can detect and localize image modifications, as well as identify the type of manipulation.
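Noise-based detection generally exploits the fact that a region pasted in from another image tends to carry a different noise level than its surroundings. A simplified illustration of that noise-inconsistency idea follows; it is a generic sketch with synthetic data, not Straive's actual algorithm:

```python
import numpy as np

def block_noise(image, block=8):
    """Estimate the noise level of each non-overlapping block.

    For additive i.i.d. noise, adjacent-pixel differences have a
    variance of roughly 2*sigma^2, giving a crude per-block noise
    estimate. Blocks whose noise deviates sharply from the rest of
    the image are candidate spliced or edited regions. Simplified
    illustration only, not any specific commercial tool's method.
    """
    img = np.asarray(image, dtype=np.float64)
    h, w = img.shape
    scores = {}
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            patch = img[y:y + block, x:x + block]
            scores[(y, x)] = np.diff(patch, axis=1).var() / 2.0
    return scores

# Synthetic 16x16 image: uniform low noise everywhere, with a much
# noisier "pasted" region in the top-left quadrant (hypothetical data).
rng = np.random.default_rng(0)
img = rng.normal(128.0, 2.0, size=(16, 16))
img[:8, :8] += rng.normal(0.0, 10.0, size=(8, 8))

scores = block_noise(img, block=8)
suspect = max(scores, key=scores.get)  # block with the most deviant noise
```

Localizing the anomalous block, as here, is the same principle that lets a detector report where a modification occurred, not just that one exists.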
Straive is also evaluating a number of third-party tools, some recommended by existing clients for use with their artwork images or artifacts, taking into account the clients' current level of confidence in each tool. This gives Straive further experience and insights for the development of its own robust model.
Although editorial office staff may not be directly accountable for assessing whether a modified image has been unethically manipulated, they should be prepared to offer an opinion if an editor or author requests it. A member of the editorial office should also be prepared to approach authors to discuss image data manipulation issues as they arise and, in severe cases, to advise authors that their work will no longer be considered for publication because of an ethical violation.
Cutting-edge technology has significantly simplified image editing, which has benefited the publishing sector in a variety of ways. However, it has also made it easier for authors to submit images that contain fraudulent data. As technology advances, this topic will only grow in significance.