8 Research on Research Integrity Grants Awarded

ORI awarded eight Research on Research Integrity (RRI) projects through the Extramural Research Program, for a total of $1,135,316 in funding. The RRI grants allow up to $150,000 in direct and indirect funding for projects that foster innovative approaches to research on research integrity and the prevention of research misconduct.

Principal Investigator and Study Title

Mary Walsh, Harvard University: "Image Forensics: Quantitative Assessments of Image Duplication"
Daniel Acuna, Syracuse University: "Automatic Detection, Evaluation, and Tracing of Image and Data Tampering with Humans in the Loop"
Allan Loup, University of Notre Dame: "Deliberative Sessions on the Protection of Research Misconduct Whistleblowers"
Jason Robert, Arizona State University: "Integrity, Identity, and Pluralistic Ignorance: When Scientific Vocation Impedes the Reporting of Wrongdoing"
Suzanne Rivera, Case Western Reserve University: "Fostering Responsible Research Culture through Enhanced Mentoring and Leadership"
Ben Vassar, Oklahoma State University: "Evaluation of Twitter as a Post-Publication Peer Review Mechanism to Identify Responsible Conduct of Research Concerns"
Karen Geren, University of Missouri: "Evaluating Organizational Research Climate to Assess Research Integrity in a University System"
Dennis Gorman, Texas A&M: "Protocol-Publication Discrepancies and p-hacking in Public Health Research"

ABSTRACTS

"Image Forensics: Quantitative Assessments of Image Duplication"
PI: Mary Walsh, Harvard University

We are working to supplement available academic research forensic tools by developing automated approaches to identifying problematic images and assessing overall data similarity. In Phase I of our efforts, our team has been utilizing deep convolutional neural network modeling (ConvNet, CNN) to identify duplicated images in the scientific literature, even when those images were manipulated after duplication. We propose the next phase in CNN algorithm development, in which common features of identity-mapped images are utilized to align underlying raw image data and thereby enable the evaluation of image data flagged as potentially duplicated (with or without subsequent manipulation). Endpoint assessments will include a feature-alignment step that produces a visual map of shared image features linking two potentially duplicated images, and a metric assessment of the extent to which image features align. The technology will incorporate our preliminary work in MATLAB, which demonstrates proof-of-concept for this approach, translated into the CNN environment. We will construct this enhanced platform with the assistance of our novel training and validation synthetic biological image repositories, together with a testing approach utilizing real-world datasets collated from the retracted/corrected scientific literature.
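For readers unfamiliar with the feature-alignment idea, a minimal classical (non-CNN) sketch is shown below: match local features between two suspect images, estimate a shared geometric transform, and report the fraction of features consistent with that transform as a rough alignment metric. This is only an illustration of the concept, not the project's CNN-based pipeline; the function name and thresholds are assumptions made for the example.

```python
import cv2
import numpy as np

def alignment_score(path_a, path_b, min_matches=10):
    """Match local features between two images and estimate how well they align."""
    img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)

    # Detect and describe local features (ORB is a patent-free default).
    orb = cv2.ORB_create(nfeatures=2000)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    if des_a is None or des_b is None:
        return 0.0, None

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)
    if len(matches) < min_matches:
        return 0.0, None

    src = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # Fraction of matched features consistent with a single geometric transform:
    # a crude stand-in for the abstract's "metric assessment" of feature alignment.
    inlier_fraction = float(mask.sum()) / len(mask) if mask is not None else 0.0
    return inlier_fraction, H
```

A high inlier fraction suggests that large portions of the two images share the same underlying content, which is the kind of signal the proposed tools aim to surface for human review.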

We will continue to facilitate the community-based development of image assessment standards and detection tools by sharing (1) our code, (2) our synthetic image datasets for CNN training and validation, and, as available, (3) our benchmark testing datasets containing real-world (retracted/corrected) images. We anticipate that the product of the proposed project will continue to advance forensic image assessment technology, with the goal of assisting journals, funders, and institutional officials with the evaluation of potentially problematic image data, so that they can determine, with confidence, whether potentially duplicated images have been inappropriately reused to represent the outcomes of independent experimentation.

"Automatic Detection, Evaluation, and Tracing of Image and Data Tampering with Humans in the Loop"
PI:  Daniel Acuna, Syracuse University

Advances in image processing technology and the availability of sophisticated photo-editing software have made it far cheaper and easier to manipulate image content convincingly. The need for automatic image-tampering detection has therefore become urgent for the research community. Although detection of resampling and copy-move manipulation has mature solutions, robust and scalable methods for detecting splicing or removal are still lacking, and a functional image-tampering detector cannot be assembled without addressing these last two forms of manipulation. Moreover, because scientific images serve different purposes than general photographs, the criteria for identifying fraud should also differ; to tackle image-tampering detection in science successfully, a detector must target scientific images specifically. In this work, we propose to develop tools and techniques to detect image manipulation and traces of manipulation in data files that are purported to have produced published figures. Importantly, the primary focus of this application is to make the research integrity officer the central component of the process. To this end, we also plan to develop metrics of image manipulation that are better correlated with how humans perceive manipulation.
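For context on the "mature" copy-move case the abstract contrasts with splicing and removal, a deliberately naive detector can be sketched as exact block matching: hash fixed-size pixel blocks and flag any block that recurs at a second location. Real detectors use robust features rather than exact hashes, and uniform background regions will trigger false positives here; the code is a hypothetical illustration only, not the detector proposed in this project.

```python
import hashlib
from collections import defaultdict

import numpy as np
from PIL import Image

def naive_copy_move(path, block=16, stride=8):
    """Flag identical pixel blocks appearing at more than one location in an image."""
    gray = np.asarray(Image.open(path).convert("L"))
    seen = defaultdict(list)
    for y in range(0, gray.shape[0] - block + 1, stride):
        for x in range(0, gray.shape[1] - block + 1, stride):
            patch = gray[y:y + block, x:x + block]
            digest = hashlib.md5(patch.tobytes()).hexdigest()
            seen[digest].append((y, x))
    # Any digest observed at several positions is a candidate copy-move region.
    # Note: flat regions (e.g., uniform backgrounds) collide trivially, which is
    # why practical detectors rely on robust features rather than exact hashes.
    return {d: locs for d, locs in seen.items() if len(locs) > 1}
```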

"Deliberative Sessions on the Protection of Research Misconduct Whistleblowers"
PI: Allan Loup, University of Notre Dame

This project addresses protections for research misconduct whistleblowers and has two goals. The first is to establish an evidence base for research trainees' views concerning optimal policy and practice around protections for research misconduct whistleblowers, using a structured deliberative process to elicit informed and considered views and recommendations on this complex topic. The second goal is to assess this same deliberative process as a technique for training in the responsible conduct of research. This investigation will produce key insights from those best positioned to witness research misconduct to inform advancements in policy and practice; shed light on factors that make it difficult for individuals to report allegations of research misconduct and on how research trainees think through ethical concerns; and serve as a demonstration project for participatory policy development and research integrity training.

"Integrity, Identity, and Pluralistic Ignorance: When Scientific Vocation Impedes the Reporting of Wrongdoing"
PI:  Jason Robert, Arizona State University

Significant research has been devoted to how extrinsic, instrumental gains of the research enterprise can motivate ethical misbehavior, but fewer scholars have examined how intrinsic motivations can lead to ethical malfeasance. Additionally, ethics research has assumed that identifying with intrinsic motivations can help to inoculate researchers against misconduct, as investigators become virtuous by identifying with the altruistic motivations of discovery in the sciences. Our research seeks to challenge that narrative with the driving question: does personal identification with the intrinsic value of the scientific enterprise paradoxically leave researchers more at risk of ethical misbehavior? We predict that despite the virtuous intentions of researchers who identify strongly with the intrinsic motivations of the scientific enterprise, the socio-psychological phenomenon of pluralistic ignorance can lead to an increase in unethical decision making on research teams. We propose to empirically test this hypothesis using both traditional social science survey methodologies and Q-method survey methodologies to examine the explicit and implicit motivations of researchers, their perceptions of colleagues' potential for ethical misconduct, and how likely they are to speak up when encountering ethical misbehavior. Working closely with experts in both ethical formation and organizational identity formation, we will develop recommendations for how to address pluralistic ignorance, group cohesion, and diffusion of responsibility in a way that attends to both the explicit and implicit motivating factors of research misconduct and ethical formation.

"Fostering Responsible Research Culture through Enhanced Mentoring and Leadership"
PI:  Suzanne Rivera, Case Western Reserve University

For more than two decades, universities have been expected to develop programs and curricula for training their constituents about the Responsible Conduct of Research (RCR). Many products have been implemented to teach faculty researchers and trainees about principles of scientific integrity. Despite these efforts, research misconduct rates are increasing and trainees appear to have no greater skills to navigate ethical dilemmas in research than they did in the past. The proposed project seeks to create environments in which well-mentored researchers with greater RCR knowledge and enhanced mentoring and leadership skills increase integrity and reduce misconduct across the University. The objectives are to (1) foster targeted mentoring relationship pairs between senior and junior faculty researchers, (2) facilitate in-depth group discussions about scientific integrity and research team management, (3) provide focused didactic instruction in executive leadership and mentoring skills, (4) cultivate a year-long cohort experience, (5) measure the effectiveness of interventions by collecting quantitative data about RCR knowledge, and mentoring and leadership skills pre- and post-intervention, and (6) produce a sustainable model that can be used at our institution and others. We will certify 20 participants (10 senior researchers, 10 junior researchers) whose labs/research groups will be models for RCR, with the capacity to affect as many as 100 individuals within those groups. Early career researchers and students who train in the labs/groups of participants certified in this program will be better prepared to continue following and teaching responsible research practices throughout their careers.

"Evaluation of Twitter as a Post-Publication Peer Review Mechanism to Identify Responsible Conduct of Research Concerns"
PI:  Ben Vassar, Oklahoma State University

Peer review is a hallmark of the scientific enterprise.  A limited number of case studies have shown that Twitter may be used to detect concerns related to the responsible conduct of research (RCR) earlier than letters to the editor — one traditional form of post-publication peer review (PPPR). However, it is unclear whether Twitter is used on a broad scale to identify concerns about RCR through open, citizen-enabled peer-review of academic research. Thus, there is a critical need to investigate the use of Twitter to identify RCR concerns, since doing so may optimize strategies to incorporate concerns raised on Twitter into the PPPR process.

Our long-term goal is to improve the efficiency of PPPR in identifying RCR concerns and correcting the scientific record. Our overall objective in this proposal is to investigate the use of Twitter as a PPPR mechanism relative to RCR concerns. The rationale for the proposed work is that RCR concerns are a threat to the scientific record. ORI has stated that rates of reported misconduct — one form of RCR concern — are lower than the true incidence, underscoring the need for improved mechanisms for detection. Our central hypothesis, based on our preliminary data, is that Twitter is a mechanism for early detection of RCR concerns. To accomplish the overall objective and test our central hypothesis, we propose the following independent, yet related specific aims.

Specific Aim 1: To characterize the use of Twitter as a system for PPPR by: a) quantifying the prevalence of tweets that identify the reason(s) for article retraction prior to the date of retraction; b) calculating the average number of days from first post-publication criticism until retraction; c) evaluating the ways hashtags and mentions are used in RCR-related tweets; and d) cataloging the reasons for retraction identified by Twitter users.

Specific Aim 2: To conduct a thematic analysis of content from Twitter posts about retracted studies to gain an in-depth understanding of Twitter’s role as a PPPR system.
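One of the simpler quantities in Aim 1 (item b, the average lag from first post-publication criticism to retraction) could be computed along the lines sketched below. The column names and sample rows are invented for illustration and do not reflect the project's actual dataset or tooling.

```python
import pandas as pd

# Hypothetical table: one row per RCR-related tweet about a later-retracted article.
tweets = pd.DataFrame({
    "article_doi": ["10.1000/a", "10.1000/a", "10.1000/b"],
    "tweet_date": pd.to_datetime(["2019-02-01", "2019-03-15", "2020-06-10"]),
    "retraction_date": pd.to_datetime(["2019-11-20", "2019-11-20", "2021-01-05"]),
})

# Earliest critical tweet per article, paired with its retraction date.
first_criticism = tweets.groupby("article_doi").agg(
    first_tweet=("tweet_date", "min"),
    retracted=("retraction_date", "first"),
)
first_criticism["days_to_retraction"] = (
    first_criticism["retracted"] - first_criticism["first_tweet"]
).dt.days

print(first_criticism["days_to_retraction"].mean())  # average lag, as in Aim 1b
```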

"Evaluating Organizational Research Climate to Assess Research Integrity in a University System"
PI:  Karen Geren, University of Missouri

The Office of Inspector General (OIG) has determined that almost one quarter of academic institutions are not in compliance with the National Science Foundation's (NSF's) responsible conduct of research (RCR) requirements. Moreover, fewer than half of research-intensive universities have developed plans that incorporate at least some of the recommended best practices in RCR education. There is a need to identify the organizational structures and processes that affect the integrity of research environments in order to develop and target activities that effectively promote research integrity. Doing so requires collecting reliable data to benchmark baseline conditions, assess areas needing improvement, and tailor activities to improve the research climate.

This proposed cross-sectional study will quantify differences in perceived research climate and measure the heterogeneity or homogeneity of research integrity across and within campuses in a multi-campus system that includes one campus that is a member of the Association of American Universities (AAU) and has a health care system. The Survey of Organizational Research Climate (SOuRCe) will be used to assess dimensions of research integrity climate, including ethical leadership, socialization and communication processes, and policies, procedures, structures, and processes, in order to identify risks to research integrity. The population of 18,400 participants will include all graduate students, postdoctoral fellows, and research personnel.

This research will contribute to our understanding of how research integrity across campuses and within campus subunits is influenced by the research enterprise, and how external factors such as the pressure of maintaining AAU membership influence perceived research climate.

"Protocol-Publication Discrepancies and p-hacking in Public Health Research"
PI:  Dennis Gorman, Texas A&M

The primary research question addressed is whether the information contained in study protocols for behavioral and social science public health research is consistent with the subsequent publications from those research projects. The proposed study has two aims. First, it will examine this question in a sample of 51 protocols, published in 2011 and 2012 and identified through a search of the BMC Public Health webpage. Second, we will use the p-values associated with estimates reported in these publications to generate p-curves. The relative skewness of these curves will be used to assess the extent of p-hacking and reverse p-hacking in those studies in which there is a discrepancy between the primary and secondary outcomes reported in the BMC Public Health protocol and those reported in subsequent publications. The rationale underlying this aim is that studies that fail to adhere to the protocol's description of their primary and secondary outcomes in subsequent publications were likely to have re-run their analyses (using modified or different outcomes) because the protocol-specified analysis yielded no statistically significant differences in the case of main effects (p-hacking) or unwanted statistically significant differences in the case of confounding variables (reverse p-hacking). These modified or flexible analyses would be conducted until statistically significant results emerged (i.e., just below a p-value of 0.05).
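The p-curve logic can be illustrated with a deliberately crude check: collect the significant p-values a set of papers reports and ask whether they pile up near zero (right skew, as expected for true effects) or just under 0.05 (left skew, as expected under p-hacking). The sketch below uses a simple binomial test on the share of significant p-values below .025; it is an assumption-laden illustration, not the analysis plan of the funded project, and the sample p-values are fabricated for the example.

```python
from scipy.stats import binomtest

def simple_p_curve_check(p_values, alpha=0.05):
    """Crude right-skew check on a set of reported significant p-values.

    If the findings reflect real effects, significant p-values should cluster
    near zero, so more than half should fall below alpha / 2.
    """
    sig = [p for p in p_values if p < alpha]
    small = sum(p < alpha / 2 for p in sig)
    # One-sided binomial test: are very small p-values over-represented?
    return binomtest(small, n=len(sig), p=0.5, alternative="greater").pvalue

# Fabricated p-values for illustration only; many sit just under 0.05,
# which a p-curve analysis would treat as suggestive of p-hacking.
print(simple_p_curve_check([0.003, 0.011, 0.021, 0.034, 0.041, 0.048, 0.049]))
```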


Source URL: https://ori.hhs.gov/blog/8-research-research-integrity-grants-awarded