During a session at the 2022 European Hematology Association (EHA) Congress, speakers discussed how artificial intelligence (AI) can help advance the principles of ethical medicine — but also how new technologies are being used to undermine the integrity of scientific research.
The session was part of the YoungEHA track at EHA2022, which is targeted toward scientists and clinicians earlier in their careers and aims to go beyond scientific data to offer a forum for discussion of the changing field of hematology.
The first speaker, Elisabeth Bik, PhD, explained how her career has evolved from the world of microbiology to consulting on suspected science misconduct — in other words, she’s now a sleuth on the hunt for potential errors or fraud in research.
Scientific papers are the building blocks of research, Bik noted, and “if those papers contain errors, it would be like building a brick wall where one of the bricks is not very stable…. The wall of science will topple down.”
Science fraud is defined as plagiarism, falsification, or fabrication, but does not include honest errors, Bik explained. The reasons behind cheating can include pressure to publish, the feeling of needing to live up to high expectations after a taste of research success, or even a “power play” in which a professor threatens a postdoctoral researcher with visa revocation if an experiment does not succeed.
Inappropriate image duplication in research papers, which is Bik’s area of expertise, can fall into 1 of 3 categories: simple duplication, repositioning, and alteration. A simple duplication can signify an honest error instead of intentional misconduct, but it is still inappropriate and should be corrected, she noted.
The problem, however, is that these instances are tough to spot. On a slide with 8 images, Bik asked the audience if they could identify any duplication. This reporter felt proud of spotting 1 set of identical images until Bik revealed that there were in fact 2 instances of duplication.
Beyond simple duplication, Bik also showed examples of hematology-related images such as Western blots and bone marrow flow cytometry being repositioned, flipped, and altered to disguise their identity, which indicates more intentional deception on the part of the authors.
Complicating the problem is that journals often have not taken action to retract or correct the paper when Bik has brought her concerns to their attention. She advised the audience to look at research figures with a critical eye, especially when serving as a peer reviewer — if data “look too beautiful,” one has a responsibility to raise the issue with the editor privately.
“If you see something, say something, because this happens,” Bik cautioned.
One insidious use of AI to perpetuate scientific fraud is the presence of artificially generated Western blot images in articles published by “paper mills.” Bik has identified over 600 papers using these fake images, which are produced via generative adversarial networks. Unfortunately, because each generated image is unique, they cannot be caught with duplication-detection software.
The cost to society of scientific misconduct goes beyond the potential for readers to unknowingly base their research on papers containing errors or fraud, Bik explained. The presence of fraud undermines the integrity of science and can be misused by those with a political agenda to claim that all science is flawed.
“We need to believe in science, and we need to do better to make science better,” Bik concluded.
The next speaker, Amin Turki, MD, PhD, of Universitätsklinikum Essen in Germany, expanded on how AI can represent both an opportunity and a challenge to ethics in medicine, specifically hematology.
The use of AI in hematology has advanced exponentially in recent years, Turki explained, as prediction tasks have evolved from risk prediction to bone marrow diagnostics. Although bone marrow diagnostics are time consuming and challenging because of the complexity of bone marrow, with its abundance of cells and structures, AI-assisted approaches hold the potential to transform clinical practice.
Machine learning has identified new phenotypes that predict outcomes in chronic graft-versus-host disease (GVHD), and Turki and colleagues are working on research into predicting mortality after allogeneic hematopoietic cell transplantation, as well as in patients with acute GVHD.
Still, the use of AI in medicine is not without ethical questions, and the number of PubMed-indexed articles on AI and ethics has grown exponentially in recent years. Turki examined AI through the lens of the moral principles of medicine based on the United Nations’ Universal Declaration of Human Rights, published in 1948:
- Autonomy: AI can support patient autonomy through the use of digital agents such as wearable devices.
- Beneficence: AI can improve health by overcoming the limitations of human cognition via improved risk prediction or individualized treatment.
- Nonmaleficence: Toxicity can be reduced through AI-defined dosing and therapy algorithms.
- Justice: Researchers hope AI can be used to reduce the impact of health disparities, but if not done correctly it can increase or perpetuate these disparities (eg, if AI interventions are made accessible only in wealthier countries).
The stakeholders of ethical AI in medicine include the developers, deployers, users, and regulators, and each has unique responsibilities, Turki said. He suggested several ways to overcome ethical challenges, including embedding ethics into the lifecycle of AI development, prioritizing human-centered AI, and ensuring fair representation.
The ongoing digital transformation holds promise for transforming hematology care, Turki concluded, but it also requires us to “never forget the human condition is the basis of our understanding.”