Don’t Believe the Hype: Promoting AI Ethics and Principles


The following article is an opinion piece written by Dr. Niven R. Narain. The views and opinions expressed in this article are those of the author and do not necessarily reflect the official position of Technology Networks.


These days there is a lot of hype surrounding the use of artificial intelligence (AI) in data-driven drug development. The advances are incredible and widespread, but how much of the information is valid? It’s time to separate fact from fiction, identify some of the issues surrounding the status quo of medical technology, and outline some solutions for the future.

To begin, it’s of paramount importance to recognize humanity, to recognize what is really at stake here, and to remember the reason we use AI in the first place: to find ways to help people by using all the science and technology at our disposal. So if people are important, if our humanity is a priority, we need a holistic understanding of the state of the industry. We are not debating the use of AI to advance science, but identifying the line between hypothesis-driven and data-driven development.

It’s also important that we maintain our humanity by being ethical and honest about what AI can really do. Too often a claim is made about a drug discovery when what has actually been discovered is new biology. This can also be great news, but clarity is critical. There is a lot of work to get from here to the endpoint of an approved drug. A discovery is not truly a discovery until it has been scientifically validated. We are dealing with the science of life and death, and this requires honesty and transparency, as well as an adherence to proper terminology.


By a similar measure, we need to manage expectations. AI is a brilliant new tool for us to use, but we have to be realistic about what it can do. We can use it to move the needle at the front end: to speed up preparation for trials, and to make sure we are running the right trials and getting the right patients involved. But AI will not reduce the time it takes the FDA to review a new drug, and it will not reduce the time needed to commercially launch a drug once it has been approved. We are on ethically shaky ground when we overpromise about how much time and money AI can save; at some point the idealism veers into deception, and that is plainly irresponsible.

We also need to be as honest as possible with patients and their families about what they can expect from these technologies. At the end of the day, what matters most to patients is how AI may alter and improve their own personal experience. It’s of the utmost importance not to overpromise, but we can be optimistic about what improved technologies can bring to patients experientially.

So, what is the takeaway? What can we do to avoid the pitfalls of overpromising and under-delivering? My experience has taught me a great deal, and by identifying the weaknesses in this field of emerging technologies, I think we can unite to strengthen the system and weed out imposters to the benefit of everyone. Here are some concrete next steps.


First up, we need to get on the same page with regard to terminology. This might entail the creation of a centralized AI dictionary of terms, a first step to rein in overstatements and cut the hype before it is disseminated. Words like discovery, validation, toolset, and model have precise definitions that we must agree on. We avoid confusion when we speak the same language and can comprehend each other’s data, findings, and discoveries.

Secondly, our industry needs to come together under the guidance of a new association focused specifically on ethics in AI. It should comprise lawyers, ethicists, physicians, mathematicians, scientists, bankers and patients – real patients. We need input from everyone who participates in the healthcare journey, from early research through doctors and pharmacists to patients. We can’t expect to solve problems we don’t know about; we need feedback to identify issues in the process. A unifying association will make us think about the real challenges surrounding us, and pop the bubble of lofty, unsubstantiated AI claims.

Third, diversity and inclusion are important. Anytime AI is used to build models, the work must be fully inclusive, taking into account all races and genders, around the world and across different socioeconomic levels. Unfortunately, many companies today are guided by tokenism, making only a perfunctory or symbolic effort to give the appearance of gender or racial equality within a study. However, your research is only as good as the resources you employ, and if your study population does not reflect the population you serve, what good are you actually doing?
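
To make this point concrete, here is a minimal sketch of the kind of representation check a modeling team might run before training on a study cohort. It is written in Python with pandas; the function name, columns, and reference proportions are hypothetical illustrations, not a prescribed method.

```python
# A minimal sketch of a cohort-representation check, assuming a pandas
# DataFrame of study participants. The column name and the reference
# proportions below are hypothetical placeholders, not real demographic data.
import pandas as pd

def representation_gaps(cohort: pd.DataFrame, column: str,
                        reference: dict) -> pd.Series:
    """Observed-minus-expected share for each demographic group."""
    observed = cohort[column].value_counts(normalize=True)
    expected = pd.Series(reference)
    # Align on the reference groups; a group entirely absent from the
    # cohort surfaces as a fully negative gap, flagging it for recruitment.
    return observed.reindex(expected.index, fill_value=0.0) - expected

# Hypothetical usage: flag groups under-represented by more than
# five percentage points relative to the population being served.
cohort = pd.DataFrame({"sex": ["F", "F", "F", "F", "M", "M"]})
gaps = representation_gaps(cohort, "sex", {"F": 0.5, "M": 0.5})
print(gaps[gaps < -0.05])  # -> M   -0.166667
```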


Next, and this will not be a popular idea: AI needs some degree of regulation. We need the right types of governance for how we interact within a regulated environment, and a level of regulatory acceptance covering algorithms, the claims we make, how results are communicated, and what is deemed research versus what is anointed a product. Don’t get me wrong, regulation of research can be dangerous, and I am sensitive to that, but once there is a product claim, we need clear direction on how to navigate the process.

Finally, my last point is really the beginning of the strategic path: a focus on education. Data scientists currently graduating from computational biology and AI programs need more holistic and realistic training for the industry that awaits them. The work being done at the Institute for Ethics in AI at Oxford University’s Schwarzman Centre, and at the MIT Schwarzman College of Computing, stands out in the field. Both recognize the watershed moment in the development of medical ethics 40 years ago, and seek to address the ethics and governance of AI from a similarly philosophical stance. We need more programs like these. It’s critical that our next great thinkers understand the importance of remaining in touch with our humanity as we develop socially and ethically responsible technological advances.

About the author:


Dr. Niven R. Narain is CEO of BERG LLC, a clinical-stage, artificial intelligence-powered biotech leveraging its proprietary platform to map disease and revolutionize treatments across oncology, neurology and rare diseases.