Ethical considerations for medical AI
An interesting take on stakeholder engagement and clinician involvement in AI deployment
The Future Ethics of Artificial Intelligence in Medicine: Making Sense of Collaborative Models
Torbjørn Gundersen, Kristine Bærøe
The article explores the ethical challenges and concerns arising from the use of artificial intelligence (AI) and machine learning in medical decision-making. It highlights that while deep learning technologies have the potential to improve diagnostic procedures and treatments, they also pose significant ethical problems such as the risk of error, lack of transparency, and disruption of accountability. The paper discusses the role of medical doctors, AI designers, and other stakeholders in making AI ethically acceptable and proposes four models for integrating their input: the ordinary evidence model, the ethical design model, the collaborative model, and the public deliberation model.
The ordinary evidence model holds that responsible use of AI in medicine is ensured when medical doctors apply established medical expertise and ethical principles. However, this model faces objections, such as the difficulty of attributing accountability and the opacity of AI systems. The ethical design model proposes encoding ethical values directly into algorithms to make them ethically acceptable. While this approach builds ethical considerations into the design, it risks a poor fit between design and medical practice and may overlook the need for ongoing ethical deliberation.
The collaborative model emphasizes collaboration and mutual engagement between AI designers, bioethicists, and medical doctors to align algorithms with medical expertise and ethical principles; meaningful communication among these groups is treated as crucial in both the design and the use of medical AI. Finally, the public deliberation model extends the discussion beyond AI designers, medical experts, and bioethicists to broad public debate on the benefits and costs of AI in medicine.
Overall, the article underscores the need for collaboration and ethical deliberation in the design and use of AI in medicine, and it offers a systematic discussion of different models for addressing the ethical concerns posed by medical AI.
Link to Article: The Future Ethics of Artificial Intelligence in Medicine: Making Sense of Collaborative Models - PubMed (nih.gov)
This article was summarized by an AI tool that uses natural language processing. The tool is not perfect and may produce inaccurate or irrelevant information, but its output is reviewed by the post’s author prior to publishing. If you want to learn more, please refer to the original source cited above.