The Ethics of LLMs

Ethics of large language models in medicine and medical research

Hanzhou Li, John T Moon, Saptarshi Purkayastha, Leo Anthony Celi, Hari Trivedi


The article discusses the ethical considerations of using large language models (LLMs) in medicine and medical research. LLMs are deep learning models trained to generate new text that closely resembles human-written responses.

Medical practitioners and researchers have been exploring potential applications of LLMs, such as assisting with text generation, summarisation, and correction. Although LLMs have the power to revolutionise medicine and medical research, their use raises crucial ethical issues concerning bias, trust, authorship, equitability, and privacy.

Firstly, hidden biases in LLMs could have serious consequences for patient outcomes in safety-critical domains such as medicine and medical research. LLM outputs could perpetuate biases related to race, sex, language, and culture that are present in their training data.

Secondly, the use of LLMs in medical writing disrupts the traditional notion of trust. LLM outputs are untraceable, difficult to distinguish from the voices of the actual authors, and might be inaccurate.

Thirdly, authorship is critically important in medicine and medical research because there are legal implications for the authors of medical documentation. LLMs, however, have no understanding of what they produce and cannot be held accountable for the integrity of the work.

Fourthly, emerging LLMs come with varying payment models, and the cost of access might widen existing digital divides, disadvantaging those with less academic support or funding.

Finally, the use of LLMs raises ethical concerns relating to the collection, use, and potential dissemination of the data entered into them. The authors argue that an outright ban on LLMs would be shortsighted; instead, it is crucial to establish guidelines for their responsible and effective use.