The World Health Organization (WHO) urges caution in the use of artificial intelligence (AI)-generated large language model (LLM) tools, so as to protect and promote human well-being, human safety, and human autonomy, and to preserve public health. LLMs, including ChatGPT, Bard, Bert, and many others, are among the most rapidly growing platforms that mimic understanding, processing, and producing human communication. Their rapid public dissemination and expanding use for health-related purposes have generated considerable enthusiasm about their capacity to meet health needs. It is therefore essential to weigh the hazards of using LLMs against their potential to safeguard people's health and reduce inequity; greater access to health information and improved diagnostic capacity in low-resource settings could be crucial for people's well-being.
The WHO is committed to harnessing new technologies, including AI and digital health, for the benefit of human health, and is enthusiastic about the appropriate use of technologies such as LLMs to support healthcare professionals, patients, researchers, and scientists. However, the level of caution normally applied to any new technology is not being exercised consistently with LLMs. Adequate caution requires universal adherence to the core principles of transparency, inclusion, public engagement, expert oversight, and rigorous evaluation. Rapid adoption of unproven systems could lead to errors by healthcare workers, harm to patients, and erosion of trust in AI, undermining the potential long-term benefits of such technologies worldwide.
These concerns call for rigorous oversight if the technologies are to be used safely, effectively, and ethically. The data used to train AI may generate false or misleading information, posing risks to health, equity, and inclusiveness. The WHO advises that, as technology companies work to commercialise LLMs, policymakers must ensure patient safety and protection.
The WHO suggests that these issues be addressed before LLMs are widely used in routine medicine and healthcare, so that all health system stakeholders can benefit. In its guidance on the ethics and governance of AI for health, the WHO stresses the importance of applying ethical principles and good governance. The WHO has identified six guiding principles for AI development: (1) protect autonomy; (2) promote human well-being, safety, and the public interest; (3) ensure transparency and intelligibility; (4) foster accountability and responsibility; (5) ensure equity and inclusiveness; and (6) promote AI that is responsive and sustainable.
Dr. Shivangi Agarwal has completed a Master of Public Health.