PATH has developed a set of guidelines for developing and implementing artificial intelligence applications in healthcare. 


Image: PATH develops guidelines for using AI in healthcare. Photo: Courtesy of Lawrence Monk/Pixabay

“The principles were created to help developers and healthcare professionals assure patients and the public that the emerging use of artificial intelligence in healthcare will always be dedicated to providing safe, equitable and highest quality services,” said Jonathan Linkous, Co-founder and CEO of PATH.

The principles include:

First Do No Harm: A guiding principle for both humans and health technology is that, whatever the intervention or procedure, the patient’s well-being is the primary consideration.

Human Values: Advanced technologies used to deliver healthcare should be designed and operated to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.

Safety: AI systems used in healthcare should be safe and secure for patients and providers throughout their operational lifetime, verifiably so where applicable and feasible.

Design Transparency: The design and algorithms used in health technology should be open to inspection by regulators.

Failure Transparency: If an AI system causes harm, it should be possible to ascertain why.

Responsibility: Designers and builders of all advanced healthcare technologies are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.

Value Alignment: Autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.

Personal Privacy: Safeguards should be built into the design and deployment of healthcare AI applications to protect patient privacy, including personal data. Patients have the right to access, manage, and control the data they generate.

Liberty and Privacy: The application of AI to personal data must not unreasonably curtail people’s real or perceived liberty.

Shared Benefit: AI technologies should benefit and empower as many people as possible.

Human Control: Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives.

Evolutionary: Given constant innovation and change affecting devices and software, as well as advances in medical research, advanced technologies should be designed in ways that allow them to change in conformance with new discoveries.

The principles were developed by members of PATH with additional guidance from other leaders in healthcare and have incorporated parts of existing statements such as the Asilomar AI Principles and the Hippocratic Oath.

PATH is an alliance of stakeholders working together to improve care and build efficiencies using advanced technologies. PATH and its members are working to gain the support of decision makers and the public for the use of advanced technology in healthcare, to move the field beyond research and pilot projects, and to lay out a pathway for the integration and use of advanced technologies in the worldwide ecosystem of medicine.

Source: Company Press Release