Artificial Intelligence in Veterinary Medicine Raises Ethical Challenges

March 1, 2023

BY: SAMANTHA BARTLETT, DVM

Artificial intelligence (AI) is increasingly being used as a diagnostic tool in human medicine and is spilling over into veterinary medicine. Most commonly, AI is used in diagnostic radiology and radiation oncology, but it is not limited to these fields, and more AI-powered algorithms will likely be implemented in routine clinic-level diagnostics in the future. AI is expected to serve more as a tool and time-saver than as a true diagnostician; trained veterinarians and technicians are still required to interpret the results in light of the patient. 

The paper published in December 2022 in Veterinary Radiology & Ultrasound, entitled “First, do no harm. Ethical and legal issues of artificial intelligence and machine learning in veterinary radiology and radiation oncology,” explores the barriers and challenges facing the use of AI in veterinary medicine. With the growth of AI in medicine come legal and ethical considerations, and veterinary medicine presents challenges for AI that differ from those in human medicine. The most significant differences are the absence of a regulatory process for bringing AI to the veterinary market and the option veterinarians have to perform euthanasia. 

With no regulatory standard for the development and implementation of AI in veterinary medicine, practitioners must rely on the ethics and understanding of the developers and companies releasing the technology on the market. As such, AI still requires a human in the loop to determine whether its results are accurate and consistent with the clinical picture of the patient. Another concern, from a liability standpoint, is who is liable in cases with negative outcomes. The veterinarian holding the VCPR (veterinarian-client-patient relationship) would most likely bear that liability. Is AI considered a consultation or simply another diagnostic tool? If it is a consultation, the veterinarian should obtain the client's consent before using AI; and if so, would the VCPR then shift to the AI?

Harm from the use of AI can affect several aspects of patient care. If the AI does not contribute to patient diagnosis or care, it can add to the client's cost. False positive diagnoses can instigate unnecessary follow-up testing or procedures, including euthanasia, while false negative diagnoses can delay needed care. 

Current FDA guidelines for human medical AI products classify them as Software as a Medical Device (SaMD) and subject them to testing, oversight, and regulation as such. No comparable regulations exist for AI in veterinary medicine, even though the use of AI in radiology is rarely a low-risk situation in terms of the medical condition involved; many situations are high risk, with the interpretation of some findings, such as pulmonary metastases, potentially leading to euthanasia. 

Before the veterinary profession embraces AI products, a regulatory and testing system should be developed, and definitions regarding the VCPR and liability should be established. Importantly, veterinarians need to be trained on the use and limitations of AI so that they can confidently make decisions based on the results of AI evaluations. 
