The growing incorporation of Artificial Intelligence (AI) in healthcare has sparked a debate about legal accountability in the event of medical mishaps. Experts are advocating for updates to product liability law to ensure AI's safe use in medicine. This call for change has been articulated most notably by Professor Kim Hwa of Ewha Womans University School of Law in a recent publication.
Professor Kim Hwa's insights on AI in healthcare
In the paper "Medical Artificial Intelligence and Civil Liability," published in Bioethics Policy Research, a journal of the Ewha Institute for Biomedical Law & Ethics, Professor Kim underscores the increasing reliance on AI for medical diagnosis and imaging. Professor Kim emphasizes the need for clear legal responsibility frameworks, especially as AI begins to relieve medical professionals of certain tasks.
Current legal landscape and the role of human intervention
The prevalent use of AI in medicine typically follows a "human-intervention type" model, in which doctors retain final decision-making authority. Under this model, the doctor would be held liable for an AI-related medical error. However, as AI technology evolves, the extent of doctors' direct intervention could diminish, raising questions about who is accountable when medical outcomes are adverse.
Balancing medical advancement with legal responsibility
Professor Kim argues that incentives are needed to encourage the adoption of medical AI, which can ease the workload of healthcare professionals. At the same time, Professor Kim acknowledges the difficulty of reducing liability for medical errors linked to AI: as AI's role in healthcare becomes more autonomous, establishing who bears responsibility for negative outcomes becomes a critical concern.
The proposal for product liability law application
To address these challenges, Professor Kim suggests applying the Product Liability Act. This law, particularly Article 48, paragraph 2, requires manufacturers to manage risks and ensure safety after a product reaches the market. Applying it to medical AI would place partial liability on manufacturers, holding them accountable for ongoing safety management.
Challenges with software-based medical AI
A major hurdle for this proposal is that AI, which is primarily software-based, is difficult to classify as a product under existing law. The dynamic and evolving nature of AI algorithms, which adapt based on external inputs and continued learning, makes it hard to identify product defects in the traditional sense. Professor Kim therefore recommends legislative changes to bring software-based AI within the scope of product liability law.
Redefining product defects and causation in AI
Professor Kim also points to the need for new rules for identifying defects and establishing causation in AI products. Given AI's unique characteristics, traditional methods of recognizing defects may not be adequate, and legislative efforts are required to adapt these concepts to the AI context so that defects in AI can be identified and linked to adverse outcomes.
As AI continues to revolutionize the medical field, balancing innovation with legal accountability becomes crucial. Professor Kim’s proposals offer a starting point for discussions on how to integrate AI safely into healthcare. These developments will play a pivotal role in shaping the future of AI in medicine, ensuring it serves as a beneficial tool while protecting patients and healthcare professionals alike.