
Legal and Ethical Issues with the Use of AI in Health & Aged Care

Written by Natalie Bamber | Nov 27, 2019

Alison Choy Flannigan, Partner at Hall & Wilcox, delves into the legal and ethical issues arising from the increased use of AI in Health & Aged Care, and questions whether the law can keep up with the pace at which technology is being implemented in the industry.


The development and use of artificial intelligence in health, aged care and biotechnology is creating opportunities and benefits for health care providers and consumers. AI is already being used in medical fields such as diagnostics, e-health and evidence-based medicine, although there appears to be some way to go with the reliability of AI interrogation of free-text data fields in medical records; deciphering doctors' handwritten notes remains an issue.

Hall & Wilcox has been working with international and Australian clients at the cutting edge of innovation who are using AI to detect falls in hospitals and residential aged care facilities, to assess pain using facial recognition software, and in predictive medicine.

A number of legal, regulatory, ethical and social issues have arisen with the use of AI in the health care sector. The issue is: can the law keep up with the pace?


Ethical issues

A number of working groups have been established to discuss ethical issues concerning the use of AI in healthcare.

In 2017, the World Health Organisation and its Collaborating Centre at the University of Miami organised an international consultation on the subject. A theme issue of the WHO Bulletin devoted to big data, machine learning and AI will be published in 2020.

The European Group on Ethics in Science and New Technologies (EGE), an advisory body of the European Commission, published a ‘Statement on Artificial Intelligence, Robotics and Autonomous Systems’ in March 2018.

Whilst Australia is not a member of the EU, Australian therapeutic goods regulation is closely aligned with that of the EU, and less so with that of the USA.

The above statement proposed a set of basic principles and democratic prerequisites, based on the fundamental values laid down in the EU Treaties and in the EU Charter of Fundamental Rights. These principles and our commentary are set out below.

  • Human dignity – The principle of human dignity, understood as the recognition of the inherent human state of being worthy of respect, must not be violated by ‘autonomous’ techniques. It implies that there have to be (legal) limits to the ways in which people can be led to believe that they are dealing with human beings when in fact they are dealing with algorithms and smart machines.

Commentary: Should we be transparent in telling people that they are interfacing with AI?

  • Autonomy – The principle of autonomy implies the freedom of the human being, including the freedom of human beings to set their own standards. Technology must respect the choice of humans as to whether and when to delegate decisions and actions to it.

Commentary: What should we delegate to machines? Surely the best care involves the human touch, and people should come first?

  • Responsibility – Autonomous systems should only be developed and used in ways that serve the global social and environmental good. Applications of AI and robotics should not pose unacceptable risks of harm to human beings.

Commentary: This is consistent with the principle that we should do no harm.

  • Justice, equity and solidarity – AI should contribute to global justice and equal access.

Commentary: It is important to ensure equity of access, so that the benefits of AI are not provided only to those countries or people who can pay for the technology.

  • Democracy – Key decisions should be subject to democratic debate and public engagement.

Commentary: AI should be used in accordance with community expectations and standards.

  • Rule of law and accountability – The rule of law, access to justice and the rights to redress and a fair trial should provide the necessary framework for ensuring the observance of human rights standards and for any AI-specific regulation.

Commentary: There should be adequate compensation for negligence.

  • Security, safety, bodily and mental integrity – Safety and security of autonomous systems include external safety for the environment and users, reliability and internal robustness (e.g. against hacking), and emotional safety with respect to human-machine interaction.

Commentary: The use of AI in health care should be appropriately regulated to ensure that it is safe.

  • Data protection and privacy – Autonomous systems must not interfere with the right to privacy of personal information and other human rights, including the right to live free from surveillance.

Commentary: The protection of privacy and of personal data is important.

  • Sustainability – AI technology must be in line with the human responsibility to ensure the sustainability of mankind and the environment for future generations.


AI and medical device regulation

In Australia, the Therapeutic Goods Act 1989 (Cth) defines ‘therapeutic goods’ and ‘medical devices’ very broadly, particularly if therapeutic claims are made.

Section 41BD of the Act defines ‘medical device’ as:

‘(a) any instrument, apparatus, appliance, material or other article (whether used alone or in combination, and including the software necessary for its proper application) intended, by the person under whose name it is or is to be supplied, to be used for human beings for the purpose of one or more of the following:

  • diagnosis, prevention, monitoring, treatment or alleviation of disease;
  • diagnosis, monitoring, treatment, alleviation of or compensation for an injury or disability;
  • investigation, replacement or modification of the anatomy or of a physiological process;
  • control of conception;

and that does not achieve its principal intended action in or on the human body by pharmacological, immunological or metabolic means, but that may be assisted in its function by such means.’

This includes software and mobile apps that meet the definition of ‘medical devices’.

Mobile apps which are simply sources of information or tools to manage a healthy lifestyle are not medical devices.

Software as a Medical Device (SaMD) is regulated on the basis of risk. SaMD must be included in the Australian Register of Therapeutic Goods before it is supplied in Australia, unless an exemption applies (such as for a clinical trial).

One of the main regulatory hurdles with the registration of AI is that AI is fluid and constantly changing, whereas TGA review of medical devices is currently based upon assessment of a pre-market product at a fixed point in time. The traditional framework of medical device regulation is not designed for adaptive artificial intelligence and machine learning techniques.

On 2 April 2019, the US FDA published a discussion paper, ‘Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) – Discussion Paper and Request for Feedback’. In this framework, the FDA introduces a ‘predetermined change control plan’ in pre-market submissions. The FDA expects manufacturers to commit to transparency and real-world performance monitoring for artificial intelligence and machine learning-based software as a medical device, and to provide periodic updates on what changes were implemented as part of the approved pre-specifications and the algorithm change protocol.


Duty of care, negligence and AI

It will be interesting to see how the law of negligence and duty of care will adapt to this new technology. If a patient suffers an injury arising out of the use of AI, who will be liable? The treating clinician who relied upon the SaMD? The developer of the algorithm? The programmer of the software? Proving causation may be difficult when there is machine learning in a multi-layered, fluid environment in which the machine itself is influencing the output.

Only time will tell, and we are in for interesting times.

With over 25 years of corporate, commercial and regulatory experience, Alison Choy Flannigan has specialised in advising clients in the health, aged care, disability, life sciences and community sectors. Alison leads the firm’s Health & Community industry group. She also provides ongoing support for various industry associations and has enthusiastically taken positions within the industry. She is the Company Secretary for the National Foundation for Medical Research and Innovation and the Asia Pacific Regional Forum Liaison Officer, Healthcare and Life Sciences Law Committee of the International Bar Association. Alison is also on the Australia Chinese Business Council (NSW) Health & Ageing Subcommittee. Alison was previously General Counsel for Ramsay Health Care Limited and was awarded the ACHSM President’s Award for her contribution to and support of the Australian College of Health Service Management. She was formerly Company Secretary of Research Australia and on the risk committee of St Vincent’s Hospital Sydney, as well as on the Institutional Ethics Committees of Northern Sydney Local Health District and South Eastern Sydney Local Health District. Alison is a market leader, having been listed in The Best Lawyers in Australia (and the Australian Financial Review) for Health & Aged Care and Biotechnology since 2008. She has been recognised in the Doyle’s Guide to the Australian Legal Profession as a Leading Health and Aged Care Lawyer in 2017, 2018 and 2019. Alison has been a finalist for the Lawyers Weekly Partner of the Year in Health every year since 2016 and won this prestigious award in 2019. Connect with Alison via email.