Harrison Clark Rickerbys advise companies on a variety of issues within the MedTech field. This article considers the use of Artificial Intelligence (AI) and medical robotics within the healthcare sector, outlining key trends before discussing potential legal implications resulting from the use of such technology.
Healthcare providers have been using AI for a variety of functions, including:
- Preventative – The use of AI in consumer health technology applications is assisting individuals to proactively maintain a healthier lifestyle, whilst also enabling healthcare practitioners to monitor trends in patient behaviour, resulting in the design of more tailored treatment plans.
- Early detection of diseases – The integration of AI into consumer healthcare wearables is enabling practitioners to monitor and detect conditions, such as heart disease, at an earlier stage when they are more treatable.
- Decision making – Predictive analytics is being used to identify patients at risk of developing certain medical conditions, which supports clinical decision-making, and to prioritise administrative tasks.
- Treatment – Medical robotics have been used in the healthcare sector for more than 30 years and represent a trend which is set to continue.
- Training – AI has enabled medical practitioners to engage with more realistic simulations than were possible with simple computer-driven algorithms. AI training programmes can also learn from previous responses provided by candidates, meaning scenarios can be adjusted to meet individual learning needs.
Medical robots range from those which aid a human surgeon to those which can perform operations on their own. Compared with traditional surgery, robotic-assisted procedures typically involve smaller incisions, reduced blood loss, a lower risk of infection and faster recovery times for patients. Some hospitals are also using robots to disinfect spaces, enabling staff to dedicate more time to patient care.
The use of medical robots in the healthcare sector is a trend which is set to continue in order to improve standards of care across the industry and compensate for staff shortages. It is predicted that robots will soon be perceived as assistants as opposed to tools.
While advancements in the use of AI and medical robotics within MedTech are fast-paced, numerous legal implications have arisen as a result.
- Cybersecurity concerns
The growing use of AI brings with it a need for robust cybersecurity measures, particularly in the healthcare sector, where the collection of sensitive personal data is prevalent.
Check Point Software Technologies’ 2022 Mid-Year Report confirmed that healthcare organisations experienced a 69% increase in cyberattacks worldwide, in comparison to 2021. The opportunities for attack have increased since the coronavirus pandemic, due to the increased use of technology across the sector including new healthcare applications and digital consultations, coupled with the fact that many medical devices continue to run on older operating systems which are more vulnerable to ransomware attacks.
With the UK’s healthcare sector facing an average of 785 cyberattacks per week, dealing with this continuing threat is the next major challenge it faces.
- Accountability

Accountability of AI is a key issue. As AI systems process information with increasing independence, it is becoming more difficult to establish liability where damage or loss arises from an AI system’s decisions, creating uncertainty for parties involved in any potential dispute.
The law is developing in this area, and an established framework of accountability would benefit both patients and healthcare providers. It is likely that a fine balance will need to be struck between maintaining patient rights and limiting the liability of the entities that design, manage, own and distribute intellectual property within AI systems.
- Ethical challenges
Identifying and reducing inequalities within the data used to train AI-based technologies is another key issue. It is becoming increasingly apparent that ethical concerns exist around how AI systems are trained, due to the inherent biases of those who train them. Biased technologies and datasets could ultimately lead to inaccurate conclusions being drawn, which could, for example, affect patient treatment plans.
The concept of inherent bias inter-relates with the notion of AI accountability (outlined above), since an AI system may produce outcomes that its developers did not foresee, or sufficient time and research may not have been devoted to minimising the effects of unconscious bias.
Some potential solutions would be to ensure that:
- A diverse data set is used during the design and development of AI
- Programmers are provided with unconscious bias training.
Regulators’ growing interest in reducing bias in medical technologies and their underlying datasets is reflected in the European Union’s proposed Regulation on Artificial Intelligence (the AI Act) and the UK’s health data strategy.
The use of AI and medical robotics continues to evolve rapidly within MedTech, and it is important for companies to be alive to topics such as cybersecurity, accountability and ethical considerations when implementing AI within this industry.
Companies will also need to keep up to date with the fast-moving legal and regulatory landscape in this area in order to stay competitive and, ultimately, achieve better outcomes for patients.