Charting the Course: Regulating AI in Healthcare – Lessons from the Road

March 1, 2024


Artificial intelligence (AI) has exploded onto the scene, capturing headlines and fuelling debates about its transformative potential. While excitement buzzes around its capabilities, the legal and regulatory landscape struggles to keep pace, and commentary often dwells on the risks.

In this blog, HIN Chief Executive Dr Rishi Das-Gupta and NHS AI experts Dr Haris Shuaib and Dr Hatim Abdulhussein discuss the parallels between traffic regulation and maintaining oversight of these emerging technologies.

The House of Lords Communications and Digital Select Committee published the report of its inquiry into Large Language Models (LLMs) on 2 February 2024. The report highlighted that these models, a powerful subset of AI, showcase not only the immense opportunities AI holds but also the potential “technological turbulence” that may arise as they become more pervasive. Rishi’s contributions to the inquiry’s evidence can be found here.

Navigating the ethical and regulatory landscape surrounding this powerful technology can be daunting. As we steer towards a future intertwined with AI, it’s crucial to establish guardrails that ensure its safe and responsible use. Here, we might draw inspiration from an unexpected source: traffic regulations. While seemingly disparate, regulating AI in healthcare shares remarkable parallels with regulating driving. A colleague in the field commented recently: “If we were as risk averse in road technology as we are in healthcare AI we’d never have let cars on the roads in the city”. Let’s delve into these similarities and explore how they can inform our approach to AI governance.

Shared Ground: Safety, Evolution, and Responsibility

Both driving and AI regulations share three core objectives:

  1. Prioritising Safety: Both aim to minimise harm, whether on the road or in healthcare delivery. Just as reckless driving endangers lives, poorly designed AI can lead to misdiagnoses, treatment errors, and privacy breaches.
  2. Adapting to Change: Traffic laws have evolved alongside technological advancements, from horse-drawn carriages to self-driving cars. Similarly, AI regulations need to be dynamic, anticipating the ever-evolving nature of AI and its integration into healthcare workflows.
  3. Promoting Responsible Conduct: Drivers, companies employing drivers and vehicle manufacturers are all held accountable for their actions; AI developers and users should be too. Fostering a culture of responsibility is essential for ethical and trustworthy AI implementation.

Learning from the Road: A Categorical Framework

Traffic regulations categorise offences based on severity and consequences. While the laws change infrequently (the Road Traffic Act 1988 is now 35 years old), the guidance is updated often (the Highway Code was updated in 2022). The core categories used in the UK are careless driving and dangerous driving. In addition, there are offences defined by their consequences, which apply both to individuals (e.g. causing death by dangerous driving) and to companies operating fleets of vehicles and to manufacturers (corporate manslaughter).

We can adapt this structure to AI in healthcare:

  • Careless / Inconsiderate AI: This covers irresponsible data handling, poor data quality, non-compliance with ethical principles, and failure to meet minimum standards for transparency and explainability. This could include:
    • Using biased datasets without mitigation strategies.
    • Failing to obtain proper informed consent for data collection and use.
    • Developing AI models without adequate documentation and explainability tools.
  • Dangerous AI: This includes biased algorithms, lack of robustness, potential for unintended consequences, and vulnerabilities to manipulation. This could include:
    • AI models perpetuating existing societal biases in healthcare decisions (a minimal check for this kind of disparity is sketched after this list).
    • AI systems susceptible to hacking or manipulation, leading to compromised data or altered outputs.
    • Lack of built-in safety features and safeguards to prevent unintended harm.
  • High-consequence AI: This encompasses situations where AI directly impacts patient outcomes, such as misdiagnoses or inappropriate treatment recommendations. This could include:
    • Clinical decision support systems leading to incorrect diagnoses or treatment plans.
    • AI-powered drug discovery tools generating harmful or ineffective compounds.
    • Algorithmic failures resulting in adverse patient events.
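
To make these categories more tangible for technical teams, the short Python sketch below shows one way a biased model might be surfaced in practice: it compares a model’s sensitivity (true positive rate) across patient subgroups and flags large gaps. The data, group labels and 0.1 threshold are entirely hypothetical assumptions for illustration; a real fairness audit would use validated metrics, larger samples and clinical oversight.

```python
# Minimal sketch: flag large gaps in true positive rate (sensitivity) between
# patient subgroups. All data, group labels and the 0.1 threshold below are
# hypothetical; this is not a validated fairness audit.

from collections import defaultdict

def true_positive_rate_by_group(records):
    """records: iterable of (group, y_true, y_pred) with binary labels."""
    positives = defaultdict(int)   # actual positives seen per group
    hits = defaultdict(int)        # correctly predicted positives per group
    for group, y_true, y_pred in records:
        if y_true == 1:
            positives[group] += 1
            if y_pred == 1:
                hits[group] += 1
    return {g: hits[g] / positives[g] for g in positives if positives[g] > 0}

def flag_disparity(records, max_gap=0.1):
    """Return (flagged, rates); a gap above max_gap suggests biased behaviour."""
    rates = true_positive_rate_by_group(records)
    gap = max(rates.values()) - min(rates.values())
    return gap > max_gap, rates

# Hypothetical model outputs: (demographic group, true label, predicted label)
records = [
    ("group_a", 1, 1), ("group_a", 1, 1), ("group_a", 1, 0), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 1, 0), ("group_b", 1, 1), ("group_b", 0, 0),
]
flagged, rates = flag_disparity(records)
print(rates)    # group_a ≈ 0.67, group_b ≈ 0.33
print(flagged)  # True: the model misses far more cases in group_b
```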

Navigating the Road Ahead: A Proposed Approach

Drawing on the lessons from traffic regulations, we propose a three-pronged approach to governing AI in healthcare:

  1. Establish Clear Principles and Transparency: AI developers and users should adhere to well-defined ethical principles, focusing on aspects like data privacy, fairness, and accountability. Transparency in algorithm development and decision-making is crucial for building trust, and more work is needed to define the level of interpretability and explainability AI developers should meet.
  2. Implement Minimum Codes of Conduct: Regularly updated codes can ensure responsible data storage, development practices, and deployment across various AI domains within healthcare. These codes could address:
    • Data governance and privacy standards.
    • Algorithm development and testing protocols.
    • Deployment guidelines and risk mitigation strategies.
  3. Focus on Consequences and Evidence-based Use: Companies and healthcare institutions should be incentivised to provide evidence demonstrating the safety and responsible use of their AI models and ensure there is adequate monitoring of these technologies when deployed in practice. This encourages a proactive approach to risk mitigation and promotes continuous improvement. This could involve:
    • Requiring pre-market approval for high-risk AI applications.
    • Implementing post-deployment monitoring and evaluation systems (a minimal monitoring sketch follows this list).
    • Holding developers and users accountable for AI-related harms.
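
To illustrate what the post-deployment monitoring bullet above might look like in code, here is a minimal Python sketch that raises an alert when a deployed model’s positive-prediction rate drifts away from the baseline observed at validation. The class, baseline rate, window size and tolerance are illustrative assumptions only; production monitoring would also track input drift, calibration and clinical outcomes.

```python
# Minimal sketch of post-deployment monitoring: compare the positive-prediction
# rate over a sliding window of recent predictions against a baseline set at
# deployment. Baseline, window size and tolerance are hypothetical values.

import random
from collections import deque

class DeploymentMonitor:
    def __init__(self, baseline_rate, window=500, tolerance=0.05):
        self.baseline_rate = baseline_rate   # positive rate seen at validation
        self.window = deque(maxlen=window)   # most recent binary predictions
        self.tolerance = tolerance           # allowed drift before alerting

    def record(self, prediction):
        """Log one binary prediction; return True once drift exceeds tolerance."""
        self.window.append(prediction)
        if len(self.window) < 50:            # too few samples to judge drift
            return False
        current = sum(self.window) / len(self.window)
        return abs(current - self.baseline_rate) > self.tolerance

random.seed(0)
monitor = DeploymentMonitor(baseline_rate=0.12)
# Hypothetical live stream in which the model flags far more patients (~30%)
# than it did at validation (12%).
stream = [1 if random.random() < 0.30 else 0 for _ in range(1000)]
alerts = sum(monitor.record(pred) for pred in stream)
print(alerts > 0)  # True: the positive rate has drifted well above baseline
```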

Charting the Future: The UK’s Potential Leadership

The UK, with its diverse population, centralised healthcare system (the NHS), and robust regulatory framework, is well-positioned to play a leading role in shaping the responsible development and governance of AI in healthcare. By leveraging existing structures like accredited AI testing centres and fostering open dialogue with stakeholders, the UK can pave the road to a future where AI empowers healthcare professionals to deliver better, safer care for all. The analogy to traffic regulation holds here too: we should invest in infrastructure and environment where the need is greatest. Our cars today travel faster and are safer than they were three decades ago, largely because investment has focused on adapting the environment; for example, traffic lights are placed at junctions where the risk of collision is highest or where accidents have historically occurred. Investing in the environment and in monitoring infrastructure will help make the UK the place to come to develop, deploy and build the evidence for safe AI.

Conclusion

The road ahead for AI in healthcare is full of promise, but also potential pitfalls. Humans in healthcare must be in control of its development to ensure it is safe, effective and ethical. By learning from the established framework of traffic regulations and adapting it to the unique context of healthcare, we can develop a comprehensive and flexible approach to governing AI. Let’s work together to ensure that AI becomes a powerful tool for good, shaping a future where technology and ethics go hand-in-hand to improve patient outcomes and advance healthcare for all.

About the Authors:

Dr Rishi Das-Gupta is Chief Executive of the Health Innovation Network (South London), sits on the Boards of the NIHR Applied Research Collaboration (South London), DigitalHealth.London and NodeNs Medical, and is a member of the NHS London Clinical Senate.

Dr Haris Shuaib is Head of Scientific Computing at Guy’s and St Thomas’ NHS Foundation Trust and director of the Fellowships in Clinical AI programme. He is also the founder of Newton’s Tree, a company focused on using AI in clinical practice.

Dr Hatim Abdulhussein is Medical Director of Health Innovation Kent, Surrey and Sussex and National Clinical Director for AI and Digital Workforce at NHSE.

Find out more

Please get in touch to learn more about our work on the development and governance of AI in healthcare.
