Artificial intelligence (AI) has exploded onto the scene, capturing headlines and fuelling debates about its transformative potential. While excitement buzzes around its capabilities, the legal and regulatory landscape struggles to keep pace and often highlights risks.
In this blog, HIN Chief Executive Dr Rishi Das-Gupta and NHS AI experts Dr Haris Shuaib and Dr Hatim Abdulhussein discuss the parallels between traffic regulation and maintaining oversight of these emerging technologies.
The House of Lords Communications and Digital Select Committee published the report of its inquiry into Large Language Models (LLMs) on 2 February 2024. It highlighted that these models, a powerful subset of AI, showcase not only the immense opportunities AI holds but also the potential “technological turbulence” that may arise as they become more pervasive. Rishi’s contributions to the evidence can be found here.
Navigating the ethical and regulatory landscape surrounding this powerful technology can be daunting. As we steer towards a future intertwined with AI, it’s crucial to establish guardrails that ensure its safe and responsible use. Here, we might draw inspiration from an unexpected source: traffic regulations. While seemingly disparate, regulating AI in healthcare shares remarkable parallels with regulating driving. A colleague in the field commented recently: “If we were as risk averse in road technology as we are in healthcare AI we’d never have let cars on the roads in the city”. Let’s delve into these similarities and explore how they can inform our approach to AI governance.
Both driving and AI regulations share three core objectives:
Traffic regulations categorise offences based on severity and consequences. While the law itself changes infrequently (the Road Traffic Act 1988 is now 35 years old), the accompanying guidance is updated often (the Highway Code was updated in 2022). The categories used in the UK are careless driving and dangerous driving. In addition, there are categories tied to consequences, which apply both to individuals (e.g. causing death by dangerous driving) and to companies operating fleets of vehicles and to manufacturers (corporate manslaughter).
We can adapt this structure to AI in healthcare:
Drawing on the lessons from traffic regulations, we propose a three-pronged approach to governing AI in healthcare:
The UK, with its diverse population, centralised healthcare system (the NHS) and robust regulatory framework, is well-positioned to play a leading role in shaping the responsible development and governance of AI in healthcare. By leveraging existing structures like accredited AI testing centres and fostering open dialogue with stakeholders, the UK can pave the road to a future where AI empowers healthcare professionals to deliver better, safer care for all. The analogy to traffic regulation holds here too: we should invest in infrastructure and environment where the need is greatest. Our cars today travel faster and are safer than three decades ago, because investment was focused on adapting the environment. For example, we put traffic lights at junctions where the risk of collision is highest or where there is a history of accidents. Investing in the environment and in monitoring infrastructure will help make the UK the place to come to develop, deploy and build the evidence for safe AI.
The road ahead for AI in healthcare is full of promise, but also potential pitfalls. Humans in healthcare must be in control of its development to ensure it is safe, effective and ethical. By learning from the established framework of traffic regulations and adapting it to the unique context of healthcare, we can develop a comprehensive and flexible approach to governing AI. Let’s work together to ensure that AI becomes a powerful tool for good, shaping a future where technology and ethics go hand-in-hand to improve patient outcomes and advance healthcare for all.
About the Authors:
Dr Rishi Das-Gupta is Chief Executive of the Health Innovation Network (South London). He sits on the boards of the NIHR Applied Research Collaboration (South London), DigitalHealth.London and NodeNs Medical, and is a member of the NHS London Clinical Senate.
Dr Haris Shuaib is Head of Scientific Computing at Guy’s and St Thomas’ NHS Foundation Trust and Director of the Fellowships in Clinical AI programme. He is also the founder of Newton’s Tree, a company focused on using AI in clinical practice.
Dr Hatim Abdulhussein is Medical Director of Health Innovation Kent, Surrey and Sussex and National Clinical Director for AI and Digital Workforce at NHSE.
Find out more
Please get in touch to learn more about our work on the development and governance of AI in healthcare.
Get in touch