Could artificial intelligence assist decision-making in clinical practice?

Image: medical informatics graphic (Bildagentur-PantherMedia/everythingposs)

FAU researchers are investigating the ethical and legal challenges of using artificial intelligence in clinical diagnostics.

Artificial intelligence (AI) is becoming more common in clinical practice. Increased computing power, growing volumes of data and progress in machine learning promise new possibilities in clinical research and patient care. However, these developments also raise ethical and legal questions. How will the role of doctors and patients change if AI is used in diagnostic procedures? Who is responsible for the consequences of AI-assisted processes in clinical contexts? A research project funded by the Federal Ministry of Education and Research and led by ethicist Prof. Dr. Peter Dabrock at Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU) has set out to answer these questions.

AI has already begun to influence many different aspects of our lives, and expectations are particularly high in the area of clinical applications. Computer-based, automated and self-learning systems could potentially analyse patients’ health data to optimise the reliability and cost efficiency of clinical decisions.

Although many AI developments are still at an early stage, the first applications are already being used in clinical practice. Diagnostic imaging procedures were among the first applications to capture the interest of major technology companies. Google presented a deep learning application in 2016 that was designed to detect diseases of the retina. Microsoft is also active in the field of automated analysis of radiology images and is working on a method to improve tumour localisation – a complex and time-consuming process that is potentially subject to human error. IBM is using AI to optimise the choice of cancer therapies.

Applications such as these demonstrate the potential of using AI in clinical practice. ‘We also need to bear in mind the challenges facing us on the way to realising this potential,’ explains Prof. Dr. Peter Dabrock, head of the collaborative project and Chair of Systematic Theology II (Ethics) at FAU. Deep learning applications are often described as black boxes: although they can recognise patterns in vast amounts of data better than humans, their complexity often means that users and even software engineers do not fully understand how they reach their results – an aspect that should not be underestimated in a clinical context, where winning the trust of physicians and patients is paramount.

‘In the field of clinical AI there are at least two key questions: How can automated, self-learning and partially inaccessible systems be designed and integrated into decision-making processes in a way that improves the quality of patient care while maintaining the sovereignty of the user and patient? And above all: who is morally and legally responsible if AI leads to clinical errors in patient care? Such debates are still at an early stage at the international level as well,’ says Prof. Dabrock.

Recommendations for using AI in clinical practice

The project brings together legal experts led by Prof. Dr. Susanne Beck from Leibniz Universität Hannover, technical experts led by Prof. Dr. Sebastian Möller from the German Research Center for Artificial Intelligence (DFKI), and physicians led by Prof. Dr. Klemens Budde from Charité Berlin, who have already used AI-assisted systems in nephrology. Together, they will assess the normative, legal and technical aspects of using AI-based technology to support clinical diagnostics, with the aim of proposing recommendations and regulations for the use of AI technology in clinical applications. The project ‘vALID: Artificial-Intelligence-Driven Decision-Making in the Clinic. Ethical, Legal and Societal Challenges’ is being funded by the Federal Ministry of Education and Research with 750,000 euros over three years.

Further information

Prof. Dr. Peter Dabrock
Phone: +49 9131 8522724
peter.dabrock@fau.de