Surgical digest

The role of artificial intelligence in diagnostic medical imaging and next steps for guiding surgical procedures

Barbara Seeliger MD, PhD, FACS

Digestive Surgeon
Institute of Image-Guided Surgery, IHU-Strasbourg, Strasbourg, France
Department of Digestive and Endocrine Surgery, University Hospitals of Strasbourg, Strasbourg, France
ICube, UMR 7357 CNRS, University of Strasbourg, Strasbourg, France
IRCAD, Research Institute Against Digestive Cancer, Strasbourg, France

Alexandros Karargyris PhD

Machine Learning Researcher
Institute of Image-Guided Surgery, IHU-Strasbourg, Strasbourg, France
ICube, UMR 7357 CNRS, University of Strasbourg, Strasbourg, France

Didier Mutter MD, PhD, FACS, FRSM

Institute of Image-Guided Surgery, IHU-Strasbourg, Strasbourg, France
Department of Digestive and Endocrine Surgery, University Hospitals of Strasbourg, Strasbourg, France
IRCAD, Research Institute Against Digestive Cancer, Strasbourg, France

Funding: This work was supported by the French Agence Nationale de la Recherche (ANR) under the project references ANR-22-CE17-0019-01 and ANR-10-IAHU-02, as well as French state funds managed within the “Plan Investissements d’Avenir”.


Surgery is witnessing the transformation of operating theatres into smart infrastructures with interconnected cutting-edge devices, where numerous highly specialised professionals collaborate for the benefit of patients. Along with the continuously generated large amounts of data and the advances in technology, Surgical Data Science has emerged: an interdisciplinary field that aims to improve the safety and outcomes of modern surgery with artificial intelligence (AI) algorithms integrating multimodal medical data. AI-based assistance systems have the potential to automate, in a time-critical manner, diagnoses from the various medical imaging studies currently interpreted by radiologists. To be clinically trustworthy, such diagnostic and assistive software must match or exceed the accuracy of healthcare professionals, while being fast, cost-effective, widely accessible, and unbiased. Clinically useful integration of smart AI assistance for diagnostic, interventional and intraoperative imaging therefore requires transdisciplinary collaboration between medical and computer science professionals, ideally in shared workspaces.

Diagnostic medical imaging and AI assistance

Clinicians rely on medical imaging to determine treatment strategies and surgical planning. The available technologies provide ever higher resolution, enable three-dimensional reconstructions, and increasingly include functional imaging. Examples of anatomical imaging modalities are ultrasound (US), X-ray, fluoroscopy, computed tomography (CT), and magnetic resonance imaging (MRI). Nuclear medicine hybrid scanning techniques fuse functional imaging, e.g., single-photon emission computed tomography (SPECT) or positron emission tomography (PET), with anatomical imaging to obtain SPECT/CT and PET/CT, which are important tools in cancer assessment.

Although computer-aided diagnosis (CAD) systems have been under development since the 1960s, it is only recently, in the 2010s, that the rise of deep learning (DL) methods enabled them to achieve unprecedented levels of diagnostic accuracy, matching or even exceeding the performance of human experts1. AI applications can provide in-depth analysis of various imaging modalities. Such algorithms can be trained to distinguish between normal and abnormal findings, thus automating the detection of pathologies or lesions at an early stage, monitoring existing diseases, and uncovering information that is invisible to the human eye2. An up-to-date list of commercially available AI tools related to radiology and other imaging domains is accessible on the American College of Radiology Data Science Institute webpage AI Central3.

A recent systematic review and meta-analysis summarised the current evidence on the diagnostic accuracy and value of DL across a variety of medical imaging modalities, showing achievements in disease classification in respiratory medicine, ophthalmology, and breast cancer surgery, the fields with the largest number of reported studies. Only a few studies compared the diagnostic accuracy of expert human clinicians with that of the DL algorithms4. Reliable and objective assessment of AI performance currently lags behind the development of new image processing algorithms. Risk-of-bias assessment tools and reporting guidelines have emerged to address the variability of reporting quality, mitigate validation issues, and improve the suitability of AI studies for implementation in clinical practice5, 6.

Development of computer-aided diagnosis systems: challenges and opportunities

To better understand AI-based computer-aided diagnosis systems, one has to look at the details of their development. The four distinct phases – inception, development, validation, and deployment – have their respective challenges, as well as potential opportunities for healthcare stakeholders to influence the development of CAD systems (Figure 1). Close collaboration between healthcare professionals and computer scientists is therefore required to obtain clinically useful AI solutions.

Figure 1: Challenges and opportunities in the development of computer-aided diagnosis (CAD) systems.

Currently, DL algorithms require data annotation, which is a complex and costly task. Most AI models are trained on manually annotated data with labelling of specific structures, such as normal and neoplastic tissue within an organ, or different organs in 3D-reconstructed preoperative CT and MRI. AI models learn from these annotated training data and can then automate such labelling (i.e., model prediction)7, 8. Improved machine learning techniques, such as semi-supervised and self-supervised algorithms, can reduce this annotation burden without compromising performance. Standardised formats and data models, such as DICOM (Digital Imaging and Communications in Medicine), NIfTI (Neuroimaging Informatics Technology Initiative), and OHDSI (Observational Health Data Sciences and Informatics), provide access to the large amounts of data needed for successful machine learning9-11. In addition, automation of machine learning pipelines (AutoML) and of clinical validation (e.g., MedPerf12) will improve the speed and efficiency of AI development in the near future, with clinician involvement.
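The annotate-train-predict loop described above can be illustrated with a deliberately minimal sketch. Everything here is synthetic and hypothetical: a toy "image" is reduced to a single mean-intensity feature and classified with a nearest-centroid rule, whereas real CAD systems learn far richer representations with deep neural networks.

```python
# Illustrative sketch (synthetic data): annotated examples train a model,
# which then predicts labels for unseen images. All values are invented.

def mean_intensity(pixels):
    """Toy feature: average pixel intensity of an image patch."""
    return sum(pixels) / len(pixels)

def train_centroids(annotated):
    """Learn one feature centroid per label from (pixels, label) pairs."""
    sums, counts = {}, {}
    for pixels, label in annotated:
        f = mean_intensity(pixels)
        sums[label] = sums.get(label, 0.0) + f
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def predict(centroids, pixels):
    """Assign the label whose centroid is closest to the patch feature."""
    f = mean_intensity(pixels)
    return min(centroids, key=lambda label: abs(centroids[label] - f))

# Manually annotated training data: hypo-intense patches labelled "normal",
# hyper-intense patches labelled "lesion" (entirely fictitious).
training = [
    ([10, 12, 11, 9], "normal"),
    ([14, 13, 12, 15], "normal"),
    ([80, 90, 85, 88], "lesion"),
    ([70, 75, 72, 78], "lesion"),
]
model = train_centroids(training)
print(predict(model, [11, 10, 13, 12]))  # prints normal
print(predict(model, [82, 79, 91, 85]))  # prints lesion
```

The point of the sketch is the division of labour: the costly human effort sits entirely in the annotated training pairs, after which labelling new data is automatic.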

Standardisation of imaging studies, image analysis, and structured reporting is key to international comparability and collaboration, to reduce interpretation errors and facilitate computer assistance. However, there are discrepancies in the global availability and use of referral guidelines, radiology quality and safety programmes, and reporting systems13. Smart assistance for time-consuming image analysis and structured reporting is very welcome. Two nuclear medicine imaging experts turned to ChatGPT to test its ability to write a radiology report within a few seconds, answer specific technical questions, and make recommendations, and pointed out its current limitations: convincing-sounding output is not necessarily correct or up to date14.

Integration of AI into diagnostic imaging and interventional practice

In a recently published survey, the majority of the 149 participating radiologists in training expressed interest in using AI, collaborating on AI projects, and integrating AI training into their curriculum15. In a survey of medical students on diagnostic and interventional radiology (IR), one third saw AI as a threat to radiologists, while overall there was belief in the future of IR and a call for its more detailed inclusion in the medical curriculum16. Given the technical improvements and increasingly user-friendly software, the involvement of clinicians in training and of specialists in the CAD development and validation workflow will have a considerable impact on the adoption of such systems.

AI assistance may reduce radiologist workload or enhance clinician performance. An automated AI tool can already differentiate between normal and abnormal X-rays. With a sensitivity of more than 99% for both abnormal and critical X-rays, autonomous reporting of normal X-rays could free up a considerable amount of radiologist time for other tasks17. Early and automatic detection of disease in various imaging modalities can go as far as predicting lung cancer risk 1-6 years into the future, as shown with the Sybil model, which screens low-dose chest CTs while running in the background at radiology reading stations, without radiologist annotations or access to clinical data18. AI-powered software has recently been shown to reduce the number of missed liver metastases in contrast-enhanced CT, which is particularly useful for hard-to-detect small lesions with low contrast19. Another recent study assessed AI-enhanced identification of focal liver lesions with intraoperative ultrasound used to guide open liver resections20. Although the accuracy of liver lesion classification with standard abdominal ultrasound was not reached, and there was no differentiation between lesion types, this proof-of-concept study points towards further research to optimise AI assistance in intraoperative imaging20. Automatic volumetric reconstruction of diagnostic imaging studies serves to create a patient-specific virtual model, allowing identification of normal, variant and pathological anatomical findings, surgical planning and navigation, and thus intraoperative guidance8, 21-23.
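As an illustration of the metrics behind such claims, the sketch below computes sensitivity, specificity, and negative predictive value from a confusion matrix. The counts are invented for illustration and do not come from the cited study; negative predictive value is the figure of interest when deciding whether a "normal" AI call is safe enough for autonomous reporting.

```python
# Illustrative sketch: standard diagnostic accuracy metrics from a
# confusion matrix. All counts below are hypothetical.

def diagnostic_metrics(tp, fn, tn, fp):
    """tp/fn: truly abnormal studies flagged/missed by the AI;
    tn/fp: truly normal studies cleared/falsely flagged.
    Sensitivity: share of abnormal studies correctly flagged.
    Specificity: share of normal studies correctly cleared.
    NPV: share of AI 'normal' calls that are actually normal."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    npv = tn / (tn + fn)
    return sensitivity, specificity, npv

# Hypothetical triage run on 10,000 chest X-rays, 3,000 truly abnormal.
sens, spec, npv = diagnostic_metrics(tp=2994, fn=6, tn=5600, fp=1400)
print(f"sensitivity={sens:.3f} specificity={spec:.3f} NPV={npv:.4f}")
```

With these made-up counts, sensitivity is 99.8% yet specificity is only 80%; the trade-off between missing disease and over-flagging normals is exactly what must be audited before any autonomous-reporting deployment.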

Radiology is currently the most advanced medical specialty in the use of AI applications related to image interpretation for diagnostic imaging performed in the radiology suite. There is a need to transfer the indicators and metrics developed for diagnostic radiology into the operating room (OR) imaging environment, where AI support is currently relatively scarce. Considering the density of data generated by various imaging modalities in ORs, the potential of AI for data management has resulted in the creation of Surgical Data Science21, 24.

In addition, the patient in the OR is positioned differently from routine imaging, and registration of preoperative imaging to the changed intraoperative anatomy is challenging because of these positional changes. Immediate preoperative or intraoperative imaging in the OR is used to align reconstructed routine imaging studies with the surgical position, and to re-align continuously if necessary for intraoperative adaptive diagnostic support.

Manufacturers and suppliers of interventional imaging equipment increasingly provide multimodal integration to complement intraoperative data with information from various preoperative studies, e.g., segmentations of target anatomy and preoperative planning. In the Hybrid Operating Room (OR) of IHU Strasbourg, jointly built with the support of Siemens Healthineers (Figure 2), MRI, CT, cone-beam CT/angiography, and ultrasound imaging complement each other in one OR suite25. AI tools are currently being developed to support precise data matching and image fusion between these modalities. In particular, image fusion approaches enable intraoperative integration of the soft tissue contrast and functional imaging provided by the MRI in the adjacent theatre, thus circumventing the material constraints imposed on the OR by a magnetic field. Dynamic respiratory cycle and organ movement data, acquired at high speed and high resolution with CT, can be extracted and transferred to the angioCT for needle-guided procedures (gating).
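The registration underlying such image fusion can be sketched, in deliberately simplified form, as a search for the spatial transformation that maximises similarity between two images. The toy example below uses synthetic one-dimensional intensity profiles and an exhaustive search over integer shifts; clinical systems instead optimise rigid and deformable 3-D transformations, but the principle of minimising a dissimilarity measure is the same.

```python
# Toy registration sketch (synthetic 1-D intensity profiles): find the
# integer shift that best aligns an "intraoperative" profile with a
# "preoperative" one by minimising the mean squared difference (MSD).

def best_shift(fixed, moving, max_shift):
    """Return the shift s minimising the MSD between fixed[i + s] and
    moving[i] over the overlapping samples."""
    n = len(fixed)
    best_s, best_cost = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        pairs = [(fixed[i + s], moving[i])
                 for i in range(len(moving)) if 0 <= i + s < n]
        cost = sum((f - m) ** 2 for f, m in pairs) / len(pairs)
        if cost < best_cost:
            best_s, best_cost = s, cost
    return best_s

# A "lesion" peak appears two samples earlier intraoperatively, e.g.,
# because the patient is positioned differently than during routine imaging.
preop   = [0, 0, 0, 5, 9, 5, 0, 0, 0, 0]
intraop = [0, 5, 9, 5, 0, 0, 0, 0, 0, 0]
print(best_shift(preop, intraop, max_shift=4))  # prints 2
```

The recovered shift is what a fusion system would apply to overlay the preoperative segmentation onto the live intraoperative view, re-running the optimisation whenever the anatomy moves.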

Figure 2. Hybrid Operating Room (OR) of IHU Strasbourg combining MRI, ultrasound, CT and cone-beam CT (Siemens Healthineers, Forchheim, Germany).

Based on the Hybrid OR and infrastructure projects such as the Surgical Control Tower for intraoperative workflow support to increase safety and efficiency, the OR of the future is emerging24-26.

Although it contains a variety of imaging modalities and other technologies, it is a clutter-free environment focused on information sharing and data analysis, designed to present coordinated and integrated information to each interventional team27. Interactive screens display patient-specific information and imaging studies, as well as AI-assisted access to relevant information such as international standards and guidelines, or a comparison between normal and variant anatomy encountered. The environment enables interactivity between teams and equipment, fusion of different imaging data (e.g., ultrasound and CT, CT and laparoscopy), simulation and planning of procedures, and real-time documentation and analysis (Figures 3 and 4).

Figure 3. The next-generation hybrid operating room integrating AI and robotics for diagnostic imaging, procedure planning and execution.
The OR of the future is envisioned as the centre of a technology ecosystem. Illustrated technologies include advanced interactive digital displays with real-time connectivity and AI analytics, mixed-reality environments, and robotic applications for various interventions, imaging (ultrasound, cone-beam CT, intraoperative CT/MRI, etc.), nursing assistance and sterile instrument management, as well as a predictive logistics supply system with Automated Guided Vehicles (AGV). (Copyright Barbara Seeliger/Carlos Amato; Chengyuan Yang; Niloofar Badihi; IHU Strasbourg and Cannon Design USA)
Figure 4. The next-generation command and control room for the hybrid operating room.
The traditional control room is re-envisioned to include miscellaneous robotic system controls for diagnostic and interventional imaging and surgical procedures (surgical robot, scrub nurses and instrument tables, automated guided vehicles, etc.), interactive screens, advanced procedure guidance (monitoring/planning/analysis) and workflow analysis with real-time AI assistance, simulation with virtual and mixed reality scenarios, and connectivity with the surgical simulation and training site to enable observation and multidisciplinary collaboration in videoconferencing or metaverse environments. (Copyright Barbara Seeliger/Carlos Amato; Chengyuan Yang; Niloofar Badihi; IHU Strasbourg and Cannon Design USA)


Medical decision-making goes beyond a single data modality and beyond narrowly defined tasks such as detecting a nodule in one imaging study. The many complementary sources of information clinicians routinely rely on need to be integrated into healthcare AI applications. Multimodal medical AI21 will be able to process multiple data sources to reproduce, and potentially exceed, the kind of data processing currently performed by clinicians when treating and following up their patients: detecting disease early in high-risk individuals, monitoring disease activity over time, and tailoring treatment indications to the evolution of lesions. Predicting individual treatment outcomes that may require treatment adjustment remains a challenge, and algorithms to improve such predictions, e.g., for radiotherapy28, are very welcome.

The cost of research and development of AI can be high and affect the retail price of a system, making it potentially costly for healthcare stakeholders. Identifying a suitable reimbursement system for medical AI is therefore necessary to achieve wider adoption. Data-sharing agreements such as the one between Mayo Clinic and Google are a clear sign that tech giants seek to use healthcare data to develop AI technologies, and that health research is counting on advances driven by AI29. The creation of large datasets focused on accessibility of information and transparency opens new perspectives as much as it challenges traditional legal frameworks of data protection, which prohibit the use of patient data for commercial purposes. The French “Instituts Hospitalo-Universitaires (IHU)”, where academic clinicians and scientists collaborate at the intersection of the public and private sectors, are well placed to support the medical, technical, and regulatory revolution related to health data research with AI. The broader development of AI algorithms in academic-industry partnerships will go beyond interpreting data separately, towards linking them and discovering pathophysiological patterns, up to creating digital twins.


The availability of software and hardware computing resources as well as large digitised medical datasets has led to a proliferation of data-driven methods (e.g., deep learning), ushering in an era of AI applications in diagnostic and interventional medical imaging. At this stage, closer collaboration between clinicians and engineers in academic centres and industry is needed to build more clinically relevant, trustworthy AI systems. The development of medical AI systems must continue to be driven by economic benefits and meaningful healthcare outcomes, validated with the rigorous methodology of clinical trials.

