Reducing imaging errors in computed tomography
Sharper images during surgery
Computed tomography (CT) is one of the most important imaging methods used in hospitals. It is a quick way of taking images of the inside of the body that doctors can use to see features such as bones, vascular occlusions and implants. During operations, a special type of CT scanner that works during surgery is used. In this method, many 2D images taken from different angles are combined by a computer program to produce a 3D image.
Researchers at FAU have played a considerable role in the development of this technology. However, ensuring that the same image quality is achieved under these conditions as with standard CT is still a challenge. A new project funded by the German Research Foundation (DFG) aims to use the image data collected during CT scans even more effectively.
The first generation of CT images was created in the 1970s using parallel X-rays that penetrated a cross-section of a patient’s body from all angles. Today it is also possible to take 3D images of patients during surgery using a C-arm CT scanner – named after its shape – that compiles several 2D images into a 3D image by rotating once around the patient within a few seconds. C-arm scanners are not as heavy as normal CT scanners and can be used more flexibly. This is important as there is not much space in an operating theatre.
However, as they have a longer capture time – around 20 seconds compared with 60 milliseconds for standard CT – imaging errors can occur, typically when the patient moves while the image is being taken. The result is a 3D image that is blurred or in which edges appear twice.
Researchers at FAU’s Pattern Recognition Lab have now developed a new algorithm to reduce the occurrence of these errors, known as motion artefacts. ‘If you take two X-rays of me, both images will show the same body, just from different angles. If a patient moves during a CT scan, we see inconsistent image pairs,’ explains André Aichert from the Pattern Recognition Lab.
‘We can express these inconsistencies mathematically in what are known as epipolar consistency conditions, allowing us to identify which images do not go together. The algorithm also allows us to reconstruct, after the fact, how the patient must have moved. This gives us clearer, sharper images.’
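The idea of checking projections against each other can be illustrated with a much simpler consistency condition than the epipolar conditions used by the FAU researchers. The sketch below is not their algorithm; it uses a basic parallel-beam property instead: two projections of a static object taken from exactly opposite directions must be mirror images of each other. If the object moves between the two acquisitions, this condition is violated, and the size of the violation flags the motion. All names and values here are illustrative.

```python
import numpy as np
from scipy.ndimage import rotate, shift

def projection(img, theta_deg):
    # Parallel-beam projection: rotate the image, then integrate along columns.
    return rotate(img, theta_deg, reshape=False, order=1).sum(axis=0)

# Simple square phantom standing in for a patient cross-section
phantom = np.zeros((64, 64))
phantom[24:40, 24:40] = 1.0

theta = 30.0
p1 = projection(phantom, theta)
p2 = projection(phantom, theta + 180.0)  # the opposite view

# Consistency condition: p_{theta+180}(s) == p_theta(-s) for a static object.
# The residual is near zero (up to interpolation error) when nothing moved.
residual_static = np.abs(p1 - p2[::-1]).mean()

# Simulate patient motion between the two acquisitions: shift the phantom
# before taking the second projection. The consistency condition now fails.
moved = shift(phantom, (0, 5), order=1)
p2_moved = projection(moved, theta + 180.0)
residual_moved = np.abs(p1 - p2_moved[::-1]).mean()
```

A motion-compensation algorithm can turn this around: treat the patient motion as unknown parameters and search for the motion that, once undone, makes all projection pairs consistent again.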
A team of researchers led by Prof. Dr. Andreas Maier at the Pattern Recognition Lab is investigating how this procedure can be improved in a new project funded by the DFG for the next three years with 286,000 euros. One of their aims is to validate the new algorithms using existing real-world data provided by their project partners: Prof. Dr. Arnd Dörfler from the Department of Neuroradiology at Universitätsklinikum Erlangen and Prof. Rebecca Fahrig from the Radiological Sciences Laboratory at Stanford University.
Prof. Dr. Andreas Maier
Phone: +49 9131 8527883
Phone: +49 9131 8528977