Endoscopes traversing body cavities such as the colon are routine in medical practice. However, they lack any autonomy. An endoscope operating autonomously inside a living body would require, in real time, a map of the regions it is navigating and its own localization within that map. The goal of EndoMapper is to develop the fundamentals for real-time localization and mapping inside the human body, using only the video stream supplied by a standard monocular endoscope.
In the short term, EndoMapper will bring live augmented reality to endoscopy, for example to show the surgeon the exact location of a tumour that was detected in a tomography, or to provide navigation instructions to reach the exact location where a biopsy should be performed. In the longer term, deformable intracorporeal mapping and localization will become the basis for novel medical procedures, which could include robotized autonomous interaction with live tissue in minimally invasive surgery, or automated drug delivery with millimetre accuracy.
Our objective is to research the fundamentals of non-rigid geometry methods to achieve, for the first time, mapping from gastrointestinal (GI) endoscopies. We will combine three approaches to minimize the risk. Firstly, we will build a fully handcrafted EndoMapper approach based on existing state-of-the-art rigid pipelines, overcoming the non-rigidity challenge with new non-rigid mathematical models for perspective cameras and tubular topology. Secondly, we will explore how machine learning can improve this pipeline: we propose new deep learning models that compute matches along endoscopy sequences and feed them to a visual SLAM (VSLAM) algorithm in which the non-rigid geometry is still hard-coded. Finally, we plan to attempt a more radical end-to-end deep learning approach that incorporates the mathematical models for non-rigid geometry into the training of data-driven learning algorithms.
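As a minimal illustration of the second approach, the sketch below shows how precomputed matches between consecutive frames could feed a geometry-estimation step. This is a hypothetical toy example, not the project's pipeline: a rigid 2-D similarity transform (estimated with Umeyama's closed-form least-squares alignment) stands in for the far richer non-rigid deformation models the project targets, and the "matches" are synthetic points rather than learned correspondences.

```python
import numpy as np

def fit_similarity(src, dst):
    """Least-squares similarity (scale s, rotation R, translation t)
    mapping 2-D points src -> dst, via Umeyama's closed-form method."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    xs, xd = src - mu_s, dst - mu_d           # centered point clouds
    cov = xd.T @ xs / len(src)                # cross-covariance matrix
    U, D, Vt = np.linalg.svd(cov)
    # Reflection guard: force det(R) = +1.
    S = np.diag([1.0, np.sign(np.linalg.det(U @ Vt))])
    R = U @ S @ Vt
    s = np.trace(np.diag(D) @ S) / xs.var(0).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t

# Toy "matches": points in frame k mapped into frame k+1 by a known motion.
rng = np.random.default_rng(0)
pts = rng.uniform(0, 100, size=(50, 2))
theta = 0.05
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
moved = 1.02 * pts @ R_true.T + np.array([3.0, -1.5])

s, R, t = fit_similarity(pts, moved)
print(round(s, 3))  # recovered scale, ~1.02
```

In a real VSLAM front end the matches would come from the learned models and the estimated transform would be a non-rigid deformation, but the structure (matches in, per-frame geometry out) is the same.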