Purpose: A surgical instrument tracking framework, especially a marker-free one, is the key to the visual servoing used to achieve active control of laparoscope-holder robots. This paper presents a marker-free surgical instrument tracking framework based on object extraction via deep learning (DL).

Methods: The surgical instrument joint was defined as the tracking point. Using DL, a segmentation model was trained to extract the end-effector and shaft portions of the surgical instrument in real time. The extracted object was transformed into a distance image by Euclidean Distance Transformation. Next, the point with the maximal pixel value in each of the two portions was defined as that portion's central point, and the intersection of the line connecting the two central points with the plane connecting the two portions was determined as the tracking point. Finally, the object could be rapidly extracted using the masking method, and the tracking point was located frame by frame in a laparoscopic video to track the surgical instrument. The proposed marker-free tracking framework based on object extraction via DL was compared with a DL-based marker-free tracking-by-detection framework.

Results: In experiments on seven in vivo laparoscopic videos, the mean tracking success rate was 100%. The mean tracking accuracy was (3.9 ± 2.4, 4.0 ± 2.5) pixels measured in the u and v coordinates of a frame, and the mean tracking speed was 15 fps. Compared with the reported mean tracking accuracy of the DL-based marker-free tracking-by-detection framework, the mean tracking accuracy of the proposed framework improved by 37% and 23%, respectively.

Conclusion: Accurate and fast tracking of marker-free surgical instruments can be achieved in in vivo laparoscopic videos using the proposed marker-free tracking framework based on object extraction via DL. This work provides important guidance for the application of laparoscope-holder robots in laparoscopic surgery.
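The abstract's tracking-point localization can be illustrated with a minimal sketch. The following Python code is an assumption-laden approximation, not the authors' implementation: it takes two binary masks (end-effector and shaft) that a DL segmentation model is assumed to have produced, computes their central points from Euclidean distance images, and approximates the "plane connecting the two portions" by the inter-region boundary crossed along the line between the two central points.

```python
# Hypothetical sketch of the tracking-point localization described in Methods.
# The DL segmentation step is assumed and not reproduced here; the two inputs
# are binary masks for the end-effector and shaft portions of the instrument.
import numpy as np
from scipy.ndimage import distance_transform_edt


def central_point(mask: np.ndarray) -> tuple[int, int]:
    """Return (u, v) of the pixel with the maximal value in the distance image."""
    dist = distance_transform_edt(mask)  # Euclidean Distance Transformation
    v, u = np.unravel_index(np.argmax(dist), dist.shape)
    return int(u), int(v)


def tracking_point(effector_mask: np.ndarray, shaft_mask: np.ndarray,
                   n_samples: int = 200) -> tuple[int, int]:
    """Locate the instrument joint as the point where the line joining the two
    central points enters the shaft region (an approximation of the abstract's
    'plane connecting the two portions')."""
    p_eff = np.array(central_point(effector_mask), dtype=float)
    p_shaft = np.array(central_point(shaft_mask), dtype=float)
    # Sample the segment from the end-effector centre toward the shaft centre
    # and return the first sample that falls inside the shaft mask.
    for t in np.linspace(0.0, 1.0, n_samples):
        u, v = (1.0 - t) * p_eff + t * p_shaft
        if shaft_mask[int(round(v)), int(round(u))]:
            return int(round(u)), int(round(v))
    # Fallback: midpoint of the two central points if no crossing is found.
    mid = 0.5 * (p_eff + p_shaft)
    return int(round(mid[0])), int(round(mid[1]))
```

In a tracking loop, this localization would be repeated on each frame's masks, which is consistent with the frame-by-frame procedure the abstract describes; the exact definition of the connecting plane is stated only in the full paper.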