Integration of force feedback into virtual reality based training systems for simulating minimally invasive procedures

Cagatay Basdogan
Laboratory for Human and Machine Haptics and Research Laboratory of Electronics 
Massachusetts Institute of Technology 
Cambridge, MA 02139
basdogan@mit.edu 
1. Abstract

The availability of shared medical image data sets from the National Library of Medicine has resulted in the expansion of research in medical simulation and visualization. Simulation of surgical procedures in virtual environments involves real-time rendering of computer-generated images, physically and physiologically based modeling of soft tissue behavior, and haptic rendering of 3D anatomical organs to simulate the touch and force sensations of users. Research in this area has so far focused mainly on developing 3D geometrical models of the human body from 2D medical images, visualization of internal structures for educational and preoperative surgical planning purposes, and graphical simulation of soft tissue behavior. Simulation of force sensations through force-reflecting robots has not been investigated in detail. However, surgeons and medics frequently use surgical instruments to explore and manipulate internal organs in minimally invasive surgery (MIS). An important aspect of surgical simulation is therefore to provide the user with the estimated reaction forces that arise from "surgical instrument-soft tissue" interactions. In this paper, we discuss the steps for developing a surgical simulation and training system that includes a force-feedback component. 


 
 
2. Motivation

Computer-based simulation has already been accepted as a powerful tool in training. For example, flight simulators have been successfully used to train pilots. We envision that surgical simulators can achieve the same level of success. Analogous to pilots in a flight simulator, surgeons and medical personnel can practice for hours with a surgical simulator and be trained on virtual patients before they treat live ones. Moreover, recent advances in virtual reality technology, including haptic interfaces, and the significant reduction in the cost-to-performance ratio of computers have enabled researchers to perform real-time interactive simulations on desktop computer stations. We believe that the medical community can greatly benefit from these recent developments in areas where "hands-on" experience of medical personnel is crucial. The benefits of computer-based medical simulation and training systems can be itemized as follows: 

  • Computer-based simulation systems can be used to certify medical personnel based on their practical skills in executing a surgical procedure. We believe that an enhanced training environment will be as valuable as "hands-on" experience. It can even be better, because the training course can be customized to each individual being trained, based on that trainee's skills, deficiencies, and progress. 
  • Computer-based simulation systems will reduce the use of animals in the training of medical personnel. In addition to the public's sensitivity to animal-based training, this type of training is expensive, animal anatomy does not represent human anatomy, and only one animal can be used per training session. 
  • Surgeons can master new surgical procedures and instruments with the simulator. Since the user can repeat the same simulations at various levels of difficulty, computer-based training will improve the decision-making skills of surgeons. 
  • Computer-based simulation systems will eventually make a major impact on the quality of health care. As surgeons and surgical techniques improve, hospital stays will become shorter and the cost of health care will be reduced. 
  • High-performance computing and networking technology will link medical centers and universities to each other in the future. Anatomical models of the human body will be shared for use in various surgical simulation applications. Improvements in digital data transfer protocols and surgery simulation systems may lead to remote diagnosis and telesurgery.


 
 
3. Development Steps 

In general, simulating surgical procedures in VEs includes (1) the development of 3D geometrical models of human body parts using medical imaging and computational geometry techniques, (2) the development of physically-based models to simulate surgical interventions and soft tissue behavior, (3) the integration of hardware and software components to develop a multimodal training environment, and (4) the design of simulation scenarios and the development of training measures to demonstrate the practical utility of the developed system in training. 

3.1. Geometrical Modeling of Anatomical Structures

  • Acquisition and formatting of image slices: The Visible Man and Woman data sets from the National Library of Medicine have become standard sources for high-resolution medical images of the human body (Ackerman, 1994). They were created from a healthy male and a healthy female and are available as CT and MRI images and as photographs of actual body slices. The data can be downloaded over the Internet or obtained from other sources available on the market (e.g. the Visible Human CD).
  • Identification of contour lines for each anatomical structure on the 2D image slices includes outlining object shapes on the image slices and saving the data as 3D mesh points. The edges (contours) of anatomical structures are identified one by one by sequentially going through the image slices to construct a mesh of 3D points. The technical details of the segmentation process can be found in Canny (1986) and Kass et al. (1987). 
  • Construction of 3D polygonal surfaces of structures from contour data includes fitting polynomial surface patches to the 3D mesh points (Schroeder et al., 1996). The surface patches are then tessellated into triangles, and the regions of the models where the surgical instruments frequently interact with virtual surfaces are constructed in high resolution.
  • Creating tissue texture maps includes scanning tissue photographs from medical books or other sources to generate computer texture maps. These texture maps are then wrapped around the 3D polygonal surfaces using texture-mapping algorithms to make the computer-generated models look more realistic (Bier and Sloan, 1986). 
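The surface-construction step above can be sketched in code: assuming two adjacent image slices yield closed contours sampled with the same number of points, a band of triangles can be stitched between corresponding points. The function name and the toy contours below are illustrative, not part of any published pipeline:

```python
# Sketch: triangulate between two stacked contours with equal point counts.
# Each contour is a closed loop of (x, y, z) points; names are illustrative.

def stitch_contours(lower, upper):
    """Connect two adjacent contour rings into a band of triangles."""
    assert len(lower) == len(upper)
    n = len(lower)
    triangles = []
    for i in range(n):
        j = (i + 1) % n  # wrap around to close the ring
        # Two triangles per quad between consecutive point pairs
        triangles.append((lower[i], lower[j], upper[i]))
        triangles.append((upper[i], lower[j], upper[j]))
    return triangles

# Example: two square contours one slice apart
lower = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
upper = [(0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1)]
mesh = stitch_contours(lower, upper)
print(len(mesh))  # 8 triangles
```

Real pipelines (e.g. the Visualization Toolkit cited above) handle contours with differing point counts and branching structures, which this sketch does not.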
3.2. Development of Algorithms for Simulating Instrument-Tissue Interactions
  • Computer-aided design of surgical instruments includes generating real-size 3D CAD models of the surgical instruments. Although the models of surgical instruments appear to the user as complex 3D objects, they are in fact assumed to be made of simple geometric primitives to reduce the number of computations and achieve fast collision detection rates. 
  • Collision detection includes the real-time detection of collisions between the simulated models of the surgical instruments and the 3D geometrical models of the organs. Collision detection algorithms return the collision point, the index of the polygon, the depth of penetration, and the normal vector at the collision point. Although collision detection has been extensively studied in computer graphics (Lin, 1993; Hubbard, 1995; Mirtich, 1997), the algorithms need to be slightly modified to work with haptic interfaces (see the sections related to haptic rendering in this book). Following the detection of a collision, the subroutine related to the selected surgical instrument is called to simulate the surgical procedure. The surgical instruments of minimally invasive surgery can be studied in two groups for detecting collisions and simulating their mechanical actions in VEs. The first group, the long thin straight probes mainly used for palpating, puncturing, coagulating, and injecting (e.g. puncture needle, palpation probe, injection needle), can be modeled as a line segment for fast collision computations. The second group, the tools that have opening-closing arms and are used for pulling, clamping, cutting, and gripping soft tissues (such as biopsy and punch forceps, hook scissors, and grasping forceps), can be modeled as a line segment and a polygon for fast detection of collisions (Basdogan et al., 1998). 
  • Graphical and haptic display of soft tissue behavior includes the development of algorithms for simulating the displacement and force response of soft tissues during virtual surgical interactions. Graphical rendering of soft tissues includes the development of techniques for modeling and visually displaying the physically-based behavior of virtual organs. Haptic rendering of soft tissues includes the development of algorithms to convey tool-tissue interaction forces to the user. The three major challenges in developing organ-force models are (1) displaying tissue behavior in a realistic manner, (2) achieving real-time rendering rates, and (3) reflecting stable forces to the user. An integral part of simulating tissue-instrument interactions is the development of techniques for tissue deformation, tearing, cutting, suturing, and piercing. Deformable objects and modeling techniques have been extensively studied in computer graphics (see Gibson and Mirtich, 1997 for a summary). One way to categorize the existing modeling techniques is according to the approach followed to modify the shape of 3D models: geometrically based or physically based. In geometrically-based techniques, the shape of the object is modified through control points or vertices purely by geometric transformations. Physically-based techniques, on the other hand, aim to model the physics involved in the motion and dynamics of the interactions (see the lecture notes of Witkin et al., 1997 on physically-based modeling in computer graphics). In general, geometrically-based techniques are faster and relatively easier to implement, but they do not simulate the underlying mechanics of tissue behavior. Hence, the emphasis is on visual display, and the goal is to make the display appear smoother to the end user. 
Yet, it is obvious that the internal organs of the human body exhibit physically-based behavior when they are manipulated with surgical instruments (see Cover et al., 1993 for energy-minimizing surfaces; Lee et al., 1995 for spring-damper models; Bro-Nielsen, 1997 for finite element models; Basdogan et al., 1998 for physically-based spline models in surgical simulation). Sophisticated physically-based techniques such as finite element models (FEM), although necessary for simulating the dynamics of realistic interactions, are not suitable for fully interactive, real-time simulation of multiple deformable objects in virtual environments due to the current limitations in computational power. Hybrid approaches that take advantage of both physically and geometrically based techniques can be followed to achieve real-time rendering rates. In general, the organ-force models developed for surgical simulation do not have to be highly accurate, but they should be sophisticated enough (1) to handle constraints and boundary conditions, (2) to update topological changes in real time (e.g. cutting requires updating the graphical and haptic databases), and (3) to simulate inertial organ dynamics. More discussion on soft-tissue modeling for surgical simulation can be found in Delingette, 1998. 
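As a minimal sketch of the line-segment collision test described above for the first group of instruments, a segment-versus-triangle intersection in the style of standard computer graphics algorithms (e.g. the Möller-Trumbore test) might look as follows. The function names, vector helpers, and tolerance are illustrative:

```python
# Sketch: segment-vs-triangle collision test for a probe modeled as a line segment.
# Vector helpers use plain tuples; names and the epsilon value are illustrative.

def sub(a, b): return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
def cross(a, b): return (a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0])
def dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def segment_hits_triangle(p0, p1, v0, v1, v2, eps=1e-9):
    """Return the collision point if segment p0-p1 pierces triangle v0-v1-v2, else None."""
    d = sub(p1, p0)                  # segment direction
    e1, e2 = sub(v1, v0), sub(v2, v0)
    h = cross(d, e2)
    a = dot(e1, h)
    if abs(a) < eps:
        return None                  # segment parallel to triangle plane
    f = 1.0 / a
    s = sub(p0, v0)
    u = f * dot(s, h)                # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    q = cross(s, e1)
    v = f * dot(d, q)                # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = f * dot(e2, q)               # parameter along the segment
    if 0.0 <= t <= 1.0:
        return (p0[0]+t*d[0], p0[1]+t*d[1], p0[2]+t*d[2])
    return None

# Probe passing through a triangle lying in the z = 0 plane
hit = segment_hits_triangle((0.2, 0.2, -1.0), (0.2, 0.2, 1.0),
                            (0, 0, 0), (1, 0, 0), (0, 1, 0))
print(hit)  # (0.2, 0.2, 0.0)
```

A haptic loop would run such a test against only the high-resolution regions near the tool, using the spatial data structures cited above (Lin, 1993; Hubbard, 1995) to prune candidate triangles.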
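To make the physically-based modeling discussion concrete, here is a minimal one-dimensional spring-damper tissue node in the spirit of the spring-damper models cited above (Lee et al., 1995), integrated with semi-implicit Euler. The parameter values are illustrative, not measured tissue properties:

```python
# Sketch: a single tissue node anchored to its rest position by a spring-damper,
# pushed by an instrument force and integrated with semi-implicit Euler.
# Stiffness, damping, and mass values are illustrative only.

def simulate_node(k, b, m, f_ext, dt, steps):
    """1-D displacement of a node with stiffness k, damping b, mass m."""
    x, v = 0.0, 0.0
    for _ in range(steps):
        f = f_ext - k * x - b * v   # external push minus spring and damper forces
        a = f / m
        v += a * dt                 # semi-implicit Euler: small dt needed for stability
        x += v * dt
    return x

# A steady 0.5 N poke settles near the static deflection f/k = 0.005 m
x = simulate_node(k=100.0, b=5.0, m=0.01, f_ext=0.5, dt=0.001, steps=5000)
print(round(x, 4))  # 0.005
```

In a full mesh, each vertex would carry such a state and be coupled to its neighbors by springs; the stability constraint on dt is one reason the haptic loop must run at a much higher rate than the graphics loop.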
3.3. Integration of Software and Hardware Components

This includes the integration of graphical and haptic interfaces, and the design of peripheral components of the system. 

  • Hardware Components: The selection and design of hardware components depend on the type of training. The hardware components of a part-task trainer can simply include a computer with a high-end 3D graphics accelerator for visualization of virtual organs and force-feedback devices to simulate haptic sensations. For example, force-reflecting devices and actuators can be embedded into a mannequin in a manner similar to MIS. During the simulations, the user manipulates the surgical instruments that are attached to force-feedback devices to simulate the movements of the surgical instruments and to feel their interactions with the computer-generated anatomical organs. The associated deformations of the organs are displayed on the computer monitor (as they are displayed on the video monitor in MIS) and the reaction forces are fed back to the user through the haptic interfaces. Using this set-up, the trainee can touch and feel normal/abnormal anatomy and learn how to execute a specific procedure. The performance of the trainee can be monitored and recorded during the training session for further analysis. Team (full-task) trainers, in comparison to part-task trainers, require a larger space and enriched sensory feedback to simulate the operating room (OR) environment. Sensors and mechanical actuators (e.g. mechanical lungs, voice output and drug delivery systems, etc.) that are sensitive to the actions of the trainee can be placed inside the mannequin and around the table to create a more realistic OR environment for team training. Alternatively, we can envision a more sophisticated team trainer in which multiple trainees equipped with stereo glasses, head trackers, and exoskeleton-type haptic devices enter an immersive room (e.g. the CAVE, developed at UIC, provides a large-scale virtual environment). Visual images are seamlessly projected onto the walls of the room to display a 3D anatomical model, information charts, and the vital signs of a floating virtual patient. 
This set-up, for example, can be used to teach human anatomy, principles of life support, and diagnosis of medical cases. 
  • Software Components: Efficient software integration of the visual and haptic modalities requires constructing hierarchical classes and databases for storing the geometrical and material properties of objects (Ho et al., 1998), a client-server model (see Chen and Markus, 1998), and programming with multi-threading/multi-processing techniques to separate the visual and haptic servo loops. Each sensory loop has its own requirements and demands CPU time accordingly. Although a graphics update rate of 30 Hz is sufficient for flawless display of visual images, the haptic servo rate should be as high as 1 kHz for stable force interactions. In a multi-threaded structure, a separate thread can be assigned to each sensory modality. Moreover, a priority level for each thread can be set in advance or dynamically during simulations. Threads can share the same database, but their proper synchronization in accessing the shared database is important for achieving fast graphical and haptic rendering rates. An object-oriented approach can be followed to group anatomical models according to the level of detail required in simulations and surgical instruments according to their functionality (Basdogan et al., 1997).
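The multi-threaded separation of servo loops described above can be sketched as follows. This is a toy illustration, not production haptic code: a real system would use a high-priority thread driven by the haptic device, whereas here the 1 kHz and 30 Hz rates are only approximated with sleeps, and all names are illustrative:

```python
# Sketch: haptic (~1 kHz) and visual (~30 Hz) servo loops as separate threads
# sharing one scene state under a lock. Rates are approximated with sleeps.
import threading
import time

class SharedScene:
    def __init__(self):
        self.lock = threading.Lock()
        self.tool_position = 0.0
        self.haptic_updates = 0
        self.frames = 0

def haptic_loop(scene, stop, period=0.001):     # ~1 kHz target
    while not stop.is_set():
        with scene.lock:                        # keep the critical section short
            scene.tool_position += 0.001        # stand-in for force computation
            scene.haptic_updates += 1
        time.sleep(period)

def visual_loop(scene, stop, period=1 / 30):    # ~30 Hz target
    while not stop.is_set():
        with scene.lock:
            scene.frames += 1                   # stand-in for rendering the scene
        time.sleep(period)

scene, stop = SharedScene(), threading.Event()
threads = [threading.Thread(target=haptic_loop, args=(scene, stop)),
           threading.Thread(target=visual_loop, args=(scene, stop))]
for t in threads:
    t.start()
time.sleep(0.5)
stop.set()
for t in threads:
    t.join()
print(scene.haptic_updates > scene.frames)  # True: haptic loop runs far more often
```

Holding the lock only for the brief state update, rather than for the whole rendering or force computation, is what keeps the two loops from blocking each other.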
3.4. Design of Simulation Scenarios and Training

Discussions with medical experts, videotapes of procedures, and preliminary literature surveys are important for designing simulation scenarios that are effective in training. The following objectives should be kept in mind while designing a surgical training system: 

  • Visualization and Exploration: The training environment should display and highlight anatomical landmarks, enable the user to navigate through the 3D anatomical models, and allow 2D cross-sections to be visualized for educational purposes. Haptic exploration can be used to feel the material and surface properties of organs. 
  • Planning and Rehearsal: The training environment should enable the trainee to plan/rehearse the surgery. In addition to visual rehearsal techniques, a haptic device can be programmed to serve as a closed loop controlled robot arm to improve the motor-control skills of the trainee. The trainee (or multiple trainees over a network) may hold the end effector of the device and carefully follow its commands as the device goes through the steps of the surgical plan to rehearse a particular procedure. 
  • Part-Task versus Team Training: The major goals of a part-task training session are (1) to familiarize the trainee with instruments and anatomical landmarks, (2) to teach basic surgical procedures (e.g. suturing, cauterizing, cutting, and manipulating and transferring internal organs), (3) to improve the hand-eye coordination of the trainee, and (4) to enable the trainee to learn from his/her mistakes. The major goals of a team trainer are (1) to improve the team skills of the trainee and (2) to teach crisis management (e.g. equipment malfunction, the consequences of unintentional damage to an organ or a major artery, trauma management, etc.). 
  • Training Effectiveness: This includes the measures (e.g. charts, questionnaires, and recordings) for evaluating the effectiveness of the simulator (Johnston et al., 1996; O'Toole et al., 1997; Weghorst et al., 1998). In general, the training environment should evaluate and improve both the knowledge-based skills (e.g. learning anatomy and anatomical landmarks, learning the steps of a surgical procedure, getting familiar with surgical instruments) and the practical skills (e.g. the ability to perform the procedure, hand-eye coordination) of the trainee. For this purpose, a taxonomy of standards for (a) cognitive, (b) behavioral, and (c) motor performance needs to be identified. 

 
 
4. Current Needs

Further research is required in many areas to develop a cost-effective and efficient surgical trainer:

  • Biomechanical properties of organs: To improve the realism of simulations, we need to characterize the mechanical behavior of tissues and organs. Depending on the type of tissue, this can be accomplished by performing in vitro measurements or by attaching miniature sensors to instruments for in vivo measurement of forces and tissue displacements (see Gupta et al., 1997 and Carter, 1998). Alternatively, a rough estimate of tissue properties can be extracted from CT/MRI images. 
  • Physiologically-based tissue models: The physiological response of the human body (e.g. the changes in vital signs such as heart rate and blood pressure due to bleeding or breathing) needs to be simulated to achieve more realistic simulations. However, the integration of these processes into simulations can be computationally very expensive. For example, the simulation of bleeding is a challenging task because of the complexity of the circulatory system and the physics of viscous fluid flow (Basdogan et al., 1996, 1999). 
  • Fidelity issues: It is still not clear how much fidelity (e.g. how much accuracy in the organ-force models) is required to develop a cost-effective trainer. Realism and real-time display requirements are the two main factors that affect the selection of models and modeling techniques. For example, high-fidelity modeling techniques such as FEM are not absolutely necessary when the real-time requirements and the training goal are carefully considered. A similar debate applies to the selection of 3D geometric organ models. For example, surface models do not provide direct information about tissue layers and may not be the proper choice for simulating all procedures, but they are easier to handle and computationally less expensive to render than volumetric models. Further research on training effectiveness may help resolve some of these issues. 
  • Human-factors studies are needed to understand which sensory cues a surgeon utilizes and how the sensory information is fused during real-life surgery. For example, we observed that visual cues are utilized significantly more than haptic cues during minimally invasive surgery, which suggests that vision requires higher fidelity than haptics in surgical simulations. The users in our simulations easily detected unrealistic visual display of organ deformations, but their ability to discriminate force magnitudes was not as strong. Although soft tissues have nonlinear characteristics, even simple linear force models can be sufficient to satisfy the criterion of "suspension of disbelief" if a photo-realistic visual display is provided to the user. 
  • Physically-based soft tissue models that can be generated automatically from 2D images will be useful. Currently, the development of organ-force models for surgical simulation is achieved in multiple steps (see Section 3). However, if these steps can be combined and integrated into the image segmentation and anatomical modeling stages, the 3D geometric models generated at the end of the process will intrinsically exhibit physically-based characteristics (see Cohen and Cohen, 1993; McInerney and Terzopoulos, 1995). 
  • Further research in hardware development is necessary to improve the design of the electro-mechanical devices used for surgical simulation. Modular force loaders (e.g. active graspers that can be attached to a haptic device), flexible force-reflecting haptic devices, and torque displays are necessary for simulating various surgical procedures. Progress in this area will make a direct contribution to the advancement of research in other emerging areas, including telesurgery and bio-robotics (e.g. using force cues to control and navigate micro probes or robots inside the human body). Micro-electromechanical systems (MEMS) technology is promising and can help in the miniaturization of sensors and actuators for minimally invasive surgery applications. 
  • Multi-threading (or parallel processing) techniques and efficient data structures are important for the successful integration of vision and touch in surgical simulation. It is quite likely that a more efficient simulator for training purposes can be developed by effectively synchronizing the haptic and visual models than by improving the fidelity of each modality. 
  • Proper selection of surgical procedures and careful design of simulation scenarios are necessary for effective training. For this purpose, the technological skills of the engineering team should be blended with the medical expertise of surgeons. At this stage, the procedures of MIS seem to be the most appropriate ones to simulate in VEs. In MIS, unlike in open surgery, the surgeon cannot directly interact with internal organs with bare hands. Although the use of external devices to gain access to internal organs reduces the haptic feedback during MIS, it makes these procedures relatively easier to simulate in VEs: we create computational and mechanical models of the instruments instead of the hands, and the instruments are less dexterous and therefore easier to simulate. 
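As an aside on the fidelity and human-factors items above, the simple linear force model they mention can be sketched as a penalty-based law: a reaction force proportional to penetration depth, directed along the surface normal. The stiffness value below is illustrative, not a measured tissue property:

```python
# Sketch: linear penalty-based haptic force, F = k * depth along the unit
# surface normal. The stiffness k is an illustrative value, not tissue data.

def penalty_force(depth, normal, k=200.0):
    """Force (N) pushing the tool out of the surface; zero when not in contact."""
    if depth <= 0.0:
        return (0.0, 0.0, 0.0)          # no penetration, no force
    return (k * depth * normal[0], k * depth * normal[1], k * depth * normal[2])

# 2 mm penetration into a surface facing +z yields roughly 0.4 N along +z
f = penalty_force(0.002, (0.0, 0.0, 1.0))
print(f)
```

Such a model ignores tissue nonlinearity and damping, but, as argued above, it can still sustain the "suspension of disbelief" when paired with a convincing visual display.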
References 

1. Ackerman, M. J., 1994, "The Visible Human Project.", Medicine Meets Virtual Reality II: Interactive Technology and Healthcare, 5-7. 

2. Basdogan C., Ho, C., Srinivasan, M.A., 1999, "Simulation of Tissue Cutting and Bleeding for Laparoscopic Surgery Using Auxiliary Surfaces", to appear in the proceedings of the Medicine Meets Virtual Reality VII Conference, San Francisco, CA. 

3. Basdogan C., Ho, C., Srinivasan, M.A., Small, S., Dawson, S., 1998, "Force interactions in laparoscopic simulations: haptic rendering of soft tissues" Medicine Meets Virtual Reality VI, pp. 385-391, San Diego, CA, January 19-22. 

4. Basdogan C., Loan J.P., Rosen J.M., Delp S.L., 1996, "An interactive model of the human lower limb for simulation of surgical procedures in virtual environments", the Winter Annual Meeting of ASME'96, BED-Vol. 33, pp. 439-440, Nov. 17-22, Atlanta (see http://touchlab.mit.edu/people/publications.html)

5. Bier, E.A., Sloan K.R., 1986, "Two-Part Texture Mapping", IEEE Computer Graphics and Applications, September, pp. 40-53. 

6. Bro-Nielsen, M., 1997, "Fast Finite Elements for Surgery Simulation", Medicine Meets Virtual Reality V, pp. 395-400.

7. Canny, J., 1986, "A computational approach to edge detection.", IEEE Trans Pat An Mach Int., 8, 679-698. 

8. Carter, F. J., 1998, "Biomechanical Testing of Intra-Abdominal Soft Tissue", Tutorial Notes, Int. Workshop on Tissue Deformation and Palpation, Oct. 10, 1998, Cambridge, MICCAI'98. 

9. Chen, E., Markus, B., 1998, "Force Feedback for Surgical Simulation", Proceedings of the IEEE, Vol. 86, No. 3, pp. 524-530. 

10. Cohen, L.D., Cohen, I., 1993 "Finite Element Methods for Active Contour Models and Balloons for 2D and 3D images", IEEE Trans. On Pattern Analysis and Machine Intelligence, Vol. 15, No. 11, pp. 1131-1147. 

11. Cover, S.A., Ezquerra N.F., O'Brien J. F., Rowe R., Gadacz T., Palm E., 1993, "Interactively Deformable Models for Surgery Simulation.", IEEE Computer Graphics and Applications, pg. 68-75.

12. Delingette, H., 1998, "Toward Realistic Soft-Tissue Modeling in Medical Simulation", Proceedings of the IEEE, Vol. 86, No. 3, pp. 512-523. 

13. Gibson, S., Mirtich, B., 1997, "A Survey of Deformable Modeling in Computer Graphics", Technical Report No: TR97-19, MERL, http://www.merl.com/reports/index.html. 

14. Gupta, V., Reddy N., Batur, P., 1997, "Forces in Laparoscopic Surgical Tools", Presence: Teleoperators and Virtual Environments, Vol. 6, No. 2, pp. 218-228. 

15. Ho, C., Basdogan, C., Srinivasan, M.A., 1998, "An Efficient Haptic Rendering Technique for Displaying Polygonal Objects with Surface Details in Virtual Environments" submitted to Presence: Teleoperators and Virtual Environments. 

16. Hubbard, P., 1995, "Collision Detection for Interactive Graphics Applications", IEEE Transactions on Visualization and Computer Graphics, 1(3), Sept., pp. 219-230. 

17. Johnston, R. et al., 1996, " Assessing a Virtual Reality Surgical Skills Simulator", Medicine Meets Virtual Reality IV, pp. 608-617. 

18. Kass, M., Witkin, A., Terzopoulos, D., 1987, "Snakes: Active contour models", Int. J. of Computer Vision, 1 (4). 

19. Lee, Y., Terzopoulos, D., Waters, K., 1995, "Realistic Modeling for Facial Animation", Proceedings of SIGGRAPH'95, ACM, pp. 55-62. 

20. Lin, M., 1993, "Efficient Collision Detection for Animation and Robotics,'' Ph.D. thesis, University of California, Berkeley.

21. McInerney T., Terzopoulos, D., 1995, "A Dynamic Finite Element Surface Model for Segmentation and Tracking in Multidimensional Medical Images with Applications to Cardiac 4D Image Analysis", Comp. Medical Imaging and Graphics, Vol. 19, No. 1, pp. 69-83. 

22. Mirtich, B., 1997, "Efficient Algorithms for Two-Phase Collision Detection", Technical Report No: TR97-23, MERL, http://www.merl.com/reports/index.html.

23. O'Toole, R., et al., 1997, "A Novel Virtual Reality Surgical Trainer with Force Feedback: Surgeon vs. Medical Student Performance", Proceedings of the Second Phantom User's Group Workshop, Published as MIT Artificial Intelligence Lab Technical Report No. 1617, and RLE Technical Report No. 618. 

24. Schroeder, W., Martin, K., Lorensen, B., 1996, "The Visualization Toolkit: An Object-Oriented Approach to 3D Graphics", Prentice Hall, Inc., New Jersey. 

25. Weghorst, S., Airola, C., Oppenheimer, P., 1998, "Validation of the Madigan ESS Simulator", Medicine Meets Virtual Reality VI, pp. 399-405, San Diego, CA, January 19-22. 

26. Witkin A., Barraff D., Kass M., 1997, "An Introduction to Physically-Based Modeling", SIGGRAPH'97 Course Notes, http://www.cs.cmu.edu/afs/cs.cmu.edu/user/aw/www/awhome.html


 

Contact: Cagatay Basdogan, Ph.D.

MIT, Room: 36-758, Cambridge, MA, 02139.
basdogan@mit.edu
http://touchlab.mit.edu/people/cagatay