2.3 Surgical Planning

2.3.1 Virtual planning

Historically, and to some extent still today, physicians have relied on training and expertise, sketches, simple models and mental visualization to plan their procedures. While this approach remains, to a degree, the standard of care, experience indicates that the bar will be raised once surgeons fully appreciate the power of the computer, and of the imaging modalities described previously, in planning and executing surgical interventions. As the trend towards less invasive, more precise treatments continues, advanced technologies capable of integrating all stages of a procedure (planning, delivery and follow-up) will assume central importance.

Software systems exist that can load, integrate and manipulate different types of image data to construct complex virtual models. By stacking co-registered tomographic image slices into a single volume of data, 3D renderings and oblique slice reconstructions become possible (Figure 2.1). Radiologists often evaluate such displays in conjunction with the original image slices for diagnostic purposes. Most modern CT and MR scanners are equipped with such software at the operator's console, so that secondary reconstructions can be calculated from the

Figure 2.1 Computer-generated 3D rendering of bone, based on CT scan images

primary image data at the time of their acquisition. Qualitative evaluation of these reconstructed images and volume renderings increasingly plays a role in the radiologist's practice. Certainly, the ability to acquire and process images at higher and higher spatial resolution has pushed the development of new technologies and applications (Gateno, Teichgraeber and Xia, 2003a). Physicians have been forced to follow suit.
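To make the underlying operation concrete, the following sketch stacks co-registered tomographic slices into a single volume and samples an oblique plane from it. Python with NumPy and SciPy is assumed purely for illustration (neither is named in the text), and all function names, sizes and coordinates are hypothetical.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def stack_slices(slices):
    """Stack co-registered 2D tomographic slices (all the same shape)
    into a single 3D volume, ordered along the scan axis."""
    return np.stack(slices, axis=0)            # shape: (n_slices, rows, cols)

def oblique_slice(volume, origin, u, v, size=(128, 128), spacing=1.0):
    """Sample an oblique plane through the volume.

    origin : a point on the plane (voxel coordinates)
    u, v   : orthogonal unit vectors spanning the plane
    """
    rows, cols = size
    o = np.asarray(origin, float)
    u = np.asarray(u, float)
    v = np.asarray(v, float)
    r = (np.arange(rows) - rows / 2) * spacing
    c = (np.arange(cols) - cols / 2) * spacing
    rr, cc = np.meshgrid(r, c, indexing="ij")
    # Each output pixel corresponds to the 3D point: origin + rr*u + cc*v
    pts = (o[:, None, None]
           + rr[None] * u[:, None, None]
           + cc[None] * v[:, None, None])
    # Trilinear interpolation of the stacked volume at those points
    return map_coordinates(volume, pts, order=1, mode="nearest")

# Example: a synthetic stack of 64 slices, resliced along a tilted plane
slices = [np.random.rand(128, 128) for _ in range(64)]
vol = stack_slices(slices)
center = np.array([32.0, 64.0, 64.0])
u = np.array([np.sqrt(0.5), np.sqrt(0.5), 0.0])   # tilted out of the axial plane
v = np.array([0.0, 0.0, 1.0])
recon = oblique_slice(vol, center, u, v)
```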

There are noteworthy limitations to the current state of 3D image reconstruction and display, however. In particular, these systems do not provide tactile interaction with the data, and renderings on computer screens do not show true 3D relationships; instead, they use computational models of light and shade to render images on a flat monitor that the observer perceives as 3D. While technologies such as haptic interfaces, whose motors and sensors provide the basis for mechanical feedback, and volumetric displays are emerging, they are not in widespread use.

One of the earliest applications of computer-aided quantitative treatment planning in medicine was arguably the design of radiation therapy dose plans (Worthley and Cooper, 1967). Computers were applied to the task of predicting, through complex calculations, the radiation doses deposited in tissue when exposed to combinations of radiation beams. Radiation oncologists and physicists would rely on these computational results to design beam arrangements that delivered a prescribed dose of radiation to a target volume while avoiding overdose in the surrounding healthy tissue. The earliest versions of such programs calculated doses only at single points, relative to coarse representations of patient anatomy such as a single CT slice or a digitized patient contour (acquired with a plaster of Paris strip). Current radiation therapy treatment planning systems calculate full 3D distributions of radiation dose and can generate detailed renderings built from co-registered sets of multimodal image data. What is noteworthy about radiation therapy treatment planning is that image data are used to construct a virtual model of a specific patient and complex algorithms are used to compute the results (radiation dose distributions) of a given treatment option. Typically, several treatment designs are simulated and the plan that best fits the physician's prescription is implemented.
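The general idea of such dose computation can be sketched as the superposition of several beams' contributions on a voxel grid. The exponential attenuation model, field size, coordinates and all names below are deliberately simplified, illustrative assumptions, not the algorithm of any clinical planning system.

```python
import numpy as np

def beam_dose(shape, entry, direction, mu=0.02, voxel_mm=2.0):
    """Toy dose contribution of a single beam on a voxel grid: dose falls
    off exponentially with depth along the beam axis and is confined to a
    fixed-radius field. Real planning systems use measured beam data and
    scatter models instead."""
    z, y, x = np.indices(shape, dtype=float) * voxel_mm
    pts = np.stack([z, y, x], axis=-1) - np.asarray(entry, float)
    d = np.asarray(direction, float)
    d /= np.linalg.norm(d)
    depth = pts @ d                                  # distance along the beam axis
    lateral = np.linalg.norm(pts - depth[..., None] * d, axis=-1)
    in_field = (depth >= 0) & (lateral < 30.0)       # 30 mm field radius
    return np.where(in_field, np.exp(-mu * depth), 0.0)

# Superpose three beams that all pass through an isocentre at the grid centre
shape = (50, 50, 50)                                 # 100 mm cube at 2 mm voxels
beams = [((50.0, 0.0, 50.0), (0, 1, 0)),             # entry point, direction
         ((50.0, 98.0, 50.0), (0, -1, 0)),
         ((0.0, 50.0, 50.0), (1, 0, 0))]
dose = sum(beam_dose(shape, e, d) for e, d in beams)
print("maximum dose (arbitrary units):", dose.max())
```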

Image-guided surgery (IGS) systems were developed in the early 1990s as a technique to link image data and virtual models with actual patient anatomy (Smith, Frank and Bucholz, 1994). IGS systems combine spatial tracking systems, typically optical or magnetic, with software systems that handle medical image data. The result can be considered analogous to a global positioning system (GPS) for the operating room (OR). In GPS, signals from satellites orbiting the Earth allow a receiver in a car to determine its position, correlate that position with a street map and provide directions to the driver. In IGS, localizer technology can track the location of an instrument in a surgeon's hand, correlate its position with preoperatively acquired images and provide guidance to the surgeon. IGS systems are used by neurosurgeons to localize brain tumors for less invasive and more complete resections, by ENT surgeons for safer, more precise sinus operations and, increasingly, in orthopaedics for guidance in total joint replacements.
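The core computation behind such a system can be illustrated as follows: paired fiducial points measured with a tracked pointer and identified in the preoperative images define a rigid tracker-to-image transform, which then maps the tracked instrument tip into image coordinates. The sketch below (Python/NumPy assumed; fiducial coordinates and function names are illustrative) uses a standard least-squares (Kabsch/SVD) fit.

```python
import numpy as np

def rigid_register(tracker_pts, image_pts):
    """Least-squares rigid transform (rotation R, translation t) mapping
    fiducial points measured in tracker space onto the same landmarks
    identified in the preoperative images (Kabsch/SVD method)."""
    P = np.asarray(tracker_pts, float)
    Q = np.asarray(image_pts, float)
    pc, qc = P.mean(axis=0), Q.mean(axis=0)
    H = (P - pc).T @ (Q - qc)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = qc - R @ pc
    return R, t

def to_image_space(tip_tracker, R, t):
    """Map a tracked instrument-tip position into image coordinates."""
    return R @ np.asarray(tip_tracker, float) + t

# Example: four fiducials touched with the tracked pointer, and the same
# fiducials picked in the CT volume (coordinates are illustrative).
tracker_fids = [[10, 0, 0], [0, 10, 0], [0, 0, 10], [10, 10, 10]]
image_fids   = [[110, 50, 30], [100, 60, 30], [100, 50, 40], [110, 60, 40]]
R, t = rigid_register(tracker_fids, image_fids)
tip_image = to_image_space([5, 5, 5], R, t)
```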

By necessity, IGS (so-called 'surgical navigation') systems apply advanced software features to the reconstruction and rendering of complex image datasets. These features permit quantitative planning of surgical trajectories and the measurement of target volumes but, in practice, are used in only a few types of procedure. In cases such as stereotactic biopsy or implantation of deep brain stimulators, planning tools are used to determine settings for traditional stereotactic devices. In most other cases, such as functional endoscopic sinus surgery, a much faster operation performed with free-hand instruments, little or no computer planning is carried out beforehand.

2.3.2 Implementation of the plan

Just as the field of radiation therapy is a good example of early computer-based treatment planning, it is also an appropriate example of the challenges of implementing preplanned treatment parameters (Miralbell et al., 2003). Even in modern radiation oncology departments there is a heavy reliance on traditional manual techniques to create custom patient support and alignment devices that enable delivery of virtually designed treatments. Moldable thermoplastic materials, pillows and multiple sessions for simulating treatment set-up and delivery are needed to actually implement a treatment plan generated in the computer. This reliance on manually fabricated devices to implement a treatment plan leads to a 'digital disconnect': treatments are planned in a virtual environment and delivered using complex computer-controlled devices, but the intermediate steps rely on manual transfer of information and fabrication of devices, which is subjective and prone to human error.

Many types of rigid but adjustable device have been developed to facilitate precise delivery of treatment parameters that were designed prior to the procedure. Stereotactic frames are a good example of this (Figure 2.2). Used for targeting in radiation treatment or surgical intervention, stereotactic frames are rigidly fixed to a patient's head using invasive pins. Imaging studies acquired with the frame in place include the patient's anatomy as well as the stereotactic frame, so it is possible to integrate target point coordinates from the image system into the coordinate system of the frame. With the position of the surgical target established relative to the position of the externally fixed frame, it is possible to adjust the settings of the stereotactic aiming device to reach these targets. The drawbacks to such techniques are that imaging studies must be acquired with the frame attached, which is cumbersome and uncomfortable, and that only

Figure 2.2 Stereotactic frame for surgery

one point or trajectory can be targeted at a time. Also, because they are invasive, such frames cannot be used in fractionated treatments. A myriad of devices for the non-invasive, repeated fixation of stereotactic implements have been proposed and patented (Sweeney et al., 2001). Most use moldable bite blocks, plaster or fiberglass casts, straps and/or ear plugs to stabilize the patient's head during imaging and treatment. These tend to be highly dependent on the skill of the user, rely on patient compliance and are less precise than true stereotactic frames.
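The coordinate integration described above can be sketched as follows: once the frame's fiducials visible in the scan yield an image-to-frame rigid transform (for example, via the registration sketched earlier), a planned target and entry point chosen in the images are re-expressed in frame coordinates, defining the trajectory the aiming device must reproduce. The transform and coordinates below are purely illustrative and do not correspond to any particular frame.

```python
import numpy as np

# Illustrative image-to-frame rigid transform, as would be derived from the
# frame fiducials visible in the imaging study (values are hypothetical).
R_frame = np.eye(3)
t_frame = np.array([-100.0, -100.0, -80.0])

target_img = np.array([112.0, 95.0, 130.0])   # planned target (image mm)
entry_img  = np.array([140.0, 60.0, 170.0])   # planned entry point (image mm)

# Re-express the plan in the frame's coordinate system
target_frame = R_frame @ target_img + t_frame
entry_frame  = R_frame @ entry_img + t_frame
trajectory   = entry_frame - target_frame
trajectory  /= np.linalg.norm(trajectory)

print("target (frame coordinates):", target_frame)
print("approach direction        :", trajectory)
```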

Surgical navigation systems are considered an advance over stereotactic frames, since they avoid bulky invasive devices and can track instruments in real time. However, they have some significant weaknesses, chief among which is the free-hand nature of the instruments. The surgeon must manually align an instrument to the preplanned trajectory, using on-screen displays for reference. This can be difficult and draws the surgeon's attention to the computer screen and away from the actual patient. Even when the surgeon aligns the instrument to the preplanned parameters, that alignment is lost as soon as the instrument is set down. The practicalities of surgery require that surgeons switch instruments often and that they keep their attention on the patient directly, rather than on a virtual representation of the patient. For example, in most types of surgery, tasks like the management of bleeding are not guided by presurgical planning, and surgeons must focus on the patient's anatomy as it appears in front of them, rather than on a computer screen or some other representation of the patient. This causes an 'attention split' problem: surgeons must divide their attention between the virtual patient and the actual patient. While using the computer monitor to implement the plan and align the instrument, the surgeon remains responsible for the basic surgical management tasks at hand.
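The kind of feedback shown on such alignment displays can be sketched as a simple computation: the angular deviation of the tracked instrument axis from the planned trajectory, and the lateral offset of the tip from the planned line. The function and values below are illustrative assumptions, not any vendor's implementation.

```python
import numpy as np

def alignment_error(tip, hind, entry, target):
    """Deviation of a tracked free-hand instrument (tip and hind points in
    image space) from a planned trajectory (entry -> target): returns the
    angle between the axes in degrees and the lateral offset of the tip
    from the planned line in millimetres."""
    axis_instr = np.asarray(tip, float) - np.asarray(hind, float)
    axis_plan = np.asarray(target, float) - np.asarray(entry, float)
    axis_instr /= np.linalg.norm(axis_instr)
    axis_plan /= np.linalg.norm(axis_plan)
    angle = np.degrees(np.arccos(np.clip(axis_instr @ axis_plan, -1.0, 1.0)))
    rel = np.asarray(tip, float) - np.asarray(entry, float)
    lateral = np.linalg.norm(rel - (rel @ axis_plan) * axis_plan)
    return angle, lateral

angle, offset = alignment_error(tip=[12.0, 4.0, 60.0], hind=[12.0, 4.0, 160.0],
                                entry=[10.0, 5.0, 60.0], target=[10.0, 5.0, 0.0])
print(f"angular error {angle:.1f} deg, tip offset {offset:.1f} mm")
```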

There are a few developing technologies that have been applied to the task of implementing a preoperative treatment plan. Robotics would seem to be a means of compensating for some of the limitations mentioned, but for the most part it has not been embraced by the medical community (Honl et al., 2003). The ability to drive a trajectory guide or instrument carrier into a position defined by a preprocedural plan would avoid the limits of free-hand navigation (Choi, Green and Levi, 2000). A robotic arm could be repeatedly moved away from the surgical field, then replaced, thus alleviating the difficulties of working with, for example, a stereotactic frame. Despite its potential advantages, robotic technology has not gained much acceptance, likely because of high costs, limited applicability (devices that have been marketed tend to be suitable for only a small number of procedures) and concerns over reliability and the loss of human control.

Another possible solution to the attention split problem is augmented reality, that is, the ability to project computer-generated data into a surgeon's field of view in real time. This is a developing technology used in defense and industry that would provide guidance by overlaying graphic representations of virtual preplanning directly into the surgeon's field of vision. In other words, preplanned parameters would appear to the surgeon directly in the field, rather than on a screen. However, no physical guidance is provided and instruments must still be manipulated free-hand, imposing limitations similar to some of those referenced above.
