6.1 Introduction

The science of anatomical replication, referred to as biomodelling or medical modelling, is enabled by the union of 3D medical imaging with rapid prototyping (RP), or 3D printing (3DP). Specifically, biomodelling is the science of converting scanned morphological data into exact solid replicas via specialized software and digital, layer-based freeform fabrication systems. BioBuild from Anatomics is such a software package, designed specifically for the particular data processing requirements of biomodelling. What has always differentiated biomodelling from most RP in terms of data processing is that it must utilize a stack of 2D medical scan sections as its input data source, as opposed to the traditional CAD data source, which is usually inherently 3D in representation. This creates specific requirements relating to data acquisition, import and processing, and ultimately the quality of the resultant physical biomodels. These issues and how to manage them were identified and addressed during clinical research into biomodelling throughout the 1990s, and led to the development of BioBuild by Anatomics. A brief historical overview of the foundation technology may give the reader a perspective on the key developments of biomodelling. An examination of the existing BioBuild software and its utility for common biomodelling tasks, followed by future enhancements being developed for the software, will be the focus of this section.

The continual increase in computer processing power available for patient data acquisition, subsequent image processing and biomodel production has helped make biomodelling increasingly practical, and thus increasingly relevant to surgical practice in the twenty-first century. The simultaneous but independent development of high-resolution CT scanning and RP technology made current-generation biomodelling technically feasible, about 10 years after it was first conceptualized. CT scanning was introduced in 1972 [1], although applications in 3D imaging for surgery did not emerge in clinical practice until the early 1980s, when the systems were sufficiently advanced for researchers to take advantage of new hardware and software [2-4]. Alberti [5] first proposed the concept of producing physical models from CT scans in 1980, but the technologies available to process the anatomical data and then produce the biomodels were both very limited. Attempts by researchers in the 1980s to create biomodels predated RP technology, and thus the results were rudimentary, although often still considered useful for some surgical planning. Methods ranged from the stacking of life-size aluminium or polystyrene cut-outs of CT slice bone contours [4, 6] to the use of three-axis computer numerically controlled (CNC) milling to create two-part moulds for master prostheses and model castings, as described in the 1982 White technique [7]. The advent of five-axis CNC milling improved accuracy and allowed more complex biomodel construction without moulding [8]. The resultant models were of sufficient resolution to be useful for surgical planning in complex cases. However, it was evident that the complex geometries of anatomy were not ideally suited to even five-axis machining, particularly for replicating internal structures and thin walls. The required high-resolution 3D CT scanning also presented significant challenges, with concern regarding high radiation doses, and long image acquisition and reconstruction times, limiting the use of the 3D imaging necessary for biomodelling.

The applicability of volumetric medical imaging to surgery was to be greatly enhanced with the introduction of slip-ring spiral CT scanning in 1987 by Siemens and Toshiba [9, 10]. This enabled high-speed volumetric imaging with acceptable resolution for practical 3D imaging for the first time in clinical radiology. High-resolution (1.0 mm slice spacing) CT volumes could now be acquired with relative ease. Clinicians also benefited from the advent of IV-contrast enhanced CT angiography via dynamic scanning, with scans at 0.5 mm resolution allowing the visualization of fine cerebral vessels, previously only visible via traditional invasive angiography procedures. Simultaneous advances were also made in 3D magnetic resonance (MR) angiography. 3D reconstructions from both of these scanner types would soon establish themselves as routine radiology options for surgeons by the mid-1990s [11]. Radiation doses were also controlled via improvements in CT X-ray detector sensitivity. Additionally, the ability to space scans arbitrarily to achieve fine contiguous or overlapping slices retrospectively after a spiral scan block acquisition, not during acquisition as was previously the case with axial scanning, allowed for a 3D scan with a dose lower than its non-spiral equivalent. This also allowed for multiple reconstructions of a single spiral data block at differing slice spacings. Spiral CT scanning basically made 3D imaging 'radiology department friendly' to acquire, and significantly easier to process via related developments in 3D image processing workstation systems. Such imaging workstation systems became common accompaniments to spiral scanning systems, moving 3D imaging out of the research labs and into the normal clinical environment.

The emergence of 3D rendering techniques for voxel-based data in the 1980s allowed the shaded surface display of image volumes to demonstrate anatomy in a life-like 3D view, usually via surface shading with a virtual light source [3, 12]. Volume rendering techniques were also developed that allowed visualization of volumes via ray casting processes utilizing different rendering algorithms, without the need for surface extraction, as the volume as a whole is rendered [12-14]. These approaches examine every voxel in the volume and were very computationally expensive at the time, requiring costly specialized graphics hardware. This usually limited volume rendering techniques to high-end graphics workstations in research environments.

In surface rendering techniques, the desired structure or intensity range is first delineated from the image volume via a simple threshold operation. 'Threshold segmentation' creates the object to be rendered from all voxels whose values are greater than or equal to a user-supplied threshold value. The user normally determines the intensity threshold by empirical inspection of the image volume, typically to isolate one tissue type from the others. Once the desired greyscale density values are identified via segmentation, the surface of the object must be described. This can be done via surface 'tiling' between adjacent slice contours [3], or by forming a polygon surface from exposed faces of individual voxels. This latter approach was first described by the 'marching cubes' algorithm [15], which was devised to perform surface rendering on an image volume by representing the object as a triangulated surface mesh. Such a triangulated mesh was virtually identical to the surface mesh used to describe CAD objects for stereolithography via the STL (stereolithography) file format. In their 1987 paper, Lorensen and Cline identified that their methods 'use polygon and point primitives to interface with computer-aided design equipment' [15]. As most CAD systems supported the new STL format for RP model creation, a software interface between 3D medical image volumes and SLA had been created as a byproduct of this landmark object surface description technique.
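As a minimal illustration of threshold segmentation followed by marching cubes surface extraction, the Python sketch below uses NumPy and scikit-image. The volume file, voxel spacing and threshold value are placeholders, and this is a generic open-source sketch rather than the original marching cubes implementation or any BioBuild code.

```python
import numpy as np
from skimage import measure

# Hypothetical CT volume (slices, rows, columns) in Hounsfield units
volume = np.load("ct_volume.npy")     # placeholder for an imported scan
spacing = (1.0, 0.5, 0.5)             # (slice, row, column) spacing in mm

threshold = 250                       # bone-like intensity chosen by inspection

# Threshold segmentation: the object is every voxel at or above the threshold
mask = volume >= threshold
print(f"segmented {mask.sum()} voxels")

# Marching cubes triangulates the iso-surface at the threshold level,
# producing a mesh that can later be written out as an STL file
verts, faces, normals, values = measure.marching_cubes(
    volume, level=threshold, spacing=spacing)
print(f"{len(verts)} vertices, {len(faces)} triangles")
```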

These developments in volumetric image acquisition and 3D image processing occurred in parallel with each other, alongside the release of the first commercial rapid prototyping system in 1986 [16]. The new stereolithographic apparatus (SLA) allowed submillimetre layer based fabrication of arbitrarily complex shapes, thus bypassing the toolpath and other restrictions of CNC milling.

The first report of the use of 3D CT with SLA for biomodelling, from 1990 [17], identified SLA as superior to milling for biomodelling, and importantly recognized the similarity between CT slice data and the SLA build layer 'laser hatch' data necessary to generate an SLA model layer by layer. Mankovich chose to segment the CT slices automatically to isolate the desired bone contours, and to transfer the contours directly to the SLA after adding the required laser hatching information to each layer. As each SLA layer was 0.25 mm thick, each 2.0 mm CT slice had to be replicated eight times to allow the biomodel to be built up from this data. Problems were encountered relating to the lack of interpolation of the CT data owing to this slice replication, and also to the physical support requirements for the 'contour stack'.

Klein published a paper in 1992 [18] that compared medical biomodels produced by milling and SLA. He identified SLA as superior, but with problems of cost and computer processing time. To counter this, he described the use of the marching cubes algorithm to create a triangulated surface description of the object. Further work on the contour-based (or so-called 'direct layer interface') approach explored by Mankovich was done in the early 1990s by the Belgian company Materialise [10], which first utilized an algorithm to produce a 'stack' of interpolated RP contours in a 'stereolithography contour' (SLC) file, based on the original 'stack' of 2D images making up the 3D CT volume. This algorithm eliminated the need for any 3D surface description, as well as solving the 'in between' Z plane slice interpolation problem by using cubic interpolation. It was implemented in the CT modeller (CTM) module of the Mimics software from Materialise in 1992, and is currently part of the 'RP slice' module. The major benefit of this approach was that large image volumes could be processed at high resolution, and the resultant contour files always described a 3D anatomical object more efficiently than an STL surface file of equivalent resolution. Large triangle counts in STL files, and the long processing times required to create the necessary stack of layers from STL files via 'slicing' algorithms, also made the contour interface more attractive. However, the direct contour interface of CTM meant that the object always had to be built in the same orientation in which it was scanned, as the contours, once generated, could not be rotated to optimize build height. Using the contour format also meant there was a requirement for special support structure generation software.
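The idea of interpolating between scan slices along Z can be sketched with SciPy's cubic resampling. The 2.0 mm slice and 0.25 mm layer figures echo the earlier Mankovich example; scipy.ndimage.zoom is an illustrative stand-in and is not the Materialise CTM algorithm.

```python
import numpy as np
from scipy.ndimage import zoom

# Hypothetical CT stack: 2.0 mm slices resampled to 0.25 mm RP layers
volume = np.load("ct_volume.npy")           # placeholder, shape (slices, rows, cols)
slice_spacing, layer_thickness = 2.0, 0.25

# Cubic interpolation (order=3) along Z only; in-plane resolution is unchanged
z_factor = slice_spacing / layer_thickness  # eight new layers per original slice
layers = zoom(volume, (z_factor, 1.0, 1.0), order=3)
print(volume.shape, "->", layers.shape)
```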

Australian researchers, led by Barker [19], described a process in 1993 that used powerful 3D medical imaging tools in the form of the Analyze software [13] developed by the Biomedical Imaging Resource, Mayo Clinic (Rochester, USA). This approach entailed using the volumetric imaging tools of Analyze for image editing and processing, isolating the anatomical structures for biomodelling via threshold segmentation and object connectivity algorithms, and then outputting the object for SLA in the STL surface format via the marching cubes algorithm. This allowed advanced 3D visualization via voxel gradient shading volume rendering [20], volumetric editing and object connectivity algorithms, and gave users full access to the comprehensive suite of imaging functions available in Analyze. The technique was validated by Barker et al. [21] and then by D'Urso [22] in separate accuracy studies.
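A simple stand-in for the object connectivity step is shown below using SciPy's connected-component labelling; keeping only the largest component after thresholding is one common way to discard disconnected fragments. The threshold and file name are illustrative, and this is not the Analyze or BioBuild implementation.

```python
import numpy as np
from scipy.ndimage import label

volume = np.load("ct_volume.npy")       # placeholder CT volume
mask = volume >= 250                    # threshold segmentation as before

# Label 3D-connected regions, then keep only the largest one so that
# disconnected fragments (noise, headrest, scanner table) are excluded
labels, n_regions = label(mask)
sizes = np.bincount(labels.ravel())
sizes[0] = 0                            # ignore the background label
object_mask = labels == sizes.argmax()
```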

The Barker technique was then utilized by the Brisbane biomodelling group [23] to process cases, leading to the development of cranio-maxillofacial applications as reported by D'Urso et al. [24], Arvier et al. [25] and Yau et al. [26]. As this data processing technique utilized the marching cubes algorithm to produce STL surface mesh files, the resultant RP build file sizes were very large owing to the triangle count created by the algorithm. This produced significant overheads relating to visualization and preprocessing for SLA, even on the high-end UNIX graphics workstations used at the time. D'Urso, however, recognized the processing advantages in using the advanced volumetric tools of Analyze in combination with the layer-based contour interface to SLA. D'Urso thus adapted the Barker technique to use Analyze on the 'front end' to interface to medical imaging, and thus enable 3D visualization and image processing, combined directly with CTM on the 'back end' to interface to SLA and produce the smaller, more efficient SLC contour files [22]. Furthermore, D'Urso also developed a system for optimizing image volumes, allowing scans to be exported to the contour interface at an orientation optimized in terms of physical build height as well as file size. Using Analyze in this fashion allowed the Brisbane biomodelling group to identify the image processing toolkit required specifically for biomodelling. This would lead to the tailoring of a biomodelling imaging toolkit, and culminate in the development of a stand-alone software package designed specifically for biomodelling. That software would iteratively become BioBuild.

During the initial software development period, RP build files and biomodels produced with the new software were benchmarked against those produced via the established process, to ensure continuity of biomodel quality. Analyze continued to be used as the interface to imaging while the critical image resampling, volumetric rotation and RP file generation functionality was implemented. These 'back-end' functions were developed in a cross-platform environment, supporting both the SGI UNIX platform (IRIX) and 32-bit Windows. Consequently, the core volumetric processing engine of BioBuild is able to utilize high-end multi-CPU workstations on both Windows and UNIX platforms. The required image processing functions were then developed and tested in conjunction with a user interface designed specifically for biomodelling. Build height optimization [27] via automatic volumetric rotation was also implemented specifically to reduce biomodelling build times. Support for both surface STL and contour SLC output was included in the software because, although the contour interface was more efficient, the STL interface had become dominant. This was due largely to its prevalence in RP and CAD in general, where it had become a de facto standard. The ability to manipulate such files more readily than contour files, the format's portability across RP machine types and the continual increase in computing power available to process the larger STL files also contributed to its dominance. This increase in the processing power and memory available in commodity hardware also allowed the deployment of the software on a PC platform in the 32-bit Windows environment, making it more accessible and affordable.
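Build height optimization can be approximated with a simple principal-axis heuristic, sketched below in Python: rotating the segmented object so that its shortest principal axis points along the build (Z) direction tends to reduce the number of layers required. This PCA heuristic is an illustration only, under assumed placeholder inputs, and is not the algorithm described in [27] or used by BioBuild.

```python
import numpy as np

def optimise_build_orientation(mask, spacing=(1.0, 1.0, 1.0)):
    """Rotate a binary volume mask so its shortest principal axis lies
    along Z, as a rough proxy for minimising RP build height."""
    coords = np.argwhere(mask) * np.asarray(spacing)   # voxel coords in mm
    centred = coords - coords.mean(axis=0)

    # Principal axes of the object from the covariance of its voxel coordinates
    eigvals, eigvecs = np.linalg.eigh(np.cov(centred.T))

    # eigh returns ascending eigenvalues: put the two longest axes in X/Y
    # and the shortest axis along Z
    rotation = eigvecs[:, [2, 1, 0]]       # columns are the new X, Y, Z axes
    rotated = centred @ rotation           # coordinates in the new frame
    build_height = np.ptp(rotated[:, 2])   # extent along the new Z axis
    return rotation, build_height
```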

6.2 BioBuild Paradigm

The BioBuild software was specifically developed to integrate all the functionality required to import, visualize, edit, process and export 3D medical image volumes for the production of physical biomodels. An overriding design goal has always been compatibility with as many different medical scanners, and as many different image formats, as possible. Consequently, BioBuild is able to bridge the gap between patient scans and physical biomodels in many varying environments. This has seen BioBuild used in conjunction with CT data [28, 29], magnetic resonance imaging data [30], 3D ultrasound data [31] and 3D angiography data.

The general process for producing a physical biomodel from a patient scan is straightforward. First, the patient dataset is imported. The data is then converted into a volume for inspection and processing. After completing the necessary processing, a 3D surface is extracted that represents the physical biomodel. To ensure that all regions of interest have been correctly modelled, it is important first to visualize and inspect the surface. Finally, the software model must be exported to a format suitable for physical biomodel production. Anatomics BioBuild software was designed to integrate each of these steps into a single user-friendly system. Further, these steps form the basis of the BioBuild paradigm for biomodel production.
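The final export step can be illustrated with the numpy-stl package, converting a marching cubes mesh (as in the earlier sketch) into a binary STL file ready for RP preprocessing. The package choice and file names are assumptions for illustration, not the BioBuild export path.

```python
import numpy as np
from stl import mesh   # numpy-stl package

# verts (n_verts, 3) and faces (n_faces, 3) as returned by marching cubes
verts = np.load("verts.npy")            # placeholder surface data
faces = np.load("faces.npy")

# Pack one triangle (three vertices) per face into the STL record format
data = np.zeros(faces.shape[0], dtype=mesh.Mesh.dtype)
data["vectors"] = verts[faces]

stl_mesh = mesh.Mesh(data)
stl_mesh.save("biomodel.stl")           # file for RP build preparation
```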

The process is complicated by the many varying data formats and vendor-specific peculiarities that arise when loading data from different scanners. There are also potentially many different biomodelling applications. Because of this, BioBuild provides the user with a vast array of options for modifying and transforming a volume, although generally only a few of those operations are ever likely to be performed on any single volume. The number of possible options can initially intimidate novice users, as there is often more than one way to perform any given task. However, once users become accustomed to the BioBuild paradigm, they quickly become comfortable with the interface.

The usual procedural steps required for processing image volumes using BioBuild can be summarized as follows:

  • import and reduce volume, and confirm orientation;
  • inspect anatomy and find intensity threshold;
  • edit and optimize volume;
  • 3D visualization;
  • RP build optimization;
  • RP build file generation.

Because BioBuild was designed with ease of use in mind, virtually all volume processing can be accomplished via simple point-and-click operations. Although BioBuild provides advanced editing features, most of its functionality is accessed through toolbars composed of simple, intuitive icons. Each toolbar can be placed at the user's desired location, but will always remain on top of the volume display. This significantly reduces the learning curve for novice users, and makes common editing operations second nature for experienced users.

The major steps in producing a biomodel are described below, beginning with importing a dataset.

6.2.1 Importing a dataset

3D datasets can be imported into BioBuild in several ways. Data can be loaded directly from a series of DICOM or generic 'raw' image files on the local computer, from a network drive or from a remote DICOM server. Loading data from many different and varied data sources is one of the strengths of BioBuild.
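A minimal DICOM series import might look like the following Python sketch using pydicom; the directory name is a placeholder, error handling is omitted, and this is a generic example rather than the BioBuild importer itself.

```python
import glob
import numpy as np
import pydicom

# Read a directory of DICOM slice files (at least two slices assumed)
files = [pydicom.dcmread(f) for f in glob.glob("scan_dir/*.dcm")]

# Sort slices along the patient Z axis using Image Position (Patient)
files.sort(key=lambda ds: float(ds.ImagePositionPatient[2]))

# Stack the slices into a single volume and convert to Hounsfield units
volume = np.stack([ds.pixel_array for ds in files]).astype(np.int16)
slope = float(getattr(files[0], "RescaleSlope", 1))
intercept = float(getattr(files[0], "RescaleIntercept", 0))
volume = volume * slope + intercept

# Recover the voxel geometry needed for accurate biomodel scaling
row_spacing, col_spacing = (float(v) for v in files[0].PixelSpacing)
slice_spacing = abs(float(files[1].ImagePositionPatient[2])
                    - float(files[0].ImagePositionPatient[2]))
```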

Figure 6.1 shows the powerful open files dialogue, which supports real-time regular expression searches on filenames, the addition and removal of custom file filters and automatic searching of a directory for recognizable files, and which provides selection feedback such as the number of files currently selected.

When opening a dataset, it is not necessary for it to contain all of the critical volume information, such as voxel size or slice spacing. If these properties cannot be found automatically within the dataset itself, BioBuild will prompt the user to enter the missing information, as shown in Figure 6.2. This option needs to be used with extreme caution, however, as entering incorrect values will result in inaccurate biomodels. Such cases require confirmation of the scanning parameters from the original scan source.
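Where pixel spacing is missing, it can often be recovered from the reconstruction matrix size and the display field of view reported on the scan, as in the small calculation below; the values are illustrative only and must be confirmed against the original scan source.

```python
# Recover pixel spacing from the reconstruction matrix and display FOV
# (illustrative values; always confirm against the original scan source)
display_fov_mm = 250.0      # reconstruction field of view reported by the scanner
rows = cols = 512           # reconstruction matrix size

pixel_spacing_x = display_fov_mm / cols   # approx. 0.488 mm per pixel
pixel_spacing_y = display_fov_mm / rows
```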

Figure 6.1 Open files dialogue features advanced search and selection capabilities

Figure 6.2 Missing image information dialogue, prompting the user to complete fields such as pixel spacing (mm) and display FOV (mm); pixel spacing can be generated automatically from the rows, columns and display FOV
