Open Access
1 November 2007

Transmitted light brightfield mosaic microscopy for three-dimensional tracing of single neuron morphology
Marcel Oberlaender, Randy M. Bruno, Bert Sakmann, Philip Julian Broser
Abstract
A fundamental challenge in neuroscience is the determination of the three-dimensional (3D) morphology of neurons in the cortex. Here we describe a semiautomated method to trace single biocytin-filled neurons using a transmitted light brightfield microscope. The method includes 3D tracing of dendritic trees and axonal arbors from image stacks of serial 100-μm-thick tangential brain sections. Key functionalities include mosaic scanning and optical sectioning, high-resolution image restoration, and fast, parallel computing for neuron tracing. The mosaic technique compensates for the limited field of view at high magnification, allowing the acquisition of high-resolution image stacks on a scale of millimeters. The image restoration by deconvolution is based on experimentally verified assumptions about the optical system. Restoration yields a significant improvement of signal-to-noise ratio and resolution of neuronal structures in the image stack. Application of local threshold and thinning filters results in a 3D graph representation of dendrites and axons in a section. The reconstructed branches are then manually edited and aligned. Branches from adjacent sections are spliced, resulting in a complete 3D reconstruction of a neuron. A comparison with 3D reconstructions from manually traced neurons shows that the semiautomated system is a fast and reliable alternative to the manual tracing systems currently available.

1.

Introduction

The accurate tracing of single neurons is one prerequisite for the determination of anatomical features of different neuronal cell types, which are required for biophysical modeling of single cells and of signal processing in small circuits. Several automated reconstruction approaches have been reported previously. These reconstructions, made with two-photon,1 confocal,2–10 or brightfield10,11 images, focus mainly on the dendritic tree and the extraction of geometrical features such as volumes or surface areas of dendrites or spines. Hence, these approaches completely lack a reconstruction of the axonal arbor. The reason is that the axonal branching pattern is more complex and that axons spread over a much larger volume (cubic millimeters) than dendrites (a few hundred cubic micrometers). Furthermore, axonal staining is fainter than that of dendrites because of the smaller diameters of axons and their greater distance from the soma, where the tracer is loaded into the cell. Thus, no successful automated tracing of these widely spreading neuronal projections has yet been reported.

Nevertheless, problems such as the determination of individual cells’ projection patterns,12 of connections between different areas of the brain, of the number of synaptic contacts between cells, and of differences in axonal growth under various developmental conditions12–15 illustrate the need for detailed three-dimensional (3D) reconstructions of axonal arbors. The method presented here focuses on the accurate tracing of all neuronal projections. The dendritic tree and the extensively spreading axonal arbor are traced and reconstructed simultaneously.

Typically, the cells are filled in vivo with a tracer such as biocytin.16 The brain is then perfused with fixative and cut into sections of about 100-μm thickness. The current approach for tracing these neurons and their axons in three dimensions is a manual one, based on the camera lucida technique (e.g., the Neurolucida system,17 FilamentTracer18). Here, neuronal structures in each section are traced manually. A human user interacts with a microscope that is enhanced with computer imaging hardware and software.11, 13 The user recognizes patterns and marks neuronal structures on a live camera image displayed on a computer screen. By moving the stage in the x and y directions and focusing through the brain slice in the z direction, a progressively larger volume is inspected. The 3D tracings of neuronal branches from this volume are collected by the computer system interfaced to the camera and result in a 3D graph representation of neurons.

Manual tracings of dendritic trees are very reliable. The reliability is due to the localized branching pattern and the relatively large diameters (2 to 5 μm) of dendrites. However, the axonal arbor frequently extends much further away from the soma. For most cell types, the average volume that has to be inspected by the user is around 1 mm × 1 mm × 100 μm per brain section. For accurate tracing, this volume should be inspected in a raster scan order, moving from one field of view to the next and progressively focusing through the specimen.

Using a 40× objective and a standard charge-coupled device (CCD) camera [e.g., QICAM (QImaging, Surrey, British Columbia, Canada)], the number of fields of view is approximately 5500, with an average sampling along the optical axis of 1 μm. Because axons can have diameters less than 1 μm, a 100× objective is usually used for manual tracing. In this case, approximately 37,000 fields of view have to be inspected. Taking a typical cell (e.g., a layer 2/3 pyramidal neuron of rat cortex) that spreads over 10 to 20 brain sections, the number of fields of view that have to be inspected is of the order of 10^5. The manual reconstruction of axonal branching patterns is hence tedious and time-consuming. Therefore, correct manual tracing of axons requires experienced users to reach a reliable level of reconstruction quality.
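The bookkeeping behind these counts is a simple tiling calculation, sketched below. The field-of-view dimensions used in the example are illustrative values derived from an assumed sensor format and pixel sampling, not figures taken from the text:

```python
import math

def fields_of_view(scan_w_um, scan_h_um, depth_um,
                   fov_w_um, fov_h_um, z_step_um):
    """Number of camera positions needed to raster-scan a tissue volume:
    lateral tiles times the number of focal planes along the optical axis."""
    tiles_x = math.ceil(scan_w_um / fov_w_um)
    tiles_y = math.ceil(scan_h_um / fov_h_um)
    planes = math.ceil(depth_um / z_step_um)
    return tiles_x * tiles_y * planes

# Hypothetical 40x field of view of ~161 x 121 um (assuming a 1392 x 1040
# pixel sensor at 116 nm per pixel), 1-um z sampling, 1 x 1 mm x 100 um:
n = fields_of_view(1000, 1000, 100, 161, 121, 1.0)
```

With these assumed numbers the result is of the same order as the counts quoted above; halving the field of view at 100× magnification roughly quadruples it.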

We present a semiautomatic reconstruction pipeline (Fig. 1 ) that reliably traces both dendrites and axonal arbors by extracting their skeleton (approximate midlines). In addition, the method presented here needs significantly less time than manual tracing. The automated tracing is carried out on large image stacks acquired by mosaic scanning19 and optical sectioning20 using a transmitted light brightfield (TLB) microscope. The manual inspection of thousands of fields of view is replaced by the acquisition of a stack of mosaic images. A rectangular pattern of overlapping mosaic tiles (adjacent fields of view) is scanned, covering an area of a brain slice sufficient for the tracing of axonal arbors, usually 1 × 1 mm² [Fig. 2a ]. The 3D information is obtained via optical sectioning, i.e., the recording of such mosaic planes at multiple focal positions that are ideally separated by 0.5 μm.

Fig. 1

Semiautomated neuron tracing. (a) Idealized TLB microscope equipped with a motorized x-y-z stage (1). A 546 ± 5 nm bandpass illumination filter (2) is attached to the diaphragm of the lamp housing and provides quasi-monochromatic (green) illumination of the specimen. The green illuminating light is transmitted via a high-NA (1.4) oil immersion condenser (3). The specimen is imaged by a high-NA oil immersion objective [100× (1.4 NA) combined with a 0.5× TV mount or 40× (1.3 NA) combined with a nonmagnifying TV mount] (4). The images are recorded by a CCD camera (5). (b) Schematic drawing of the reconstruction pipeline. The key step comprises a high-resolution image restoration (deconvolution). The improved image quality is sufficient for automatic neuron tracing. Reconstructions from adjacent brain slices are edited and spliced manually. (c) Schematic drawing of the automatic image processing, comprising image restoration and neuron tracing. Image restoration is based on treating the TLB image like a fluorescence image. Inversion of the gray values is followed by a subdivision of the mosaic image into bricks that can be processed by the HUYGENS software. This software package executes a linear deconvolution. After a maximum down sampling of the bricks, the neuron tracing is executed. It comprises various raster image–based segmentation algorithms that extract candidates for neuronal structures from the background. These candidates are converted into a vector image representation. Various algorithms are applied to extract a smooth midline (skeleton) representation of the foreground objects. The resultant graph is written into a data file for further manual editing and splicing steps. (High-resolution image available online only: 10.1117/1.2815693.1.)


Fig. 2

Mosaic scanning and optical sectioning. (a) Overview of a biocytin-filled neuron (pyramidal neuron located in layers 2 and 3 of rat cortex) from a brain slice, taken with a 100× objective and a 0.5× TV mount. The black box indicates the scanning area (1.5 × 1.5 mm²). A 3D image stack is acquired by the mosaic scanning and optical sectioning techniques. A rectangular pattern of adjacent fields of view is scanned. (b) The pattern of overlapping tiles (adjacent fields of view) shown in (a) is stitched to a large 2D mosaic image. Mosaics are recorded for all focal planes, separated by 0.5 μm, resulting in a 3D mosaic image with a size of 1.5 mm × 1.5 mm × 100 μm. (c) Enlargement of the box shown in (b). A faint axonal branch is seen about 1 mm away from the soma. This illustrates the requirement to scan large volumes at high resolution. (d) Result of the semiautomated reconstruction pipeline. All neuronal projections, the localized dendritic tree and the widely spreading axons, within the original mosaic image stack in (b) are traced and reconstructed. The soma (solid circular structure in the center) was manually edited for illustration. (High-resolution image available online only: 10.1117/1.2815693.2.)


This large 3D image is then deconvolved21 based on measured optical properties of the microscope, such as the absence of aberrations within the optical pathway. This guarantees an improvement of signal-to-noise ratio and resolution, in particular along the z direction. A local threshold function that checks for connectivity extracts neuronal structures from the background. The extracted foreground objects are transformed into thinned approximate midlines (the skeleton) and yield a graph representation of the dendrites and the widely spreading axonal arbor. The automatically reconstructed branches from serially sectioned brain slices are then manually edited and spliced using NEUROLUCIDA software for editing.

2.

Methods

2.1.

Sample Preparation

All cells were filled in Wistar rats (P28 to P31; Charles River Laboratory). Experiments were carried out in accordance with the animal welfare guidelines of the Max Planck Society.

Briefly, animals were anesthetized with urethane or isoflurane. Body temperature was maintained at 37°C by a servo-controlled heating blanket. A metal post for positioning the head was attached to the skull overlying the cerebellum by dental acrylic. A craniotomy was made overlying the left barrel cortex (each <0.5 mm²) or the ventral posteromedial nucleus of the thalamus (<2.0 mm²). Thalamic craniotomies were centered at 3.5 mm posterior to bregma and 3.0 mm lateral of the midline. The dura was removed for some of the experiments. The craniotomy was kept covered with artificial cerebral spinal fluid [aCSF; in millimoles: 135 NaCl, 5.4 KCl, 1.8 CaCl2, 1.0 MgCl2, and 50 N-(2-hydroxyethyl)piperazine-N′-2-ethanesulfonic acid (HEPES); pH 7.2].

Cells were filled with biocytin either extracellularly using juxtasomal recording and electroporation22 or via whole-cell recording.23 In both cases, we used patch pipettes pulled from unfilamented borosilicate glass on a Sutter P-97 puller (Sutter Instruments). The outside diameter of the shank entering the brain varied from 25 to 75 μm, and the tip opening had an inside diameter less than 1 μm.

Whole-cell recording pipettes were tip-filled with (in millimoles) 135 K-gluconate, 10 HEPES, 10 phosphocreatine-Na, 4 KCl, 4 adenosine triphosphate–Mg, 0.3 guanosine triphosphate, and 0.2% biocytin (pH 7.2, osmolarity 291). Pipettes were inserted perpendicular to the pia under high pressure (200 to 300 mbar), and cells were searched for blindly under low pressure (20 to 30 mbar).23 Recordings were made in bridge mode for 30 to 60 min, during which biocytin passively diffused into the cells. Seal resistance was >1 GΩ, access resistance was 1 to 100 MΩ, and spike height and overall Vm were stable throughout the recording. No holding current was used.

Juxtasomal fillings were made with pipettes filled with aCSF containing 1% biocytin. No pressure was applied to the pipette at any time. Once a cell was isolated extracellularly, current pulses (1 to 5 nA, 200 ms on, 200 ms off) were applied continuously for several minutes.22

At least 1 h was allowed to pass after cell filling and prior to tissue fixation to ensure sufficient diffusion of the biocytin throughout the axonal arbor. The animal was deeply anesthetized and perfused transcardially with phosphate buffer followed by 4% paraformaldehyde. Cortex and thalamus were cut tangentially and coronally, respectively, into 100-μm vibratome sections. Biocytin in these sections was stained with the chromogen 3,3′-diaminobenzidine tetrahydrochloride using the avidin-biotin-peroxidase method.16 Sections were sometimes counterstained for cytochrome oxidase.24 Processed sections were then mounted on slides and coverslipped with Mowiol (Hoechst, Austria). For comparison with our reconstruction approach, four well-filled cells from four different rats were manually reconstructed using a Neurolucida system with a 100× oil immersion objective [Olympus 100× Plan; 1.25 numerical aperture (NA)].

2.2.

Image Acquisition

A standard TLB microscope (Olympus BX-51, Olympus, Japan) equipped with a motorized x-y-z stage (Maerzhaeuser Wetzlar, Germany) was used for image acquisition [1 in Fig. 1a]. A 546 ± 5 nm bandpass illumination filter (CHROMA AF-analysentechnik, Germany), attached to the diaphragm of the lamp housing, provides quasi-monochromatic illumination of the specimen [2 in Fig. 1a]. This bandpass filter minimizes the chromatic aberrations of the imaging system and simplifies the theoretical description of the optical pathway from a polychromatic to a monochromatic one.

The light is transmitted by a high-NA (1.4 NA) oil immersion condenser (Olympus, Japan) [3 in Fig. 1a], ensuring parallel illumination of the specimen under Köhler conditions.25 The specimen is imaged by a 100× high-NA oil immersion objective (Olympus 100× UPLSAPO; 1.4 NA) in combination with a 0.5× television (TV) mount (Olympus U-TV0.5XC-3) or by a 40× objective (Olympus 40× UPLFLN; 1.3 NA) in combination with a nonmagnifying TV mount [4 in Fig. 1a]. The immersion oil has a refractive index of n_oil = 1.516, similar to glass.

The stage is navigated in three spatial directions by OASIS-4i-controller hardware and software (Objective Imaging Ltd., Cambridge, United Kingdom). It allows the acquisition of large mosaic images19 at different focal planes. Mosaic in this context refers to a two-dimensional (2D) image of overlapping tiles (i.e., adjacent fields of view) that are aligned automatically and then stitched during the image acquisition, resulting in a large composite image [Fig. 2a]. The user-defined scan area is automatically divided into a series of overlapping fields of view that we call tiles. For each tile location, a stack of images is acquired using optical sectioning20 with a typical separation of 0.5μm between the focal planes. For each focal plane, the corresponding tiles are then stitched together, resulting in a 3D stack of 2D mosaic images for each focal position [Fig. 2b]. This process is executed by the Surveyor Software (Objective Imaging Ltd.).
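The stitching step can be sketched in a few lines, assuming ideal stage positioning so that overlapping strips are simply overwritten; the Surveyor Software additionally refines the alignment of tiles, which is omitted here:

```python
import numpy as np

def stitch_plane(tiles, grid, tile_h, tile_w, overlap):
    """Paste a row-major list of 2-D tiles into one mosaic plane.
    Overlapping strips are overwritten by the later tile; a real stitcher
    would cross-correlate the overlap region to refine tile alignment."""
    rows, cols = grid
    step_y, step_x = tile_h - overlap, tile_w - overlap
    mosaic = np.zeros((step_y * (rows - 1) + tile_h,
                       step_x * (cols - 1) + tile_w), dtype=np.uint8)
    for i, tile in enumerate(tiles):
        r, c = divmod(i, cols)           # grid position of this tile
        y, x = r * step_y, c * step_x
        mosaic[y:y + tile_h, x:x + tile_w] = tile
    return mosaic
```

Repeating this for every focal plane yields the 3D stack of 2D mosaic images described above.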

The images are recorded by a QICAM Fast 1394 camera (QImaging) [5 in Fig. 1a] equipped with a CCD chip, which in combination with the 100× objective and the 0.5× TV mount yields an x-y sampling of 92 nm per pixel (116 nm per pixel if the 40× objective and a nonmagnifying TV mount are used). Because the illumination is limited to one wavelength, eight-bit gray value images, rather than red-green-blue color images, are acquired. Other cameras, such as the Retiga 2000R Fast 1394 Mono Cooled (QImaging), were tested as well and showed no significant improvement in resolution or signal-to-noise ratio.

To guarantee a similar dynamic range of the gray values for mosaic image stacks from different brain slices and animals, the exposure time of the CCD camera is set semiautomatically by the Surveyor Software. To this end, the mean gray value in a typical field of view within the scan area is calculated and set to 190. Typical here refers to a field of view without any stained neuron somata or blood vessels. This highly important feature of the image acquisition is the basis for an optimal deconvolution and, therefore, a robust neuron tracing. However, significant exposure gradients within the scan area are a considerable constraint on our method. The consequences and limitations for neuron tracing will be discussed later.

In summary, image acquisition results in a high-resolution 3D mosaic image. The typical image stack for neuron tracing that we use is 1 mm × 1 mm × 100 μm. However, the mosaic pattern can be adjusted to smaller or larger areas. Using the 100× objective in combination with the 0.5× TV mount or the 40× objective with a nonmagnifying TV mount, the sampling is 92 × 92 × 500 nm³ or 116 × 116 × 500 nm³ per voxel, respectively. Hence, the data volume for such a stack is approximately 15 GB.
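The raw data volume follows directly from the voxel counts (eight-bit gray values, one byte per voxel). A back-of-envelope sketch, which with the 100× sampling lands in the 10 to 30 GB per-section range stated in the next section (the exact figure depends on how tile overlap is handled):

```python
def stack_gigabytes(width_um, height_um, depth_um, xy_nm, z_nm,
                    bytes_per_voxel=1):
    """Raw size of a 3-D mosaic stack, computed from its sampling rates."""
    nx = width_um * 1000.0 / xy_nm   # voxels along x
    ny = height_um * 1000.0 / xy_nm  # voxels along y
    nz = depth_um * 1000.0 / z_nm    # number of focal planes
    return nx * ny * nz * bytes_per_voxel / 1e9

gb = stack_gigabytes(1000, 1000, 100, 92, 500)  # on the order of tens of GB
```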

2.3.

Hardware and Software Requirements for Automated Image Processing

In general, the data size is between 10 and 30 GB per section. Therefore, a fast image processing pipeline based on multiprocessor computing that can handle such large data sets was developed. The algorithms are written in C++.26 The raster image file input/output, iteration through a raster image, and the dilation as well as the closing filter use the ITK Image Processing Library.27 The algorithms were parallelized by applying the OPENMP standard.28 They are executed on AMD dual-core 64-bit Opteron servers, equipped with either 4 central processing units (CPUs) and 32 GB memory (DELTA Computer Products GmbH, Reinbek, Germany) or 8 CPUs and 64 GB memory (fms-computer.com, Netphen, Germany).

Once the mosaic image stack has been saved to disk, a software daemon (a custom-written PERL script) detects new data on the hard drive and starts the processing pipeline. Additional image stacks are added to a queue and processed sequentially. The script initiates the subsequent image processing steps: inversion of the gray values, subdivision into bricks, deconvolution, down sampling, and finally the 3D neuron tracing [Figs. 1b, 1c, 5]. Because the queue of scanned image stacks is processed sequentially, the automatic tracing routine can run 24 h a day. The next step needing user interaction is the manual postprocessing of the final 3D reconstructions.
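The daemon logic amounts to a polling loop over a watched directory. The sketch below is a hypothetical Python stand-in for the authors' PERL script; `process` would launch the inversion/deconvolution/tracing chain, and `max_polls` exists only so the loop can terminate:

```python
import os
import time
from collections import deque

def watch_and_process(watch_dir, process, poll_s=60, max_polls=None):
    """Poll `watch_dir` for newly written image stacks, queue them, and
    process the queue sequentially (one stack at a time)."""
    seen, queue, polls = set(), deque(), 0
    while max_polls is None or polls < max_polls:
        for name in sorted(os.listdir(watch_dir)):
            if name not in seen:            # a new stack appeared on disk
                seen.add(name)
                queue.append(name)
        while queue:                        # sequential: bounds memory use
            process(os.path.join(watch_dir, queue.popleft()))
        polls += 1
        if max_polls is None or polls < max_polls:
            time.sleep(poll_s)
```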

2.4.

Image Restoration

Diffraction as well as interference phenomena decrease the image quality and resolution of image stacks acquired through a TLB microscope. The resolution along the optical axis ( z axis) is usually insufficient for an automatic tracing of faint axonal branches surrounded by more prominent neuronal structures.11 Image restoration (deconvolution) can partly compensate for this limited resolution. Deconvolution is the most important step for the performance of the automatic tracing procedure.29

The image formation is described by a convolution of the 3D intensity distribution within the object with a point spread function (PSF).30 The PSF (Fig. 3 , see Color Plate 1) describes the optical properties of the imaging system. The convolution results in the 3D intensity distribution within the image. The intensity distribution within the object is then restored by deconvolving the recorded image stack using the PSF. Deconvolution is essentially an inversion of the image formation process. Thus deconvolution yields the best results provided the PSF can be determined exactly.
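In discrete form, the forward model is a convolution of the object's intensity distribution with the PSF. A minimal FFT sketch (periodic boundaries assumed; the PSF array is centered in its volume, which is not how HUYGENS is implemented internally):

```python
import numpy as np

def convolve_psf(obj, psf):
    """Image formation: image = object convolved with the PSF.
    `psf` must have the same shape as `obj`, centered at the array middle."""
    H = np.fft.fftn(np.fft.ifftshift(psf))  # move the PSF center to the origin
    return np.real(np.fft.ifftn(np.fft.fftn(obj) * H))
```

Because convolution with a normalized PSF redistributes but conserves intensity, restoring the object amounts to inverting this operation, which is what the deconvolution step approximates.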

Fig. 3

Illustration of PSF modeling. (a) The x-y view of the cone of light from a dendrite. Its intensity distribution is approximately symmetric with respect to the focal plane and the optical axis. (b) Modeled PSF obtained by treating the TLB microscope as if it were a fluorescence microscope free of primary aberrations. These assumptions have been verified (Ref. 21). (High-resolution image available online only: 10.1117/1.2815693.3.)


2.4.1.

Inversion of gray values

The TLB microscope is treated like a fluorescence microscope that lacks primary aberrations. Image formation can then be simplified and modeled as a monochromatic point source that is imaged by the circular entrance pupil of the objective. This can be calculated analytically for low NA objectives25 and numerically for high NA ones.31 Justification for these simplifications, as well as verification, can be found in Ref. 21. Simplifications are essentially based on considerations of propagation of mutual coherence32 and measurements of spherical aberrations, using a Shack-Hartmann wavefront sensor.33 Treating the TLB images like fluorescence data, the gray values have to be inverted. This inverted image stack will be referred to as the “original image” stack.
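The inversion itself is a one-line operation on the eight-bit data: absorbing (dark) stained structures become bright local maxima, matching the fluorescence assumption. A trivial sketch:

```python
import numpy as np

def invert_gray(stack):
    """Invert 8-bit TLB gray values so stained structures appear bright."""
    return (255 - stack.astype(np.int32)).astype(np.uint8)
```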

2.4.2.

Subdivision into bricks and linear deconvolution

The deconvolution is carried out by the HUYGENS software.34 With the assumptions described above, the PSF is calculated in a wide-field fluorescence mode (Fig. 3), as suggested by the HUYGENS software. An important parameter is the refractive index (n) of the specimen. Aberration measurements21 yield a uniform value of n_specimen = 1.44 for the specimen, a 100-μm-thick brain slice embedded in Mowiol (n_Mowiol = 1.49). Because the simplifications of the image formation process are justified, the imaging system is well described by the modeled PSF (Fig. 3).

This PSF is applied to the recorded image stack by a linear Tikhonov-Miller restoration filter.35 It is derived from a least-squares approach, based on minimizing the squared difference between the acquired image and a blurred estimate of the original object. The resultant image stack is improved significantly in terms of resolution and signal-to-noise ratio (Fig. 4 ). This deconvolved image stack is then of sufficient quality for an automatic tracing of neuronal structures (Figs. 5, 6, 7 ).
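A minimal Fourier-domain sketch of such a linear regularized inverse (not the HUYGENS implementation): minimizing ||g - h*f||² + λ||f||² gives F = conj(H)·G / (|H|² + λ), where λ trades noise amplification against sharpness:

```python
import numpy as np

def tikhonov_deconvolve(image, psf, lam=1e-3):
    """Linear Tikhonov-regularized restoration in the Fourier domain.
    `psf` has the same shape as `image`, centered at the array middle."""
    H = np.fft.fftn(np.fft.ifftshift(psf))
    G = np.fft.fftn(image)
    F = np.conj(H) * G / (np.abs(H) ** 2 + lam)   # regularized inverse filter
    return np.real(np.fft.ifftn(F))
```

Frequencies where |H|² dominates λ are restored almost exactly, while frequencies the PSF has suppressed below the noise floor are damped rather than amplified.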

Fig. 4

Illustration of image restoration (deconvolution). (a) The x-z view of various cones of light from dendrites and axons from one brain slice of 100-μm thickness. The intensity is measured along the z axis through a dendrite (white vertical line). (b) The x-z view of the image stack after the application of a linear deconvolution filter. The longitudinal intensity is measured along the same line. (c) Intensity line plots before and after deconvolution. The signal-to-noise ratio is improved significantly, and the dendrite’s apparent depth is decreased by a factor of more than 2.5. (One voxel is 0.5 μm wide.) (See Color Plate 1.) (High-resolution image available online only: 10.1117/1.2815693.4.)


Fig. 6

Image processing. (a) Maximum intensity z projection of the inverted image stack cropped from the mosaic image shown in Fig. 2b (box). Clearly visible is a fragmented axon with intensely filled boutons (swellings with likely synaptic contact sites) and a weak stain between them. (b) Maximum intensity z projection of the deconvolved, inverted image stack. The image quality is improved in terms of signal-to-noise ratio and resolution, especially along the z direction. (c) Maximum z projection after the intermediate voxels (gray) have been tested with the first part of the local threshold property function. (d) Maximum z projection after the intermediate voxels (gray) have been tested with the second part of the local threshold property function. (e) Maximum z projection after the hit or miss transformation. Rectangular frame masks of increasing size (1-to-3-voxel radius) delete small and isolated artifacts. (f) Maximum z projection after dilation and closing. A spherical structuring element tends to smooth the foreground objects and fills small gaps in the contour. (g) Maximum z projection illustrating the end-line locations (white) that are local distance maxima from an inner seed compartment. These end-line locations will not be erased and, therefore, guarantee connectivity of objects during thinning. (h) Maximum z projection illustrating the result of iterative layer-by-layer thinning. The end-line locations (white) remain unchanged. (i) Maximum z projection illustrating the validation of the thinned graph. The shortest distance from one end-line location to all others will be calculated (gray value coded). (j) Maximum z projection illustrating the validation of the thinned graph. Compartments and intercompartmental connections that are not part of a shortest connection (gray) will be erased. (k) Maximum z projection illustrating the pruning of the thinned and validated graph.
Because end-line locations were local distance maxima, morphological swellings (boutons, spines) result in short branches that will be pruned if shorter than 3 μm. (l) Maximum z projection after midline extraction. The previously fragmented axon is transformed into a thin, less fragmented line representation. The scale bar in the lower left corner applies to panels (a) to (l) and (a1) to (l1). Panels (a1) to (l1) show maximum x-y projections of the same image stack. Panels (a2) to (l2) show enlargements to illustrate the fragmented, boutonlike axon filling. The scale bar in the lower right corner applies to panels (a2) to (l2). (High-resolution image available online only: 10.1117/1.2815693.6.)


Fig. 7

Image processing. Illustration of the image processing pipeline for axonal branches. For detailed figure legends, see Fig. 6. (High-resolution image available online only: 10.1117/1.2815693.7.)


Using the Tikhonov-Miller filter, the HUYGENS software is not capable of processing images larger than 2 GB at once. Therefore, the mosaic image stack is divided into smaller stacks of appropriate size (e.g., 3100 × 3100 × 120 voxels), overlapping by 100 voxels, that can be processed by HUYGENS. These substacks, hereafter called bricks, are regrouped during the neuron tracing, resulting in a deconvolved and segmented mosaic image stack.
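The subdivision along each axis can be sketched as follows (brick size and overlap as stated above; the last brick is clipped at the volume boundary):

```python
def brick_ranges(length, brick, overlap):
    """Start/stop index pairs along one axis for bricks of size `brick`
    that overlap by `overlap` voxels; the last brick is clipped."""
    step = brick - overlap
    starts = range(0, max(length - overlap, 1), step)
    return [(s, min(s + brick, length)) for s in starts]
```

Applying this per axis and taking the Cartesian product of the ranges yields the 3-D bricks; the overlap lets regrouped bricks be cropped consistently so no neuronal structure is lost at brick borders.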

2.4.3.

Maximum down sampling

After deconvolution, the data size is reduced to facilitate further computationally intensive steps. Maximum down sampling is applied for each deconvolved image brick in the x-y plane. Two neighboring voxels are merged to one voxel, computing their maximum gray value. Because this procedure is done in both lateral directions ( x and y ), the data set is reduced in size by a factor of 4. Because neuronal structures are local intensity maxima, all branches will be conserved.

As a consequence, the new lateral sampling is 184 × 184 nm² per voxel if the 100× objective in combination with the 0.5× TV mount is used, or 216 × 216 nm² per voxel if the 40× objective in combination with a nonmagnifying TV mount is used. Therefore, the sampling in both cases is still below the physical resolution limit (approximately half the wavelength, 232 nm) according to the Rayleigh criterion.25
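The lateral 2 × 2 maximum pooling can be sketched as below (odd edge rows and columns are simply dropped for brevity):

```python
import numpy as np

def max_downsample_xy(plane):
    """Merge each 2x2 lateral neighborhood to its maximum gray value, so
    thin bright structures (local maxima) survive the 4x size reduction."""
    h, w = plane.shape
    p = plane[:h - h % 2, :w - w % 2]           # drop odd edge rows/cols
    return p.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))
```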

2.5.

Neuron Tracing

2.5.1.

Raster image–based segmentation

A major feature of image acquisition is the semiautomated setting of the exposure time of the CCD camera to control the image histogram as described previously. This feature is critical to the downstream neuron tracing algorithms. Their parameters were adjusted and systematically tested to derive the best possible tracings from such images. This issue of neuron tracing will be discussed later.

The down-sampled 3D image bricks are now subjected individually to segmentation algorithms (Figs. 5, 6, 7). The purpose is to separate voxels that represent neuronal structures from voxels that are part of the background.

Local threshold filtering

A target image with the dimensions of a deconvolved image brick is created and initialized with gray value zero. The voxels of the deconvolved bricks are then separated into three groups.

One group, with values below a lower threshold, is set to background (gray value 0 in the deconvolved brick). A second group of voxels belonging to potential neuronal structures, with values above an upper threshold, is assigned to the foreground (gray value 255 in the target image). The foreground consists of disjointed voxel regions that will be referred to as foreground objects. Voxels with values between the two thresholds form the third group of intermediate-value voxels. The intermediate voxels are tested for local features defined by a local property function to decide whether they belong to foreground objects or to the background.

The lower threshold is determined by calculating the intensity distribution of the deconvolved image brick. The deconvolution produces a thin unimodal histogram with the background clustered near 0, structures in the high range, and some remaining intermediate gray values. The lower threshold is taken as the histogram’s mean value plus 1.5 standard deviations (SDs). This gray value is usually between 1 and 15. It was derived after systematic testing and essentially deletes the background noise from unstained tissue.

The upper threshold assigns the prominent and most intense structures to the foreground. The value of the upper threshold is determined by calculating the histogram of the maximum z projection of the deconvolved image brick. The upper threshold is then taken as the mean value plus 3.0 SDs. Systematic testing yielded that this is the best value to detect dendrites, well-filled axons, and the prominent parts of fragmented filled axons [Figs. 5c, 6c, 7c]. This upper threshold usually has a gray value between 30 and 60. The remaining voxels between the background (1 to 15) and the foreground (30 to 60) margins are referred to as the intermediate voxels.

The local property function comprises two steps. First, each intermediate voxel is set as the center of an 11 × 11-voxel mask in the x-y plane, and the mean intensity of the voxels within this 2D neighborhood, regardless of group (background voxels were already set to 0), is calculated. If the centered intermediate voxel has a gray value that is larger than this mean intensity plus an epsilon value ϵ of 5 gray values, it is set to an intermediate gray level of 125 [Figs. 5c, 6c, 7c] in the target image. Otherwise, it is set to background (gray value 0 in the target image). The optimal value of ϵ was derived by systematic testing. The group of intermediate voxels usually consists of isolated artifacts or dim bridges between bright structures. These dim bridges are often found between axonal boutons (swellings of the axon that are likely sites of synaptic contact) [Figs. 6(a2) to 6(e2)] and should therefore be part of the foreground. The resultant target image comprises three gray values: 0, 125, and 255 for the background, intermediate, and foreground voxels, respectively.

The second part of the local property function inspects the intermediate voxels of the target image for connectivity to a foreground object. Systematic testing suggested that an intermediate voxel is set to the foreground if 10% of the voxels from a 17 × 17 mask in the x-y plane centered on this intermediate voxel are part of a foreground object; otherwise, it is set to background [Figs. 5d, 6d, 7d].
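The two-stage decision can be sketched per plane as follows. The loops are kept naive for clarity, `lower` and `upper` would come from the histogram statistics described above, and the in-place second pass is a simplification of testing connectivity to foreground objects:

```python
import numpy as np

def local_threshold_plane(plane, lower, upper, eps=5, win=11, win2=17, frac=0.10):
    """Two-stage local property function on one image plane (a sketch;
    the pipeline applies this brick-wise in 3-D)."""
    work = plane.astype(float)
    work[work <= lower] = 0.0                    # background contributes zeros
    target = np.where(plane >= upper, 255, 0).astype(np.uint8)
    inter = (plane > lower) & (plane < upper)    # candidate voxels

    r = win // 2
    for y, x in zip(*np.nonzero(inter)):
        patch = work[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
        # Part 1: keep a candidate only if it rises above the local mean + eps.
        if plane[y, x] > patch.mean() + eps:
            target[y, x] = 125

    r2 = win2 // 2
    for y, x in zip(*np.nonzero(target == 125)):
        patch = target[max(0, y - r2):y + r2 + 1, max(0, x - r2):x + r2 + 1]
        # Part 2: promote to foreground if >= frac of the window is foreground.
        target[y, x] = 255 if (patch == 255).mean() >= frac else 0
    return target
```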

Hit or miss transformation

The application of the above local threshold filter results in a binary image. This is then subjected to a hit or miss transformation36 with rectangular frame masks of increasing size as structuring elements. The transformation is applied to every image plane. Isolated foreground objects that are completely surrounded by a frame are converted to background. Beginning with a radius of one voxel and increasing the frame size subsequently to three voxels, small, isolated artifacts that were introduced by the linear deconvolution are removed [Figs. 5e, 6e, 7e].
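Per image plane, the frame test can be sketched as below (a naive pixel-wise version: an empty frame of radius r around a foreground pixel means the enclosed patch is an isolated artifact):

```python
import numpy as np

def remove_isolated(binary, max_radius=3):
    """Hit-or-miss-style cleanup: clear foreground patches that are fully
    enclosed by an empty square frame of radius 1..max_radius."""
    out = binary.copy()
    h, w = out.shape
    for r in range(1, max_radius + 1):
        for y in range(r, h - r):
            for x in range(r, w - r):
                if not out[y, x]:
                    continue
                frame = out[y - r:y + r + 1, x - r:x + r + 1].copy()
                frame[1:-1, 1:-1] = False        # keep only the border ring
                if not frame.any():              # hit: the frame is empty ...
                    out[y - r + 1:y + r, x - r + 1:x + r] = False  # ... erase
    return out
```

Connected structures such as axon fragments always place foreground inside some frame, so only small isolated deconvolution artifacts are removed.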

Dilation and closing

Next, dilation and closing filters36 are applied. The dilation filter bridges gaps between the axonal boutons that have not been closed by the local threshold filter. Finally, a closing filter is applied [Figs. 5f, 6f, 7f]. Its geometrical interpretation is that a “sphere” rolls along the outside boundary of a foreground object within the image. Therefore, it tends to smooth sections of contours and fuses narrow breaks as well as long thin gulfs, eliminates small holes, and fills small gaps in the contour.36 The 3D structuring element (sphere) for the closing and dilation has a radius of three voxels. Larger radii could result in a fusion of objects that should not be connected, and smaller radii would have no significant effect on the foreground objects. The radius of three, therefore, proved to be the best compromise after systematic testing.
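The dilation and closing with a spherical structuring element of radius three voxels can be sketched as follows; this is a plain, unoptimized illustration (the function names are our own), not the implementation used in the pipeline:

```python
import numpy as np

def ball_offsets(r):
    """Offsets of a spherical structuring element of radius r voxels."""
    return [(dz, dy, dx)
            for dz in range(-r, r + 1)
            for dy in range(-r, r + 1)
            for dx in range(-r, r + 1)
            if dz * dz + dy * dy + dx * dx <= r * r]

def dilate(img, r=3):
    """Binary dilation: stamp the sphere onto every foreground voxel."""
    out = img.copy()
    offs = ball_offsets(r)
    Z, Y, X = img.shape
    for z, y, x in zip(*np.nonzero(img)):
        for dz, dy, dx in offs:
            nz, ny, nx = z + dz, y + dy, x + dx
            if 0 <= nz < Z and 0 <= ny < Y and 0 <= nx < X:
                out[nz, ny, nx] = 1
    return out

def close(img, r=3):
    """Closing = dilation followed by erosion with the same sphere."""
    d = dilate(img, r)
    out = np.zeros_like(d)
    offs = ball_offsets(r)
    Z, Y, X = d.shape
    for z, y, x in zip(*np.nonzero(d)):
        if all(0 <= z + dz < Z and 0 <= y + dy < Y and 0 <= x + dx < X
               and d[z + dz, y + dy, x + dx] for dz, dy, dx in offs):
            out[z, y, x] = 1
    return out
```

The dilation bridges remaining gaps between boutons, and the closing smooths contours and fills small holes, consistent with the rolling-sphere interpretation above.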

Object labeling

The brickwise segmentation yields binary image stacks. Regrouping of these segmented bricks leads to a binary 3D image stack with the dimensions of a down-sampled mosaic image stack. This large 3D binary stack is subjected to an object labeling algorithm. Once a foreground voxel is detected, a region growing algorithm36 fuses all connected foreground voxels to a subregion and assigns a unique integer label. The binary image is thus transformed to an image of N individually labeled subregions, each representing a foreground object. Labeled objects are sorted according to their number of voxels. The largest foreground object is labeled as number 1, and the smallest object is assigned number N. The background voxels are labeled as 0.

Raster to vector image conversion

Because the neuron occupies only a small fraction of the scanned image volume, the 3D raster image representation requires inappropriately large data storage. A more sophisticated vector data structure is created that stores only foreground objects but keeps the 3D information of the image. Figure 8 illustrates schematically the architecture of the new data structure. The voxels of each foreground object are replaced by vectors, hereafter called compartments. Each compartment stores the 3D coordinates of the prior voxel and could in the future store additional morphological information, such as local properties (e.g., radii, surface distance).

Fig. 8

Structure of the vector image. (a) Doubly linked list of objects. Each object represents an individual island of foreground subregions in the segmented raster image. The background is not stored. (b) Items of the object list are doubly linked lists of compartments. Each compartment represents one voxel of a foreground subregion (object) of the prior image. (c1) A compartment is realized as a vector. The 3D coordinates, as well as additional information, can be stored. (c2) The 3D topology of the image is maintained by intercompartmental smart pointers referencing the neighboring compartments and storing their relative positions (e.g., top left). These connections are used for navigating the vector image. (High-resolution image available online only: 10.1117/1.2815693.8.)


Furthermore, each compartment is linked to its neighboring compartments by intercompartmental smart pointers, which allow access of neighboring compartments (e.g., top-left, bottom, left). Therefore, the implicit neighborhood representation by voxel coordinates of the 3D raster image is mapped to an explicit neighborhood construct in the compartment representation. The neighborhood pointers between the compartments preserve the 3D topology of any object. Furthermore, fast navigation through the object is possible using these intercompartmental smart pointers.

The compartments representing one object are grouped in a doubly linked list, which guarantees fast access to each compartment. Therefore, the 3D raster image of N labeled foreground objects is converted to N compartment lists. This transformation replaces the 3D raster image of labeled foreground objects by a doubly linked list, where each list item represents one of these disjointed foreground objects by a list of topology-preserving compartments (vector image representation).
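The compartment representation can be sketched in Python as follows; the actual implementation uses linked lists and smart pointers, so the class, function, and attribute names here are our own illustration:

```python
class Compartment:
    """One foreground voxel of an object, stored as a vector (sketch)."""
    def __init__(self, z, y, x):
        self.pos = (z, y, x)
        self.neighbors = {}        # offset (dz, dy, dx) -> Compartment

def build_object(voxels):
    """Convert a list of voxel coordinates into linked compartments,
    making the implicit raster neighborhood explicit."""
    comps = {v: Compartment(*v) for v in voxels}
    for (z, y, x), c in comps.items():
        for dz in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    if (dz, dy, dx) == (0, 0, 0):
                        continue
                    n = (z + dz, y + dy, x + dx)
                    if n in comps:
                        # explicit pointer storing the relative position
                        c.neighbors[(dz, dy, dx)] = comps[n]
    return list(comps.values())
```

Navigating an object then reduces to following neighbor pointers, without any lookups in the full raster volume.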

2.5.2.

Vector image–based midline extraction

Representing objects by their main structure or skeleton (approximate midline) is a commonly used technique in image processing. A fast and reliable way to calculate the skeleton of an object is thinning. Generally, thinning is a layer-by-layer (boundary voxel/compartment) erosion until only a unit-width skeleton is left.37, 38, 39 Therefore, three different classes of compartment topologies are defined: 6-adjacent (N6), 18-adjacent (N18), and 26-adjacent (N26).38 Two neighboring compartments are N6 if their Euclidean distance is equal to 1, N18 if their Euclidean distance is between (or equal to) 1 and the square root of 2, and N26 if their Euclidean distance is between (or equal to) 1 and the square root of 3 (Fig. 9, see Color Plate 1).39
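The distance-based classification above can be stated compactly; this small helper (our own naming) simply applies the three Euclidean distance thresholds:

```python
import math

def adjacency_class(a, b):
    """Classify two voxel positions by Euclidean distance (N6/N18/N26);
    returns None if the voxels are not neighbors."""
    d = math.dist(a, b)
    if d == 1.0:
        return "N6"        # boxes share a face
    if d <= math.sqrt(2):
        return "N18"       # boxes share at least an edge
    if d <= math.sqrt(3):
        return "N26"       # boxes share at least a corner point
    return None
```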

Fig. 9

Neighborhood in 3D images, nonsimple and simple compartments. Panels (a) to (c) show a visualization of different neighborhood topologies in 3D images. Compartments are visualized as boxes centered on their position in image space. The central compartment is always visualized in red. All other compartments are presented in cyan. The boxes have an edge length of 1. In panels (a) to (c), the boxes are slightly moved away from the center box for better visualization. (a) The 6-adjacent neighborhood (N6) around a central compartment. Compartments are adjacent if their boxes share one face. (b) The 18-adjacent neighborhood (N18). Compartments are adjacent if their boxes share at least one edge. (c) The 26-adjacent neighborhood (N26). Compartments are adjacent if their boxes share at least one point. Panels (d) to (f) show different possible configurations of compartments adjacent to each other. (d) This panel shows a central compartment that is essential to preserve connectivity. If the red compartment is removed, the two cyan compartments are no longer connected. (e) As in (d), the red compartment is essential to preserve connectivity. Furthermore, this example illustrates the definition of an intersection compartment. (f) This panel shows a simple center compartment. If one removes the red compartment, the three remaining compartments are still connected. This is an example of a template that is used during the thinning algorithm to test whether a compartment is removable. (High-resolution image available online only: 10.1117/1.2815693.9.) Color Plate 1


A thinning algorithm has to obey the following demands, derived from a 2D definition by Seul:40 (1) Connected objects must thin to connected line structures. (2) The thinned lines should be minimally 26-connected. Two compartments are 26-connected if they are connected by a chain of N26-adjacent compartments. (3) Approximate end-line locations should be maintained. (4) The result of thinning should approximate the midlines of the structures. (5) Extraneous short branches introduced by thinning should be minimized.

To address these demands, we use the template matching algorithm described by Jonker.41 It is one of the most efficient thinning algorithms and can be briefly summarized as follows. First, a set of end-line locations is determined to represent the topology of the object. The end-line locations will not be removed. All other compartments, starting from the object boundary, are tested to check whether or not their removal affects the 26-connectivity of the object (Fig. 9). If compartment removal has no effect on the connectivity, this compartment is termed simple.

To determine whether a compartment is simple or not, its N26 neighborhood is compared with all possible N26 neighborhood templates that preserve local connectivity (Fig. 9). If the compartment neighborhood matches one of these templates, the removal of this compartment will not affect the local connectivity, and the global connectivity will be preserved as well.41 Hence, this compartment is assigned to be simple and is removed. Whether or not a compartment is simple may change after deleting compartments. Therefore, the algorithm is iterative and finishes when no more compartments are removed (i.e., the thinning algorithm is idempotent). The implementation of this thinning approach is described in detail below.
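Jonker's algorithm matches the neighborhood against precomputed templates; a functionally similar, though slower, test checks directly whether the foreground voxels of the 26-neighborhood remain 26-connected after removal. The sketch below (our own formulation) illustrates this idea:

```python
from collections import deque

def is_simple(fg, v):
    """Approximate simple-point test: v is 'simple' if its foreground
    26-neighborhood is non-empty and stays 26-connected after removing v.
    fg is a set of (z, y, x) voxel tuples."""
    z, y, x = v
    nbrs = [(z + dz, y + dy, x + dx)
            for dz in (-1, 0, 1) for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dz, dy, dx) != (0, 0, 0) and (z + dz, y + dy, x + dx) in fg]
    if not nbrs:
        return False                       # isolated voxel: keep it
    # BFS restricted to the neighbor set, using 26-adjacency
    seen, q = {nbrs[0]}, deque([nbrs[0]])
    while q:
        c = q.popleft()
        for n in nbrs:
            if n not in seen and max(abs(c[i] - n[i]) for i in range(3)) <= 1:
                seen.add(n)
                q.append(n)
    return len(seen) == len(nbrs)
```

Note that a line ending is "simple" by this test; it is only preserved because the end-line locations are exempted from removal, as described above.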

Detection of end-line locations

The end-line locations are determined by calculating the compartment that is most distant from the object boundary (compartments with fewer than 26 neighbors). This is done by calculating the Euclidean distance map42 of the object. The compartment with the highest distance value is selected and called the seed point. In the case of several compartments sharing the highest distance value, one of them is selected randomly. Then the Euclidean graph distance39 from each compartment in the object to the seed compartment is computed. This is the shortest connection along the graph between two compartments. Compartments with a locally maximal graph distance are assigned as line endings [Figs. 5g, 6g, 7g].
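The detection of end-line locations can be sketched as follows. Two simplifications are assumed here: the seed is passed in directly (in the paper it is found via the Euclidean distance map), and a unit-step BFS distance stands in for the Euclidean graph distance:

```python
from collections import deque

def graph_distance(fg, seed):
    """Unit-step BFS graph distance over 26-adjacency (a simplification
    of the Euclidean graph distance used in the paper)."""
    dist = {seed: 0}
    q = deque([seed])
    while q:
        z, y, x = q.popleft()
        for dz in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    n = (z + dz, y + dy, x + dx)
                    if n in fg and n not in dist:
                        dist[n] = dist[(z, y, x)] + 1
                        q.append(n)
    return dist

def end_line_locations(fg, seed):
    """Compartments whose graph distance to the seed is a local maximum."""
    dist = graph_distance(fg, seed)
    ends = []
    for v, d in dist.items():
        nbrs = [n for n in dist
                if n != v and max(abs(v[i] - n[i]) for i in range(3)) <= 1]
        if all(dist[n] <= d for n in nbrs):
            ends.append(v)
    return ends
```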

Thinning

One disadvantage of thinning is the possibility of inward erosion, which could potentially create holes in the object. To prevent these artifacts, an iterative approach is used.38

Using the 6-adjacent (N6) and 18-adjacent (N18) topologies defined above, boundary layers are assigned that are peeled by an iterative algorithm:

  • 1. Collect the set of N6 boundary compartments (compartments where at least one of the N6 neighbors is missing).

  • 2. Delete all simple compartments (according to the N26 topology) in this set, starting with those that have the lowest number of neighbors (i.e., intercompartmental pointers).

  • 3. Collect the set of N18 boundary compartments (compartments where at least one of the N18 neighbors is missing).

  • 4. Delete all simple compartments (according to the N26 topology) in this set, starting with those that have the lowest number of neighbors.

  • 5. Repeat steps 1 to 4 until no more simple compartments are found [Figs. 5h, 6h, 7h].

    A detailed description of such an algorithm can be found in He38 and Jonker.41

Graph validation and pruning

Imaging and segmentation can lead to artifacts such as loops and clusters in the thinned midline representation of the objects, which have to be removed by postprocessing. Loops are removed by selecting one end-line location and calculating the shortest path from this end-line location to all other end-line locations in the object. Compartments or intercompartmental pointers that are not used by any path within this validation step [Figs. 5i, 5j, 6i, 6j, 7i, 7j] are removed from the object.

Because the approximate end-line locations were local distance maxima from an inner seed point, swellings within the neuronal branches will result in short subbranches that we regard as artificial [Figs. 5k, 6k, 7k]. Therefore, these short branches are removed from the object within this pruning step if the distance from the corresponding end-line location to the first intersection compartment (Fig. 9) is shorter than 3 μm (derived by systematic testing). Nonartificial axonal subbranches below this length threshold will hence be pruned as well. However, for most scientific problems, this error is negligible.
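The pruning rule can be sketched on the graph representation as follows; for simplicity, this illustration (our own naming) counts branch length in compartments rather than micrometers:

```python
def prune_short_branches(adj, endings, intersections, max_len=3):
    """Remove end branches that reach an intersection within max_len
    compartments; a sketch of the 3-um pruning rule (length is counted
    here in compartments, not micrometers)."""
    removed = set()
    for e in endings:
        path, cur, prev = [e], e, None
        # Walk from the ending toward the nearest intersection.
        while cur not in intersections and len(path) <= max_len:
            nxt = [n for n in adj[cur] if n != prev and n not in removed]
            if len(nxt) != 1:
                break
            prev, cur = cur, nxt[0]
            path.append(cur)
        if cur in intersections and len(path) - 1 <= max_len:
            removed.update(path[:-1])   # keep the intersection itself
    return removed
```

Long arms survive because the walk gives up once the path exceeds the threshold without reaching an intersection.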

2.6.

Editing and Splicing

The image processing, so far, transformed the deconvolved raster image into a vector image representation of thinned foreground objects. Therefore, each object is represented as a 3D graph with end-line locations (endings). Furthermore, compartments that are connected to more than two neighbors are marked as intersections (Fig. 9). This set of endings, intersections, and normal compartments is converted to an ASCII (American Standard Code for Information Interchange) file that can be visualized in either AMIRA43 or NEUROLUCIDA. Up to this point, all processing steps are automated. Final editing and splicing of the reconstructions is done manually.

2.6.1.

Editing

Artifacts that are similar in shape or gray value to neuronal structures have to be erased. In most cases, these artifacts are astrocytes (star-shaped glial cells). This editing is done by superimposing the reconstruction on the maximum z projection of the original image stack. Within the projection image, neuronal structure can easily be distinguished from artifacts. Neuronal branches appear as elongated structures within the projection image, whereas falsely traced artifacts are either small spots (most intense parts of unstained neurons) or star-shaped if they are the remains of glial cells. These artifacts are then marked and erased in NEUROLUCIDA, which is used here as a graph editor. Axonal branches that were not well filled and were therefore traced as fragmented structures can be restored by splicing their closest ending points to form a single continuous axonal arbor.

Within this step, additional anatomical landmarks, such as section outlines, can be integrated. By superimposing the edited neuron reconstruction on a low-magnification image, any visible anatomical structure (e.g., pia outlines) can be manually added to the automatic reconstruction of the neuron.

2.6.2.

Splicing

The results of this semiautomated reconstruction pipeline are 3D tracings of neuronal branches from adjacent, 100-μm-thick sections. To obtain a complete reconstruction of a cell, the reconstructions from different sections have to be aligned and spliced (Fig. 10). Fortunately, large radial blood vessels running perpendicular to the pial surface retain their positions within each tangentially cut brain slice, rendering them ideal as position reference points. Therefore, the blood vessel pattern is extracted during the automatic tracing procedure. Blood vessels are detected via a region-growing algorithm in a maximum intensity z projection of the original image stack, prior to the deconvolution. Voxels with values close to 0 (maximally transparent regions) are used as seed points for the region-growing algorithm, which seeks dark and almost circular shapes (shape number algorithm)36 and labels these structures as blood vessels. According to the vessel pattern, the reconstructions are aligned coarsely. More accurate alignment is then done by matching the upper branch endings from a lower slice with the lower branch endings from an upper slice. Once all branches are aligned with their counterparts from the adjacent section, the nearest ending points are spliced (Fig. 10). Starting at the slice containing the soma, progressively more sections are connected, resulting in a complete 3D cell reconstruction [Fig. 11a]. If no soma is present (e.g., the cortical part of the thalamocortical axonal arbor), the splicing starts at the deepest section (most distant from the pial surface) [Fig. 11b].
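After coarse alignment, the nearest-ending matching between two adjacent sections can be sketched as a greedy pairing in the x-y plane. The tolerance `max_gap` (in μm) is our own assumed parameter, not a value from the paper:

```python
import math

def splice_endings(upper_ends, lower_ends, max_gap=10.0):
    """Greedy nearest-ending matching between two adjacent sections.
    Endings are (x, y, z) tuples; matching uses only x-y positions
    because the sections are cut tangentially."""
    pairs, used = [], set()
    for u in upper_ends:
        best, best_d = None, max_gap
        for i, l in enumerate(lower_ends):
            if i in used:
                continue
            d = math.dist(u[:2], l[:2])
            if d < best_d:
                best, best_d = i, d
        if best is not None:
            used.add(best)                 # each lower ending used once
            pairs.append((u, lower_ends[best]))
    return pairs
```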

Fig. 10

Splicing of adjacent brain slices. (a) Three-dimensional visualization of automatically reconstructed axonal branches located in two adjacent brain slices. A coarse alignment is made using the blood vessel pattern (circular profiles). Detailed alignment is made by connecting the branch endings. (b) and (c) Enlargements of sample branches from the two sections in (a). The fitting branch endings are spliced and ultimately result in a fully connected neuronal arbor. (High-resolution image available online only: 10.1117/1.2815693.10.)


Fig. 11

Semiautomatic reconstruction of a pyramidal neuron in layers 2 and 3 and a thalamocortical axonal arbor. (a) Final version of an automatically reconstructed pyramidal neuron (in layers 2 and 3) in the rat barrel cortex. Twelve tangentially cut brain sections were scanned. After automatic image restoration and tracing, the user manually postprocessed the reconstruction by erasing artifacts and splicing axonal fragments within a section. After that, the 12 slices were aligned and the branch endings were spliced, resulting in the 3D graph representation of the neuron. (b) Final version of a cortical axon from a thalamocortical neuron. Twenty-eight tangentially cut brain sections were scanned and then automatically traced. Axonal branches were manually edited, aligned, and spliced to result in a fully connected 3D axonal arbor. The axon was reconstructed within 5 days, comprising 22 h of human interaction. (High-resolution image available online only: 10.1117/1.2815693.11.)


An international patent application is pending.44

2.7.

Fractal Box Counting for Quantification of Relative Shape Deviation

To compare the shape of automatically reconstructed branches with manual Camera Lucida–based reconstructions, a relative measure was used.45 The aim was to obtain the percentage of relative deviation in shape at various resolutions. The shape comparison was applied to 2D x-y projections of the traced branches. This reduction to projections is necessary because the embedding medium (Mowiol) may expand or contract over time depending on the ambient room temperature. After several hours at room temperature, the tissue tends to shrink in the z direction (along the optical axis). After cooling the slide, this shrinkage is reversed (Table 1). Thus, a comparison of the 3D morphology could be confounded by shrinkage and expansion of the tissue.

First, a bounding box is put around the x-y projection of an individual branch [Fig. 12a]. Second, this box is divided into eight subboxes, and the number of boxes that are intersected by the branch is counted [Fig. 12b]. In subsequent steps, each box is again divided into eight subboxes45 until a box size between 200 and 300 nm is reached. This is done for both the manually and automatically traced branches.
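The box counting can be sketched for the 2D x-y projection as follows. Since the comparison operates on projections, this illustration halves the box edge of a square bounding box at each level as the 2D analogue of the recursive subdivision described above; the function name and point-sample input are our own assumptions:

```python
def box_counts(points, levels):
    """Fractal box counting on a 2D x-y projection (sketch).
    points: (x, y) samples of the projected branch; each level halves
    the box edge of the square bounding box."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    x0, y0 = min(xs), min(ys)
    w = max(max(xs) - x0, max(ys) - y0) or 1.0   # bounding box edge
    counts = []
    for lvl in range(1, levels + 1):
        n = 2 ** lvl                              # boxes per axis
        cells = {(min(int((x - x0) / w * n), n - 1),
                  min(int((y - y0) / w * n), n - 1)) for x, y in points}
        counts.append(len(cells))                 # intersected boxes
    return counts
```

Running this on both the manual and the automatic tracing of the same branch gives the per-level counts whose difference is reported below as a percentage.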

Fig. 12

Fractal box counting method 1. (a) A bounding box around an automatically reconstructed axonal branch. The same box was put around the manual branch. (b) Each box is subsequently divided into eight subboxes, and the number of structure-intersected boxes is counted. In the illustrated case, six of eight boxes are intersected by the axon. (High-resolution image available online only: 10.1117/1.2815693.12.)


If two reconstructions matched perfectly in shape, the number of boxes intersected by the branch would be the same within each dividing step. If one branch differs in shape from its counterpart, the number of intersected boxes will differ from a certain box size onward. We checked the difference in the number of intersected boxes at box sizes (resolutions) of 5 μm, 1 μm, 500 nm, and 300 nm (Fig. 13), the latter approaching the physical resolution limit of the imaging system (Rayleigh criterion).25 The number of boxes intersected by the automatically reconstructed branch at the highest resolution (300 nm) is set to be 100% of the branch. Differences in the number of intersected boxes are given as percentages with respect to this value. This is illustrated in Fig. 14, where the differences in structure-intersected boxes are shown for an increasingly cropped branch [Figs. 14(1a) to 14(1d)].

Fig. 13

Results from the box counting method. The overlay of an automatically reconstructed branch (blue) and its manual counterpart (red) is analyzed at resolutions of 5 μm, 1 μm, 500 nm, and 300 nm. Identical boxes are assigned a green color; the more closely the two branches match, the more green boxes are observed. At low resolution, the branches overlap almost entirely, whereas at high resolutions, the number of additional boxes within the automatic version increases. This illustrates the fractal box counting method. (High-resolution image available online only: 10.1117/1.2815693.13.)


Fig. 14

Fractal box counting method 2. (a) An axonal branch reconstructed by the semiautomated system is shown in (1). Parts of the branch were progressively erased, indicated in red [(1a) to (1d)]. The manual version of the branch is shown in (2). (b) The decreasing number of boxes becomes significant at a box size of approximately 10 μm. (c) Differences between the cropped branches and the original one (a1) in log-log plots. The difference in the number of boxes increases progressively. The number of boxes at the highest resolution (smallest box size) from the original branch is taken to be 100% of the structure. Percentages are plotted as horizontal lines. This gives an estimated measure for the relative deviation in shape between the original and the cropped branches at various resolutions. (High-resolution image available online only: 10.1117/1.2815693.14.) Color Plate 2


3.

Results

3.1.

Image Processing

An important constraint for the image processing is the exposure-time setting of the CCD camera before the image acquisition process. The mean gray value in a typical field of view within the scan area is calculated and set to be 190. Thereby, the dynamic gray value range for neuronal structures is maximized and comparable within image stacks acquired from different brain slices. Furthermore, we avoid clipping of gray values,34 which prevents artifacts from being introduced by deconvolution.

After acquisition of the mosaic image, the gray values are inverted and the large 3D mosaic image is subdivided into 3D bricks that can be processed by the HUYGENS software in combination with a linear Tikhonov-Miller filter. Because the deconvolution is performed on individual bricks, the gray value ranges for neuronal structure can be slightly shifted between the bricks. This is because HUYGENS estimates the background for each individual brick. The brickwise local threshold filtering compensates for this effect. Hence, the regrouped segmented mosaic image shows a uniform (not brickwise) distribution of foreground objects.

Connected subregions of foreground voxels within the segmented raster image are converted to lists of compartments with explicit intercompartmental neighborhood connections (vector image). The background is usually 10³ to 10⁴ times larger than the sum of the foreground voxels. Therefore, the storage size and the processing time decrease significantly after the data conversion from raster to vector image. Instead of processing gigabytes of data, the data set is reduced to a few hundred megabytes.

Template-based thinning reduces each object to its skeleton (approximate midline) [Figs. 5h, 6h, 7h]. The resulting graph is further validated and corrected until all thinning artifacts are erased. This correction usually creates smooth midline representations [Figs. 5l, 6l, 7l], neglecting morphological swellings such as dendritic spines or axonal boutons.

The tracing of poorly stained axons can result in a fragmented reconstruction of these structures (Figs. 5, 6, 7). Furthermore, small artifacts from unstained tissue, mainly astrocytes, are traced as well. Splicing of fragmented structures and erasing of artifacts during the manual postprocessing result in a 3D reconstruction of all neuronal branches within the brain section (Fig. 1).

Fig. 5

Image processing. (a) Maximum intensity z projection of an inverted image stack cropped from the mosaic image shown in Fig. 2b (box). Clearly visible is a fragmented axon with intensely filled boutons (swellings with likely synaptic contact sites) and weak stain between them. (b) Maximum intensity z projection of the deconvolved, inverted image stack. The image quality is improved in terms of signal-to-noise ratio and resolution, especially along the z direction. (c) Maximum z projection after the intermediate voxels (gray) have been tested with the first part of the local threshold property function. (d) Maximum z projection after the intermediate voxels (gray) have been tested with the second part of the local threshold property function. (e) Maximum z projection after the hit or miss transformation. Rectangular frame masks of increasing size (1-to-3-voxel radius) delete small and isolated artifacts. (f) Maximum z projection after dilation and closing. A spherical structuring element tends to smooth the foreground objects and fills small gaps in the contour. (g) Maximum z projection illustrating the end-line locations (white) that are local distance maxima from an inner seed compartment. These end-line locations will not be erased and, therefore, guarantee connectivity of objects during thinning. (h) Maximum z projection illustrating the result of iterative layer-by-layer thinning. The end-line locations (white) remain unchanged. (i) Maximum z projection illustrating the validation of the thinned graph. The shortest distance from one end-line location to all others is calculated (gray value coded). (j) Maximum z projection illustrating the validation of the thinned graph. Compartments and intercompartmental connections that are not part of a shortest connection (gray) are erased. (k) Maximum z projection illustrating the pruning of the thinned and validated graph. Because end-line locations were local distance maxima, morphological swellings (boutons, spines) result in short branches that are pruned if shorter than 3 μm. (l) Maximum z projection after midline extraction. The previously fragmented axon is transformed to a thin, less fragmented line representation. (m) Maximum z projection of the final edited and spliced graph. The remaining background objects were manually erased and the axonal fragments were spliced. The scale bar in the lower left corner applies to all panels. (High-resolution image available online only: 10.1117/1.2815693.5.)


3.2.

Comparison to State-of-the-Art Technique

The state-of-the-art technique for the 3D reconstruction of neuron morphology, especially of widely extending axonal arbors, is the computer-aided interactive Camera Lucida technique (e.g., Neurolucida).13 Therefore, we compared the performance of the semiautomated reconstruction method to the manual reconstruction approach, as implemented in the Neurolucida system.

First, we checked whether manually traced branches were detected by the automated system and vice versa, and quantified how many branches were missed by either reconstruction approach. In a second comparison step, the relative deviation in shape between manually and automatically traced neuronal structures was determined. Finally, the time required for reconstructing complete cells with their entire axonal arbor was compared between the two systems. The comparison was done for 54 slices containing 4 different filled cells (from 4 different rats). They were randomly chosen from a pool of already manually reconstructed cells. Each manual reconstruction was done by a different individual. Users manually traced a pyramidal neuron in layers 2 and 3, two star pyramids from layer 4, and one ventroposteromedial (VPM) thalamocortical axon.

3.2.1.

Number of identical branches

Figure 15 shows a comparison between an automated reconstruction of a section containing parts of a thalamocortical axon and its manually reconstructed counterpart. Example slices for the 4 neurons can be found in Fig. 16.

Fig. 15

Comparison of automated and manual tracing of axonal arbors. (a) Three-dimensional visualization of a tangentially cut brain slice containing a filled thalamocortical cell. The reconstruction is done by the semiautomated system. All branches are part of the axonal arbor. (b) Three-dimensional visualization of the manual reconstruction of the thalamocortical cell. The twofold enlargement shows that the two reconstructions are almost identical. Statistics from branch counting of 22 brain sections from this cell showed that the automated system traced 95% of the manually reconstructed branches. The manual tracer found 96% of its counterparts obtained by automated tracing. (High-resolution image available online only: 10.1117/1.2815693.15.)


Fig. 16

Comparison of brain slices from four different rats. The automatically traced brain slices are shown in the left column, the manual ones in the right column. All scale bars are 100 μm wide and apply to the left and right panels. (a) and (b) The x-y projections of a slice from a thalamocortical cell. The comparison yielded that the manual and the automatic tracing were 96% identical. (c) and (d) The x-y projections of a slice from a layer 4 cell. The comparison yielded that the performance of the manual system was worse than that of the automatic one. (e) and (f) The x-y projections of a slice from a layer 4 cell. The comparison yielded that the performance of the manual system was worse than that of the automatic one. (g) and (h) The x-y projections of a slice from a pyramidal cell in layers 2 and 3. The comparison yielded that the performance of the manual system was much worse than that of the automatic one, due to the inexperience of the human tracer and some artifacts from unintentionally stained cells. (High-resolution image available online only: 10.1117/1.2815693.16.)


Each automatically reconstructed cell was inspected for missing branches that had been traced by the manual system and vice versa. The fraction of manually reconstructed branches that were missed by the automatic system was similar for all four traced neurons. For the thalamocortical VPM cell, 5.4% were missing (Fig. 17), 3.5% and 3.4% for the layer 4 cells, and no manually reconstructed branch was missing for the cell in layers 2 and 3. In the case of branches that were missing in the manual tracing but found by the automated system, the result was less homogeneous. Approximately 4% of automatically reconstructed branches were missing in the manual version of the thalamocortical VPM cell, 29% and 47% were missed for the layer 4 cells, and almost 70% of the automatically traced branches were not found manually for the neuron in layers 2 and 3.

Fig. 17

Comparison of automated and manually traced neuron reconstructions. (a) and (b) Tangential view of 22 brain slices from a thalamocortical axonal arbor reconstructed automatically (a) and manually (b). (c) and (d) Coronal view of the 22 brain slices from the thalamocortical axon tree. Clearly visible are the cutting planes between the individual brain sections that need to be spliced. The manual tracing was done by an experienced person (T. K.). Comparison shows that the automatic system (c) performs as well as a well-trained human using a Camera Lucida system (d). (High-resolution image available online only: 10.1117/1.2815693.17.)


For the 4 cells, the automated method missed 29 of 682 manually reconstructed branches in 54 sections from 4 different rats. In other words, the new reconstruction technique found 96% of the manually reconstructed branches. The manual tracings were missing 268 of 923 automatically reconstructed branches from the same 54 slices. This means only 71% of the neuronal structures traced by the automated method could be found in the manual reconstructions. All of the missing branches, independent of whether they were missed by the manual or automatic system, were part of the axonal arbor. The dendritic branches were traced equally reliably by the two methods.

3.2.2.

Fractal box counting

The fractal box counting method was applied to 167 randomly picked branches from the 54 slices that were reconstructed by both methods. The frequency with which a given percentage difference in the number of intersected boxes occurred between an automatically traced branch and its manual counterpart is shown as a histogram (Fig. 18, see Color Plate 1). For each of the four box sizes, a histogram of intersected boxes was made and fitted with a Gaussian curve. The mean values of these Gaussian fits yielded the average deviation in shape at various resolutions. The relative deviation in shape is almost 0% at the lowest resolution, i.e., a box size of 5 μm. With increasing resolution (decreasing box size), the deviation increases to a value of around 2% at a box size of 1 μm and further to more than 3% at a resolution of 500 nm. At the highest measured resolution of 300 nm, the relative shape difference was around 5%. However, some branches deviate considerably more in shape than the mean values suggest. These large deviations were mainly observed for branches from dendritic trees.

Fig. 18

Relative shape deviation. A bounding box is put around the x-y projection of an individual branch. This box is iteratively divided into eight subboxes, and the numbers of boxes intersected by the branch are counted (Fig. 12). This is repeated until a box size between 200 and 300 nm is reached. The relative difference in the number of boxes intersected by automatically and manually reconstructed branches is shown for various box sizes (resolutions) (Fig. 13). The y axis gives the frequency with which a given percentage difference in box counts occurred. For a box size of 1 μm, most branches differ by approximately 2%. At the highest measured resolution of 300 nm, the relative deviation is approximately 5%. (High-resolution image available online only: https://doi.org/10.1117/1.2815693.)

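The box counting comparison can be sketched in a few lines of Python. This is an illustrative reimplementation, not the paper's code: the function names are assumptions, and branch midlines are taken as densely sampled x-y point lists rather than subdivided bounding boxes (the grid variant counts the same boxes).

```python
import numpy as np

def box_count(points, box_size):
    """Number of boxes of edge length box_size (same units as the
    coordinates, e.g. micrometers) intersected by a sampled branch."""
    # Assign every sample point to a grid cell and count distinct cells.
    cells = np.floor(np.asarray(points, dtype=float) / box_size).astype(int)
    return len({tuple(c) for c in cells})

def relative_shape_deviation(auto_points, manual_points, box_size):
    """Percentage difference in intersected-box counts between an
    automatically and a manually traced branch."""
    n_auto = box_count(auto_points, box_size)
    n_manual = box_count(manual_points, box_size)
    return 100.0 * abs(n_auto - n_manual) / n_manual
```

Evaluating this for each branch pair at box sizes of 5, 1, 0.5, and 0.3 μm and histogramming the results corresponds to the analysis behind Fig. 18.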

3.2.3.

Reconstruction time

The average amount of time for a manual tracing of 1 cell (10 to 30 sections), including its axonal arbor, is approximately 60 to 90 working hours. This means the reconstruction of a single brain slice takes between 4 and 6 h.

Using the automatic system, an image size of 1 mm × 1 mm × 100 μm proved to be sufficient for most slices. The computing time for an image stack of this size varies from 3 to 6 h, depending on the amount of structure within the volume. The human interaction time with the system is approximately 1 h per slice. This comprises the setting up of the scanning pattern (5 min), as well as the manual editing and splicing of the automatically reconstructed slices (5 to 45 min). Therefore, the average amount of time for reconstructing 1 cell with the semiautomatic system is between 30 and 90 h of computing and 10 and 20 h of human interaction. Given a working day of 8 h, the reconstruction using the manual system takes approximately 2 weeks, whereas the automatic system yields a comparable reconstruction within 2 to 4 days (Fig. 17).
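The wall-clock comparison can be made explicit with a back-of-the-envelope calculation. The per-slice averages below are hypothetical values picked from the ranges quoted above, not measured figures:

```python
def reconstruction_days(n_slices, manual_h_per_slice=5.0,
                        compute_h_per_slice=4.5, interact_h_per_slice=1.0):
    """Rough wall-clock times in days for manual vs. semiautomated
    reconstruction. The automated pipeline computes 24 h a day, so its
    bottleneck is the larger of the unattended computing time and the
    tracer's 8-h working days of editing and splicing."""
    manual_days = n_slices * manual_h_per_slice / 8.0
    compute_days = n_slices * compute_h_per_slice / 24.0
    interact_days = n_slices * interact_h_per_slice / 8.0
    return manual_days, max(compute_days, interact_days)

# A 20-section cell: ~12.5 working days manually vs. ~3.75 days automated.
manual, automated = reconstruction_days(20)
```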

4.

Discussion

The method of semiautomated reconstruction is based on the acquisition of stacks of rectangular mosaic images. One of the key steps is the high-resolution deconvolution, which significantly improves the image quality of stained neurons. This is sufficient to extract neuronal structures from the background by applying a local threshold filter that seeks connectivity. The segmented raster image, consisting of disjoint foreground subregions, is then converted to a vector image of objects. These are reduced to thinned midlines, yielding a graph representation of the neuronal structure candidates. These graphs are then manually edited and spliced, resulting in a complete 3D reconstruction of the neuron.
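As an illustration of the local-threshold idea, the sketch below segments structures that fall below their local mean intensity (in transmitted light brightfield, stained neurites are dark against a bright background). The window size and offset are arbitrary placeholder values, not the parameters used in the actual pipeline, and the brute-force local mean stands in for whatever filter the implementation uses:

```python
import numpy as np

def box_mean(image, window):
    """Local mean over a square window (brute force, for illustration)."""
    pad = window // 2
    padded = np.pad(image.astype(float), pad, mode='edge')
    out = np.empty(image.shape, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = padded[i:i + window, j:j + window].mean()
    return out

def local_threshold(image, window=15, offset=0.05):
    """Foreground mask: pixels darker than their local mean by `offset`.
    A local rather than global threshold tolerates uneven background."""
    return image < box_mean(image, window) - offset
```

On real data this step follows deconvolution, and the resulting mask is further filtered for connectivity before thinning to midlines.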

4.1.

Recovery

The results showed that approximately 96% of the manually reconstructed axon branches were traced by the automated system. For each of the 4 cells, the new reconstruction technique performed similarly well at recovering the branches present in the manual reconstructions. The 4% of structural details that were not recovered fall into three groups.

The first group (<1% in the sample of four cells) consists of faint axonal branches. An indicator of faintness is that these branches are not local intensity maxima. Therefore, the deconvolution algorithm tends to weaken these branches instead of amplifying their intensity.

The second group (less than 1%) comprises small axonal endings that are not distinguishable from reconstructed artifacts and, therefore, are erased by the user during manual editing and splicing. To minimize the loss of small axonal endings, we superimpose the maximum projection of the original image stack with the 3D reconstruction.

The third group comprises branches that are not located within the scanning area. By default, an area of 1 × 1 mm² is scanned. For some sections, this area is not sufficient because the axons spread over a larger area. For these particular brain sections, the missing parts of the slice can be scanned, and the branches missed initially are simply added to the rest of the reconstructed branches, using NEUROLUCIDA for editing.

Summarizing these three groups, one can state that the 0 to 5% of branches missing from the automatic reconstructions consist either of small endings, faint branches, or branches outside the scanned area. By additionally scanning adjacent areas and inspecting the projection image, the number of missed branches can be minimized. Therefore, the performance of the semiautomatic reconstruction pipeline is limited by the first group, which implies the prerequisite that neuronal structures be local intensity maxima.

4.2.

Comparison with Manual Tracing

Only 71% of the automatically traced branches were found by the manual tracers. The number of missing structures varied widely between 4% and almost 70%. The missed branches can be grouped into two cases.

The first group consists of branches that were filled and stained but do not belong to the cell of interest. In some cases, surrounding neurons take up the tracer molecule, and some of their dendrites or their main axon will be traced.

The second group comprises neuronal structures that were missed by the human tracer. The user usually starts tracing at the soma. If an axonal branch ending is missed in one section, the human tracer tends to ignore these areas in the subsequent sections and, therefore, sometimes misses large parts of the axonal branching pattern. In some cases, this error becomes obvious during splicing, and the human tracer has to go back to the tissue to identify the connections missed previously. The size of this group of missing branches correlates closely with the experience of the user. Only training, double checking, and anatomical knowledge will minimize the loss of branches and improve the tracing quality.

Comparing the two groups, an explanation for the large variation from 4 to 70% in missed branches can be given. The cortical part of the axonal arbor of the thalamocortical cell is far away from the biocytin injection site (soma) in the thalamus.13 Therefore, no artificial branches from other cells were found in cortex. Furthermore, this cell was traced manually by an experienced person in the laboratory (T. K.). Both facts result in the small fraction (4%) of missed branches.

In contrast, the manual reconstruction of one cell in layers 2 and 3 was missing almost 70% of the automatically reconstructed axonal branches. This probably has two reasons. First, layers 2 and 3 are closer to the site of cell filling. Therefore, some artificial branches from unintentionally filled cells were traced automatically and left out intentionally by the manual tracer. The second reason is that this cell was the first reconstruction of axonal morphology this individual ever attempted.

In conclusion, the manual reconstruction of neurons and their axonal arbors using a computer-aided Camera Lucida system (e.g., Neurolucida) depends strongly on the abilities of the individual tracer, and reconstructions will be somewhat subjective.46 In contrast, the semiautomated tracing system delivers reliable and objective reconstructions even in the hands of new users (S. E.). S. E. was able to reconstruct the axonal arbor presented in Fig. 11b with no significant differences compared to a reconstruction of this neuron done by an experienced tracer (T. K.) (data not presented).

The automated system fails if the staining of the axon is insufficient, if the background staining is too strong, or if there is a strong gradient in thickness or contrast within the tissue. The latter results in different optimal camera exposure times depending on the location of the field of view within the scanning pattern. Because the exposure is set to one value for the entire pattern, this may cause the subsequent deconvolution and neuron tracing to fail. However, we are currently trying to address this problem by applying an exposure mask that compensates for inhomogeneous tissue.

Another limitation is that, because of the small diameters and tortuous paths of axons combined with the physical resolution limits of the system, two closely spaced axons may occasionally be mistakenly connected by the automatic tracing algorithm. Because of the stereotyped nature of these errors, we may be able in the future to develop an algorithm to correct them. Nevertheless, because action potentials are propagated by saltatory conduction and the axon is surprisingly electrotonically compact,47 modelers and experimentalists are often unconcerned with the precise locations of all branch points. Rather, the focus is usually on the length and spatial organization of the axonal arbor. Thus, despite this limitation, our current system already measures the two main parameters of interest with high reliability and accuracy.

Indeed, the box counting method yielded small deviations in shape between manually and automatically reconstructed branches. This difference arises mainly from the larger number of points used by the automatic system to represent the graph. At low resolution, this difference is almost 0%. At high resolutions (300 nm), close to the physical resolution limit, the relative deviation was approximately 5%. Depending on the scientific aim, these slight deviations may be irrelevant.

Nevertheless, Fig. 18 shows clearly that some branches, in particular at the highest resolution, deviate substantially from the 5% mean value given by the Gaussian fit. This group consists of dendritic branches. Because the dendrites are not as smooth as axonal branches, their automated reconstruction will also not be perfectly smooth. Reconstructed spine heads that were not removed during the graph validation and/or pruning result in quite large relative shape deviations (Fig. 19). However, because the spines are part of the dendrites, the relative difference in shape can usually be neglected.

Fig. 19

Relative shape deviation for dendrites. (a) Maximum intensity projection of the original image stack showing some dendritic branches. (b) Automatic tracing of dendritic branches. Spine fragments and thinning artifacts yield a rugged image. (c) Manual tracing of the same branches shown in (b). (See Color Plate 1.) (High-resolution image available online only: https://doi.org/10.1117/1.2815693.)


Table 1

Mowiol shrinkage. Qualitative measurement of the shrinkage of tissue embedded in Mowiol. After 3 h at room temperature, the specimens showed shrinkage of 2 to 4 μm along the optical axis. After a few hours at 4°C this process is reversed. Keeping the room air-conditioned and using the narrow illumination bandpass filter, which protects the specimen from infrared light, slows the shrinkage. Once the specimen is in thermal equilibrium with its surroundings, the shrinkage is so slow that it can be regarded as static over the scanning period.

Brain slice thickness:

Cell   Directly from refrigerator   3 h at room temperature   After 3.5 h recooling (4°C)
1      55 ± 1 μm                    51 ± 1 μm                 54 ± 1 μm
2      76 ± 1 μm                    74 ± 1 μm                 74 ± 1 μm
3      78 ± 1 μm                    72 ± 1 μm                 75 ± 1 μm
4      47 ± 1 μm                    45 ± 1 μm                 48 ± 1 μm
5      32 ± 1 μm                    29 ± 1 μm                 33 ± 1 μm

4.3.

Reconstruction Time

An important issue is the reconstruction time needed to trace a neuron (e.g., the pyramidal neuron in layers 2 and 3 spans 10 to 30 brain slices). Because the automatic tracing runs 24 h a day and the human interaction time decreases from approximately 60 to 90 h to 10 to 20 h, the reconstruction time decreases from weeks to days.

4.4.

Comparison to Other Methods

Reconstructing the branching pattern of neurons is a challenging task. In the literature, several methods for different imaging and processing approaches have been described. 1, 2, 3, 4, 5, 6, 7, 8, 9, 11 Most of the methods available today are based on confocal or two-photon imaging, owing to the good signal-to-noise ratio and the high resolution along the optical axis of both techniques. Rodriguez et al.9 showed that laser scanning microscopy in combination with sophisticated software algorithms is even capable of resolving the neuronal morphology of dendrites at the spine level. Their method reconstructs the dendritic arbor and visualizes the surface of the spines. By scanning several adjacent image stacks, the field of view is increased, but their method still lacks the ability to reconstruct the axonal arbor. This is due to the limited field of view of laser scanning microscopes, the bleaching of the tissue,48 and the lack of suitable mosaic systems for this imaging technique.

Another method developed to analyze axonal arbors is the one described by Broser et al.15 Here, lentiviral gene transfer is used to transfect a population of pyramidal neurons, whose axonal arbors are subsequently stained. Two-dimensional mosaic microscopy in combination with extended focus imaging (EFI) is used to resolve the axons. Automatic 2D tracing then quantifies the axonal length in different areas of the tissue.

The appropriate reconstruction method strongly depends on the scientific task. To analyze spine shapes and distributions along a dendrite, the electron or confocal microscopy methods described by Denk and Horstmann49 and Rodriguez et al.9 might be the most suitable. For the determination of statistical connectivity patterns, the 2D EFI method proposed by Broser et al.15 could be used. However, if the precise 3D extension of the axonal arbor is important, one can either use the time-consuming Camera Lucida technique or the alternative method presented here.

4.5.

Future Improvements

We found that the human influence on the reconstruction during the splicing of neuronal branches from individual sections may result in ambiguous tracings. To further reduce this subjective element of pattern recognition, we are currently developing a semiautomated interactive 3D editing and splicing tool (AMIRA extension: SPATIAL GRAPH)43 in cooperation with the Konrad-Zuse Institute, which will replace the 2D editing environment of NEUROLUCIDA used so far. The interface file format is the .hoc format (NEURON).50, 51 After editing and splicing in AMIRA, such a .hoc file can then be integrated into morphological databases or used for simulations in NEURON or NEURODUNE.52 Furthermore, an automated extraction of approximate dendritic radii will be added, because this is essential information for the simulation of neuron properties.
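As a minimal illustration of the .hoc interface format, the sketch below writes a single unbranched neurite using NEURON's standard `create`, `pt3dclear`, and `pt3dadd` statements. The section name and the flat point list are placeholders; a real morphology export would emit one section per branch plus `connect` statements between them:

```python
def write_hoc(path, points):
    """Write one unbranched neurite as a minimal NEURON .hoc morphology.
    `points` is a sequence of (x, y, z, diameter) tuples in micrometers."""
    lines = ["create neurite", "neurite {", "  pt3dclear()"]
    for x, y, z, d in points:
        # Each pt3dadd appends one 3D sample point to the section.
        lines.append(f"  pt3dadd({x}, {y}, {z}, {d})")
    lines.append("}")
    with open(path, "w") as f:
        f.write("\n".join(lines) + "\n")
```

A file written this way can be loaded into NEURON with `load_file`, after which the geometry is available for simulation.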

4.6.

Conclusions

We conclude that the semiautomated method of 3D tracing of neuronal morphology is a reliable, objective, and less time-consuming alternative to the current manual systems. Furthermore, the quality of reconstruction depends less on the abilities of the human tracer. Our method should facilitate the creation of large databases of neuron morphologies, opening interesting possibilities for the simulation of information processing in small neuronal circuits. Furthermore, axonal outgrowth under various conditions of development and experience can now be more easily quantified in three dimensions and statistically analyzed.

Acknowledgments

Thanks to Marlies Kaiser for histology; Merle Kurz, Thorben Kurz (T.K.), and Maren Schmitt for manual reconstructions; Seszgin Erdogan (S.E.) for semiautomatic reconstructions; Enrico Reimer for large support in hardware and software optimization and automation of the pipeline; and Guenther Giese for his support in basic microscopy techniques.

References

1. 

P. J. Broser, R. Schulte, S. Lang, A. Roth, F. Helmchen, J. Waters, B. Sakmann, and G. Wittum, “Nonlinear anisotropic diffusion filtering of three-dimensional image data from two photon microscopy,” J. Biomed. Opt., 9 1253 –1264 (2004). https://doi.org/10.1117/1.1806832 1083-3668 Google Scholar

2. 

X. J. Sun, L. P. Tolbert, and J. G. Hildebrand, “Using laser scanning confocal microscopy as a guide for electron microscopic study: A simple method for correlation of light and electron microscopy,” J. Histochem. Cytochem., 43 329 –335 (1995). 0022-1554 Google Scholar

3. 

L. P. Tolbert, X. J. Sun, and J. G. Hildebrand, “Combining laser scanning confocal microscopy and electron microscopy in studies of the insect nervous system,” J. Neurosci. Methods, 69 25 –32 (1996). https://doi.org/10.1016/S0165-0270(96)00017-9 0165-0270 Google Scholar

4. 

K. A. Al-Kofahi, S. Lasek, D. H. Szarowski, C. J. Pace, G. Nagy, J. N. Turner, and B. Roysam, “Rapid automated three-dimensional tracing of neurons from confocal image stacks,” IEEE Trans. Inf. Technol. Biomed., 6 171 –187 (2002). 1089-7771 Google Scholar

5. 

A. Rodriguez, D. Ehlenberger, K. Kelliher, M. Einstein, S. C. Henderson, J. H. Morrison, P. R. Hof, and S. L. Wearne, “Automated reconstruction of three-dimensional neuronal morphology from laser scanning microscopy images,” Methods, 30 94 –105 (2003). 1046-2023 Google Scholar

6. 

S. Schmitt, J. F. Evers, C. Duch, M. Scholz, and K. Obermayer, “New methods for the computer-assisted 3-D reconstruction of neurons from confocal image stacks,” Neuroimage, 23 1283 –1298 (2004). 1053-8119 Google Scholar

7. 

J. F. Evers, S. Schmitt, M. Sibila, and C. Duch, “Progress in functional neuroanatomy: Precise automatic geometric reconstruction of neuronal morphology from confocal image stacks,” J. Neurophysiol., 93 2331 –2342 (2005). https://doi.org/10.1152/jn.00761.2004 0022-3077 Google Scholar

8. 

S. L. Wearne, A. Rodriguez, D. B. Ehlenberger, A. B. Rocher, S. C. Henderson, and P. R. Hof, “New techniques for imaging, digitization and analysis of three-dimensional neural morphology on multiple scales,” Neuroscience, 136 661 –680 (2005). 0306-4522 Google Scholar

9. 

A. Rodriguez, D. B. Ehlenberger, P. R. Hof, and S. L. Wearne, “Rayburst sampling, an algorithm for automated three-dimensional shape analysis from laser scanning microscopy images,” Nat Protoc., 1 2152 –2161 (2006). Google Scholar

10. 

MicroBrightField, “AutoNeuron,” (2007) http://www.mbfbioscience.com/autoneuron/ Google Scholar

11. 

W. He, T. A. Hamilton, A. R. Cohen, T. J. Holmes, C. Pace, D. H. Szarowski, J. N. Turner, and B. Roysam, “Automated three-dimensional tracing of neurons in confocal and brightfield images,” Microsc. Microanal., 9 296 –310 (2003). https://doi.org/10.1017/S143192760303040X 1431-9276 Google Scholar

12. 

P. B. Arnold, C. X. Li, and R. S. Waters, “Thalamocortical arbors extend beyond single cortical barrels: An in vivo intracellular tracing study in rat,” Exp. Brain Res., 136 152 –168 (2001). https://doi.org/10.1007/s002210000570 0014-4819 Google Scholar

13. 

M. Parent and A. Parent, “Single-axon tracing and three-dimensional reconstruction of centre median-parafascicular thalamic neurons in primates,” J. Comp. Neurol., 481 127 –144 (2005). 0021-9967 Google Scholar

14. 

F. Gheorghita, R. Kraftsik, R. Dubois, and E. Welker, “Structural basis for map formation in the thalamocortical pathway of the barrelless mouse,” J. Neurosci., 26 10057 –10067 (2006). https://doi.org/10.1523/JNEUROSCI.1263-06.2006 0270-6474 Google Scholar

15. 

P. J. Broser, V. Grinjevic, P. Osten, B. Sakmann, and D. Wallace, “Axonal arbor plasticity of L2/3 pyramids is differentially controlled in supra- and infragranular layers in rat somatosensory cortex,” Google Scholar

16. 

K. Horikawa and W. E. Armstrong, “A versatile means of intracellular labeling: injection of biocytin and its detection with avidin conjugates,” J. Neurosci. Methods, 25 1 –11 (1988). https://doi.org/10.1016/0165-0270(88)90114-8 0165-0270 Google Scholar

17. 

MicroBrightField, “Neurolucida,” (2007) http://www.mbfbioscience.com/neurolucida/ Google Scholar

19. 

S. K. Chow, H. Hakozaki, D. L. Price, N. A. MacLean, T. J. Deerinck, J. C. Bouwer, M. E. Martone, S. T. Peltier, and M. H. Ellisman, “Automated microscopy system for mosaic acquisition and processing,” J. Microsc., 222 76 –84 (2006). https://doi.org/10.1111/j.1365-2818.2006.01577.x 0022-2720 Google Scholar

20. 

D. A. Agard, “Optical sectioning microscopy: Cellular architecture in three dimensions,” Annu. Rev. Biophys. Bioeng., 13 191 –219 (1984). https://doi.org/10.1146/annurev.bb.13.060184.001203 0084-6589 Google Scholar

21. 

M. Oberländer, “Shack Hartmann wavefront measurements on biological tissue for deconvolution of large 3D mosaic transmitted light brightfield micrographs,” Google Scholar

22. 

D. Pinault, “A novel single-cell staining procedure performed in vivo under electrophysiological control: Morpho-functional features of juxtacellularly labeled thalamic cells and other central neurons with biocytin or Neurobiotin,” J. Neurosci. Methods, 65 113 –136 (1996). https://doi.org/10.1016/0165-0270(95)00144-1 0165-0270 Google Scholar

23. 

T. W. Margrie, M. Brecht, and B. Sakmann, “In vivo, low-resistance, whole-cell recordings from neurons in the anaesthetized and awake mammalian brain,” Pfluegers Arch. Eur. J. Physiol., 444 491 –498 (2002). Google Scholar

24. 

M. Wong-Riley, “Changes in the visual system of monocularly sutured or enucleated cats demonstrable with cytochrome oxidase histochemistry,” Brain Res., 171 11 –28 (1979). 0006-8993 Google Scholar

25. 

M. Born and E. Wolf, Principles of Optics, 7th ed.Cambridge University Press, Cambridge (2003). Google Scholar

26. 

B. Stroustrup, The C++ Programming Language, 3rd ed.Addison-Wesley, Boston (2004). Google Scholar

27. 

L. Ibanez, W. Schroeder, L. Ng, and J. Cates, “The ITK Software Guide,” (2005) http://www.itk.org Google Scholar

28. 

R. Chandra, L. Dagum, D. Kohr, D. Maydan, J. McDonald, and R. Menon, Parallel Programming in OpenMP, Academic Press, San Diego (2001). Google Scholar

29. 

T. J. Holmes and N. J. O’Connor, “Blind deconvolution of 3D transmitted light brightfield micrographs,” J. Microsc., 200 114 –127 (2000). https://doi.org/10.1046/j.1365-2818.2000.00751.x 0022-2720 Google Scholar

30. 

P. J. Shaw, Handbook of Biological Confocal Microscopy, Plenum Press, New York (1995). Google Scholar

31. 

J. Philip, “Optical transfer function in three dimensions for a large numerical aperture,” J. Mod. Opt., 46 1031 –1042 (1999). 0950-0340 Google Scholar

32. 

N. Streibl, “3-dimensional imaging by a microscope,” J. Opt. Soc. Am. A, 2 121 –127 (1985). 0740-3232 Google Scholar

33. 

J. L. Beverage, R. V. Shack, and M. R. Descour, “Measurement of the three-dimensional microscope point spread function using a Shack-Hartmann wavefront sensor,” J. Microsc., 205 61 –75 (2002). https://doi.org/10.1046/j.0022-2720.2001.00973.x 0022-2720 Google Scholar

34. 

Scientific-Volume-Imaging “Huygens professional,” (1995–2007) http://www.svi.nl Google Scholar

35. 

G. M. Van Kempen and L. J. Van Vliet, “The influence of the regularization parameter and the first estimate on the performance of Tikhonov regularized non-linear image restoration algorithms,” J. Microsc., 198 63 –75 (2000). https://doi.org/10.1046/j.1365-2818.2000.00671.x 0022-2720 Google Scholar

36. 

R. Gonzalez and R. Woods, Digital Image Processing, 2nd ed.Prentice-Hall, Upper Saddle River, N.J. (2002). Google Scholar

37. 

C. M. Ma, “On topology preservation in 3D Thinning,” CVGIP: Image Understand., 59 328 –339 (1994). https://doi.org/10.1006/ciun.1994.1023 1049-9660 Google Scholar

38. 

X. He, E. Kischell, M. Rioult, and T. J. Holmes, “Three-dimensional thinning algorithm that peels the outmost layer with application to neuron tracing,” J. Comput.-Assist. Microsc., 10 123 –135 (1998). https://doi.org/10.1023/A:1023437403657 1040-7286 Google Scholar

39. 

R. Klette and A. Rosenfeld, Digital Geometry, Geometric Methods for Digital Picture Analysis, Morgan Kaufmann Publishers, San Francisco (2004). Google Scholar

40. 

M. Seul, L. O’Gormann, and M. J. Sammon, “Binary image analysis: Thinning,” Practical Algorithms for Image Analysis, Cambridge University Press, Cambridge (1995). Google Scholar

41. 

P. P. Jonker, “Skeletons in N dimensions using shape primitives,” Pattern Recogn. Lett., 23 677 –686 (2002). https://doi.org/10.1016/S0167-8655(01)00144-1 0167-8655 Google Scholar

42. 

A. Rosenfeld and J. L. Pfaltz, “Distance functions on digital pictures,” Pattern Recogn. Lett., 1 33 (1968). 0167-8655 Google Scholar

43. 

Mercury-Computer-Systems, “Amira,” (2007) http://www.amiravis.com Google Scholar

45. 

J. Panico and P. Sterling, “Retinal neurons and vessels are not fractal but space-filling,” J. Comp. Neurol., 361 479 –490 (1995). https://doi.org/10.1002/cne.903610311 0021-9967 Google Scholar

46. 

D. Jaeger, “Accurate reconstruction of neuronal morphology,” (2001) Google Scholar

47. 

Y. Shu, A. Hasenstaub, A. Duque, Y. Yu, and D. A. McCormick, “Modulation of intracortical synaptic potentials by presynaptic somatic membrane potential,” Nature (London), 441 761 –765 (2006). https://doi.org/10.1038/nature04720 0028-0836 Google Scholar

48. 

R. Neher and E. Neher, “Optimizing imaging parameters for the separation of multiple labels in a fluorescence image,” J. Microsc., 213 46 –62 (2004). https://doi.org/10.1111/j.1365-2818.2004.01262.x 0022-2720 Google Scholar

49. 

W. Denk and H. Horstmann, “Serial block-face scanning electron microscopy to reconstruct three-dimensional tissue nanostructure,” PLoS Biol., 2 e329 (2004). 1545-7885 Google Scholar

50. 

M. L. Hines and N. T. Carnevale, “NEURON: A tool for neuroscientists,” Neuroscientist, 7 123 –135 (2001). 1073-8584 Google Scholar

51. 

M. L. Hines and N. T. Carnevale, “The NEURON simulation environment,” Neural Comput., 9 1179 –1209 (1997). 0899-7667 Google Scholar

52. 

P. Bastian and S. Lang, “DUNE, Distributed and Unified Numerics Environment,” (2007) http://www.dune-project.org/ Google Scholar
©(2007) Society of Photo-Optical Instrumentation Engineers (SPIE)
Marcel Oberlaender, Randy M. Bruno, Bert Julian Sakmann, and Philip Julian Broser "Transmitted light brightfield mosaic microscopy for three-dimensional tracing of single neuron morphology," Journal of Biomedical Optics 12(6), 064029 (1 November 2007). https://doi.org/10.1117/1.2815693
Published: 1 November 2007