Characterization of the spread of electrical activity is essential for understanding the mechanisms responsible for normal cardiac rhythm, arrhythmias, and antiarrhythmia therapies. Cardiac optical mapping, in which myocardial electrical activity is simultaneously recorded from hundreds or thousands of sites, has made great strides in furthering our understanding of the initiation, maintenance, and termination of arrhythmias. In optical mapping of transmembrane potential, heart tissue is stained with a voltage-sensitive dye and illuminated with an excitation light source.1, 2 The resulting emission fluorescence is proportional to the transmembrane potential. In contrast to electrode mapping techniques, optical mapping has the ability to faithfully reproduce transmembrane action potential morphology while being optically isolated from the overwhelming electric field applied during defibrillation shocks.1 Therefore, the optical mapping technique is a powerful tool for elucidating the exact physiological mechanisms of cardiac arrhythmias and defibrillation.
The monocular principle is predominantly used for cardiac optical mapping. The mapped region is limited to the field of view of the optical sensor. In cardiac arrhythmias, single or multiple coexisting reentrant wavefronts have been observed in numerous studies.3, 4, 5 The core of the reentrant arrhythmia can be highly unstable and meander across the epicardium.6 Thus, a study using a monocular imaging system cannot collect the full information during arrhythmia if the core of reentry leaves the field of view or if a reentry core is beyond the field of view. This limitation strongly motivated efforts to build panoramic imaging systems that reveal the electrical activity on the entire ventricular epicardium. The first implementations of the panoramic technique7, 8, 9, 10 introduced a cost-efficient method in which the investigators optically mapped electrical activity using a charge-coupled device (CCD) camera and a panoramic mirror arrangement to obtain the full ventricular epicardial view. After image registration, the electrical data was texture mapped onto the reconstructed heart geometry. Kay, Amison, and Rogers11 extended this idea to a panoramic optical mapping system capable of imaging large hearts, which used two CCD cameras and one mirror to obtain four views of the heart.
Optical imaging of the intact heart is usually performed with CCD cameras and photodiode array (PDA) detectors. CCD technology could potentially offer a significant advantage of higher spatial resolution due to the large number of pixels on a CCD sensor. However, the rate of data acquisition is usually substantially lower than that achieved with a PDA system. The rate can be increased by pixel binning, yet this defeats the major advantage of CCD technology, since binning effectively reduces the spatial resolution. This limitation is particularly acute when studying the shock response of defibrillation, since the duration of defibrillation waveforms is only a few milliseconds. Moreover, the transmembrane potential response to strong electric shocks can be very fast (less than )12 and contains much higher frequency components compared to a normal propagated response, reinforcing the need for fast sampling rates.
In addition to higher temporal resolution, PDAs also provide higher quality signals than a CCD camera. One of the problems during studies of the biophysical mechanisms of defibrillation is that the imaging sensor can be partially obstructed by the shock electrodes in Langendorff perfused heart experiments that mimic external defibrillation. As a result, the signal-to-noise ratio (SNR) of recordings obtained from the obstructed area is lower than that of recordings from unobstructed areas. Furthermore, when imaging diseased hearts, such as a heart with a healed myocardial infarction, the optical signals from the unhealthy regions are also lower than those from normal tissue. In addition, optical signals recorded during an arrhythmia can have extremely low amplitudes near the reentry core, regardless of whether the tissue is healthy or diseased. All of these difficulties reinforce the requirement of high signal quality, which may not be achieved with a CCD-based system.
Defibrillation has been studied extensively in various in-vivo and in-vitro heart models. However, many findings have been limited, since these studies used functionally and structurally normal heart models, whereas a large percentage of patients who receive defibrillation therapy are actually suffering from coronary diseases such as ischemia and myocardial infarction. Thus far, vulnerability and defibrillation have not been widely studied by optical mapping at the whole heart level under these disease conditions. In this study, we developed a PDA-based 3-D fast fluorescence panoramic imaging (FFPI) system with high temporal resolution and signal quality, making this system well suited to study the mechanisms of defibrillation in the diseased heart. This system operates at a sampling rate and has in total, providing us with the unique opportunity to visualize the electrical activity on the epicardial surface of the rabbit heart.
Materials and Methods
Isolated Rabbit Heart
The protocol was approved by the Institutional Animal Care and Use Committee at Washington University. The hearts of New Zealand white rabbits were imaged in this study. These animals consisted of healthy control rabbits as well as those with diseased hearts, including healed myocardial infarction and hypertrophic cardiomyopathy.13 The rabbits were injected intravenously with sodium pentobarbital and with 2000 U heparin. The hearts were quickly removed, placed on a Langendorff apparatus, and perfused with oxygenated modified Tyrode’s solution as previously described.12
The hearts were stained by a gradual injection of of stock solution of the voltage-sensitive dye di-4-ANEPPS (Molecular Probes, Eugene, Oregon) diluted in dimethylsulfoxide (DMSO; Fisher Scientific, Fair Lawn, New Jersey), delivered by a micropump over . The excitation-contraction uncoupler 2,3-butanedione monoxime (BDM, ; Fisher Scientific, Fair Lawn, New Jersey) was added to the perfusate to suppress motion artifacts in the optical recordings.
Fast Fluorescence Panoramic Imaging System
As shown in Figs. 1a and 1b , the hearts were positioned in a hexagonal perfusion chamber filled with Tyrode’s solution. Three photodiode arrays (PDAs model C4675-103, Hamamatsu, Bridgewater, New Jersey) were spaced apart and directed toward the center of the chamber. The other three faces of the chamber were used for illumination by three commercially available light emitting diode (LED) arrays (Luxeon Flood 18-up, Quadica Developments, Calgary, Canada). To mimic the electrode configuration of external defibrillation, two stainless-steel mesh electrodes were placed into the solution chamber in an orientation perpendicular to the projection of PDA-1. The hearts were oriented so that the right ventricle faced PDA-1 and the left ventricle faced the mesh electrode distant from PDA-1. The perfusion cannula was connected to a rotation stage. A digital camera (Sony DSC-S70) was used to take images of the heart ( resolution) at increments as the heart was rotated a full . These images were used to reconstruct the 3-D heart geometry. As shown in Fig. 1c, for each individual PDA, the fluorescence emitted from the heart was filtered using an emission filter , and collected by a PDA with built-in first-stage preamplifiers. The outputs of each PDA were fed into a custom-made 256-channel second-stage amplifier (Innovative Technology, Brooksville, Florida) and then recorded by a data acquisition system (National Instruments, Austin, Texas) at with resolution. Instrumentation channels recorded the shock field strength, electrocardiogram, shock voltage, pacing stimuli, and defibrillation triggers, which were saved for off-line data analysis.
A commonly accepted camera calibration model, the perspective projection model with affine distortion, states that a 3-D point $(X, Y, Z)$ is projected onto a 2-D image point $(u, v)$ based on the projection equation: $$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & s & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} X/Z \\ Y/Z \\ 1 \end{bmatrix}, \tag{1}$$ where $f_x$ and $f_y$ are the focal distances expressed in units of horizontal and vertical pixels. The principal coordinates $c_x$ and $c_y$ correspond to the image of the optical center. The skew factor $s$ encodes the angle between the $u$ and $v$ pixel axes, allowing the camera model to handle nonsquare pixels. The digital camera we used to document the heart geometry has square pixels, so the skew factor was known to be 0. A more detailed description of this camera model can be found at the following website: http://www.vision.caltech.edu/bouguetj/calib_doc/htmls/parameters.html We used this model for heart surface reconstruction and texture mapping, as described next.
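The projection of Eq. 1 can be sketched in a few lines. This is a minimal illustration, assuming a point already expressed in camera coordinates; the function name and parameter values are ours, not part of the original system:

```python
import numpy as np

def project_point(point_3d, fx, fy, cx, cy, skew=0.0):
    """Project a 3-D point (camera coordinates) onto the 2-D image plane
    using the intrinsic parameters of the pinhole model with affine distortion.
    fx, fy: focal distances in horizontal/vertical pixels;
    cx, cy: principal point; skew: pixel-axis skew (0 for square pixels)."""
    X, Y, Z = point_3d
    # Perspective division: normalized image coordinates
    xn, yn = X / Z, Y / Z
    # Affine intrinsic mapping to pixel coordinates
    u = fx * xn + skew * yn + cx
    v = fy * yn + cy
    return u, v

# A point on the optical axis maps to the principal point
print(project_point((0.0, 0.0, 2.0), fx=800, fy=800, cx=320, cy=240))  # (320.0, 240.0)
```

With square pixels (skew = 0), as for the digital camera used here, the mapping reduces to independent scaling and offset of the two normalized coordinates.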
Heart Surface Reconstruction
The left panel of Fig. 2a shows the flowchart of the algorithm for reconstructing the heart surface. The occluding contours algorithm14 has been used in a previous panoramic imaging study9 as the principal method to reconstruct the heart surface geometry. The essence of the occluding contours method is to iteratively shave a virtual 3-D cube by silhouette edges to obtain the volume of an object inside the cube. Kay, Amison, and Rogers11 incorporated an adaptive octree mesh refinement algorithm into the occluding contours method to reduce computational load and memory requirements. We implemented these algorithms as follows.
1. The digital camera was positioned at a fixed distance from the solution chamber, while its optical axis was aligned to intersect with the axis of rotation of the rotation stage. After we solved the intrinsic parameters of the camera model, we took 36 images (up to resolution) of the heart while the rotation stage rotated around its axis for a full revolution with a rotation step.
2. The heart boundary in these images was extracted by a combined image processing procedure, including intensity adjustment, intensity thresholding, image opening, and image closing. After heart boundary detection, we created silhouettes (36 in total) for these images by setting the pixels on the heart to a silhouette value of 1, and the rest of pixels to a silhouette value of 0.
3. A virtual 3-D cube just large enough to contain the heart was created. The cube was initially divided into eight voxels.
4. The voxel vertices were projected to the camera imaging plane based on Eq. 1 to determine their silhouette values using bilinear interpolation of the silhouette image at the corresponding rotation angle. The silhouette value of a single vertex was clamped to 0 for the remainder of the analysis whenever it was found to be equal to 0, thereby indicating that the vertex was outside the heart. We then rotated the cube around the axis of rotation and computed the silhouette values of the vertices from the next corresponding silhouette image. This procedure was repeated for all the silhouette images. Voxels that had all eight corners outside the heart volume and voxels that had all eight corners inside the heart volume were excluded from further analysis.
5. Each of the remaining voxels was further evenly divided into another eight voxels, and step 4 was repeated. The entire process was repeated until the desired resolution was achieved. The centroids of these voxels formed a set of scattered points approximating the heart surface.
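The carving in steps 1 through 5 can be condensed into a short sketch. This is a simplified uniform-grid version of the occluding-contours method (the octree refinement of step 5 is omitted for brevity, and the `project` callback and silhouette images are assumed inputs; all names are ours):

```python
import numpy as np

def carve(silhouettes, project, n=32, extent=1.0):
    """Simplified occluding-contours (visual hull) carving on a uniform grid.
    silhouettes: list of (angle, 2-D binary array); project(pts, angle) maps
    Nx3 world points to Nx2 pixel coordinates for the view at that angle.
    Returns centers of voxels that survive carving (approximate heart volume)."""
    # Virtual cube of n^3 voxel centers, just large enough to contain the object
    g = np.linspace(-extent, extent, n)
    xs, ys, zs = np.meshgrid(g, g, g, indexing="ij")
    centers = np.column_stack([xs.ravel(), ys.ravel(), zs.ravel()])
    inside = np.ones(len(centers), dtype=bool)
    for angle, sil in silhouettes:
        uv = project(centers[inside], angle)
        u = np.clip(np.round(uv[:, 0]).astype(int), 0, sil.shape[1] - 1)
        v = np.clip(np.round(uv[:, 1]).astype(int), 0, sil.shape[0] - 1)
        # A voxel is carved away as soon as any view sees it outside the silhouette
        keep = sil[v, u] > 0
        idx = np.flatnonzero(inside)
        inside[idx[~keep]] = False
    return centers[inside]
```

In the full algorithm, voxels straddling the silhouette boundary would instead be subdivided recursively, which concentrates resolution near the surface rather than filling the whole volume.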
We reconstructed the heart surface by defining an analytic function that maps the abstract, topological sphere into $\mathbb{R}^3$. The function was built by blending together multiple, overlapping polynomial functions, called charts, each of which maps a subset of the sphere to $\mathbb{R}^3$. The result is a rational polynomial embedding of the sphere15 into $\mathbb{R}^3$ with guaranteed $C^1$ continuity, i.e., the first derivatives are defined and smooth. The data points found in the previous step were placed on the sphere, respecting their local connectivity. Each chart was then fit to a connected subset of these points to locally approximate the surface. Once the surface is reconstructed, it can be tessellated at any desired resolution with near-equilateral triangles. The charts also provide a local parameterization for every point on the surface, suitable for use in texture mapping and representing other data on the surface.
More specifically, we began by finding a local tangent neighborhood for each data point. We first estimated a surface normal16 if one was not given. The tangent neighborhood consisted of the four (or more) nearest data points, which together formed an enclosing ring around the given data point when projected onto the tangent plane. We next grouped the points into connected subsets. A chart was created by choosing a seed point, then taking all of the data points within a given geodesic distance from the seed point, as measured in the tangent neighborhood graph. We used a greedy algorithm to choose the chart seed points by choosing an uncovered point that is close to the ideal distance from one or more existing chart centers. The goal was to place the chart centers at a fixed distance from each other, determined by the desired chart overlap. This tends to produce a hexagonal tiling. Note that data points may (and will) appear in multiple charts. For this dataset, the chart radius was set as a fraction of the height of the heart.
Once the chart seeds and groupings were identified, they (and the data points) were mapped to an abstract representation of the sphere, preserving local neighborhood information for both.15 This guarantees that the resulting surface will have spherical topology. At this stage, any gaps due to missing data were filled in by adding additional charts. Each chart was then fit to its corresponding data and blended into the final function using a blend function, which is 1 in the middle of the chart and decays to zero by the boundary. Each individual polynomial function approximates its data within a given epsilon (0.1 of the average distance between points); the approximation error of the entire surface is less than, or equal to, the individual function’s error.
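The chart blending described above can be illustrated in one dimension. This is a toy partition-of-unity sketch; the bump function and chart layout are our illustrative choices, not the authors' exact polynomials:

```python
import numpy as np

def blend_weight(d, radius):
    """C^1 blend: 1 at the chart center (d = 0), decays smoothly to 0 at d = radius."""
    t = np.clip(d / radius, 0.0, 1.0)
    return (1.0 - t ** 2) ** 2  # smooth bump; value and first derivative vanish at t = 1

def blend_charts(x, charts, radius):
    """Partition-of-unity blend of overlapping local polynomial fits.
    charts: list of (center, poly_coeffs); each local fit is evaluated with
    numpy.polyval in chart-local coordinates, then averaged with weights
    normalized to sum to 1 wherever charts overlap."""
    w = np.array([blend_weight(abs(x - c), radius) for c, _ in charts])
    vals = np.array([np.polyval(p, x - c) for c, p in charts])
    return np.sum(w * vals) / np.sum(w)
```

Because the weights form a partition of unity, the blended surface inherits the approximation error of the individual charts, matching the error bound stated above.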
Texture Mapping of Optically Recorded Action Potential
The next step, shown in the right panel of Fig. 2a, was texture mapping the optically recorded action potential onto the reconstructed heart surface mesh. We developed a robust algorithm to assign such data to each element in the surface mesh.
1. Register mesh to PDA projections. As shown in Fig. 2b, the reconstructed heart surface mesh was registered to PDA projections by calculating the visible angle $\theta$ between the outward normal vector of each mesh cell and each normalized PDA projection vector. Then we defined the projection angle for that view as: $$\alpha = \theta - 90^{\circ}. \tag{2}$$ The view with the maximum projection angle $\alpha$ for a mesh cell provides the best vantage point for viewing the surface of the mesh cell.
2. Single- or dual-projection texture mapping. The texture mapping procedure depends on the registration of mesh cells. For each mesh cell registered to a single PDA projection, its centroid was back projected onto the corresponding PDA imaging plane to determine the fluorescence using bilinear interpolation. For each mesh cell registered to two PDA projections, the same procedure was performed twice, once for each PDA projection. We then computed the weighted average fluorescence from the two raw fluorescence signals as the fluorescence of the mesh cell based on the following equation: $$F = \frac{\alpha_1 F_1 + \alpha_2 F_2}{\alpha_1 + \alpha_2}, \tag{3}$$ where $F_1$, $F_2$ and $\alpha_1$, $\alpha_2$ are the fluorescence signals and projection angles of the two registered PDAs.
3. Fluorescent signals were scaled to mV, assuming that a normal resting potential of and action potential amplitude of were present at all of the mesh cells.
4. We performed 3-D visualization in Matlab (The Mathworks, Incorporated, Natick, Massachusetts) using the Matlab multifaceted patches function (patch.m).
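Steps 1 through 3 can be sketched compactly. This follows our reading of the registration and blending equations; the projection-angle convention and the resting-potential/amplitude defaults are illustrative assumptions, not values from the original system:

```python
import numpy as np

def projection_angle(normal, view_dir):
    """Visible angle between the outward cell normal and the PDA projection
    direction, converted to a projection angle in degrees. Cells whose face
    opposes the projection direction (i.e., face the PDA) score highest."""
    cosang = np.dot(normal, view_dir) / (np.linalg.norm(normal) * np.linalg.norm(view_dir))
    theta = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
    return theta - 90.0  # > 0 when the cell face is visible to the PDA

def blend_fluorescence(f1, a1, f2, a2):
    """Weighted average of two fluorescence signals from a dual-registered cell,
    weighting each signal by its projection angle (cf. Eq. 3)."""
    return (a1 * f1 + a2 * f2) / (a1 + a2)

def scale_to_mv(f, rest=-85.0, amplitude=100.0):
    """Scale a fluorescence trace to millivolts, assuming a uniform resting
    potential and action potential amplitude at every mesh cell (the default
    rest/amplitude values here are illustrative placeholders)."""
    f = np.asarray(f, dtype=float)
    f_norm = (f - f.min()) / (f.max() - f.min())
    return rest + amplitude * f_norm
```

Weighting by projection angle means the view with the more face-on vantage dominates the blend, which is consistent with the SNR behavior reported for dual-registered cells below.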
Experimental Protocol and Data Analysis
A bipolar Ag–AgCl pacing electrode with interelectrode distance was placed at the anterior epicardium. We first recorded sinus rhythm; then the heart was paced at basic cycle length by stimuli, and the electrical activity during this epicardial pacing was recorded. After the pacing stimuli, a test shock was delivered to the heart through the mesh electrodes spaced apart inside the solution chamber. Shocks were delivered using a custom-made defibrillator, which consisted of five capacitors ( each) and a triggering circuit controlled by a TTL pulse from the computer. Arrhythmias were induced by either burst pacing from the bipolar pacing electrode or a T-wave shock. Sustained arrhythmias were recorded, and then an additional shock was delivered to restore the normal rhythm.
The SNR presented in this study is the peak-to-peak SNR. We selected a single beat of sinus rhythm; the peak-to-peak amplitude of the noise was computed during diastole, before the phase-0 upstroke of the action potential, and the peak-to-peak amplitude of the fluorescent action potential was computed during systole.
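The peak-to-peak SNR computation can be expressed directly. The index windows and function name below are ours:

```python
import numpy as np

def peak_to_peak_snr(trace, diastole, systole):
    """Peak-to-peak SNR of a single optically recorded beat.
    diastole: index slice selecting the quiescent interval before the upstroke
    (noise measurement); systole: slice covering the action potential itself
    (upstroke through plateau, for the signal amplitude)."""
    noise_pp = np.ptp(trace[diastole])   # peak-to-peak noise amplitude
    signal_pp = np.ptp(trace[systole])   # peak-to-peak action potential amplitude
    return signal_pp / noise_pp
```

With this definition, a larger SNR directly reflects a cleaner baseline relative to the optical action potential amplitude.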
White-light sources such as tungsten halogen lamps and mercury arc lamps have been widely used in early optical mapping systems in combination with narrow bandpass filters to select the desired excitation wavelength and spectral bandwidth. Light emitting diodes (LEDs) provide an attractive option for excitation light sources,17, 18 since LEDs are significantly less expensive than white light sources. In this study, we compared fluorescence recordings using two excitation light sources: a tungsten halogen lamp and Luxeon Flood LEDs. Figure 3 shows the amplitude changes of an optically recorded action potential from a single site [in Fig. 3a, cyan color] under different LED illumination configurations as well as with the filtered tungsten halogen lamp. Figure 3b shows the SNR within the dashed rectangle in Fig. 3a for each illumination configuration. These results show that the use of Luxeon LEDs as the excitation light source produced fluorescence signals with a higher SNR under most illumination configurations compared to the tungsten halogen lamp.
We then compared fluorescence signals recorded by a PDA and a CCD camera (CA-D1-0128T-STDL, Dalsa, Waterloo, Ontario, Canada). For this analysis, the heart was illuminated by an LED matrix. As indicated in Fig. 4a , the signals within the white rectangle were used for SNR comparison. The PDA system collected data at a sample rate, whereas the CCD system used . As evident from this figure, PDA imaging yielded not only higher temporal resolution, but also a significantly higher SNR compared to this particular CCD camera. Even with binning of the CCD data, the PDA provided a much larger SNR while providing comparable spatial resolution.
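The resolution/SNR tradeoff of binning mentioned above can be made concrete with a small sketch (names are ours):

```python
import numpy as np

def bin_pixels(frame, b=2):
    """Sum b x b blocks of a 2-D frame (pixel binning). Summing b*b pixels
    grows the signal ~b^2 while uncorrelated noise grows ~b, improving SNR
    at the cost of a factor-of-b loss of spatial resolution per dimension."""
    h, w = frame.shape
    h, w = h - h % b, w - w % b            # crop to a multiple of the bin size
    return frame[:h, :w].reshape(h // b, b, w // b, b).sum(axis=(1, 3))
```

This is why binning a CCD toward PDA-like frame rates simultaneously erodes the CCD's main advantage, its pixel count.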
One potential advantage of using multiple cameras to visualize an object is that it may be possible to improve the SNR in the areas visible to multiple sensors. To confirm this, all of the heart surface mesh cells were registered. Each mesh cell was assigned one of the following registration values: 1 to 3 (visible only to PDA 1, 2, or 3, respectively), 4 (visible to PDAs 1 and 2), 5 (visible to PDAs 2 and 3), 6 (visible to PDAs 3 and 1), or 7 (not visible to any PDA). Figure 5a shows the mesh cell registration; values 1 to 6 are represented by different gray levels, and value 7 is represented by black. For all the dual-registered mesh cells with projection angles $\alpha_1$ and $\alpha_2$, we computed two types of SNR: $\mathrm{SNR}_{\mathrm{avg}}$, the SNR of the fluorescence signal after the weighted averaging process [see Eq. 3], and $\mathrm{SNR}_{\mathrm{single}}$, the SNR of the fluorescence signal from the single PDA with the larger projection angle. Figure 5b demonstrates that $\mathrm{SNR}_{\mathrm{avg}}$ is larger than $\mathrm{SNR}_{\mathrm{single}}$ in a majority of the dual-registered mesh cells, which indicates that we can improve the SNR by averaging the two fluorescence signals recorded by different PDAs. Also, as the absolute difference between $\alpha_1$ and $\alpha_2$ increases, the fluorescence signal from the PDA with the larger projection angle dominates the weighted average, and thus the difference between $\mathrm{SNR}_{\mathrm{avg}}$ and $\mathrm{SNR}_{\mathrm{single}}$ becomes smaller [shown in Fig. 5b].
Figure 6a shows an example of the reconstructed rabbit heart surface and epicardial action potential texture mapping as visualized from the three PDA views. In the left column, the heart surface geometry is shown, and texture mapping of the electrical activity present during epicardial pacing and shock-induced ventricular tachycardia are shown in the middle and right columns, respectively. Individual optical signals from locations a through f are shown in Fig. 6b. The heart was first ventricularly paced , then a shock from two mesh electrodes was delivered at the plateau of the action potentials, which induced a sustained ventricular tachycardia .
In this study, we developed a novel PDA-based fast fluorescence panoramic imaging (FFPI) system operated at with in total. The FFPI system provides high quality fluorescence signals from a majority of the epicardium of the Langendorff perfused rabbit heart.
Previously, two CCD-based panoramic imaging systems have been developed.9, 11 CCD technology has a significant advantage of higher spatial resolution due to the large number of pixels on a CCD sensor. Bray, Lin, and Wikswo10 have demonstrated sufficient spatial resolution of a CCD-based panoramic imaging system in the study of 3-D cardiac electrodynamic behavior. Another CCD-based system developed by Kay, Amison, and Rogers11 demonstrated sufficient spatial resolution ( average spatial resolution before image processing) for the study of ventricular fibrillation (VF) in large heart models. However, PDAs are more commonly used in studies where high temporal resolution and high dynamic range are needed, such as the study of defibrillation. Our results demonstrate that the PDA system reported in this study can achieve a higher SNR at an approximately 10 times faster sample rate compared to the CCD camera tested. However, other commercially available CCD cameras may provide higher SNRs and faster sample rates than the tested camera.
The spatial resolution of optical mapping is dependent on the surface area mapped and the number of available pixels. In our studies, each PDA imaged . This area contained a small portion of the atrial epicardium and most of the ventricular epicardium. Three PDAs provided in total. Approximately 570 of those pixels contained data. On average, each pixel mapped an area of , providing an average spatial resolution of before the application of bilinear interpolation. This spatial resolution is very similar to the spatial resolution achieved in the previous CCD-based system11 in large heart models, and is high enough for the study of wavefront propagation during arrhythmias.19 However, because the spatial resolution of this panoramic imaging system is limited by the number of photodiodes in each PDA, this system cannot be directly used on large hearts. Improvements in complementary metal-oxide semiconductor (CMOS) technology have produced a family of novel image sensors with high-speed image acquisition while retaining the quantum efficiency of CCDs. CMOS cameras are currently more costly than both CCD and PDA cameras; however, given these performance advantages, CMOS cameras should soon become competitive.
During defibrillation, the time constant of the membrane response to a shock depends on the shock strength and the refractory state of the tissue when the shock is delivered. At the early plateau of the action potential, the fastest time constant can be less than when the applied shock is strong enough to create electroporation.20, 21 Another fast membrane potential change occurs when a shock is applied during diastole. Using a sampling rate, Sharifov and Fast22 observed fast activation at about when an intermediate-strength shock was delivered during diastole. Therefore, the fastest frequency component could be as high as approximately . According to the Nyquist sampling theorem, is needed to accurately record these signals. Although even higher sampling frequencies are desirable, decreased signal quality at higher sampling rates is a tradeoff. Therefore, we used , despite the fact that our FFPI system can be operated as high as .
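The link between membrane time constant and required sampling rate can be sketched under a simplifying first-order assumption; the model choice and the example time constant are ours, not values from the cited measurements:

```python
import math

def required_sampling_rate(tau_s):
    """Estimate the minimum sampling rate for a first-order (RC-like) membrane
    response with time constant tau_s (in seconds): the -3 dB corner frequency
    is f_c = 1 / (2*pi*tau), and the Nyquist criterion requires sampling at
    least 2*f_c to resolve components up to f_c. (First-order behavior and the
    -3 dB cutoff are simplifying assumptions.)"""
    f_c = 1.0 / (2.0 * math.pi * tau_s)
    return 2.0 * f_c
```

For example, an illustrative time constant of 0.2 ms (a hypothetical value) gives a corner frequency near 800 Hz and hence a minimum sampling rate around 1.6 kHz; faster shock responses push this requirement proportionally higher, and in practice one samples well above the Nyquist minimum.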
One of the major advantages of our FFPI system is its ability to record high SNR signals even when the PDAs are partially obstructed by the mesh electrodes used to deliver external defibrillation shocks, making this system well suited for the study of diseased hearts, which can have very low amplitude optical signals. According to our experiments, a clear attenuation effect was observed when the mesh electrode was positioned within of the nearest heart surface. The attenuation effect rapidly decreased as we moved the mesh electrode away from the heart. At distances larger than , we could not see any difference in the morphology of the recorded action potentials with and without the mesh electrode, except in the amplitude of the signal [shown in Fig. 3b.]
In our experiments, each rotation of the heart results in a slight swing; thus, it takes several seconds for the heart to stabilize before image capture. It takes approximately to complete the full rotation procedure at a step size. Initially, steps were used. The difference between these two step sizes has a minimal effect on the geometric reconstruction, primarily because the curvature of the ventricles is very smooth. Therefore, we selected a step size to expedite the procedure. However, a finer step size is probably needed to accurately reconstruct a more complex anatomical structure.
Many light sources have been used in optical mapping systems, including lasers,23 DC-powered tungsten-halogen lamps,24 and most recently, light-emitting diodes.17, 18, 25 All of these light sources have their own unique properties and limitations. In this system, we used commercially available LED arrays, the Luxeon Flood, as the excitation light source. The Luxeon Flood is constructed from 18 Luxeon emitters (green, typical , spectral half ) mounted on a rectangular PCB to deliver the most light output in the smallest possible space. Compared with the traditional illumination method of a tungsten-halogen lamp accompanied by a bandpass green filter and dichroic mirror, the Luxeon Flood is much more cost effective, ranging from $100 to $200 per LED array, compared to several thousand dollars for the light source. Another advantage is that each emitter has a viewing angle. Thus, the Luxeon Flood provides uniform illumination at a distance larger than . For the FFPI system, we connected three Luxeon Floods in parallel and powered them with a constant voltage power supply at and ( per flood). Although we have achieved satisfactory signal quality [see Figs. 1a and 1b], it is possible to further improve the fluorescence signal quality by increasing the excitation light intensity [as indicated in Fig. 3b as blue and red signals]. This can be done by increasing the number of floods, increasing the number of emitters on each flood, or by increasing the driving current (up to ).
In this study, we did not directly address the volume change in the Langendorff preparation. An advantage of CCD-based systems is that any change in volume of the heart over the course of the experiment can be examined directly through the CCD camera during the geometric reconstruction phase, as well as throughout the optical data collection phase. Using this method, Bray, Lin, and Wikswo did not see a significant change in volume throughout the course of their experiments.9 Kay, Amison, and Rogers demonstrated that the heart volume increases rapidly within the first in Langendorff perfused swine hearts after exposure to DAM.11 In our FFPI system, we cannot directly examine volume changes from the PDA. Therefore, to minimize the overall effects of volume changes on the geometric reconstruction and texture mapping procedures, the heart was given a longer time (at least ) to stabilize on the Langendorff apparatus before heart rotation and image acquisition began. In addition to contributing to volume changes of the Langendorff perfused heart, BDM also has an effect on a variety of ion channels and may alter the action potential duration in a number of species.26, 27 Therefore, the effects of BDM need to be taken into consideration for an appropriately designed experiment. However, we have recently identified a new excitation-contraction uncoupler, blebbistatin, which may solve this problem.28
In this study, we allowed at least for the LED light sources to reach a steady state before data acquisition. However, we did not directly measure the time to steady-state spectrum, intensity, or noise of the LED. These characteristics need to be examined in a future study.
Finally, cardiac electrical activity is essentially a 3-D phenomenon, particularly during complex arrhythmias. Results of this study are limited by the typical epicardial penetration depth of the optical mapping technique.
This work was supported by National Institutes of Health (NIH) grants (HL67322, HL074283) and National Science Foundation grant 049856.