The advent of molecular biology and the possibility of imaging biochemical events within the environment of single cells have progressively shifted the focus of analysis from the realm of microscopy to that of molecular imaging. The underlying assumption is that structure and function at the scale of the smallest organic components of the tissue, such as the constituents of the microcirculation, are solved problems. However, the measurement of the parameters of basic transport phenomena operating at the microscopic level of the tissue, such as flow, oxygen transport, shear stress, fluid exchange, and electrolyte distribution, was until recently a major challenge, solvable only by resorting to the analysis of 2-D tissue models. Recent technical developments have introduced techniques such as nuclear magnetic resonance imaging,1 high-speed intravital multiphoton laser scanning microscopy,2 visualization by means of quantum dots,3 and Doppler optical coherence tomography,4 which simultaneously provide high spatial and temporal resolution and the possibility of imaging subsurface structures noninvasively.
A critical issue is that many microscopic tissue phenomena occur only in the living condition, determined by the dynamics of the corresponding transport processes. This is the situation for effects in the plasma layer in the gap between the blood column and the glycocalyx/endothelial surface, the spontaneous activity of contraction and relaxation of most microvessels, including capillaries and arterioles, the maintenance of microscopic tissue perfusion by capillaries through the passage of red blood cells (RBCs), the dynamics of leukocyte rolling and sticking, and the distribution of tissue oxygen partial pressure gradients. The measurement, analysis, and interpretation of each of these phenomena is directly dependent on the ability to carry out microscopic dynamic measurements, a process complicated by the fact that the microscope magnifies distance but not time, thus magnifying apparent velocities. Furthermore, the targets of this analysis are usually objects defined by boundaries whose location varies on geometrical scales smaller than the resolving power of the optical microscope. In the following, we present the status quo and challenges related to the quantitative characterization of each of these classes of phenomena (summarized in Table 1) as they manifest in the realm of the living tissue.
Table 1 Summary of dynamic parameters for the optical characterization of microvascular function.

| Parameter | Method | Analysis | References |
|---|---|---|---|
| Plasma layer | Direct visualization | | 5, 6 |
| | High-speed video | | 7 |
| | | Stochastic differential equations & random boundaries | 8, 9, 10 |
| Vessel diameter | Flying-spot microscope | | |
| | Automated systems | | 12, 13 |
| | | Prony spectral analysis | 14, 15 |
| Blood flow | Dual-slit method | Analog/digital cross-correlation computation | 16, 17, 18 |
| | | Frequency analysis via grating | 19 |
| | Four-slit method | Spatial differentiation & cross-correlation | 20 |
| | Laser Doppler | | 21, 22 |
| Oxygen partial pressure | Phosphorescence quenching | Exponential averaging and fitting | 23, 24, 25, 26, 27, 28 |
Visualization of the Microcirculation
The prevalent method for visualizing the microcirculation to obtain functional data is the technology labeled “intravital microscopy,” primarily applicable in tissues that can be transilluminated. In general, the tissue is visualized with monochromatic light using the optics of a standard microscope. Access to thicker tissues, where transillumination is not feasible, can be obtained by epi-illumination and the methods of fluorescence microscopy, using injectable compounds that, when illuminated, emit light at a different wavelength than the exciting radiation. A limitation (and virtue) of this technique is that the fluorescent image shows only the structure that was labeled with the fluorescent dye.
These methodologies are invasive and generally not applicable to clinical settings. A method that provides visualization of deep tissue, while maintaining the possibility of gathering functional data, was developed by Slaff using polarized light illumination. This technique passes the light returned from the tissue through an analyzing optical element whose axis of polarization is orthogonal to that of the illuminating beam.30 The net effect is to eliminate light reflected by the tissue surface: surface reflections retain the polarization of the illuminating beam and are therefore blocked by the orthogonal analyzer, while light scattered within the depth of the tissue is depolarized and passes through to form the image.
A modification of this technique, termed sidestream dark field (SDF) imaging, was developed by the group directed by Ince.31, 32 In this approach, light contamination from surface reflection is eliminated by illuminating with concentrically placed light-emitting diodes whose light penetrates the tissue by scattering. The lens system is placed in the center of the illuminating ring and is therefore inaccessible to tissue surface reflections. This method has been used clinically to study the microcirculation of the brain, conjunctiva, tongue, and skin.
Analysis of the Plasma Layer
The axial migration of RBCs in flow leads to the formation of a cell-free or cell-poor layer adjacent to the endothelium in arterioles and venules. This layer significantly influences microvascular function, since it affects effective blood viscosity and wall shear stress, one of the principal stimuli for the release of vasodilators such as nitric oxide (NO) and prostaglandins by the endothelium,33 while acting as a barrier to NO scavenging by the RBC core. Furthermore, it is now established that the cell-free layer width is a determinant of NO bioavailability, resulting from the balance between scavenging of NO by RBCs and the production of NO by the endothelium.34
However, data on the cell-free layer width in the microcirculation were until recently limited, with estimates varying on the basis of visual observation.5, 35 A computer-based method was recently developed by Kim for monitoring the width of the cell-free layer in vivo.7 This method is based on high-speed video recordings carried out at framing rates up to , enabling the mean width and the spatial and temporal variations of the cell-free layer to be recorded at single sites.
The methodology is based on the analysis of video records played back at normal framing rates, using the SigmaScan Pro 5 software to obtain records of optical density along an analysis line placed perpendicular to the vessel wall. The width of the cell-free layer is determined from the spatial separation between the position of the vessel wall and the outer edge of the RBC core. The position of the vessel wall is found by noting the transition from dark to light pixels along the analysis line near the vessel wall.6, 36 The width of the RBC column is determined by converting its light intensity image into a binary image using an automatic thresholding method37 that maximizes the variance between pixels presumed to belong to the RBC column and those outside of this column. The actual width of the plasma layer is then obtained by subtracting the width of the glycocalyx layer of according to Reitsma38 and Kim et al.39
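The wall-detection and thresholding steps described above can be sketched as follows. This is an illustrative reconstruction, not the published implementation: the gradient-based wall detector, the synthetic profile geometry, and the pixel size are assumptions.

```python
import numpy as np

def otsu_threshold(values, nbins=64):
    """Intensity threshold maximizing the between-class variance (Otsu's method)."""
    hist, edges = np.histogram(values, bins=nbins)
    p = hist / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    best_t, best_var = centers[0], -1.0
    for i in range(1, nbins):
        w0, w1 = p[:i].sum(), p[i:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (p[:i] * centers[:i]).sum() / w0
        mu1 = (p[i:] * centers[i:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, centers[i]
    return best_t

def cell_free_width(profile, px_um):
    """Cell-free layer width along one analysis line running from outside the
    vessel toward the lumen center.  Wall position: strongest dark-to-light
    transition in the outer half; RBC-core edge: first sub-threshold pixel."""
    profile = np.asarray(profile, float)
    wall = int(np.argmax(np.diff(profile[: len(profile) // 2])))
    t = otsu_threshold(profile)
    dark = np.where(profile < t)[0]
    dark = dark[dark > wall]                 # pixels presumed inside the RBC core
    core_edge = int(dark[0]) if dark.size else len(profile) - 1
    return (core_edge - wall) * px_um
```

Applied to a synthetic transillumination profile (gray tissue, dark wall line, bright plasma gap, dark RBC core), the function returns the pixel distance between the wall transition and the core edge scaled by the pixel size.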
The principal significance of this methodology is that it provides a statistical view of the structure of the plasma layer. Measurements of the variability of the plasma layer at contiguous locations that are progressively separated enable determination of the correlation length,40 which describes the degree of persistence of perturbations downstream from the point of observation, a parameter found to be of the order of .
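One simple way to estimate a correlation length from a sequence of width measurements is the separation at which the spatial autocorrelation first decays below 1/e; the 1/e criterion and the evenly spaced sampling are assumptions of this sketch, not details from the cited work.

```python
import numpy as np

def correlation_length(widths, dx):
    """Separation at which the spatial autocorrelation of the layer width
    first decays below 1/e; dx is the spacing between measurement sites."""
    w = np.asarray(widths, float) - np.mean(widths)
    acf = np.correlate(w, w, mode="full")[len(w) - 1 :]   # one-sided autocorrelation
    acf = acf / acf[0]                                    # normalize to unity at zero lag
    below = np.where(acf < 1.0 / np.e)[0]
    return float(below[0] if below.size else len(acf) - 1) * dx
```

For a first-order autoregressive sequence with decay rate exp(-dx/Lc), the estimator recovers Lc to within sampling noise.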
The thickness of the microvessel glycocalyx layer at the endothelial surface is approximately , therefore these studies assume that it is included in the cell-free layer, since it is not observable with light microscopy. Several studies propose that the plasma layer is a function of vessel diameter, ranging from for capillaries to for the larger arterioles;39 therefore, when the glycocalyx is accounted for, the thickness of the free-flowing layer becomes very thin, in all likelihood leading to special hemodynamic effects affecting the generation of shear stress in the different size arterioles.
The temporal and spatial changes in the cell-free layer should cause physiological effects. It is reported that the variations in cell-free layer width are non-Gaussian,39 which is a consequence of the outward variations being limited by the vessel wall, while inward variations can extend into the RBC core, leading to a Poisson-like distribution. These variations have been shown theoretically to increase the effective viscosity in the cell-free layer41 as a function of the pattern and the magnitude of the variations. Additionally, temporal changes should also affect the dynamics of gas transport, particularly oxygen transfer to the tissue and the scavenging of endothelial produced NO.42
Regarding oxygen transport, the temporal variability of the cell-free layer may explain in part why the plasma layer is not a major diffusional barrier to the passage of oxygen, as proposed in some studies.43 In the presence of temporal variations of the width of the RBC column, shear rates in the plasma layer would present a variability that translates into an increase of the effective diffusion coefficient for gases, according to modeling of this effect, leading to the concept of shear-induced augmentation.44
Model Analysis of the Plasma Layer
Experimental delineation of bounding surfaces of the plasma layer is subject to significant uncertainty due both to practical limitations of imaging techniques and to the inherent roughness of the respective luminal and abluminal boundaries. This uncertainty can hamper one’s ability to estimate the functional extent of the plasma layer from optical imaging. Consequently, there is a growing interest in experimental, theoretical, and numerical studies of deterministic and probabilistic descriptions of such surfaces and of solutions of differential equations defined in the resulting domains.
Early attempts to represent surface roughness and to study its effects on system behavior were based on simplified, easily parameterizable, deterministic surface inhomogeneities, such as symmetrical asperities to represent indentations and hemispheres to represent protrusions. Alternatively, one can use random fields to represent rough surfaces whose detailed topology cannot be ascertained due to the lack of sufficient information and/or measurement errors. Random representations of rough surfaces are conceptually appealing because of their generality and ability to incorporate experimental data. Moreover, such an approach enables one not only to make predictions of the system behavior, but also to quantify the corresponding predictive uncertainties.
The presence of uncertainty in the delineation of uneven boundaries necessitates the development of new approaches for the analysis and numerical solution of differential equations defined on random domains. For example, it has been demonstrated that classical variational formulations might not be suitable for such problems, and finite difference approaches remain accurate only for relatively simple rectangular irregularities. The adoption of a probabilistic framework to describe rough surfaces makes even an essentially deterministic problem stochastic, e.g., deterministic equations in random domains give rise to stochastic boundary-value problems. This necessitates the search for new stochastic analyses and algorithms. For example, one of the very few existing numerical studies has employed traditional Monte Carlo simulations, which turned out to be so computationally expensive as to become impractical.
Both data analysis and modeling of flow and transport in the plasma layer can be carried out within the probabilistic computational framework8, 9, 10 that is applicable to a wide class of deterministic and stochastic differential equations defined on domains with random (rough) boundaries. A key component of this framework is the use of robust stochastic mappings to transform an original deterministic or stochastic differential equation defined on a random domain into a stochastic differential equation defined on a deterministic domain. This enables employing the well-developed theoretical and numerical techniques for solving stochastic differential equations in deterministic domains.
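A minimal illustration of such a stochastic mapping uses a one-dimensional diffusion problem on a random interval; all numerical values below are illustrative, and the Gaussian roughness model is an assumption made for the sketch.

```python
import numpy as np

# Deterministic diffusion, d2u/dx2 = 0 on the random interval (0, L(xi)) with
# u(0) = 1 and u(L) = 0.  The stochastic mapping y = x / L(xi) transfers the
# problem to the fixed domain (0, 1), where the wall flux becomes the random
# quantity q(xi) = 1 / L(xi), amenable to standard sampling or expansion methods.
rng = np.random.default_rng(1)
L0, sigma = 1.0, 0.1                  # mean gap width and roughness amplitude
xi = rng.standard_normal(100_000)
L = L0 * (1.0 + sigma * xi)
L = L[L > 0.5 * L0]                   # discard (rare) nonphysical realizations
q = 1.0 / L                           # flux statistics on the mapped, fixed domain
print(q.mean(), q.std())              # predictive mean and its uncertainty
```

Even in this trivial case the mapping shows the general pattern: the geometric randomness of the boundary reappears as a random coefficient of a problem posed on a deterministic domain.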
Static and Dynamic Measurements of Vessel Diameter
The nature and intensity of the response of blood vessel diameter to different stimuli remains one of the principal tools of analysis of pharmacological action in such diverse fields as hypertension, resuscitation, and inflammation. In fact, a major portion of pharmacological science is directed at eliciting specific responses from microscopic blood vessels in terms of vasodilation and vasoconstriction as a means of controlling blood flow and pressure. Not surprisingly, this measurement has challenged the ingenuity of physiologists and engineers, yielding a variety of techniques for producing a graphical representation of how microvessel diameter varies in time. Note that the demand for accuracy is at times high, since diameter is a determinant of flow resistance: a 10% change in lumen diameter results in approximately a 40% change in flow resistance.
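The quoted sensitivity follows from Poiseuille's law, by which hydraulic resistance scales as the inverse fourth power of the diameter. A quick check:

```python
# Poiseuille's law: hydraulic resistance R scales as d**-4, so a modest
# change in lumen diameter produces a disproportionate change in resistance.
def resistance_change(diameter_ratio):
    """Fractional change in flow resistance for a given diameter ratio."""
    return diameter_ratio ** -4 - 1.0

print(resistance_change(0.90))   # 10% constriction -> about +52% resistance
print(resistance_change(1.10))   # 10% dilation     -> about -32% resistance
```

The asymmetric +52%/-32% figures for a 10% constriction and dilation bracket the approximate 40% change quoted in the text.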
Blood Vessel Dimensions by Photometric Scanning
Diameter measurements of active blood vessels with a conventional stage-calibrated ocular micrometer or reticule are rather cumbersome and inaccurate. Those derived from microphotography are usually time consuming and scarcely practicable. Measurement of blood vessel diameter can be carried out continuously by scanning the vessel with a moving spot of light (flying-spot microscope). Alternatively, the vessel image can be repetitively moved past a photomultiplier tube (image-scanning microscope). Both approaches have the advantage of relatively simple electronics as compared to a video system and a more favorable signal-to-noise ratio (SNR) due to the relatively low signal frequencies involved.
A cathode ray tube (CRT) mounted beneath the stage of a microscope provides a moving spot of light that is projected into the plane of the microvessel being measured, sweeping across the blood vessel. The transmitted light is collected by the microscope objective and projected to a phototube. Light absorption of the RBC column is high relative to the tissue and a single sweep across the vessel produces a pulse-shaped output from the phototube. The pulse height is proportional to the absorbance of the blood column and the pulse width is proportional to the column diameter.
The flying-spot system reveals little detail in the vessel wall because the spot size may be too large to resolve the fine structure of the wall. Furthermore, wall detail in a conventional image is composed in part of diffraction patterns, which are not sensed since the phototube records only the intensity of the transmitted light.
The technique of image scanning uses a conventional illumination and imaging system, with a voltage-stabilized power supply to avoid fluctuations. The image of the microcirculation is projected onto a nearby screen for viewing. A portion of this image is intercepted by a galvanometer mirror and projected onto a phototube window also located in the image plane. The galvanometer is driven by a sawtooth generator, which enables precise adjustment of scan rate and amplitude. Scan rates of or less have been most commonly used. The phototube output in this system provides finer resolution than in the flying-spot microscope, since the phototube window is smaller than the flying spot and the diffraction patterns produced at the wall boundary are detected. Finally, variations in slit size make it possible to alter the resolution of the system to obtain fine detail or greater averaging of tissue opacity as needs dictate. The phototube output can be processed in the same fashion as with the flying-spot system to obtain continuous diameter recording.
In-Line Measurement of Lumen by Image Shearing
Current technological advances, resulting from the incorporation of microscopic techniques and television scanning systems, enable the measurement of dynamic changes of the microvessel lumen. Although these methods depend on light intensity, a comparatively simple system using the principle of image splitting enables accurate in-line computation of dynamic changes of the microvessel lumen and wall in rapid sequence. The accuracy of measurement relies on direct visual recognition by a trained observer of the lumen and structures of the wall in the microvessel and the unparalleled ability of humans to match the alignment of two parallel lines.11
The method has been successfully applied in numerous in vivo studies of the microcirculation. The process of measurement with both instruments consists of setting the two images edge to edge, which can be done with great precision. The amount of shear is noted, the images are then crossed over (passing through the position of coincidence), and the amount of shear is noted again. The algebraic difference between the two shears gives the diameter of the object. In practice, the video data displayed on a monitor can be “sheared” along the raster lines by delaying the start of the raster line scan by a controllable period. The result is that the image encoded in the delayed raster lines appears shifted, and therefore “sheared” and laterally displaced, by an amount proportional to the adjustable period, which becomes a measure of the image dimension in a manner analogous to the micrometer adjustment setting of an optical image-splitting eyepiece. Shearing can be done manually or by automatic feedback control.12 Other automated methods for measuring diameter changes are also available; however, they require that the vessel walls present parallel images and that the vessels be straight.13
Dynamic Diameter Measurements and Characterization
Small arteries and arterioles, under normal conditions and in the absence of anesthesia, visualized in vivo, present a spontaneous, rhythmic activity of contraction and relaxation termed vasomotion.45 The precise physiological significance of this effect is not well understood, and it is not completely established whether it is a characteristic of normal tissue or a response of the circulation to physiological stress.46 It is, however, an activity present in most organic structures endowed with smooth muscle. It appears to originate at bifurcations that act as pacemakers for this activity; however, these pacemakers are not synchronized, and their characteristic frequency is inversely proportional to the diameter of parent and daughter vessels, varying from up to in the smallest arterioles to about in the larger vessels, making this activity accurately recordable by manual image shearing.
The quantitative characterization of the time-dependent features of the records of diameter variations is in principle readily accomplished by spectral techniques, because of the recurrent, quasiperiodic nature of the contractions and relaxations. Fourier series analysis would be the standard technique in such a situation; however, it has certain disadvantages that make its use undesirable. First, Fourier analysis strictly applies to data records of infinite length. In most cases, records are finite; therefore, it is necessary to arbitrarily specify the nature of the data outside the recorded interval, assuming either that the data record is repetitive or that it is zero outside the recorded interval. When the data are assumed to be zero outside the period of interest, the result is equivalent to multiplying an infinite record by a rectangle function. As a consequence, the shorter the data record, the greater the deviation between the actual spectrum and the calculated spectrum. This is termed energy leakage, where energy from higher frequencies “leaks” into lower frequency bands. Similarly, if we assume a repetitive record, frequencies related to the extent of the record appear in the power spectrum, and attention must be given to how the successive repetitive records are connected. A second problem inherent to Fourier analysis is that resolution is prescribed beforehand: two frequencies may be present in a signal that differ by less than the method can resolve.
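The leakage effect is easy to demonstrate numerically: in a short record, a sinusoid whose frequency falls between FFT bins spreads its energy over neighboring bins. The sampling rate and record length below are arbitrary choices for the demonstration.

```python
import numpy as np

fs, n = 10.0, 64                         # sampling rate (Hz) and a short record
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 1.3 * t)          # 1.3 Hz lies between FFT bins (spacing 0.15625 Hz)
spec = np.abs(np.fft.rfft(x)) / n
peak = int(spec.argmax())                # nearest bin is bin 8, i.e. 1.25 Hz
in_peak = spec[peak] ** 2
total = (spec ** 2).sum()
print(peak * fs / n, in_peak / total)    # a substantial fraction of the energy has leaked out
```

Lengthening the record narrows the bin spacing and concentrates the energy back into the peak bin, which is precisely why short vasomotion records are poorly served by Fourier analysis.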
A method that is better suited to the analysis of vasomotion is the “Prony” spectral analysis technique,14 which models a finite data record as the sum of a finite number of nonharmonically related sinusoids in uncorrelated noise. This differs from Fourier analysis, which prescribes the frequencies as harmonics. Also, no assumption is made about the data outside the actual record. The Prony method attempts a best fit of a finite number of sinusoids over the recorded interval, and resolution is not dependent on the data length but is a function of the accuracy of the method of calculation. Because the data record length is no longer an important factor, the Prony method is very effective for short data records.
Any method for obtaining the power spectrum of data that contain noise yields results that are to some extent approximate, where the degree of approximation depends in part on how well the data conform to the model that the spectral technique is designed to characterize. In the case of vasomotion, we assume that the measured time-dependent effects are the consequence of the activity of a finite number of discrete and localized pacemaker-like groups of cells.47 This implies a narrow-band process, in which the energy of the time-dependent effects is concentrated at discrete frequencies, further suggesting that the vasoactive effects propagate along the vessels, so that at any given location we observe the additive result of the activity of several sources.
The Prony method is mathematically better suited to the analysis of the kind of time-dependent effects that characterize vasomotion, but it has the drawback that its algorithm requires more computation than the Fourier method, which has prescribed frequencies and requires only the calculation of the amplitudes and phases at those frequencies. The Prony method first searches for the frequencies, and then performs the operations required to determine amplitudes and phases. However, for short data records, even though the computations are longer, the accuracy and resolution of the spectra are notably better.
A procedure to obtain a limited number of sinusoids whose sum best fits the given data consists of calculating the Prony spectrum of frequencies, amplitudes, and phases, and then computing the correlation coefficient between the waveform reconstructed from the Prony spectrum and the original data.15 A first-order approximation contains one sinusoid, and the correlation coefficient is computed at each increment of the order. If the correlation coefficient is larger than that of the previous order, the new solution is kept and the prior solution is discarded. As the order is increased, the correlation reaches a maximum at a specific order. If this maximum correlation is greater than some acceptable level, a solution has been found. If it is less than the acceptable level, the salient features of the signal have not been determined and the Prony method is not appropriate for that particular signal, as the assumption of narrow-bandedness has been violated. As a rule, diameter records of , containing 200 to 300 individual data points, show frequency spectra with three to eight separate frequencies, which, when utilized to reconstruct the original waveform, fit the data with a correlation coefficient of the order of 0.85 to 0.90, which is significant with .
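The order-incrementing procedure can be sketched as follows, using a least-squares linear-prediction variant of the Prony method. This is an illustrative implementation, not the published code; the function names, the stopping rule details, and the acceptance threshold are assumptions.

```python
import numpy as np

def prony(x, p):
    """Least-squares Prony fit of p complex exponentials to the record x;
    returns the reconstructed (real) waveform."""
    x = np.asarray(x, float)
    n = len(x)
    # 1) linear prediction: x[k] = -(a1*x[k-1] + ... + ap*x[k-p])
    A = np.column_stack([x[p - j - 1 : n - j - 1] for j in range(p)])
    a, *_ = np.linalg.lstsq(A, -x[p:], rcond=None)
    # 2) roots of the prediction polynomial give the (nonharmonic) frequencies
    z = np.roots(np.concatenate(([1.0], a)))
    # 3) least-squares amplitudes and phases on the exponential basis
    V = np.vander(z, n, increasing=True).T          # V[k, i] = z_i**k
    c, *_ = np.linalg.lstsq(V, x.astype(complex), rcond=None)
    return (V @ c).real

def fit_vasomotion(x, max_sinusoids=8, target=0.85):
    """Raise the model order until the reconstruction correlates acceptably
    with the data, as described in the text."""
    best_r, best_rec = -1.0, None
    for m in range(1, max_sinusoids + 1):
        rec = prony(x, 2 * m)       # one real sinusoid = two complex exponentials
        r = float(np.corrcoef(x, rec)[0, 1])
        if r > best_r:
            best_r, best_rec = r, rec
        if best_r >= target:
            break
    return best_r, best_rec
```

On a noiseless record containing two nonharmonic sinusoids, the first-order fit captures only the dominant component, and the second-order fit reconstructs the record essentially exactly.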
Dual Window Correlation
Blood flow in microscopic vessels is a parameter derived from RBC velocity and diameter measurements obtained optically. A widely used technique is based on determining the delay between photometric signals generated from two axially spaced detectors, measured by cross-correlation as a function of time delay according to the approach developed by Wayland and Johnson.16 This method is also known as the dual-slit method. The precision of the method is primarily dependent on the quality of the signals, namely their similarity, a quality directly related to the previously described correlation length.40 The dual-slit method is relatively insensitive to microscope focus and yields the same result regardless of which part of the image of the flowing blood is in focus. The measured centerline velocity is corrected by a factor with a numerical value of 1.6 for vessels in the -diam range, which decreases gradually to the value of 1.0 for narrow capillaries, where cells move single file.17
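A minimal sketch of the dual-slit computation, assuming discretely sampled photometric signals and using the 1.6 centerline correction factor mentioned above; the function name and the sampling parameters are illustrative.

```python
import numpy as np

def dual_slit_velocity(up, down, fs, spacing_um, vfactor=1.6):
    """Mean RBC velocity (um/s) from two axially spaced photometric signals.
    fs: sampling rate (Hz); spacing_um: axial distance between the two slits;
    vfactor: centerline-to-mean correction (1.6 for larger vessels, per the text)."""
    up = np.asarray(up, float) - np.mean(up)
    down = np.asarray(down, float) - np.mean(down)
    xc = np.correlate(down, up, mode="full")
    lag = int(xc.argmax()) - (len(up) - 1)   # samples by which `down` trails `up`
    delay_s = lag / fs
    v_centerline = spacing_um / delay_s      # transit velocity of the optical signature
    return v_centerline / vfactor
```

With two slits 100 um apart sampled at 10 kHz, a 20-sample correlation peak corresponds to a 2 ms transit time, i.e., a 50 mm/s centerline velocity before correction.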
The dual-sensor method has also been implemented in a video configuration, enabling us to record images of the microvessel in real time, thus providing a synchronized record of diameter and flow velocity.48 This technique is primarily applicable to capillaries, vessels where the optoelectronic signatures caused by the passage of RBCs are well differentiated from the noise.
The principal problem of RBC velocimetry is the conversion of the optical signature of the passing RBCs into an electronic signal. The use of nonideal optical signal transducers, the complexity of the microvascular image, and the efficiency of the algorithm that computes the interdetector transit time determine the limits of accuracy of the technique. Continuous progress in the development of optical detectors has improved signal quality; however, this advancement is curtailed by the difficulty in obtaining dedicated correlators that compute the delay between signals as a continuous process.
Continuous-correlation dedicated computers were developed in the 1970s by Princeton Applied Research (PAR, Boston, Massachusetts) and a little later by Hewlett Packard (HP, Menlo Park, California). A general-purpose correlator with these features has been developed by TSI Incorporated (Shoreview, Minnesota); however, it has not yet been used in blood measurement applications. Vista Electronics (Ramona, California) manufactures a dedicated correlator for blood flow measurements. These instruments differ in how they delay the upstream signal relative to the downstream signal: the PAR and Vista instruments use a digital delay line, while the HP instrument used a tapped sound transmission line. In both designs, however, the cross-correlation function is calculated continuously by means of hard-wired multipliers that process the data at each of the delay increments. This scheme ensures the complete utilization of all the available data. By contrast, modern schemes using general-purpose computers compute the correlation function sequentially from discrete record lengths, causing the computed correlation function to contain products from uncorrelated data.
The interdetector optical signature transit time is currently optimally measured by the variable-time-base tracking cross-correlator custom built by Vista Electronics (Ramona, California). This instrument varies the frequency that clocks the delay line for the upstream signal by means of a servo loop that maintains the correlation maximum at a fixed, central location in the delay line. An interesting feature of this process is that the frequency required to attain this goal is directly proportional to the velocity of the transiting RBCs, which can be readily seen by noting that both quantities are expressed in terms of inverse time.18
Several other approaches have been taken to maximize the utilization of the available data. One of these uses spatial correlation, where optical scans are made along the axis of the blood vessel under investigation. When two of these signals are obtained in succession and transformed into electronic format, they can be cross-correlated to determine the displacement that has taken place in the period between scans. A different approach is the direct frequency analysis of the photometric signals. This can be implemented by projecting the image of the moving RBCs through a grating, which generates a frequency that is proportional to the velocity and the spacing of the grating.19 The grating-photodetector configuration can be replaced by a linear array of photodetectors49 or a sequence of four video photometers, which yield the difference between the photometric signals from two contiguous windows, thus forming two signals, each being the difference of signals from contiguous windows.20 This constitutes a prespatial differentiation of the signals that significantly improves the cross-correlation measurement of transit time relative to the one obtainable from only two windows, has an improved frequency response, and is less sensitive to motion in the preparation.
Laser Doppler Velocimetry
A method that provides an absolute measurement of blood flow velocity is based on the measurement of Doppler-shift frequency spectra of laser light scattered from RBCs in flowing blood. This method was first reported by Riva and consists of illuminating the microvessel of interest with visible laser light and analyzing the power spectrum of the backscattered light.21, 22 This spectrum presents two distinct regions identified by their different amplitudes, where the large-amplitude region corresponds to the Doppler-shifted backscatter. A special characteristic of this methodology is that the Doppler-shifted backscatter from the particles moving in a tube where velocities have an approximately parabolic distribution has constant amplitude. This enables a trained operator to readily identify the maximum velocity corresponding to the centerline of the flow.
Simultaneous detection of the RBC scattered light from two directions separated by a known, fixed angle makes the methodology independent of the actual angular relationship between the direction of blood flow and that of each of the two backscatter light beams being analyzed.
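The angle independence follows from the Doppler shift formula: the shift measured along each detection direction contains the same incidence-angle term, which cancels in the difference between the two detectors. A worked example, in which the wavelength, refractive index, and detection angles are illustrative assumptions:

```python
import numpy as np

# Doppler shift along scattering direction i:
#   f_i = (n / lam) * V * (cos(theta_i) - cos(theta_in))
# The incidence angle theta_in is common to both detectors, so the shift
# difference f_1 - f_2 depends only on the known detector geometry.
n_medium = 1.336                                       # plasma refractive index (assumed)
lam = 632.8e-9                                         # He-Ne wavelength in m (assumed)
theta1, theta2 = np.deg2rad(10.0), np.deg2rad(25.0)    # detection directions vs. flow

def velocity_from_shift_difference(df_hz):
    """Flow velocity (m/s) from the difference of the two maximal Doppler shifts."""
    return df_hz * lam / (n_medium * (np.cos(theta1) - np.cos(theta2)))

# forward check: a 20 mm/s centerline velocity produces this shift difference
V = 20e-3
df = n_medium * V * (np.cos(theta1) - np.cos(theta2)) / lam
print(velocity_from_shift_difference(df))              # recovers 0.02 m/s
```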
Velocity measurements based on this methodology do not appear to have been utilized beyond ophthalmologic investigations, primarily because the determination of the maximal frequency of the Doppler-shifted spectrum requires visual inspection by experienced observers. Another problem is that accurate laser Doppler measurements of the centerline velocity depend critically on the centering of the incident illumination on the target vessel.50
Cellular Nonlinear Network (CNN) Technology
CNN technology may provide the next development for data processing associated with the analysis of microvascular phenomena in a network. These networks are made by means of arrays of identical systems called cells whose local interaction can be programmed.51, 52 The CNN structure is particularly suitable for the type of spatially distributed input processing characteristic of image analysis. Each cell is, in practice, the processing unit for an element of the distributed input and the connection of cells can be configured to produce an output that is a function of conditions in neighboring pixels, leading to the design of filters and image enhancing features. The dynamics of CNN circuits are controlled by equations that describe the state of each cell, its output, etc. The time constant for each cell is typically of the order of .
The significance of the CNN approach is that the system can be programmed to perform operations such as image subtraction, thresholding, and particle tracking. The application of such an algorithm to the capillary network highlights capillaries in which there is RBC flow, providing a means for adding maximal contrast to the path of each RBC. Interrogating neighboring cells enables us to form a display showing in which portions of the tissue there is RBC motion, thus delineating the microvessels. Finally, identifying contiguous pixels in which there is image motion delineates the capillaries in a machine-interpretable context, which can then be used to count capillaries and determine functional capillary density.53, 54
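An illustrative CNN-style cell dynamic for the thresholding operation mentioned above; the template values and the first-order Euler integration are assumptions made for this sketch, not a description of actual CNN hardware.

```python
import numpy as np

def cnn_threshold(u, t=0.0, dt=0.05, steps=400):
    """CNN-style thresholding (illustrative space-invariant template with no
    neighbor coupling).  Per-cell state equation:
        dx/dt = -x + 2*y + z,   y = 0.5 * (|x + 1| - |x - 1|)
    With bias z = -t, each cell settles to +1 where the input exceeds t, else -1."""
    x = np.array(u, float)                 # initial state = input image
    z = -t
    for _ in range(steps):                 # first-order Euler integration
        y = 0.5 * (np.abs(x + 1) - np.abs(x - 1))
        x += dt * (-x + 2.0 * y + z)
    return 0.5 * (np.abs(x + 1) - np.abs(x - 1))

img = np.array([[0.9, 0.1], [-0.3, 0.6]])
print(cnn_threshold(img, t=0.4))           # pixels above 0.4 -> +1, others -> -1
```

Because every cell evolves in parallel, the same structure extends naturally to neighbor-coupled templates for edge detection or motion highlighting, which is what makes the approach attractive for capillary delineation.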
Information on microvascular hematocrit is in principle contained within the signals used in flow velocity measurements. Two different hematocrits must be considered: namely, the tube hematocrit (the instantaneous value in the blood vessel) and the discharge hematocrit (measured by collecting the fluid that exits the blood vessel). These parameters are different because of the relative velocities of RBCs and plasma.
Tube hematocrit can be measured by manual count in capillaries, and by labeling a small fraction of RBCs with a fluorescent marker. Automated cell counting has been implemented for conditions where RBCs produce distinct optical signatures (see also the previous section). In larger vessels, measurements of optical density provide information that can be converted into hematocrit data.29
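The relation between the two hematocrits follows from conservation of RBC flux: because RBCs travel faster on average than the bulk blood, the discharge hematocrit exceeds the tube hematocrit (the Fahraeus effect). A minimal sketch of the conversion, with a hypothetical function name and illustrative values:

```python
def discharge_hematocrit(tube_hct, v_rbc, v_blood):
    """Convert tube hematocrit to discharge hematocrit.

    RBC flux conservation: H_D * Q = H_T * A * v_rbc, with Q = A * v_blood,
    gives H_D = H_T * v_rbc / v_blood. Since v_rbc > v_blood in
    microvessels, H_D > H_T.
    """
    return tube_hct * v_rbc / v_blood
```

For example, a tube hematocrit of 0.20 with RBCs traveling 20% faster than the bulk flow corresponds to a discharge hematocrit of 0.24.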
Optical Measurement of Oxygen Partial Pressure
The optical measurement of oxygen tension in microvessels and tissue became a reality following the development of oxygen-dependent quenching of phosphorescence emitted by metalloporphyrins introduced by Wilson.23, 24 The technique was applied to the measurement in single microvessels and the surrounding tissue by Torres Filho and Intaglietta25 and Kerger et al.26 As in our previous discussion, we focus here on the measurements that are obtained in vivo, at the microscopic level, with the tissue under normal conditions and isolated from the environment. A critical issue in obtaining the necessary data is to ensure that there is enough information on the conditions of related transport parameters, particularly microvascular blood flow and hematocrit, the source of the oxygen supply, and the inhomogeneity of oxygen distribution that determines the gradients of oxygen partial pressure necessary for its transport.
The methodology is based on the oxygen-dependent quenching of phosphorescence emitted by albumin-bound metal-porphyrin complexes after pulsed light excitation, in which the phosphorescence decay rate is directly related to the concentration of molecular oxygen under in vitro or in vivo conditions.27
Phosphorescence quenching is a function of the probability of a collision between excited probe molecules and molecular oxygen, which is described by the Stern-Volmer equation. Palladium-porphyrin complexes are usually employed as phosphorescent probes for these measurements. Measurements are not affected by light absorption by other biological chromophores such as myoglobin, hemoglobin, or cytochromes. Furthermore, these complexes bind albumin, which prevents self-quenching and causes measurements to be independent of the concentration of probe molecules and light intensity.24
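The Stern-Volmer relation can be inverted to recover oxygen tension from the measured phosphorescence lifetime. The sketch below assumes the standard lifetime form, tau0/tau = 1 + kq * tau0 * pO2; the calibration constants in the test are illustrative round numbers, not probe-specific values from the text:

```python
def po2_from_lifetime(tau, tau0, kq):
    """Oxygen partial pressure from phosphorescence lifetime (Stern-Volmer).

    tau:  measured phosphorescence lifetime (s)
    tau0: lifetime in the absence of oxygen (s)
    kq:   quenching rate constant (1 / (s * mmHg))
    tau0 and kq are probe calibration constants determined in vitro.
    Inverting tau0/tau = 1 + kq * tau0 * pO2 gives:
    """
    return (tau0 / tau - 1.0) / (kq * tau0)
```

Shorter lifetimes thus correspond directly to higher oxygen tension, which is why the decay rate rather than the emission intensity carries the measurement.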
Phosphorescence of probe molecules is excited by a light flash and is measured by a photomultiplier or an intensified video camera to obtain the spatial distribution of oxygen tension in a procedure called oxygen mapping.28
Implementation of the oxygen-quenching technique involves several challenges and a priori choices about how to excite the phosphorescence and how to collect and process the data, which have led to significantly divergent experimental results on the distribution of oxygen in the tissue. There appears to be agreement on the intravascular data obtained with different implementations of the method, but there is little agreement on the distribution of the extravascular oxygen partial pressure.
Our laboratory has repeatedly reported that phosphorescence-quenching measurements of tissue oxygen partial pressure can be reliably and rapidly obtained by an automated fitting procedure that matches an electronically generated monoexponential decay to that measured by a photomultiplier, in a system in which a number of phosphorescence decay curves are averaged to lower noise, allowing flash intensity and probe concentration to be minimized. Conversely, some of the literature indicates that fitting the measured phosphorescence decay curves to a single exponential yields erroneous results when the emission originates from complex oxygen concentration fields that contain oxygen concentration gradients, such as the perimicrovascular environment, and that more complex functional representations of the phosphorescence decay are required. Vinogradov and Wilson55 analyzed this problem and proposed that the signal could be decomposed into a set of linearly independent exponentials in such a fashion that the chi-squared statistic is minimized, showing the ability to extract exponential time constants clustered about discrete values with a precision that increased in proportion to SNR over the range 100 to 4000. However, this result can be obtained only with high-power flash excitation, which contaminates the signal, since excitation of the probe consumes oxygen, a process that continues as the phosphorescence is emitted. To circumvent the problem of oxygen consumption, flash excitation intensity must be maintained at the lowest possible level, leading to SNRs of less than 0.1 per excitation flash. This requires averaging about 100 flashes to obtain interpretable signals whose SNR is in the range of 1 to 5. It should be apparent that in this averaging process only the principal exponential decay provides relevant information.
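The averaging-and-single-exponential-fitting procedure can be illustrated as follows. This is a minimal sketch only: the log-linear least-squares fit, the function name, and the array shapes are our assumptions, not the authors' automated electronic fitting system:

```python
import numpy as np

def fit_monoexponential(t, decays):
    """Average many low-SNR phosphorescence decay curves, then fit a
    single exponential I(t) = A * exp(-t / tau) by log-linear least squares.

    t:      (N,) sample times (s)
    decays: (M, N) individual flash responses; averaging over M flashes
            suppresses noise so a single decay constant can be extracted.
    Returns the fitted lifetime tau.
    """
    mean = decays.mean(axis=0)
    mean = np.clip(mean, 1e-12, None)   # guard against log of non-positive values
    # In log space the monoexponential is a straight line with slope -1/tau.
    slope, intercept = np.polyfit(t, np.log(mean), 1)
    return -1.0 / slope
```

With per-flash SNR well below 1, as in the text, only the principal decay constant survives this averaging, which is consistent with the argument that the single-exponential fit captures the physically relevant information.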
Fitting a single, uncorrected monoexponential to the data appears to be adequate for obtaining reliable measurements in most circumstances, a result validated by calibration against simultaneous measurements with oxygen microelectrodes.56 This result is in part due to the ability of our system to shape the optical slit through which the measurements are made and to visualize how the slit is positioned relative to the preparation, which is done by back-illuminating the slit and projecting its image on the preparation in sharp focus. This particular characteristic was a property of the trinocular head of some of the older Leitz (Leica) microscopes, and is again a feature (although not explicitly advertised) of the trinocular head of Olympus microscopes. A critical feature of this system is the ability to precisely locate and delineate the source of phosphorescence emission relative to the underlying anatomy and to shape the optical diaphragm in such a fashion that oxygen tension inhomogeneities are minimized.
Perspective for Translation to Human Clinical Investigations
This brief presentation highlights the high level of ingenuity and technological know-how that physiologists and engineers have deployed to render the microvascular environment accessible to measurement “while the engine is running,” i.e., in normal in vivo conditions. Notably, in many cases these developments necessitated the combined deployment of ingenious technological approaches and special numerical analytical tools. It is clear, however, that regardless of the means deployed, the final goal of this technology should be the use of microvascular data as a clinical diagnostic and analytical tool.
A question to pose is whether understanding this system in the context of clinical health and disease is necessary at the level of everyday medical practice. The answer is probably negative. However, there are conditions such as sepsis, hypertension, cancer, and hemorrhagic shock where phenomena at the microvascular level play a key role. The challenge is how to obtain representative data from a system that extends for for each kilogram of tissue and that, in humans, is in practice accessible for observation only in noncritical sites such as the nailfold, the conjunctiva, and the surface of the tongue. In this context, recent developments by Dobbe et al.32 are demonstrating the possibility of obtaining microvascular data directly from patients by means of SDF imaging.
It is well established that virtually all biological systems thrive and exist within narrowly circumscribed thermodynamic boundaries; however, there is currently no well-defined ergodic hypothesis that ensures the quasiuniformity of biological phenomena, in the sense that what is observed at one location, usually not extending beyond the millimeter scale, is reproduced throughout a system on the scale, a disparity of nine orders of magnitude. Despite these extreme mismatches, experimental studies in carefully controlled tissues remain the only possibility for understanding the mechanics of major physiological and pharmacological effects. Consequently, there is a clear need for technology that can access the depth of the tissue and yield functional data. The challenge therefore remains that of producing images of the living tissue, a quest that may have a parallel in the development of minimally invasive surgery.
These studies were funded in part by National Institutes of Health (NIH) Grant No. R01HL064395 and Bioengineering Research Partnership Grants R24-HL64395 (Intaglietta) and P01HL071064-05 (Friedman). C. M. Hightower is the recipient of a postdoctoral scholarship from the NIH Training Grant No. T32 HL 007089 “Cardiovascular Engineering Science.”