We present the first successful waveform inversion of real crosswell seismic data. The methodology is first to invert the first-arrival traveltimes for the smooth velocity structure, and then to invert the seismograms by waveform inversion for the fine-scale velocity structure. We call this method wave-equation traveltime and waveform inversion (WTW). WTW mitigates the problem of getting stuck in local minima when the starting velocity model is far from the true model. The spatial resolution of the WTW tomogram for Exxon's Friendswood data is about 6 times that of the traveltime tomogram. WTW opens a new avenue for monitoring Enhanced Oil Recovery operations in oil and gas fields.
The goal of high-resolution Crosswell Seismic Profiling is to image weak, small-scale heterogeneities embedded in a background of strong, large-scale variations. This model is appropriate for many geological environments and for reservoir delineation and monitoring applications. One `practical' approach is to separate the imaging problem into transmission-traveltime tomography and reflection-imaging segments. The traveltimes are used first to image the large-scale velocity variations that refract the waves. The full wavefield is then preprocessed to enhance primary reflections and mapped or migrated. This segmented approach is verified with tests on field data recorded in a carbonate oil reservoir. Traveltime tomography is found to resolve velocity features on the order of 1-2 wavelengths; structural features as small as 1/2 wavelength are imaged with reflection methods.
The paper presents a comprehensive method for improving tomographic imaging of seismic sections by adding a priori information. In particular, along with `hard' bounds imposed on the propagation velocity, morphological statistical constraints are incorporated into the inversion. For this purpose a maximum a posteriori (MAP) probability solution is given that combines least-squares, ray-tracing, and simulated-annealing techniques. Experimental results for both synthetic and field data are provided.
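The simulated-annealing component with hard bounds can be illustrated by a minimal sketch (the objective, bounds, and parameter values below are toy choices of ours, not the paper's): bounds are enforced by clipping every trial perturbation, and the Metropolis rule drives the stochastic search.

```python
import math
import random

def annealing_with_bounds(cost, x0, lo, hi, steps=5000, t0=1.0, seed=0):
    """Simulated annealing with `hard' bounds on each model parameter.

    cost   : objective (e.g. a traveltime misfit) to be minimized
    lo, hi : per-parameter bounds, enforced at every trial step
    """
    rng = random.Random(seed)
    x, e = list(x0), cost(x0)
    best, ebest = list(x), e
    for k in range(steps):
        t = t0 * (1.0 - k / steps) + 1e-6              # linear cooling schedule
        i = rng.randrange(len(x))
        trial = list(x)
        # perturb one parameter and clip it to the hard bounds
        step = rng.gauss(0.0, 0.1 * (hi[i] - lo[i]))
        trial[i] = min(hi[i], max(lo[i], trial[i] + step))
        et = cost(trial)
        # Metropolis rule: always accept improvements, sometimes accept worse
        if et < e or rng.random() < math.exp(-(et - e) / t):
            x, e = trial, et
            if e < ebest:
                best, ebest = list(x), e
    return best, ebest

# toy misfit: recover two velocities (2.0, 3.5) inside the bounds [1.5, 4.5]
target = [2.0, 3.5]
cost = lambda v: sum((a - b) ** 2 for a, b in zip(v, target))
v, e = annealing_with_bounds(cost, [3.0, 2.0], [1.5, 1.5], [4.5, 4.5])
```

A MAP formulation would add the morphological prior as a penalty term in `cost`; the hard velocity bounds enter only through the clipping step.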
A new hybrid method for the inversion of reflected seismic waves, based on iterative minimization of a cost function, has been developed and tested on synthetic data. It includes the following steps: (1) iterative generation of a trial set of waveforms; (2) construction of the fuzzy neighborhood set of a recorded waveform within the set of trial waveforms; (3) fuzzy mapping of the neighborhood set into the space of seismic parameters to produce the fuzzy image of the neighborhood set; and (4) defuzzification of the image set and calculation of the center of the next set of trial waveforms. The sequence of centers of the trial waveform sets is constructed so that it converges to a global minimum of the cost function.
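Steps (1)-(4) can be sketched on a toy forward model (the model, membership function, and parameter values are our illustrative choices, not the authors' seismic modelling): memberships in the fuzzy neighborhood decay with waveform misfit, and defuzzification is a membership-weighted centroid in parameter space.

```python
import math
import random

def fuzzy_inversion_step(forward, recorded, center, spread, n_trial=50):
    """One iteration of steps (1)-(4): trial generation, fuzzy neighborhood,
    fuzzy mapping to parameter space, and defuzzification (centroid)."""
    rng = random.Random(0)     # fixed seed keeps this sketch deterministic
    # (1) a trial set of parameter vectors around the current center
    trials = [[c + rng.gauss(0.0, spread) for c in center] for _ in range(n_trial)]
    # (2) fuzzy neighborhood of the recorded waveform: membership decays with misfit
    def misfit(p):
        w = forward(p)
        return sum((a - b) ** 2 for a, b in zip(w, recorded))
    ms = [misfit(p) for p in trials]
    m0 = min(ms)
    mu = [math.exp(-(m - m0)) for m in ms]
    # (3)+(4) map memberships to parameter space and defuzzify as a centroid,
    # which becomes the center of the next trial set
    z = sum(mu)
    return [sum(w * p[i] for w, p in zip(mu, trials)) / z for i in range(len(center))]

# toy forward model: "waveform" [p0, p0 + p1, p1]; true parameters (1.0, 2.0)
forward = lambda p: [p[0], p[0] + p[1], p[1]]
recorded = forward([1.0, 2.0])
c = [0.0, 0.0]
for _ in range(30):
    c = fuzzy_inversion_step(forward, recorded, c, spread=0.5)
```

Iterating the step moves the center of the trial set toward the parameters whose waveforms lie deepest in the fuzzy neighborhood of the recorded waveform.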
The bane of nonlinear waveform inversion based on the Born scattering model has been the need for a close starting model, particularly in the broad components of the velocity field. Without a close starting model the method is susceptible to failure by getting stuck in a local minimum. Since these broad components of the velocity field are larger than the wavelengths of the signal, they do not scatter energy; they mainly affect the transmission of energy. Along the lines of Woodward (1990), we show that the Rytov approximation is better suited than the Born approximation for these components. The Rytov approximation can be combined with the Born approximation to form a hybrid method that performs an improved nonlinear waveform inversion. This hybrid method is more effective when the starting model is not close in the broad components and may provide an optimal waveform inversion.
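The two approximations can be contrasted schematically (the notation below is ours): the Born approximation perturbs the field additively, while the Rytov approximation perturbs its complex phase,

```latex
% Born: additive perturbation of the field
u(\mathbf{r}) \;\approx\; u_0(\mathbf{r}) + u_1(\mathbf{r}),
\qquad |u_1| \ll |u_0| ,
% Rytov: multiplicative (complex-phase) perturbation
u(\mathbf{r}) \;\approx\; u_0(\mathbf{r})\,
\exp\!\big(\psi_1(\mathbf{r})\big),
\qquad \psi_1 = \frac{u_1}{u_0} .
```

A transmitted wave accumulates traveltime (phase) perturbations along its path while its amplitude stays close to that of the background field, so the Rytov condition (a slowly varying $\psi_1$) is easier to satisfy for the broad velocity components than the Born condition (a small scattered field).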
We approach the seismic inverse problem by forward modelling with finite differences in the frequency domain. We simulate the 3D wave equation, but reduce the problem to a superposition of 2D problems by a wavenumber transformation in the out-of-plane direction. This combined frequency-wavenumber formulation is used in a convergent iterative inversion algorithm suitable for application to real data without ad hoc preprocessing of the seismic amplitudes. Because the algorithm operates in the frequency domain, it is straightforward to solve for a complex velocity parameter and thereby invert for inelastic attenuation. We invert one frequency component of the wavefield at a time. Where the data are wide-band, it is often helpful to initiate the procedure at a low frequency and then move to higher frequencies to obtain optimal resolution. Because the source behavior is rarely known in practice, we include it as a parameter in our inversions. Our tests with synthetic data encourage us to answer the question posed in the title of this paper in the affirmative.
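A minimal frequency-domain finite-difference kernel, reduced to 1D for illustration (the grid, frequency, and attenuation values are our toy choices, not the authors'): one frequency is solved at a time, and a complex velocity damps the wavefield, which is how inelastic attenuation enters.

```python
import numpy as np

def helmholtz_1d(vel, freq, dx, src):
    """Frequency-domain finite differences in 1D: solve
    (d^2/dx^2 + omega^2 / c^2) u = s for a single frequency.
    A complex velocity c makes omega^2/c^2 complex and so models
    inelastic attenuation; boundaries here are simple Dirichlet walls."""
    n = len(vel)
    omega = 2.0 * np.pi * freq
    A = np.zeros((n, n), dtype=complex)
    for i in range(n):
        A[i, i] = -2.0 / dx**2 + (omega / vel[i]) ** 2
        if i > 0:
            A[i, i - 1] = 1.0 / dx**2
        if i < n - 1:
            A[i, i + 1] = 1.0 / dx**2
    s = np.zeros(n, dtype=complex)
    s[src] = 1.0        # point source; in the inversion the source term
                        # itself can be treated as an unknown
    return np.linalg.solve(A, s)

# lossy medium: 2 km/s with a small negative imaginary part
vel = [2000.0 * (1.0 - 0.1j)] * 101
u = helmholtz_1d(vel, freq=25.0, dx=10.0, src=50)
```

A multiscale inversion would loop such solves over frequencies, low to high; in 2D and 3D the same sparse matrix structure appears once per out-of-plane wavenumber.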
In the structural imaging of geological settings, the determination of the subsurface velocity model is the most crucial step. The results of prestack depth migration of seismic data can be used for this determination. However, in the presence of strong lateral velocity variations in the subsurface, the image quality obtained by prestack depth migration is not sufficient for interpretation (used for a subsequent update of the velocity model) or for checking the correctness of the velocity model. We propose to replace prestack depth migration with Prestack Imaging by Coupled Linearized Inversion (PICLI). This approach yields improved images without altering the important kinematic information contained in prestack migrated images. The PICLI method is computationally more expensive than prestack depth migration; notably, it requires the solution of a very large linear system on which ordinary conjugate-gradient methods are ineffective. An appropriate change of the scalar product in model space leads to a preconditioned conjugate-gradient algorithm that yields the solution of this huge inverse problem in a few iterations.
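The preconditioning idea can be sketched generically (a Jacobi-preconditioned conjugate gradient on a toy system of ours, not the PICLI operator itself): changing the scalar product in model space amounts to applying the inverse preconditioner to the residual at every iteration.

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-8, maxit=200):
    """Conjugate gradients preconditioned by a change of scalar product:
    the residual is mapped through M^{-1} before each search direction."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)                           # preconditioned residual
    p = z.copy()
    rz = r @ z
    for it in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p          # conjugate in the new scalar product
        rz = rz_new
    return x, it + 1

# toy badly scaled SPD system with a Jacobi (diagonal) preconditioner
n = 50
A = np.diag(np.linspace(1.0, 1.0e4, n)) + 0.1 * np.ones((n, n))
b = np.ones(n)
x, iters = pcg(A, b, lambda r: r / np.diag(A))
```

With a well-chosen preconditioner the iteration count becomes nearly independent of the conditioning of the original system, which is what makes very large linearized-inversion systems tractable.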
It is shown that an essential requirement of Kirchhoff migration in the space-time domain is that the migration operator be anti-alias protected. A scheme for accomplishing this anti-alias protection is presented, based on local filtering of the migration operator achieved by coupling the filtering stage to the interpolation. By using band-limited sinc-function interpolation to evaluate the summation amplitudes, anti-aliasing of the migration operator is accomplished. The combination of the traditional weighting factors and anti-aliasing leads to discrete Kirchhoff migration in the space-time domain.
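The coupled filtering/interpolation step can be sketched as follows (the tap count, window, and test signal are our choices): the interpolator is a windowed sinc whose bandwidth would be reduced for steep operator dips, so evaluating the summation amplitude and anti-alias filtering happen in a single operation.

```python
import math

def bandlimited_sinc(trace, t, half_width=8, bandwidth=1.0):
    """Evaluate a trace at fractional sample time t with a windowed sinc
    whose bandwidth (a fraction of Nyquist, 0 < bandwidth <= 1) can be
    lowered for steeply dipping parts of the migration operator, coupling
    the anti-alias filter to the interpolation."""
    n0 = int(math.floor(t))
    val = 0.0
    for n in range(n0 - half_width + 1, n0 + half_width + 1):
        if 0 <= n < len(trace):
            x = t - n
            # band-limited sinc kernel: bandwidth < 1 low-passes while interpolating
            if x == 0.0:
                s = bandwidth
            else:
                s = math.sin(math.pi * bandwidth * x) / (math.pi * x)
            # Hann taper confines the kernel to 2 * half_width samples
            w = 0.5 * (1.0 + math.cos(math.pi * x / half_width)) if abs(x) < half_width else 0.0
            val += trace[n] * s * w
    return val

# low-frequency trace: interpolation at integer times reproduces the samples
trace = [math.sin(0.2 * n) for n in range(64)]
val = bandlimited_sinc(trace, 20.5)
```

At full bandwidth this is ordinary sinc interpolation; lowering `bandwidth` according to the local dip of the summation surface suppresses the operator frequencies that would otherwise alias.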
An amplitude-preserving migration aims at imaging compressional primary (zero- or nonzero-offset) reflections into 3D time- or depth-migrated reflections so that the migrated wavefield amplitudes are a measure of angle-dependent reflection coefficients. The principal issue in this attempt is the removal of the geometrical-spreading factor of the primary reflections. Using a 3D Kirchhoff-type prestack migration approach, often called a diffraction-stack migration, in which the primary reflections of the wavefields to be imaged are described a priori by the zero-order ray approximation, the geometrical-spreading loss is removed by weighting the data before stacking them. Different weight functions can be applied that are independent of the unknown reflector. The true-amplitude weight function directly removes the spreading loss during migration. It also correctly accounts for the recovery of the source pulse in the migrated image, irrespective of the source-receiver configuration employed and of caustics occurring in the wavefield.
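The weighted diffraction stack described above can be written schematically, in the form common for Kirchhoff-type true-amplitude migration (the symbols are ours): the migration output $V$ at depth point $M$ is

```latex
V(M) \;=\; -\frac{1}{2\pi}\iint_{A}
W(\boldsymbol{\xi},M)\,
\frac{\partial U(\boldsymbol{\xi},t)}{\partial t}
\bigg|_{t=\tau_D(\boldsymbol{\xi},M)}
\,\mathrm{d}\xi_1\,\mathrm{d}\xi_2 ,
```

where $U(\boldsymbol{\xi},t)$ is the recorded trace at source-receiver coordinate $\boldsymbol{\xi}$, $\tau_D$ is the diffraction-traveltime surface of $M$, $A$ is the migration aperture, and $W$ is the weight function; the true-amplitude choice of $W$ cancels the geometrical-spreading factor of the primary reflection so that the stacked amplitude measures the reflection coefficient.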
In this paper we present a new inversion method. Instead of imaging a volume heterogeneity, we reconstruct the shape of a heterogeneity in a homogeneous elastic medium from knowledge of the observed scattered waves, the known incident waves, and the elastic parameters of the heterogeneity as well as of the surrounding material. First, we derive an analytic forward formulation for the scattering problem due to an arbitrarily shaped elastic heterogeneity by combining the perturbation approach with the T-matrix method for the SH case; we then establish the corresponding inverse formulation based on this analytic forward formulation. Finally, we assess the validity of our inversion formulation and discuss its implementation.
Wavelet transform methods found their origin as an analysis tool for examining the scattering of seismic waves. In the last few years they have proven useful and popular in many fields. Wavelet analysis can characterize the temporal or spatial behavior of geophysical signals. Wavelet methods can filter data, removing unwanted signals such as ground roll, or enhance data; they can be used to detect specific events; and they can be used to compress data. Applications of wavelet transform methods to a variety of geophysical problems are briefly described from a mathematical point of view and illustrated with several examples.
The strengths and weaknesses of linear inverse scattering methods are described and illustrated with seismic data examples. The possibility of using non-linear inverse scattering for multiple attenuation is described. The procedure is applied to the suppression of surface multiples and illustrated with synthetic and real data examples.
We present a new method for solving the large residual-statics problem, called mean field annealing, which may be capable of overcoming some of the limitations of stochastic methods. Mean field annealing is a global-search method governed by a purely deterministic set of equations, and it approaches global optimization problems in a fundamentally different way from stochastic global-search methods. The mean field approach never samples individual trial solutions in model space. Instead, the mean field equations, which are solved at successively lower temperatures with a fixed-point method, solve for the probability distribution of each variable rather than for the variables themselves. The probability distribution of each variable is resolved separately by replacing all other variables that occur in the energy function by their expectation values.
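The mean-field iteration can be sketched on a toy one-dimensional statics-like energy (the energy, value set, and schedule are our illustrative choices, not the paper's): no trial model is ever sampled; each variable's distribution over its discrete shift values is updated with all other variables replaced by their expectation values, while the temperature is lowered.

```python
import math

def mean_field_annealing(data, values, coupling=1.0,
                         t_hi=5.0, t_lo=0.05, n_temp=40, sweeps=20):
    """Mean field annealing on a toy statics-like energy
        E = sum_i (s_i - data_i)^2 + coupling * sum_i (s_i - s_{i+1})^2,
    with each discrete shift s_i restricted to `values`. Individual models
    are never sampled; only the expectation values m_i are iterated."""
    n = len(data)
    m = [0.0] * n                        # expectation value of each shift
    t = t_hi
    for _ in range(n_temp):
        for _ in range(sweeps):          # fixed-point sweeps at this temperature
            for i in range(n):
                # mean local energy of each candidate value, with all other
                # variables replaced by their expectation values
                def e(v):
                    out = (v - data[i]) ** 2
                    if i > 0:
                        out += coupling * (v - m[i - 1]) ** 2
                    if i < n - 1:
                        out += coupling * (v - m[i + 1]) ** 2
                    return out
                es = [e(v) for v in values]
                e0 = min(es)
                ws = [math.exp(-(ev - e0) / t) for ev in es]   # Boltzmann weights
                z = sum(ws)
                m[i] = sum(v * w for v, w in zip(values, ws)) / z
        t *= (t_lo / t_hi) ** (1.0 / n_temp)                   # geometric cooling
    # at low temperature the distributions are sharp; report the nearest value
    return [min(values, key=lambda v: abs(v - mi)) for mi in m]

shifts = mean_field_annealing([2, 2, 1, -1, -2], values=[-2, -1, 0, 1, 2])
```

At high temperature the distributions are broad and the expectations average over many configurations; as the temperature falls they sharpen toward a low-energy assignment, entirely deterministically.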
The most important element in the depth imaging process for reflection seismograms is the velocity field. This field predicts the travel paths of seismic waves, and therefore determines the placement of reflector positions to explain the data. This paper investigates the mathematical structure of a number of velocity estimation algorithms, both conventional and speculative. All of these are based on optimization of an objective function of velocity.
Shortest-path network algorithms have been applied to geophysical raytracing by Nakanishi and Yamaguchi and by Moser. In the present work, two new algorithms are introduced that further exploit the special structure of network models as applied to geological structures. The algorithms differ in the scanning order of network blocks in a processing queue. An early version of this concept, with computed examples, was given previously. Mathematical correctness of both algorithms, `block-greedy' and `block-breadth', is established. It is then shown that block-greedy limits the number of updating cycles per block but is inherently sequential, whereas block-breadth may incur back-tracking but allows parallel execution. Shared-memory architectures allow block-level parallelization with guided self-scheduling on a global queue. Message-passing architectures allow direct parallel counterparts to sequential breadth-first execution.
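The baseline network method that the block-scheduling algorithms accelerate can be sketched with plain Dijkstra on a gridded slowness model (the grid, neighbor stencil, and costs are our toy choices; the block-queue refinements of the paper are not shown):

```python
import heapq
import math

def network_traveltime(slowness, src):
    """Shortest-path (network) raytracing on a 2D grid: nodes are cells,
    edges connect 8-neighbors, and an edge costs the average slowness of
    its endpoints times its length. Returns first-arrival traveltimes
    from src to every node (Dijkstra's algorithm)."""
    ny, nx = len(slowness), len(slowness[0])
    dist = {src: 0.0}
    pq = [(0.0, src)]
    done = set()
    while pq:
        t, (i, j) = heapq.heappop(pq)
        if (i, j) in done:
            continue
        done.add((i, j))
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                if di == 0 and dj == 0:
                    continue
                ni, nj = i + di, j + dj
                if 0 <= ni < ny and 0 <= nj < nx:
                    length = math.hypot(di, dj)
                    cost = 0.5 * (slowness[i][j] + slowness[ni][nj]) * length
                    if t + cost < dist.get((ni, nj), float("inf")):
                        dist[(ni, nj)] = t + cost
                        heapq.heappush(pq, (t + cost, (ni, nj)))
    return dist

# homogeneous unit-slowness model: traveltimes follow straight rays
s = [[1.0] * 5 for _ in range(5)]
t = network_traveltime(s, (0, 0))
```

The block algorithms replace the single global priority queue with a queue of grid blocks, which is what creates the greedy/breadth scheduling choice and the parallelism discussed above.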
We present an automatic blocking algorithm for some medium-size nonlinear least-squares problems that arise in the inversion of traveltime data in geophysics. This blocking leads to a nonlinear Gauss-Seidel-type iteration that can be distributed over a network of computers. The low-dimensional blocks are also amenable to global optimization methods, which leads to further parallelization. All this is necessary because the original problem is generally non-convex and ill-conditioned, with a goal functional that is very expensive to evaluate.
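The blocking reduces to a nonlinear block Gauss-Seidel sweep, sketched here on a toy objective (the objective, the blocking, and the crude grid-refinement stand-in for the low-dimensional global optimizer are all ours):

```python
def grid_min(g, x0, span=4.0, levels=6, pts=9):
    """Crude low-dimensional minimizer by nested grid refinement; a stand-in
    for whatever global method is applied to each small block."""
    x = list(x0)
    for _ in range(levels):
        for k in range(len(x)):
            cands = [x[k] + span * (p / (pts - 1) - 0.5) for p in range(pts)]
            x[k] = min(cands, key=lambda c: g(x[:k] + [c] + x[k + 1:]))
        span /= 4.0
    return x

def block_gauss_seidel(f, x, blocks, local_min, n_sweeps=10):
    """Nonlinear Gauss-Seidel over blocks of variables: cyclically minimize
    f over each block while holding all other variables fixed."""
    for _ in range(n_sweeps):
        for blk in blocks:
            def restricted(vals):
                y = list(x)
                for k, i in enumerate(blk):
                    y[i] = vals[k]
                return f(y)
            best = local_min(restricted, [x[i] for i in blk])
            for k, i in enumerate(blk):
                x[i] = best[k]
    return x

# toy coupled least-squares objective, split into blocks {x0} and {x1};
# its exact minimizer is (4/3, 5/3)
f = lambda v: (v[0] - 1.0) ** 2 + (v[1] - 2.0) ** 2 + (v[0] - v[1]) ** 2
x = block_gauss_seidel(f, [0.0, 0.0], [[0], [1]], grid_min)
```

In the distributed setting each block update can run on a different machine; convergence behaves like coordinate descent on the coupled objective, with each expensive functional evaluation confined to a low-dimensional block.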
Finite-difference migration in the natural x-t coordinates, e.g. the 15-degree implicit wave-equation method, has been in common use for over a decade of geophysical data processing. As such, it predated the invention of large vector computers, such as the Cray-1S introduced in 1977, and so was organized and optimized to achieve the sensible objectives of reducing the number of computations and I/O operations as much as possible. Outboard array processors, employed to accelerate computations, changed the relative weights of computation and I/O but still fit comfortably into existing program designs. Since the mid-1980s the author has revisited the implementation of 15-degree implicit finite-difference migration on the vector and parallel computers now in common use and found that impressive performance gains can be achieved by completely reorganizing the computations. This report summarizes the highlights of these efforts and gives benchmark results for a variety of vector and parallel platforms.
Petroleum industry applications range from straightforwardly parallelizable seismic trace processing codes to the more complex reservoir simulation. The different characteristics of these applications have led to a multitude of computer systems to handle the diverse needs. A typical oil-company data center includes mainframes for tape and other file-handling needs, vector computers for reservoir simulation, and practically every conceivable device from accelerator boards to array processors to minisupercomputers for seismic trace analysis. In this article we describe one company's vision of a single, homogeneous, scalable architecture to handle all of these computational needs. We examine the system architecture in some detail, map the seismic algorithms onto the architecture, and finally report some preliminary performance measurements.
An analysis of the discrete wavelet transform of dipping segments carrying a signal of a given frequency band leads to a quantitative explanation of the known division of the 2D wavelet transform into horizontal-, vertical-, and diagonal-emphasis panels. The results must be understood in a `fuzzy' sense: since the wavelet mirror filters overlap, the stated results can be slightly violated, with the violation tending to increase as shorter wavelets are chosen.
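The panel division can be reproduced with a one-level Haar transform (a minimal example of ours; the analysis above concerns general mirror filters and dipping segments): a purely horizontal edge appears only in the horizontal-emphasis (row-lowpass/column-highpass) panel.

```python
import numpy as np

def haar2(img):
    """One level of the 2D Haar wavelet transform, returning the four
    panels: approximation plus horizontal-, vertical-, and diagonal-
    emphasis detail."""
    a = img.astype(float)
    # transform along rows (x): lowpass and highpass halves
    lo = (a[:, 0::2] + a[:, 1::2]) / 2.0
    hi = (a[:, 0::2] - a[:, 1::2]) / 2.0
    # then transform along columns (y)
    def cols(x):
        return (x[0::2, :] + x[1::2, :]) / 2.0, (x[0::2, :] - x[1::2, :]) / 2.0
    ll, lh = cols(lo)   # lh: lowpass in x, highpass in y -> horizontal edges
    hl, hh = cols(hi)   # hl: highpass in x, lowpass in y -> vertical edges
    return ll, lh, hl, hh

# an image whose only feature is a horizontal edge between rows 2 and 3
img = np.zeros((8, 8))
img[3:, :] = 1.0
ll, lh, hl, hh = haar2(img)
```

With Haar filters the separation is exact for axis-aligned edges; the overlapping mirror filters of longer wavelets are what make the separation only approximately hold, in the `fuzzy' sense described above.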