Some Salient Issues in Seismic Data Processing Prior to Imaging
Ruben D. Martinez
PGS Marine Geophysical
Over the last twenty-five years, seismic data processing technology development has been dominated by work on imaging methods. As geophysicists, we find this evident when we review the published literature. This is not unexpected, since the principal economic engine of the seismic data processing business is seismic imaging. It also explains why most of the industry's research and development resources are, in general, allocated to imaging development projects.
When I read publications about imaging technology, I observe that many of the imaging techniques published assume that the wavefield is recorded perfectly, i.e., that the sampling requirements are met using regular recording geometries so that the imaging techniques perform optimally. It is also assumed that all modes of wavefield propagation except primaries have been removed or sufficiently attenuated. Furthermore, the recording datum is generally assumed to be flat over the entire 3D survey area. In the real world, these assumptions are seldom valid! Such publications appear to imply that pre-imaging processing techniques are problem free, or that their problems are, for the most part, solved.
Today, fewer resources are dedicated to research and development of pre-imaging seismic data processing techniques, i.e., deconvolution, transmission and absorption compensation, statics, noise attenuation, multiple attenuation, regularization, trace interpolation and velocity analysis. Earlier researchers developed these techniques to solve the pre-imaging processing problems of their time. Do these techniques solve the pre-imaging processing problems of today? It is well known that the successful application of these pre-imaging techniques is essential to the success of any imaging technique. If the pre-imaging processing techniques do not evolve at the pace of the imaging techniques, we cannot expect the imaging techniques to perform optimally.
In this lecture, rather than presenting imaging techniques, I address issues in the application of pre-imaging seismic data processing technology and the role of seismic data acquisition technology. I discuss and illustrate common problems encountered in seismic data processing prior to imaging, and their relationship with the field recording geometries typically used to acquire the data. This analysis offers insights for defining technical challenges that may help motivate future research and development work on these important processing techniques.
Ruben D. Martinez is Vice President of Seismic Processing Technology with Petroleum Geo-Services (PGS) and has been active in the seismic industry for twenty-eight years. He was associated with Geophysical Service Inc. (GSI), Halliburton Geophysical Services (HGS) and Western Geophysical as a senior research geophysicist, and with Andrews Group (AGI) as VP-Technology, before joining PGS. His responsibilities at PGS have included leading R&D and software commercialization projects in signal processing and imaging. He is author and co-author of a number of technical articles and patents in seismic data processing. His current responsibility is the direction of PGS's seismic processing technology R&D, software commercialization and support. He earned a BSc in Geophysics from the Instituto Politécnico Nacional (México), an MSc in Geophysics from the Colorado School of Mines and a PhD in Geosciences from the University of Texas at Dallas. He is a member of the SEG, EAGE and AMGE (Mexican SEG).