
Downloaded 12/16/16 to 79.62.242.250. Redistribution subject to SEG license or copyright; see Terms of Use at http://library.seg.org/

Acquisition


Society of Exploration Geophysicists

Expanded Abstracts
2016 Technical Program
SEG International Exposition
and Eighty-Sixth Annual Meeting

These abstracts have not been edited for technical content or to conform to SEG/Geophysics standards. They appear here in the form submitted by their authors. Abstracts are for individual use only.

October 16–21, 2016


Dallas, Texas


ISSN (online) 1949-4645


Society of Exploration Geophysicists
P. O. Box 702740
Tulsa, OK 74170-2740
© 2016 Society of Exploration Geophysicists

Abstracts

ACQ: Acquisition and Survey Design


ACQ 1

Land Acquisition I ........................................................... Tuesday a.m., Oct. 18 .......................... 1

ACQ 2

Marine Acquisition I ....................................................... Tuesday p.m., Oct. 18 ........................ 41

ACQ 3

Blended Acquisition Technologies ....... Wednesday a.m., Oct. 19 ................... 82

ACQ 4

Land Acquisition II ......................................................... Wednesday p.m., Oct. 19 ................. 124

ACQ 5

Marine Acquisition II ..................................................... Thursday a.m., Oct. 20 ..................... 162

ACQ E-P1

New Acquisition Technologies ..................................... Monday p.m., Oct. 17 ....................... 203

ACQ P1

Land and Marine Acquisition ..................... Tuesday p.m., Oct. 18 ................. 245


Broadening the Bandwidth of Seismic Data by Shallow Well Excitation


Hongxiao Ning*, Donglei Tang, Baohua Yu, Yanhong Zhang, Haili Wang, and Yongqing He, BGP, CNPC
Summary
The very thick low-velocity-layer (LVL) area is a dual-complex area, where both the near-surface structure and the subsurface geologic structure are complex. The S/N ratio of seismic data from these areas is low, so it is difficult to image them well, and a high-density seismic survey is required to effectively suppress interference. In the traditional shot-hole shooting method, the shot is placed in a preferred layer chosen on the basis of a near-surface survey; that is not only costly for projects, but the ghosts generated by interfaces above the source also affect the imaging results. By analyzing the near-surface structure and the ghost-reflection response of the thick LVL, this paper proposes a novel method: shooting close to the ground surface to reduce the ghosts' impact on data quality, expand the bandwidth, and improve the S/N ratio. Theoretical analysis, a point-shooting test and 2D line tests show that the method is feasible. Besides improving seismic data quality, this method greatly reduces drilling difficulty and lowers survey costs in complex areas. This research provides an economical and feasible shooting method for implementing high-density, efficient seismic data acquisition in similar regions.
Introduction
With the development of high-density exploration technology, the single-vibroseis shot has become a universal method. In complex surface areas, because of the low S/N ratio and complex geological structure, higher-density acquisition must be used to suppress interference and obtain accurate imaging of the underground geological target. In some areas the topography, lithology and other factors are not suitable for vibrator excitation, so explosive excitation is the only option.
Conventionally, the high-velocity layer, typically a moist stratum or water-bearing clay layer, is chosen as the excitation layer, in which better excitation energy and a higher S/N ratio of the seismic data can be generated. This method is complicated in complex surface areas, and the drilling work is relatively difficult, resulting in low efficiency and high cost, especially in high-density 3D seismic data acquisition projects. With the application of high-density exploration technology, noise from the excitation source is no longer a challenge in seismic data processing; instead, the broadband signal has become the focus of attention. In this paper, through theoretical analysis and real data tests, a novel method is proposed: shooting close to the ground surface, which not only greatly reduces the difficulty of drilling in complex areas and the cost of seismic surveys, but also provides broadband, high-S/N-ratio seismic data.

Theoretical analysis

Seismic waves generated by explosives propagate in all directions. Reflection occurs when a seismic wave
propagates to a wave-impedance interface. When the interface lies above the source, the upgoing wave is reflected back downward and follows the direct downgoing wave generated by the explosive as a ghost reflection (Lv, 2002; Figure 1). Ghost reflections, which are mainly generated by the top of the high-velocity layer or by the phreatic water table, have been studied by many authors: their formation mechanism and frequency response (Li et al., 1997); the determination of the ghost-generating impedance interface and the use of the ghost reflection (Lv, 2002); the determination of the ghost interface by double up-hole surveys and the design of source excitation factors from it (Liu et al., 1999); the use of the ghost from the high-velocity top interface to design shot-hole depth (Zhang et al., 2006); the choice of shooting depth below the phreatic surface (Tang et al., 2005); the excitation effect of the explosive source (Qian, 2003); and the excitation method of the explosive source in compressional-wave exploration (Wang, 1999). Targeting very thick LVL/weathered-layer areas, this paper focuses on the ghost reflections generated by the ground surface and by formation interfaces above the source point.

Figure 1: Schematic of ghost reflection influence on the seismic wavelet.

Here, assume that the waveform of the first down-going wave is x_0(t), written as F_0(f) in the frequency domain. Suppose the ghost reflection has the same waveform as the direct down-going wave except for a time delay, that its amplitude is influenced only by the reflection coefficient, and that the effects of absorption and divergence are ignored. The ghost reflection can then be expressed as:

x_j(t) = R_j x_0(t - τ_j)    (1)

where R_j is the reflection coefficient of velocity interface j, and τ_j is the time delay between the ghost reflection generated by interface j and the first downward wave. In the frequency domain, Equation (1) can be written as:

F_j(f) = R_j F_0(f) e^(-i 2πf τ_j)    (2)

When there are multiple ghost-generating interfaces, the seismic signal received at a given receiver point on the surface is the superposition of the first downward wave and the ghost reflections:

x(t) = x_0(t) + x_1(t) + x_2(t) + x_3(t)    (3)

In the frequency domain this is

F(f) = F_0(f) + R_1 F_0(f) e^(-i 2πf τ_1) + R_2 F_0(f) e^(-i 2πf τ_2) + R_3 F_0(f) e^(-i 2πf τ_3)
     = F_0(f) (1 + R_1 e^(-i 2πf τ_1) + R_2 e^(-i 2πf τ_2) + R_3 e^(-i 2πf τ_3))    (4)

For a single ghost, the amplitude response of the seismic signal formed by stacking the ghost reflection with the first downward wave can be written as:

|H(f)| = sqrt(1 + 2R cos(2πf τ) + R^2)    (5)

R = (ρ_1 V_1 - ρ_2 V_2) / (ρ_1 V_1 + ρ_2 V_2)    (6)

where R is the reflection coefficient of the wave-impedance interface, V_i and ρ_i are the velocities and densities of the formations on the two sides of the interface (i = 1 above, i = 2 below), f is the frequency, and τ is the time delay.

The above analysis shows that the ghost caused by the ground surface is the strongest of all the ghosts: its reflection coefficient, calculated by Equation (6), is close to -1. The ground-surface ghost amplitude response curves calculated with Equation (5) are shown in Figure 2. It can be seen from the figure that the ghost suppression (notch) zone moves rapidly toward lower frequencies as the shooting depth increases. When the shooting depth exceeds 6 m, the first minimum appears below 85 Hz, which may cause great harm to the reflection signals, already quite weak beneath the thick LVL (weathered) layer. If the ghosts from other interfaces are added as well, the influence on the high-frequency portion of the seismic data becomes more complex, the S/N ratio even lower, and the band even narrower. If the shooting depth is no more than 4 m, a good result can be achieved.


Figure 2: Suppression response curves of the seismic signal due to ghosts at different shooting depths. Different colors represent different shooting depths; the horizontal axis is frequency and the vertical axis is the ghost amplitude response. The overlying stratum velocity is taken as 1000 m/s, and the shooting depths are 1, 2, 4, 6, 8 and 10 m.
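The curves in Figure 2 can be reproduced numerically from Equation (5). The sketch below is a minimal illustration, not the authors' code; it assumes a free-surface reflection coefficient R = -1 and a two-way ghost delay τ = 2h/V for a source at depth h below a surface layer with velocity V = 1000 m/s, as in the figure:

```python
import numpy as np

def ghost_amplitude_response(f_hz, depth_m, v_mps=1000.0, r=-1.0):
    """|H(f)| = sqrt(1 + 2R cos(2*pi*f*tau) + R^2)  (Equation 5),
    with tau = 2*depth/velocity, the two-way delay of the surface ghost."""
    tau = 2.0 * depth_m / v_mps
    return np.sqrt(1.0 + 2.0 * r * np.cos(2.0 * np.pi * f_hz * tau) + r * r)

def first_notch_hz(depth_m, v_mps=1000.0):
    """For R close to -1, the first spectral notch falls at f = 1/tau."""
    return v_mps / (2.0 * depth_m)

f = np.linspace(1.0, 200.0, 2000)
for depth in (1, 2, 4, 6, 8, 10):          # shooting depths from Figure 2
    h = ghost_amplitude_response(f, depth)
    print(f"{depth:2d} m: first notch at {first_notch_hz(depth):6.1f} Hz")
```

Consistent with the text, a 6 m shooting depth puts the first notch at about 83 Hz (below 85 Hz), while a 4 m depth moves it up to 125 Hz, clear of most of the reflection band.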
Real data test
From the theoretical analysis, shooting in a shallow hole in the thick LVL/weathered layer should give better results. For this reason, BGP conducted a point-shooting test in a gravel accumulation zone in western China, where the thickness of the surface gravel accumulation reaches over 100 m. Table 1 shows the surface velocity structure at the test point.
Table 1: Test-point surface velocity structure (uphole survey)

Layer 0:  V0 = 656 m/s,   H0 = 1.5 m
Layer 1:  V1 = 1066 m/s,  H1 = 11.0 m
Layer 2:  V2 = 1478 m/s,  H2 = 21.0 m
Layer 3:  V3 = 1716 m/s (below)
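Using the same ghost model, the uphole structure in Table 1 lets one estimate the surface-ghost delay, and hence the first notch frequency, for any shooting depth. The sketch below is an illustration under stated assumptions (vertical raypath, layer values taken from Table 1); it is not from the original paper:

```python
# Velocity/thickness pairs from Table 1; V3 = 1716 m/s below the last interface.
LAYERS = [(656.0, 1.5), (1066.0, 11.0), (1478.0, 21.0)]
V_BELOW = 1716.0

def surface_ghost_delay_s(source_depth_m):
    """Two-way vertical traveltime from a buried source up to the ground
    surface and back, accumulated through the layered near surface."""
    t, top = 0.0, 0.0
    for v, h in LAYERS:
        seg = min(max(source_depth_m - top, 0.0), h)  # path inside this layer
        t += seg / v
        top += h
    if source_depth_m > top:                          # remainder travels at V3
        t += (source_depth_m - top) / V_BELOW
    return 2.0 * t

for depth in (3.0, 34.0):                             # depths compared in Figure 3
    tau = surface_ghost_delay_s(depth)
    print(f"{depth:5.1f} m: tau = {tau * 1e3:5.1f} ms, first notch ~ {1.0 / tau:5.1f} Hz")
```

A 3 m shot leaves the first ghost notch above 130 Hz, while a 34 m shot pulls it below 20 Hz and spreads ghost notches at multiples of that frequency across the whole band, consistent with the damped high frequencies seen in the deep-hole records below.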

The shooting depths for the test were 3, 5, 7, 9, 12, 15, 34, and 47 m, with a charge size of 3 kg (Ning et al., 2011). Figure 3 shows the shot-gather records of the test point: the shooting depth is 3 m on the left and 34 m on the right. It is easy to see that on the left the apparent frequency is low and the interference energy strong, while on the right the interference is obviously weaker and the apparent frequency higher. Inter-trace spectrum analysis was made over the 1400–4000 ms time window of traces 100–140. Figure 4 shows the spectra for three different shooting depths. It can be seen that with increasing shooting depth the high-frequency energy decreases,


especially the flooding frequency (Ling, 2001), which is reduced significantly; this fits well with the earlier theoretical analysis. Even so, the quality of a single-shot record cannot be judged intuitively in this way, and deeper, more comprehensive analyses are required.

Figure 3: Comparison of the records from different shooting depths: 3 m on the left, 34 m on the right.

Figure 4: Inter-trace gather spectrum analysis of the comparison test records from different shooting depths (3 m, 9 m and 34 m). The preset high-cut filter frequency of the recording instrument is 187.5 Hz.

The figure indicates that above a certain frequency the shooting energy is equivalent to or lower than the ambient energy; this frequency is called the flooding frequency.

Five medium-offset (2000 m) seismic traces were selected from each test record for autocorrelation wavelet analysis. Figure 5 shows the autocorrelations in succession from left to right for shooting depths of 3, 5, 7, 9, 12, 15, 47 and 34 m. As can be seen, the autocorrelation wavelet has a very small side lobe when the shooting depth is 3 m; the side lobe becomes obvious at 7 m, the first side lobe is much stronger, and a second side lobe becomes obvious at shooting depths of 34 m and 47 m, which proves that deep-hole shooting has an obvious frequency-lowering effect.

Figure 5: Autocorrelation wavelets of the seismic records from different shooting depths; from left to right the shooting depth is 3, 5, 7, 9, 12, 15, 47 and 34 m.

Technical application effect

From the above analysis we know that, for a dry surface area or an area without a near-surface water table, shooting near the surface is an effective way to increase the S/N ratio and widen the bandwidth of the effective wave. Based on this understanding, BGP made comparative 2D data acquisition tests in three regions of western China and achieved good results.

1. Sand-shale weathered mountain area

Figure 6: Comparison of deep-hole and shallow-hole shooting sections in the sand-shale weathered mountain area. The left section was acquired by the old method (2D wide line, 960 folds, shooting depth deeper than 50 m); the right section by the new method (2D wide line, 972 folds, shooting depth 3–4 m).
The near surface of the Yingxiongling area of the Qaidam Basin in western China (a dry weathered surface) is a typical low-S/N-ratio area that is difficult to image, since it is dry and loose with a near-surface LVL/weathered-layer thickness of up to 500 m. Figure 6 compares the sections acquired with the new and old shooting methods in this area. It is easy to see that the section acquired by the new method shows a significantly improved S/N ratio, especially at medium-deep layers, where many geological phenomena, such as fault-plane waves, can be seen more clearly.

2. Piedmont gravel accumulation area

Restricted by terrain and surface conditions, many piedmont gravel accumulation regions can only use shot holes for shooting. The previous practice was to look for a high-velocity or penetrated stone layer for shooting; such a method involves great operational difficulty and high cost, and yields poor acquisition results. Figure 7 shows a pair of 2D contrast-test stacked sections from the Qilianshan Mountain piedmont gravel accumulation area, where the gravel accumulation thickness can exceed 200 m. The old method used dual-array shooting with shot holes deeper than 18 m, 100-fold coverage, 30-geophone receiver arrays, a receiver interval of 30 m and a source interval of 60 m. The new method used single-hole shooting with shot holes no deeper than 5 m, 240-fold coverage, 10-geophone receiver arrays, a receiver interval of 20 m and a source interval of 20 m. The section acquired with the new method shows a significantly improved S/N ratio, in particular stronger reflection energy from the deep layers and clear imaging of the substrate.

Figure 7: Stacked sections of the deep-hole and shallow-hole tests from the piedmont gravel accumulation area. The left section was obtained with the old method, the right section with the new method.

3. Dry desert area

Figure 8 shows contrast test data from a dry desert area covered by dry sand dunes; below the dunes is Gobi, with no water table in the Gobi. The test used two shooting methods with the same receiving line and the same fold. It can be seen in the stacked sections that the deep-hole shooting data cannot be imaged, while the data acquired with the new method show good imaging results and obvious geological structures.

Figure 8: Dry desert area 2D test-line stacked sections. The left section was acquired with a 3-deep-hole array at shooting depths greater than 80 m; the right section with a shallow-hole array at shooting depths of no more than 5 m.

Conclusion

It has been shown that shooting in shallow holes close to the ground surface reduces the influence of the ground-surface ghost on imaging. The method effectively improves the S/N ratio and the bandwidth of seismic data. From actual field production practice, we know that the method, combined with high-density acquisition, not only improves the quality of seismic data from complex areas but also increases drilling efficiency. Of course, besides the effect of the shooting method, the comparative data in this paper inevitably contain some contribution from the sampling density and the fold.

Acknowledgement

The authors wish to thank Crew 217 and Crew 2256 for their time and hard work on the test acquisition, and BGP, CNPC for permission to publish the related results.

EDITED REFERENCES
Note: This reference list is a copyedited version of the reference list submitted by the author. Reference lists for the 2016
SEG Technical Program Expanded Abstracts have been copyedited so that references provided with the online
metadata for each paper will achieve a high degree of linking to cited sources that appear on the Web.
REFERENCES

Li, T. S., H. Q. Gao, Z. X. Liu, X. X. Liu, S. H. Qian, and Z. X. Zeng, 1997, The acquisition of ghost information and the study of its generation and frequency response: Geophysical Prospecting for Petroleum, 36, 38–44.
Ling, Y., 2001, Analysis of charge size and charge type: Oil Geophysical Prospecting, 36, 584–590.
Liu, Z. X., C. W. Ding, X. X. Liu, W. F. Pan, and S. H. Qian, 1999, A new technique for high resolution 3-D seismic data acquisition: Oil Geophysical Prospecting, 34, 37–44.
Lv, G. H., 2002, Measurement of the ghosting interface of seismic survey and its application: Oil Geophysical Prospecting, 37, 295–299.
Ning, H. X., Y. H. Zhang, D. R. Zhang, Y. X. Jiang, and W. L. Jiao, 2011, Seismic source optimization in the Qilian Mountain area: Oil Geophysical Prospecting, 46, 370–373.
Qian, R. J., 2003, Analysis of the explosive effect of the dynamite source: Oil Geophysical Prospecting, 38, 583–588.
Qian, S. H., J. P. Liu, Y. X. Gu, T. S. Li, and Z. X. Liu, 1998, The research on the explosive mechanism and excitation condition of dynamite source: Geophysical Prospecting for Petroleum, 37, 1–14.
Tang, D. L., F. Yan, and X. Q. Wang, 2005, Determination of shooting depth under water table: Progress in Exploration Geophysics, 28, 36–41.
Wang, W. H., 1999, Analysis of the dynamite source pattern in compressional wave exploration: Oil Geophysical Prospecting, 34, 249–259.
Zhang, D. R., J. H. Zhang, Y. Y. An, Y. X. Jiang, and W. Z. Lu, 2006, Study on shooting parameters of seismic acquisition in Kulongshan mountainous area: Oil Geophysical Prospecting, 41, 249–252.


ISS on Ice: Seismic Acquisition in the Arctic

Jan H. Kommedal, Gino Alexander, Larry Wyman and Sean Wagner, BP Exploration (Alaska) Inc.
Summary
Independent Simultaneous Sources (ISS) was deployed in
the Arctic for the first time in order to achieve a step
change in data quality by acquiring high trace density,
broadband, vibroseis data. Improved imaging was required
to resolve structures and position faults with less
uncertainty for future well placement. Data quality, noise,
HSE, and productivity issues were analyzed in an effort to
efficiently deliver the data in fewer days and at lower cost
than previous surveys. Data acquisition faces numerous
challenges including a short operating season bounded by
tundra closure, ice thickness, extreme weather conditions
and darkness. Additionally, legacy data have been challenged by issues such as wind noise, cultural noise in a major oil field, and flex waves on floating ice. It was necessary to overcome traditional paradigms of legacy acquisition, such as the use of fleets of vibrators, and to agree to simultaneous recording of single vibrators.

Introduction

After almost 40 years of production, Prudhoe Bay remains one of the largest oil fields in North America, with additional drilling activity planned for the coming years. In order to support continued drilling it was decided to acquire new seismic data in 2015 over key parts of the field (see Figure 1), replacing suboptimal legacy data.

Figure 1. Location map and survey outline in red.

Our main goals for the survey were to improve the final images and attributes, increase the S/N ratio, deliver broadband reflectivity, and build a more accurate velocity model. Operationally, we were looking for acquisition efficiency and deployment flexibility.

Independent Simultaneous Sources

To achieve this we decided to use the ISS method, which has been developed over the last eight years. Conventional land data are acquired using groups of vibrators operating together, with only one group active and being recorded at a time. With ISS, vibrators operate independently and simultaneously, allowing the crew to acquire a larger number of source points per day (see Howe et al., 2008). Interference due to multiple sources working simultaneously can be resolved during data processing.

The first ISS land survey was recorded in Libya in 2008. Since then a number of other large land surveys have been acquired in Jordan, Iraq and Oman (Ellis, 2013).

Figure 2. Prudhoe Bay survey trace density (blue) and efficiency (red).


These surveys show that denser spatial sampling and better azimuth-offset distribution result in superior imaging. The blue columns in Figure 2 show the increase in trace density for surveys in Prudhoe Bay over the last decade, which agrees with observations by Ourabah et al. (2015) on trends in land-acquisition trace density. The red line in Figure 2 shows acquisition efficiency; ISS provides both efficiency and high trace density. Further flexibility was gained by using autonomous-node receivers rather than cable systems. Acquisition parameters for a 2009 vintage survey and the ISS survey are listed in Table 1.


Table 1. Survey parameters

Parameter                            2009 Survey        ISS Survey
Number of vibrators                  3 fleets of 4      12 single
Sweep length                         4 seconds          32 seconds
Sweep                                6-84 Hz linear     2-96 Hz, -3 dB/octave nonlinear
Source intervals in-/cross-line      110 / 770-880      110 / 110
Receiver intervals in-/cross-line    110 / 1320-1430    220 / 660-770
Receiver lines/patch                 16                 32
Traces per square mile               0.8 million        >12 million
Geophone frequency / group           10 Hz / 3          5 Hz / single
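The low-dwell idea behind the nonlinear sweep in Table 1 can be sketched numerically. For a constant-amplitude sweep the output power spectrum is proportional to the dwell time dt/df, so a logarithmic frequency law (dt/df proportional to 1/f) shapes the spectrum by roughly -3 dB/octave. The snippet below is a generic illustration of such a sweep, not BP's actual sweep design; the sample rate and phase law are assumptions:

```python
import numpy as np

def log_sweep(f1=2.0, f2=96.0, sweep_len_s=32.0, fs=1000.0):
    """Constant-amplitude logarithmic up-sweep from f1 to f2 Hz.
    Instantaneous frequency f(t) = f1 * exp(k t) with k = ln(f2/f1)/T,
    so the dwell time per Hz is proportional to 1/f (about -3 dB/octave)."""
    t = np.arange(0.0, sweep_len_s, 1.0 / fs)
    k = np.log(f2 / f1) / sweep_len_s
    phase = 2.0 * np.pi * f1 * np.expm1(k * t) / k   # integral of f(t)
    return t, np.sin(phase)

t, s = log_sweep()
```

With these parameters the instantaneous frequency is 2 Hz at t = 0 and 96 Hz at t = 32 s, and half the sweep time is spent below sqrt(2 x 96), roughly 14 Hz, which is where the low-frequency emphasis comes from.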

Benefits of ISS
This ISS survey area covered an offshore area and a coastal
plain including tundra, lakes, rivers and sea ice. Floating
sea-ice covered 46% of the survey area and 10% was
frozen rivers and lakes. The lower weight of a single vibrator, compared with a fleet of vibrators, allowed safe access to floating ice and to areas that legacy surveys had left incomplete.
A single vibrator with a single sweep delivered the required energy, as has been reported by others (Thacker et al., 2014; Ourabah et al., 2015). An initial comparison of receiver field gathers will favor fleets of vibrators, but once the aggregate data density is realized after processing, the single-source, dense-vibrator technique produces superior signal-to-noise and illumination. Densely sampled, high-fold data are well suited for modern noise-removal techniques that suppress different types of noise (source-generated, cultural and random).
The vertical sampling was improved by broadening the bandwidth of the data. The low frequencies were increased by starting the sweep at 2 Hz and using 5 Hz geophones; the sweep was a 32 second nonlinear sweep (low dwell). The increased bandwidth reduced the side lobes of the wavelet (Yilmaz, 1987) compared to legacy data and resulted in cleaner images. Figures 3a and 3b clearly show the suppression of side lobes and the reduction in the peak-to-trough ratio of the wavelet extracted from the ISS data compared to the legacy wavelet. Figures 3c and 3d show the effect the broader bandwidth has on the data: artificial reflections from side lobes are suppressed, the vertical resolution increases, and the data become more believable and simpler to interpret. Finer spatial sampling reduces the acquisition footprint. The example in Figures 3e and 3f compares shallow depth slices: the ISS data have better spatial continuity, while the legacy data are noisier, making it difficult to recognize continuous events.
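The side-lobe argument can be checked with a toy model: build zero-phase wavelets with flat amplitude spectra over the legacy (6-84 Hz) and ISS (2-96 Hz) sweep bands and compare the deepest trough to the main peak. This is a simplified stand-in for the true sweep autocorrelations, not the actual survey wavelets:

```python
import numpy as np

def zero_phase_wavelet(f_lo, f_hi, fs=500.0, n=1024):
    """Zero-phase wavelet with a flat amplitude spectrum on [f_lo, f_hi],
    built by inverse FFT of a boxcar spectrum and normalized to unit peak."""
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    amp = ((freqs >= f_lo) & (freqs <= f_hi)).astype(float)
    w = np.roll(np.fft.irfft(amp, n), n // 2)   # center the main peak
    return w / np.abs(w).max()

def side_lobe_ratio(w):
    """Amplitude of the deepest trough relative to the (unit) main peak."""
    return abs(w.min())

legacy = zero_phase_wavelet(6.0, 84.0)   # 2009 sweep band
iss = zero_phase_wavelet(2.0, 96.0)      # ISS sweep band
print(side_lobe_ratio(legacy), side_lobe_ratio(iss))
```

The broader band produces a smaller trough-to-peak ratio, with the extra low octaves doing most of the work, in line with the wavelet comparison in Figures 3a and 3b.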


Figure 3. Legacy data on the left and ISS data on the right: (a, b) extracted wavelet; (c, d) vertical section; (e, f) depth slice.
The heterogeneous permafrost overburden introduces
challenges in resolving the shallow velocity model. The
amount of input to reflection tomography was increased by
an order of magnitude and dense, well sampled picks in
offset and azimuth (measurements of residual moveout)
produced a superior model. The tomography is able to
update the permafrost layer with realistic heterogeneity that
had not been observed in the sparse legacy data. Figure 4
shows a shallow depth slice through the velocity model. In
this example we see good conformance of the slow velocity
anomalies to the surface lakes.


Figure 4. Shallow depth slice (taken at the location indicated by the red star in Figure 1) through the velocity model, showing that lower-velocity features conform to lakes.

The robust ISS method allowed work to continue with much higher wind noise than in previous surveys; this is the first survey shot by BP in the Arctic with no downtime due to wind noise. The ISS method also allowed optimal positioning of the camp without concern for camp noise, maximizing operational efficiency. The operational efficiency of ISS translated into increased productivity and fewer HSE exposure hours, since the survey was acquired faster than with conventional methods.

The ISS operation also presented some HSE challenges. Vibrator operators could end up as lone workers; this was mitigated by pairing independent vibrators within view of each other. Since the number of source points increased from 50,000 to nearly 300,000 with ISS carpet shooting, the amount of ice checking also increased significantly.

The ISS method takes advantage of point receivers and point sources instead of the arrays discussed by others (e.g., Baeten et al., 2000a, 2000b); the drawbacks of arrays are exacerbated when acquiring data in the permafrost environment, with its associated velocity inversion.

Arctic Operations

Acquiring seismic data in the Arctic poses a number of challenges (Trupp et al., 2009). The winter seismic operation was carefully planned to allow safe execution and to avoid harm to the environment, drawing on experience from previous surveys in the sensitive biotope, harsh climate and long winter nights of Prudhoe Bay. Exclusion zones were established around denning polar bears and grizzly bears, production facilities, infrastructure, archaeological sites and other sensitive areas. Simultaneous operations were coordinated with other BP and third-party field activities.

The acquisition plan had to be flexible to accommodate uncertainty around the thickness of the sea ice and tundra closure, given the March 1st survey start date. The tundra can only be accessed by specialized vehicles in a short operating window that depends on temperature and snow cover; access is normally closed in late April as the snow cover begins to melt, and State agencies issue a 72-hour warning before all activities have to be completed. The timing also created uncertainty around the growth of the minimum sea-ice thickness required to safely use the 73,000-pound vibrator buggies.

Results

Even the intermediate post-stack depth-migrated ISS data produced a better and more reliable subsurface image than the final prestack depth-migrated legacy data. In Figure 5, the red fault is easily mapped on the new ISS data; this fault was not observed in the legacy data and resulted in increased operational time while drilling. Figure 5 also demonstrates that the broadband wavelet is more stable and less ringy in the new ISS dataset.

Figure 5. Migrated data comparison of the final legacy prestack depth migration (left) to the ISS intermediate post-stack depth migration (right). The dark line shows the well trajectory.
While the new ISS data appear to be lower frequency, it must be observed that the legacy data are not actually higher frequency: the extra events result from side lobes and are not real. The ISS data contain a broader-bandwidth signal and are better suited for inversion. This difference in spatial resolution and noise directly impacts the calculation of attributes; Figure 6 compares RMS amplitude extractions made over a reservoir interval. The RMS extraction from the ISS data is a more believable representation of the subsurface than its legacy counterpart.


Figure 6. RMS amplitude extraction at the reservoir comparing the legacy sparse data (left) to the dense ISS data (right).

Conclusion
The ISS method, combined with autonomous-node recording technology, offers operational advantages, delivering an order-of-magnitude increase in trace density and source productivity (over 11,000 VPs on the best day). This efficiency improvement translated into fewer HSE exposure hours and lower costs, and allowed us to complete the survey in fewer days during the short operating season.
This method also proved to be robust and tolerant to high
wind conditions and cultural noise in the busy oilfield. This
acquisition method provides data that produce a significant
uplift in imaging quality. We are hopeful that ISS in the Arctic will deliver a step change in seismic acquisition and imaging.
Acknowledgment
We would like to acknowledge BP and Greater Prudhoe
Bay Working Interest Owners (ConocoPhillips,
ExxonMobil and Chevron) for permission to publish this
paper. Also, we appreciate the BP Advanced Seismic
Imaging Technology group for technical support.
ISS is a registered trademark of BP plc.

Page 9

Downloaded 12/16/16 to 79.62.242.250. Redistribution subject to SEG license or copyright; see Terms of Use at http://library.seg.org/



Using buried receivers for multicomponent, time-lapse heavy oil imaging


Peter Vermeulen*, Guoping Li, Brion Energy; Hugo Alvarez, Peter Cary, Arcis Seismic Solutions/TGS and
Gary James, LXL Consulting Ltd.
Summary
In 2015 a time-lapse buried receiver 3C/2D seismic experiment was performed in the heavy oil area of NE Alberta, Canada. The
purpose was to determine if ongoing reservoir monitoring was feasible beneath a thick layer of muskeg. 3C analog geophones
and digital sensors were installed at the surface and at 3m and 9m depths, along with dynamite sources at 9m. Shot points were
doubled at each source location in order to acquire data during winter conditions and also during the following summer.
The test was in response to poorly imaged seismic stacks and inversions from previously acquired 3C/3D surface seismic data.
High quality time-lapse PP and PS images were produced from the 2D data when both the dynamite sources and receivers were
buried to 9m depths. Recording PP and PS reflections that bypass the absorptive near-surface muskeg layer with buried receivers
and sources facilitates time-lapse multicomponent seismic monitoring in this area.

Introduction
The study area, located near Fort McMurray, Alberta, overlies a portion of McMurray oil sands that will be developed using
Steam Assisted Gravity Drainage (SAGD) techniques. Steam injection into the reservoir will be periodically monitored using
time-lapse 3C/3D seismic data to understand why, where and how the heat moves.
Near-surface, low-velocity, heterogeneous layers can have a significant detrimental impact on the quality of land seismic data. In
some parts of Northern Canada a type of bog consisting of water and partly decomposed organic material, called muskeg, occurs
at the surface. This muskeg can become thick enough to attenuate some PP and most PS seismic waves.
In this study area the muskeg has been measured up to 8m thick and was detrimental to an existing PP and PS 3C/3D surface
seismic dataset. Figure 1 shows an example of PS converted-wave data from the 3C/3D dynamite surface seismic survey. The
center part of the line, where the thick muskeg exists, demonstrates the severe absorptive effect on the shear-waves being
recorded at the surface. The P-waves were also affected but to a lesser degree than the S-waves.
Time-lapse joint PP/PS pre-stack inversion has demonstrated its value in locating steam chambers and mobile bitumen within
heavy oil reservoirs surrounding this study area (Gray et al., 2016; Zhang and Larson, 2016). Therefore, 4D joint inversion
techniques are expected to form an integral part of the reservoir management for this SAGD project. These expectations led to
the decision to investigate whether the quality of the seismic data could be substantially improved by planting dynamite sources
and permanent buried receivers below the thick muskeg layer. In the winter and summer of 2015 a 3C/2D seismic line was
acquired with buried receivers with the purpose of testing whether future high-quality 3C/3D time-lapse seismic monitoring was
a possibility.

Design and Acquisition


The 2D seismic program was designed not only to answer the data quality question (Pullin et al., 1987) but also to address 3C
processing and operational concerns related to sensor tilt and horizontal orientation.
The 2D test line was composed of six sensor configurations and two source configurations:
Sensors:
1. 78 3C analog geophones deployed at the surface every 10m
2. 78 3C digital MEMS accelerometers deployed at the surface every 10m
3. 39 3C analog geophones buried at 3m every 20m on the even stations
4. 39 3C digital MEMS accelerometers buried at 3m every 20m on the even stations
5. 39 3C analog geophones buried at 9m every 20m on the odd stations
6. 39 3C digital MEMS accelerometers buried at 9m every 20m on the odd stations


Sources:
1. 22 lines orthogonal to the receiver line, with two buried sources (0.125 kg dynamite @ 9m) every 20m
2. 1 line parallel to the receiver line, with two buried sources (0.125 kg dynamite @ 9m) every 20m
Figure 2 shows a map view of the layout of the central 2D line, with receiver stations in red, and both the inline source stations
and the source stations along orthogonal lines in green.
Front end preparations included widening an existing 3D receiver line to accommodate a Low Impact Seismic (LIS) drill for
geophone installation. The LIS drill assisted with installing the 0.125kg dynamite sources on both the grid and the 2D line. The
duplicate source points employed 4-inch PVC pipes as both a monument and to protect the leads against curious wildlife.
A few analog and digital receiver planting poles were manufactured from three-meter sections of steel pipe fastened together
with pinned steel couplers. An orientation tool was attached at the top, and custom-fitted cups were attached to the bottom. The
planting-pole design proved cumbersome for the 3m and 9m buried receivers but was manageable in weather down to -10°C.
When temperatures plunged to -25°C, significant challenges were encountered with wet muskeg quickly freezing to the
loading-pole cups and couplers.
Sand points were attached to the sensors before installation to increase the coupling success rate and to aid in decoupling the tool
from the sensor (Figure 3). 4-inch PVC pipe and colored caps were used at the surface as both a monument and to protect the cable
and connectors against the elements and curious wildlife.
Wireless recording equipment was deployed and the sensors went through a series of QCs to ensure they were operating
correctly before acquisition. The recording equipment was collected in a methodical manner so as not to confuse data from different
sensors at different depths. The crew returned in the summer, using Argos in the extreme wet terrain (Figure 4), to re-deploy the
recording equipment and re-acquire the data from the buried geophones.
Processing
Figure 5 shows a comparison of a dynamite shot gather being recorded into the vertical component of analog receivers at the
surface, 3m depth and 9m depth. In several ways, Figure 5 illustrates the successful outcome of the buried-receiver experiment
since the data recorded by the 9m receivers are obviously far superior to the surface receivers and the 3m deep receivers. Not
only are there much larger static delays present on the surface and 3m receivers than on the 9m receivers, but also there is much
more variation in frequency content from trace to trace on the surface and 3m receivers. From Figure 5 it is evident that a good
deal of the reflected energy returning to the surface is being absorbed by the muskeg layer. After observing the pre-stack gathers,
it was decided not to process the 3m receivers since the 9m receivers were much better quality.
The first acquisition of the 3C/2D line took place in January, 2015, and the second took place in June, 2015. Figure 6 shows a
comparison of the PS radial-component asymptotic-conversion point stacks from the surface receivers versus the 9m deep
receivers from the winter acquisition. The zone of interest (the McMurray oil sands) is approximately at 400ms PS time on the
9m deep receiver section. The PS stack from the 9m deep receivers shows good quality, but it is impossible to interpret the
surface-receiver PS section due to its poor quality.
One concern during acquisition was that the 3C down-hole receivers might move or twist in position over time. Analysis of the
orientation of the 3C receivers between winter and summer indicated that the receivers had stayed in place in the 9m deep holes.
It was unexpectedly discovered that several of the 9m deep receivers had inadvertently been placed down the hole in
reverse-polarity orientation. This was an important finding for improving future acquisitions.
Based on comparisons such as those in Figure 6, it was decided to perform time-lapse processing of winter and summer datasets
using only the 9m deep receivers, not the surface or 3m deep receivers. Although no production took place between winter and
summer acquisitions, it was considered important to test the repeatability of the buried receivers and to test whether the
processing flow could compensate for any changes in the seismic response between winter and summer. The decrease in NRMS
from raw data through post-stack migration during the AVO-compliant processing steps of the PP 9m deep receiver data was
from 130% to 20%. The decrease in NRMS to 20% after post-stack migration indicates that the non-repeatable factors were
reduced to a level where time-lapse differences could be reliably detected, if SAGD production had occurred. Figure 7 shows the
winter, summer and difference sections for the 9m deep receiver dynamite PP data.


The decrease in NRMS of the radial converted-wave PS data through the major AVO-compliant processing steps was from 130%
to 40%. The decrease in NRMS down to 40% (compared to 20% for the PP data) is to be expected for the noisier, narrow-band
PS data. Figure 8 shows the winter, summer and difference sections for the 9m deep receiver, dynamite PS data.
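The NRMS repeatability metric quoted above can be computed with the commonly used definition NRMS = 200 * RMS(a - b) / (RMS(a) + RMS(b)). A minimal sketch (the synthetic traces are illustrative only, not survey data):

```python
import numpy as np

def nrms(a, b):
    """Normalized RMS difference (in percent) between two traces.

    0% for identical traces, 200% for anti-correlated ones.
    """
    rms = lambda x: np.sqrt(np.mean(np.asarray(x, float) ** 2))
    return 200.0 * rms(np.subtract(a, b)) / (rms(a) + rms(b))

t = np.linspace(0.0, 1.0, 1000)
winter = np.sin(2 * np.pi * 30 * t)
summer = winter + 0.1 * np.sin(2 * np.pi * 55 * t)  # small non-repeatable part

print(round(nrms(winter, winter), 1))  # identical traces -> 0.0
print(nrms(winter, summer) < 20.0)     # small difference -> low NRMS
```

Identical traces give 0% and anti-correlated traces give 200%, which is why a drop from 130% on raw data to 20% after migration indicates that non-repeatable factors have been strongly suppressed.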
Conclusions
The acquisition and processing of the 3C/2D test line successfully demonstrates that high-quality 3C images can be obtained if
both the sources and receivers are buried beneath an extremely absorptive muskeg layer.
Acknowledgements
The authors would like to thank Brion Energy and Arcis Seismic Solutions/TGS for permission to publish these results.

Figure 1: A PS converted-wave section from a 3C/3D seismic survey acquired with receivers at the surface. The location of thick
muskeg at the surface is evident from the absorption of the shear waves (particularly visible in the central portion of this section).
The location of the 2D test line with buried receivers is at the center of this crossline from the 3C/3D survey.

Figure 2: Map view of the acquisition location of the 3m and 9m buried receivers (in red) along the 2D line and the dynamite
sources (in green) along the 2D line as well as along lines orthogonal to the central 2D line.

Figure 3: Custom sensor cup at the end of the loading pole with a 3C analog geophone and sand point; orientation tool; recording
equipment connected and QC'd; planting tool in action.

Figure 4: Summer operations using Argos in the extremely wet conditions.


Figure 5: The vertical component of a dynamite shot gather recorded by 9m deep receivers (left), receivers at the surface (center)
and 3m deep receivers (right). The highest quality, most coherent, broadest bandwidth reflections were recorded on the 9m deep
receivers since the down and upgoing raypaths do not pass through the absorptive muskeg layer at the surface.

Figure 6: Radial PS converted-wave stacks recorded by 9m deep receivers (left) and by receivers at the surface (right).

Figure 7: Final migrated PP stacks recorded with 9m deep receivers from the winter (left), summer (center) and difference (right).

Figure 8: Final migrated PS stacks recorded with 9m deep receivers from the winter (left), summer (center) and difference (right).


REFERENCES

Gray, F. D., K. A. Wagner, and D. J. Naidu, 2016, 3C-4D locates mobile bitumen in oil sands reservoirs:
CSEG Geoconvention.
Pullin, N., L. Matthews, and K. Hirsche, 1987, Techniques used to obtain very high resolution 3-D seismic
imaging at an Athabasca oil sands thermal project: The Leading Edge, 6, no. 12, 10–15,
http://dx.doi.org/10.1190/1.1439353.
Zhang, J., and G. Larson, 2016, Time-lapse PP-PS joint inversion over an active SAGD Project: CSEG
Geoconvention.


A variable frequency pseudorandom coded sweep control scheme for mini-SOSIE


LI Na * , CHEN Zu-bin, SUN Feng, LONG Yun
Summary

Seismic profiles acquired with a constant frequency pseudorandom (CFP) coded sweep on a mini-SOSIE source exhibit high
side lobes, low resolution, and correlation noise. In this paper we therefore propose a variable frequency pseudorandom (VFP)
coded sweep technique. We simulate a mixed-phase wavelet, a CFP coded sweep signal, and a VFP coded sweep signal, and
compare their correlation results. The results illustrate that the VFP coded sweep technique broadens the frequency range and
reduces the coherent interference caused by a constant vibration frequency. Compared with the CFP coded sweep signal, the
autocorrelation of the VFP coded sweep reduces the side lobes significantly, the ratio between side lobe and main lobe is small,
and the tail amplitude decays quickly, improving resolution and signal-to-noise ratio (SNR).

Mini-SOSIE signal analysis

For most vibroseis sources, the reference signal is controlled by an electronic device, so the output signal is stable. Owing to the
mechanical properties of the mini-SOSIE, however, the reference signal is unstable, so it is necessary to analyze it. Figure 1
shows the stable signal, 1 s in length; its vibration frequency is about 12Hz and its amplitude is also stable. Figure 2 shows the
correlation result: there is serious interference because of the constant vibration frequency, and the side-lobe effect is also very
serious.

Introduction
For the mini-SOSIE, the autocorrelation characteristics of the scanning signal, and especially the quality of the reference-channel
signal, directly affect the quality of the seismic profiles, so the key issue is to design and develop a high-quality scanning signal.
Barbier et al. (1976) proposed mini-SOSIE seismic exploration; its main idea is to make the vibrator run as randomly as possible
by varying the engine speed, but serious background noise remains, which affects the quality of the correlated seismic record.
Cunningham (1979) first applied pseudorandom coding theory to the design of vibroseis sweeps, which can avoid the side-lobe
interference generated by linear or nonlinear sweep signals (Goupillaud, 1976), but serious coherent noise remained in the
seismic profile. Strong and Hearn (2004), Nasreddin et al. (2012), Sallas et al. (2008, 2011), and Dean (2014), among others,
applied pseudorandom coding to vibroseis and showed that pseudorandom schemes can reduce interference and improve
resolution, and thus improve the effectiveness of seismic exploration. With further study, pseudorandom coding has become
widely used in vibroseis work (Becquey, 2002; Dean, 2012). Owing to the particularity of the mini-SOSIE signal, its output is
equivalent to a CFP scanning signal; compared with natural vibration, this improves the resolution of the seismic record and
reduces the side lobes.

In order to realize a better result, we propose here a VFP coded scanning technique based on the CFP coded scanning technique,
analyze the principles of the two methods in detail, and compare the simulation results.


Figure 1: Stable vibration signal of the mini-SOSIE

Figure 2: Correlation of the stable signal

Pseudorandom coding theory


Linear shift registers are the basis of pseudorandom coding; the diagram of an n-stage, q-ary linear shift register is shown in
Figure 3 (Sarwate and Pursley, 1980).

Figure 3: Principle diagram of a linear shift register

In the figure, each box represents one stage of the shift register; from left to right they are stage 1, stage 2, ..., stage n. The
initial state of the register is (a1, a2, ..., an). When a shift pulse arrives, the contents of each stage move to the next stage, and
at the same time the stage contents are multiplied by the feedback coefficients and summed, so that the feedback value is

    a(n+1) = a1*cn + a2*c(n-1) + ... + a(n-1)*c2 + an*c1        (1)

This value is fed back into the shift register; the state then becomes (a2, a3, ..., a(n+1)), and the output is a1. Under a
continuing train of shift pulses the output is a q-ary sequence; q = 2 yields a binary pseudorandom sequence.
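For the binary case (q = 2), the multiply-and-add of equation (1) reduces to an XOR of the tapped stages. The following sketch generates a maximal-length sequence and checks its pulse-like periodic autocorrelation; the tap positions, which correspond to a particular choice of the coefficients c1..cn (here the primitive polynomial x^4 + x + 1 for n = 4), are an assumption, as the paper does not specify them:

```python
# Binary (q = 2) linear-feedback shift register, a sketch of the scheme above.
# Taps [0, 3] encode the primitive polynomial x^4 + x + 1 (an assumption).

def lfsr_sequence(state, taps, length):
    """Emit `length` bits; the feedback is the XOR (mod-2 sum) of tapped stages."""
    state = list(state)
    out = []
    for _ in range(length):
        out.append(state[-1])        # output taken from the last stage
        fb = 0
        for t in taps:               # mod-2 multiply-and-add of equation (1)
            fb ^= state[t]
        state = [fb] + state[:-1]    # feedback enters the first stage
    return out

def periodic_autocorr(bits, lag):
    """Periodic autocorrelation R(j) of the +/-1 mapped sequence."""
    n = len(bits)
    s = [1 if b else -1 for b in bits]
    return sum(s[i] * s[(i + lag) % n] for i in range(n))

seq = lfsr_sequence(state=[1, 0, 0, 0], taps=[0, 3], length=15)  # N = 2^4 - 1
print(periodic_autocorr(seq, 0))                           # main lobe: 15
print({periodic_autocorr(seq, j) for j in range(1, 15)})   # side lobes: all -1
```

The sequence visits all 15 nonzero register states before repeating, so its periodic autocorrelation is N at zero lag and -1 everywhere else, matching the pulse-like property used below.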

The autocorrelation function R(j) of a binary pseudorandom sequence of period N = 2^n - 1 is

    R(j) = 2^n - 1, for j = 0 (mod N)
    R(j) = -1,      for j ≠ 0 (mod N)        (2)

so R(j) is approximately a pulse function.

CFP coding scanning technique

For the mini-SOSIE, the motion state changes with the coding of the pseudorandom sequence: there are two states, start and
stop, which correspond to the coded 1s and 0s. From the analysis of the mini-SOSIE signal, we know that a single vibration is
similar to a mixed-phase wavelet, so here we simulate a mixed-phase wavelet, shown in Figure 4. If the mini-SOSIE is operated
with the pseudorandom coding method, the result is a CFP scanning scheme (Barbier and Viallix, 1974). Figure 5 shows a
pseudorandom coding sequence and Figure 6 the constant frequency pseudorandom scanning signal (vibration frequency 12Hz);
its autocorrelation result is shown in Figure 7. Compared with the autocorrelation result of the natural vibration, the side-lobe
effect is weakened slightly, but very serious interference remains.

Although the autocorrelation results of the pseudorandom coded scanning signal improve to some degree, high side lobes
remain, mainly because of the constant vibration frequency of the signal.

Figure 4: Mixed-phase wavelet

Figure 5: Pseudorandom coding sequence

Figure 6: CFP scanning signal

Figure 7: Autocorrelation of the CFP scanning signal

VFP scanning technique

In order to increase the frequency range of the signal and reduce the side lobes, we put forward the VFP coding scanning
technique. The scheme is based on pseudorandom coding theory and follows the principle of random numbers; according to the
unique characteristics of the vibration signal, we obtain the VFP coded scanning signal, which makes the vibrator run as
randomly as possible. The vibration frequency of the mini-SOSIE is 1-14Hz, but owing to its mechanical properties, and to
protect the mechanism, we restrict it to 6-13Hz. Figure 8 shows a VFP coded scanning signal and Figure 9 its autocorrelation
result. Compared with the CFP coding scanning technique, the VFP coded scanning signal reduces the side lobes and
interference significantly and improves resolution and SNR, thus improving the quality of the seismic data.

Figure 8: VFP coding scanning signal

Figure 9: Autocorrelation result of the VFP coding scanning signal
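To illustrate the contrast between the two coding schemes, a toy simulation can gate bursts of a carrier with a pseudorandom code, using a fixed 12 Hz carrier for CFP and a random 6-13 Hz frequency per chip for VFP, and then compare autocorrelation side lobes. This is a hedged sketch: the chip length, sampling, and gating model are assumptions, not the authors' actual control scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 250.0                        # sample rate, Hz (assumed)
chip = 0.25                       # duration of one code chip, s (assumed)
n = int(chip * fs)                # samples per chip
code = rng.integers(0, 2, 48)     # pseudorandom start/stop code (1 = vibrate)

def coded_sweep(freqs):
    """Gate bursts of vibration by the code, one carrier frequency per chip."""
    t = np.arange(n) / fs
    chips = [np.sin(2 * np.pi * f * t) if bit else np.zeros(n)
             for bit, f in zip(code, freqs)]
    return np.concatenate(chips)

cfp = coded_sweep(np.full(code.size, 12.0))           # constant 12 Hz
vfp = coded_sweep(rng.uniform(6.0, 13.0, code.size))  # variable 6-13 Hz

def max_sidelobe(sig):
    """Largest normalized autocorrelation value at lags of one chip or more."""
    r = np.correlate(sig, sig, mode="full")
    r = np.abs(r) / r[sig.size - 1]      # normalize by the zero-lag peak
    return r[sig.size - 1 + n:].max()

# The variable-frequency code breaks up the periodic correlation peaks
# produced by the constant 12 Hz carrier, lowering the side lobes.
print(max_sidelobe(cfp) > max_sidelobe(vfp))
```

With the constant carrier, every "on" chip is nearly in phase with every other at chip-multiple lags, so large correlation peaks appear; with per-chip random frequencies those burst pairs correlate weakly, which is the mechanism behind the side-lobe reduction claimed above.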

Conclusions

This paper puts forward VFP coding scanning technology on the basis of pseudorandom coding theory, following the principle
of random numbers. Analysis of the simulation results shows that, compared with the CFP coded scanning signal, VFP coding
scanning technology broadens the frequency range, suppresses interference, and improves resolution. The autocorrelation of the
VFP coded scanning signal reduces the side lobes significantly, the ratio between side lobe and main lobe is small, and the tail
amplitude decays quickly, improving resolution and SNR.

Acknowledgment

The research is sponsored by the Natural Science Foundation of China (No. 41404097, No. 41304139), the Science and
Technology Development Project of Jilin Province (No. 20150520071JH), and SinoProbe-Deep Exploration in China
(SinoProbe-09-04).

REFERENCES

Barbier, M. G., P. Bondon, R. Mellinger, and J. R. Viallix, 1976, Mini-Sosie for land seismology:
Geophysical Prospecting, 24, 518–527, http://dx.doi.org/10.1111/j.1365-2478.1976.tb00952.x.
Barbier, M. G., and J. R. Viallix, 1974, Pulse coding in seismic prospecting - Sosie and Seiscode:
Geophysical Prospecting, 22, 153–175.
Becquey, M., 2002, Pseudorandom coded simultaneous vibroseismics: 72nd Annual International
Meeting, SEG, Expanded Abstracts, http://dx.doi.org/10.1190/1.1817375.
Cunningham, A. B., 1979, Some alternate vibrator signals: Geophysics, 44, 1901–1921,
http://dx.doi.org/10.1190/1.1440947.
Dean, T., 2012, Establishing the limits of vibrator performance - experiments with pseudorandom sweeps:
82nd Annual International Meeting, SEG, Expanded Abstracts.
Dean, T., 2014, The use of pseudorandom sweeps for vibroseis surveys: Geophysical Prospecting, 62, 50–74.
Goupillaud, P., 1976, Signal design in the vibroseis technique: Geophysics, 41, 1291–1304,
http://dx.doi.org/10.1190/1.1440680.
Nasreddin, H., T. Dean, and K. Iranpour, 2012, The use of pseudorandom sweeps to reduce interference noise in
simultaneous vibroseis surveys: 22nd International Geophysical Conference and Exhibition, ASEG.
Sallas, J., J. Gibson, F. Lin, O. Winter, B. Montgomery, and P. Nagarajappa, 2008, Broadband vibroseis
using simultaneous pseudorandom sweeps: 78th Annual International Meeting, SEG, Expanded
Abstracts.
Sallas, J., J. Gibson, P. Maxwell, and F. Lin, 2011, Pseudorandom sweeps for simultaneous sourcing and
low-frequency generation: The Leading Edge, 30, 1162–1172,
http://dx.doi.org/10.1190/1.3657077.
Sarwate, D., and M. Pursley, 1980, Crosscorrelation properties of pseudorandom and related sequences:
Proceedings of the IEEE, 68, 593–619.
Singh, C., 1987, Generating random sampling numbers: Defence Science Journal, 37, 361–366,
http://dx.doi.org/10.14429/dsj.37.5923.
Stasev, Y., A. Kuznetsov, and A. Nosik, 2007, Formation of pseudorandom sequences with improved
autocorrelation properties: Cybernetics and Systems Analysis, 43, 1–11,
http://dx.doi.org/10.1007/s10559-007-0021-2.
Strong, S., and S. Hearn, 2004, Numerical modeling of pseudo-random land seismic sources: ASEG 17th
Annual Geophysics Conference and Exhibition.
Wei, Z., M. A. Hall, and T. F. Phillips, 2012, Geophysical benefits from an improved seismic vibrator:
Geophysical Prospecting, 60, 466–479, http://dx.doi.org/10.1111/j.1365-2478.2011.01008.x.


A seismic acquisition technology to improve the production efficiency in big sand dune area
Zhang Beibei *, E Dianliang, Hou Chengfu, Qi Yongfei, Liang Junfeng, Li Kanghu and Bai Jie, BGP, CNPC
Summary
In big sand dune areas, the complex topography and mobile surface create great difficulties for stake-less
vibrator operation and layout. The COG (center of gravity of a source array) is often inaccurate and is highly
likely to exceed the survey specification, which causes additional acquisition. To solve these issues, BGP
developed a new system named the Digital-Seis System (DSS), which realizes autonomous navigation and
real-time QC in sand-area operations. Through several indoor and outdoor simulations, we use shot-point
projection technology to realize this goal. Before production starts on a program line, we set up a safe
distance, which provides timely reminders for obstacle avoidance. The application of DSS not only improves
operational efficiency and reduces safety risks, but also significantly decreases the likelihood of the COG
exceeding the survey specification. In a word, the application of DSS plays an important role in highly
efficient and safe operation.
Introduction

With the advance of seismic exploration technology and the increase of exploration costs, efficient
acquisition has become an inevitable trend. The traditional manual point-positioning method can no longer
meet the requirements of efficient acquisition projects with high density and high channel counts. In big
sand dune areas especially, it brings great difficulty to multi-vibrator operation, resulting in a very high
proportion of COG (center of gravity) violations. The special surface conditions also degrade
communications between the recorder and the vibrators, which brings many safety hazards. Therefore,
improving field production efficiency and reducing safety hazards has become a bottleneck in field seismic
projects.

This article mainly describes a 2D seismic acquisition project (the S project) in Saudi Arabia. The project
covers a vast area and surface elevation varies significantly. These factors cause unprecedented difficulties
and challenges for vibrator operation and pattern-geometry transformation in the sandy area. Navigating to
the target position with the existing navigation system does not work well in narrow source paths: the
acquisition results cannot meet the technical requirements, wasting production time and keeping production
efficiency relatively low. Based on the production status of the project, we introduced the DSS system,
composed of the control system Digital-Seis Commander (DSC) and the navigation system Digital-Seis
Guidance (DSG). The application of DSS helps the crew improve production efficiency and ensure seismic
data quality, and it also helps to simplify management processes, reduce safety risks, and reduce production
costs. It is suitable for a variety of production modes, and good results have been achieved so far in practical
application. At present, the system has been successfully applied to multiple BGP acquisition projects; it can
be upgraded according to project technical requirements and has achieved good results.

Project Overview

The S project is a high-density 2D seismic acquisition program in a big sand dune area. The survey lines
cross the Rub Al Khali desert, the world's largest expanse of flowing sand. On the east side the elevation
ranges from 100 to 300 meters, aligned parallel to the great sand dunes; the flowing dunes are mainly caused
by the monsoon, wind direction, and differences in the prevailing wind, and take three main forms:
crescent-shaped, star-shaped, and linear. On the west side the elevation ranges from 100 to 500 meters, with
mainly gravel, dunes interspersed with marshes, and widespread salt lakes.

Figure 1: Single source line elevation

Five vibrators make up each source pattern for the project. Based on the topographical features of the work
area, the technical procedure allows theoretical source points to be offset by at most 500 meters in the
crossline direction and 2.5 meters in the inline direction. There are two source pattern dimensions: in flat
terrain the vibrators can be placed perpendicular to the line direction, namely crossline combinations, with a
pad-to-pad distance of 8 meters; in the sand dunes and other special areas the source can only move along
the source pathway, namely inline combinations, with a pad-to-pad distance of 20 meters.
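A COG check of the kind implied by these tolerances can be sketched as follows (a hypothetical sketch; the coordinate convention, function name, and azimuth definition are assumptions, not part of the DSS system):

```python
import math

def cog_within_spec(pads, theoretical, line_azimuth_deg,
                    max_cross=500.0, max_inline=2.5):
    """Check the centre of gravity of a vibrator pattern against the
    survey tolerances (500 m crossline, 2.5 m inline, as stated above).

    pads: list of (x, y) pad positions in metres.
    theoretical: (x, y) of the preplot source point.
    line_azimuth_deg: direction of the source line (inline direction),
    measured from the y axis toward the x axis.
    """
    cx = sum(p[0] for p in pads) / len(pads)
    cy = sum(p[1] for p in pads) / len(pads)
    dx, dy = cx - theoretical[0], cy - theoretical[1]
    az = math.radians(line_azimuth_deg)
    inline = dx * math.sin(az) + dy * math.cos(az)   # along-line component
    cross = dx * math.cos(az) - dy * math.sin(az)    # across-line component
    return abs(inline) <= max_inline and abs(cross) <= max_cross

# Crossline pattern of five pads, 8 m apart, COG 1 m off the preplot point
pads = [(8.0 * k, 1.0) for k in range(-2, 3)]
print(cog_within_spec(pads, (0.0, 0.0), line_azimuth_deg=90.0))  # True

# A 5 m along-line shift violates the tight 2.5 m inline tolerance
shifted = [(x + 5.0, y) for x, y in pads]
print(cog_within_spec(shifted, (0.0, 0.0), line_azimuth_deg=90.0))  # False
```

The asymmetry of the tolerances (500 m versus 2.5 m) shows why small along-line positioning errors of the pattern, not crossline ones, are what force the re-shoots described below.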

Because of the complexity of the ground conditions, the project could not truly realize stake-less operation:
each offset VP required a flag as a sign to help the vibrator pads find the correct position, resulting in
additional consumption of manpower and material.

Because of the poor communication, landslides, overturning, or other major accidents may occur during the
additional acquisition. Some features in special working areas, such as oilfield facilities, buried pipelines,
and small wellheads, have poor visibility; they are easily overlooked and damaged, leading to serious
accidents. This brings many hidden dangers and hazards to production safety. Production practice showed
that the Sercel navigation guidance system still has problems on projects using the traditional stake-less
operation method. These issues do not meet the requirements of field production and also restrict production
efficiency.
DSS System

Figure 2: Source pattern dimension

Project Difficulties
The project runs 24-hour production every calendar day. Under
this field production mode, every second must be saved in
order to improve production efficiency.
Working in large sand areas, vibrator movement is very
difficult, and acquisition may have to be carried out twice or
more when the results cannot meet requirements. The main
reason is that the COG (center-of-gravity) value exceeds its
limit, and this is the key factor that wastes production time.
In the offset case, because of the angle between the vibrator
pathway and the program lines, the Sercel navigation system
cannot properly navigate all five vibrators; to meet the COG
requirement, the positions can only be adjusted from field
experience. With the current version of the Sercel seismic
acquisition instruments, the acquisition data can be
transmitted only after shooting has finished, and the recorder
section then checks qualification. If the result does not meet
the requirement, the vibrators must return and shoot again.
These processes consume time and restrict production
efficiency.
The vibrator pattern type must be converted as the surface
changes between dunes and gentle terrain. Given the huge size
of the vibrators and the small COG tolerance, a great deal of
time is spent on the pattern change and repositioning: to
complete this process, the instrument terminal must change the
pattern mode and then transfer the pattern parameters to each
working vibrator. Occasionally, signal blind spots are
encountered, and a repeater must be set up manually at the
highest point for signal transmission. This process is time
consuming.

To eliminate this additional time consumption and reduce
safety hazards in the field, we applied the DSS system. The
control module of this system is installed inside the
recorder; with good communication between recorder and
vibrators, it monitors the vibrators' operational status and
cross-checks data quality. In special terrain conditions or
under poor communication, the navigation module installed in
the vibrator terminal can perform fully independent navigation
and data quality control. The application of the DSS system
effectively saves production time, improves data acquisition
quality, and greatly reduces safety hazards, especially in bad
terrain, giving the crew great help toward safe and efficient
production.
DSS Work Theory
When acquiring in undulating topography, the designed source
points may be offset according to the actual terrain
conditions and technical requirements. To improve navigation
accuracy and the precision of vibrator pad location, we
compared several indoor simulations and field tests and chose
the vibrator pathway as the navigation reference. The physical
location of each vibrator pad is recalculated by projection
onto this reference, and constant parameters are set in the
DSS navigation system so that each vibrator has its own
independent navigation. Over the local network, the vibrators
can mutually monitor each other's physical locations. These
technical methods effectively solve the problem of COG values
exceeding the limit.
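The projection step described above can be sketched as follows. This is a minimal geometric illustration of projecting a pad position onto a pathway segment to obtain in-line and cross-line offsets; it is not the DSS implementation, and all function and variable names are hypothetical.

```python
import math

def project_onto_segment(p, a, b):
    """Project point p onto segment a-b.

    Returns (distance along the segment, cross-line offset), i.e. the
    in-line and cross-line components of the pad position relative to
    the pathway reference.
    """
    ax, ay = a
    bx, by = b
    px, py = p
    dx, dy = bx - ax, by - ay
    seg_len = math.hypot(dx, dy)
    # Parameter t of the foot of the perpendicular, clamped to the segment
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (seg_len ** 2)))
    foot = (ax + t * dx, ay + t * dy)
    cross = math.hypot(px - foot[0], py - foot[1])
    return t * seg_len, cross

# A vibrator pad at (3, 4) relative to a pathway segment from (0, 0) to (10, 0):
inline, cross = project_onto_segment((3.0, 4.0), (0.0, 0.0), (10.0, 0.0))
print(inline, cross)  # 3.0 m along the pathway, 4.0 m cross-line
```

With both offsets available on each vibrator terminal, an independent navigation check needs no round trip to the recorder.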

In steep terrain, the recorder staff cannot identify the
actual terrain in the field and can only command the vibrators
by radio.



Effect of DSS Application


To assess the application of DSS realistically and
objectively, we compare data from the same project area over
two periods: the first five months, with no DSS in use, and
the last 19 months, with DSS in operation.
COG Precision and Beyond Limit Comparison

Figure 3: Display interface of DSG

Performance of DSS Application


The DSG terminal displays the in-line offset value of each
source point and monitors the quality of each point. The
vibrator only needs to follow the DSG guidance for the result
to meet the technical requirements. The DSG terminal alarms
when the vibrator approaches the designed position, a function
that helps the vibrator find the designed position easily and
quickly and saves production time, especially during
additional acquisition. For the vibrator pattern conversion
process, the DSS system provides a preset pattern option: the
vibrator operators can choose the pattern in real time
according to the actual situation. This option gives the field
operation an effective tool, avoids the disadvantages of
conventional unilateral control, and saves the time previously
spent choosing a vibrator pattern and transmitting its
parameters to the working vibrators. For acquisition in
special areas, the coordinates of the special areas or points
are input into the DSS system in advance, together with a
safety distance value; during production, an alarm sounds
whenever any vibrator reaches that distance, guaranteeing
production safety.
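A minimal sketch of such a safety-distance check is shown below; the hazard list, values, and function names are illustrative assumptions, not part of the DSS software.

```python
import math

# Hypothetical hazard list: (x, y, safety_distance_m)
HAZARDS = [(500.0, 200.0, 50.0),   # small wellhead
           (120.0, -80.0, 30.0)]   # buried pipeline marker

def hazard_alarm(vib_x, vib_y):
    """Return True if the vibrator is within the safety distance of any hazard."""
    return any(math.hypot(vib_x - hx, vib_y - hy) <= r for hx, hy, r in HAZARDS)

print(hazard_alarm(480.0, 210.0))  # True: within 50 m of the wellhead
print(hazard_alarm(0.0, 0.0))      # False: clear of all hazards
```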

In complex terrain, the actual COG of the five vibrators must
be within 2.5 meters of the designed COG location, a technical
requirement that is difficult to guarantee. The DSS guides the
vibrators in real time, avoiding the past situation in which
only after acquisition finished could one know whether the
result met the standard. At the same time, the DSS warning
function effectively guarantees COG accuracy. Figure 5 clearly
shows that the COG error in the last 19 months is
significantly lower than in the first five months, and COG
accuracy improved.
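Assuming the source COG is the centroid of the pad positions (an assumption; the abstract does not define it), the 2.5-m check can be sketched as:

```python
def cog_error(pad_positions, design_cog):
    """Distance between the centroid of the vibrator pads and the designed COG."""
    n = len(pad_positions)
    cx = sum(x for x, _ in pad_positions) / n
    cy = sum(y for _, y in pad_positions) / n
    return ((cx - design_cog[0]) ** 2 + (cy - design_cog[1]) ** 2) ** 0.5

# Five pads in an in-line combination, 20 m pad-to-pad, shifted 1 m off design:
pads = [(1.0 + 20.0 * i, 0.0) for i in range(5)]
err = cog_error(pads, design_cog=(40.0, 0.0))
print(err <= 2.5)  # True: within the 2.5-m COG tolerance
```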

Figure 5: Monthly average COG precision and number of
beyond-limit statistics

Pattern Change and Secondary Acquisition Comparison


As the terrain changes, the vibrator pattern also converts
between in-line and cross-line types, and four of the five
vibrators must adjust their positions during this process,
which costs production time. The statistics show that 17
seconds per pattern change are saved after installing the DSS
system, an important result for field production. When a
seismic record fails, the vibrators must move back and shoot
again, which costs production time, especially in undulating
sand dunes; the DSS independent navigation function clearly
reduces this time.

Figure 4: DSG Status Warning



Figure 6: Pattern change and secondary acquisition time cost
comparison

Figure 8: Cost comparison

Operation Efficiency Comparison
Operation efficiency was calculated from effective operation
time and production. Comparing two years of statistical data,
around 3-5 more shots per hour are achieved after adoption of
the DSS system. As Figure 7 shows, overall efficiency exhibits
an upward trend.

Figure 7: Hourly production statistics

Production Cost and Safety Comparison
With the help of DSS, the vibrator operators can navigate
independently and the recorders can cross-check indoors. The
application of DSS helps reduce quality events, eliminates the
large number of flags previously used as navigation marks, and
realizes truly stake-less vibrator operation. DSS thus played
a definite role in safeguarding the smooth operation of the
project.

Conclusions
The successful application of the DSS system provides great
help for high-efficiency production; its main features are the
following:
1. The DSG terminal displays the target position and the
source-to-COG offset value in real time and monitors COG
qualification, thereby increasing source COG accuracy.
2. The DSG terminal navigates the vibrator in real time. An
alarm sounds when the vibrator approaches the designed
position, which helps find the target position quickly and
saves considerable production time.
3. Production cost is effectively saved and field working
hazards are reduced.

Acknowledgments
The authors would like to thank BGP, CNPC for the permission
to publish this work.






Linear low frequency sweep vs Customized low frequency sweep


Zhouhong Wei*, INOVA Geophysical
Summary
The technique of blended acquisition with dispersed source
arrays becomes an efficient simultaneous shooting method
recording broadband data in land Vibroseis acquisition.
Each vibrator or each group of vibrators is specified to
cover certain narrow frequency bandwidth. For the low
frequency band (< 10 Hz), the vibrator is often driven using
a customized low frequency sweep to prevent the vibrator
from reaching its physical limits. Consequently, this
customized low frequency has to be designed with small
amplitudes and a long sweep length. The long sweep length
potentially downgrades the productivity using this blended
acquisition technique. This paper presents a new low frequency
limits control technology, embedded in the vibrator control
firmware, that enables the vibrator to produce more force
using a linear low frequency sweep than using a customized low
frequency sweep.
Experimental tests show that at least a 5-dB improvement
on the ground force has been seen with the low frequency
limit control at the low frequency range (< 10 Hz).
Introduction
Using Vibroseis techniques to record broadband seismic
data has been routinely performed. In order to efficiently
record broadband data, the technique such as blended
acquisition with dispersed source arrays (Berkhout, 2008
and 2012; Berkhout et al., 2009) becomes very necessary.
In Vibroseis acquisition, blended acquisition with dispersed
source arrays means that a blended source array consists of
different vibrators with different central frequencies
(Berkhout, 2012). In practice, a blended source array often
consists of the same vibrator model driven with different
sweep bandwidths. For example, Tsingas et
al. (2015) presented a case study using the method of
blended acquisition with dispersed vibrators for acquiring
optimum broadband data. In their experiment, three
conventional 80,000-lbs vibrators (DX-80) were dedicated
to three frequency bands, namely, 1.5 to 8 Hz, 6.5 to 54 Hz
and 50 to 87 Hz. For the bandwidth from 1.5 Hz to 8 Hz, a
customized low frequency sweep was built to run the
vibrator. However, this customized low frequency sweep
resulted in a long sweep length (18 seconds).

The amplitudes of such a customized sweep have to be designed
with small values; to achieve the required force-energy, it
therefore ends up with a long sweep length. Linear sweeps are
frequently used to run vibrators. However, when extending the
frequency bandwidth toward low frequencies, a linear sweep
needs either a very long front taper, a small force output
level, or both, to avoid the vibrator reaching its physical
limits. Eventually, an even lower ground force is produced.
In order to use linear sweeps to achieve a maximal ground
force output at low frequencies, a low frequency limits
control technique is developed (Phillips et al., 2013).
Simply speaking, the low frequency limits control aims to
protect the vibrator from hitting its mechanical stops when a
linear low frequency sweep is used to drive it. The control is
a function of frequency and uses the theoretical maximum force
that the vibrator can produce as a reference: if the force
demanded by the linear sweep is greater than the maximum force
the vibrator allows, the control drives the vibrator at that
maximum force instead. With this low frequency limits control,
a linear sweep from 1 Hz to 10 Hz in 15 s with a 0.5-s taper
at a 70% force level can be used to run the vibrator without
reaching any physical limits.
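The clipping logic described above can be sketched as follows, assuming a simple stroke-limited force ceiling that grows as frequency squared below a corner frequency; the corner frequency and force numbers are illustrative assumptions, not the Vib Pro HD specification.

```python
def max_force_lbf(freq_hz, full_force_lbf=80000.0, corner_hz=6.0):
    """Theoretical force ceiling vs. frequency for a stroke-limited vibrator.

    Below the corner frequency the reaction-mass stroke limits the force,
    which then scales as frequency squared; above it the full hold-down /
    hydraulic limit applies.  The numbers here are illustrative only.
    """
    if freq_hz >= corner_hz:
        return full_force_lbf
    return full_force_lbf * (freq_hz / corner_hz) ** 2

def commanded_force(freq_hz, target_lbf):
    """Low frequency limits control: clip the linear-sweep target force
    to the frequency-dependent ceiling so the vibrator never hits a stop."""
    return min(target_lbf, max_force_lbf(freq_hz))

target = 0.7 * 80000.0                 # 70% drive level, i.e. 56,000 lbf
print(commanded_force(1.0, target))    # stroke-limited, far below the target
print(commanded_force(10.0, target))   # full 70% drive is allowed here
```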
To demonstrate the advantages of using linear low
frequency sweep with the low frequency limits control, two
standing field tests were carried out. The first field test
was in a Middle-East desert environment on a prototype low
frequency vibrator (80,000 lbs). In this test, the vibrator
was run with two different bandwidth sweeps. One was a
linear sweep from 1 Hz to 21 Hz in 10s with a 0.5-s taper at
70% force level. The other was a customized low frequency
sweep from 1 Hz to 86 Hz in 12s at 70% force level.
The second test was in South Texas on an AHV-IV model
364 vibrator (60,000 lbs). In this test, the identical sweep
frequency band from 1 Hz to 10 Hz was carried out using
two different sweeps. One was a linear sweep and the other
was a customized low frequency sweep.
In both tests, a vibrator controller named Vib Pro HD was
used to control the vibrator. The low frequency limits
control is embedded as a part of controller firmware.
Test in Middle-East desert environment

The customized low frequency sweep is designed based on the
vibrator's fundamental force profile at low frequencies. It is
usually very difficult to obtain an optimal fundamental force
profile at low frequencies from a given vibrator model, and
for vibrator safety the amplitudes of the customized low
frequency sweep have to be kept small.


Blended acquisition with dispersed source arrays (Berkhout,
2012) is a well-suited simultaneous shooting method for
Vibroseis acquisition in the Middle-East desert environment.
The prototype low frequency vibrator is designed to output
ground force over a broad frequency band: downhole
measurements at 7500 ft have proven that a broadband of 8
octaves (0.5 Hz to 130 Hz) can be achieved with it (Wei, 2015a
and 2015b). It can be configured as a dedicated source unit
covering the low frequency band (1 Hz to 10 Hz), or serve as a
source unit covering either a middle frequency band (10 Hz to
50 Hz) or a high frequency band (50 Hz to 130 Hz).
In this test, the prototype low frequency vibrator is located
on sandy surface ground, a mixed layer of loose sand and
gravel that is very common in Middle-East areas. Figure 1
shows an example of the sandy ground where the prototype
vibrator shakes: a rectangular imprint of the vibrator
baseplate is left on the sand after vibration, and the welding
characters on the bottom of the baseplate leave clear marks on
the ground surface. This indicates that the baseplate sinks
during sweeping and much plastic deformation occurs.

Figure 1. The baseplate print on sandy surface ground.


Figure 2 displays the plot of frequency vs time of two
different sweeps. The linear sweep from 1 Hz to 21 Hz in
10s with 0.5-s tapers is shown in the top graph (Figure 2a)
and the customized low frequency sweep from 1 Hz to 86
Hz in 12s is shown in the bottom graph (Figure 2b).
Because the low frequency limits control is implemented,
the 70% force level (56,000 lbs) is set for both sweeps to
attempt to output the maximal vibrator ground force at low
frequencies (< 10 Hz). In Figure 2, red broken rectangles
are added to highlight the frequency bandwidth from 1 Hz
to 10 Hz. It can be seen that the linear sweep uses 4.5
seconds to cover the frequency band from 1 Hz to 10 Hz
while the customized low frequency sweep uses 7.5
seconds to cover it. The vibrator thus dwells 3 seconds longer
in this band with the customized low frequency sweep than with
the linear low frequency sweep.
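The 4.5-second figure follows directly from the constant sweep rate of a linear sweep; a small sketch (with a hypothetical helper name) reproduces it:

```python
def time_in_band(f_start, f_end, length_s, band_lo, band_hi):
    """Time a linear sweep spends between band_lo and band_hi (Hz)."""
    rate = (f_end - f_start) / length_s          # constant sweep rate, Hz/s
    t_lo = (max(band_lo, f_start) - f_start) / rate
    t_hi = (min(band_hi, f_end) - f_start) / rate
    return t_hi - t_lo

# Linear sweep 1-21 Hz in 10 s: time spent covering 1-10 Hz
print(time_in_band(1.0, 21.0, 10.0, 1.0, 10.0))  # 4.5 s, matching the text
```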
Figure 3 compares the amplitude spectra of the ground forces
produced by the linear low frequency sweep and the customized
low frequency sweep in the frequency range from 1 Hz to 10 Hz.
The spectra of the two sweeps remain close below 1.5 Hz. From
1.5 Hz to 10 Hz, the linear low frequency sweep produces much
more force than the customized low frequency sweep; a
difference of approximately 5 dB can be observed. This is
because the amplitude of the linear low frequency sweep is
much bigger than that of the customized sweep: the low
frequency limits control enables the vibrator to shake much
harder with the linear low frequency sweep without reaching
any physical limits.

Figure 2. Frequency-time plots: (a) a linear sweep from 1 Hz
to 21 Hz in 10 s; (b) a customized low frequency sweep in 12 s.

Figure 3. Amplitude spectra of the ground forces of the
prototype low frequency vibrator with the linear low frequency
sweep (red curve) and the customized low frequency sweep (blue
curve) in the frequency band from 1 Hz to 10 Hz.

Test in South Texas

In the first test, the sweep frequency bands of the linear
sweep and the customized low frequency sweep on the prototype
low frequency vibrator were different, so a second field test
was performed. In this test, a 60,000-lbs commercial vibrator
(an AHV-IV model 364) is used and driven with two different
types of sweeps. One is a linear low frequency sweep from 1 Hz
to 10 Hz in 10 s with a 0.5-s taper at 70% force level. The
other is a customized low frequency sweep from 1 Hz to 10 Hz
with sweep lengths of 20 s, 15 s, and 10 s, respectively.
Again, a Vib Pro HD controller with the embedded low frequency
limits control is used to control the vibrator.
The vibrator is placed on the gravel surface ground shown in
Figure 4. The ground surface is hard and relatively flat, and
the vibrator baseplate couples very well with it. No clear
baseplate mark is left on the ground after vibration,
indicating that no plastic deformation occurs during sweeping.

Figure 5. Vibrator ground force traces: (a) ground force
produced using the linear low frequency sweep from 1 Hz to 10
Hz in 10 s; (b) ground force produced using the customized low
frequency sweep in 20 s; (c) in 15 s; (d) in 10 s.
Figure 4. The AHV-IV model 364 vibrator baseplate print on the
gravel ground.

Figure 5 shows the ground force wiggle traces produced by the
four sweeps. Figure 5a is the ground force produced using the
linear low frequency sweep from 1 Hz to 10 Hz in 10 s; Figures
5b, 5c, and 5d are produced using the customized low frequency
sweep from 1 Hz to 10 Hz with sweep lengths of 20 s, 15 s, and
10 s, respectively.

Figure 6 displays the comparisons of fundamental forces (top
graph) and harmonic distortion (bottom graph) produced by
these sweeps. The red trace is produced by the linear low
frequency sweep; the magenta, green, and blue curves are
produced by the customized low frequency sweep with respective
sweep lengths of 20 s, 15 s, and 10 s.
It is clearly seen that the linear low frequency sweep outputs
much bigger amplitudes than all the customized low frequency
sweeps. This is because the low frequency limits control
allows the vibrator to shake as hard as it can with the linear
sweep without reaching any vibrator physical limits. Moreover,
the vibrator stays stable during sweeping.


The top graph in Figure 6 compares the fundamental forces
produced by all sweeps; the fundamental force is the ground
force signal with all harmonics removed. It clearly
demonstrates that the vibrator generates much more fundamental
force using the linear low frequency sweep than using the
customized low frequency sweep at any sweep length. On
average, about 20,000 lbs more force is seen with the linear
low frequency sweep.
The bottom graph in Figure 6 compares the total harmonic
distortion of all sweeps. The linear sweep obviously produces
the least harmonic distortion, with a peak below 30%. The
customized low frequency sweep produces relatively higher
harmonic distortion, especially the 20-s version, whose peak
reaches 45%. It appears that the longer the sweep, the higher
the harmonic distortion.

Figure 6. Comparison of fundamental forces (top graph) and
comparison of harmonic distortion (bottom graph).
Figure 7 illustrates the comparisons of amplitude and phase
spectra of the vibrator ground forces produced by these
sweeps. The top graph shows the amplitude spectra and the
bottom graph shows the phase spectra.
The top graph clearly shows that the vibrator generates more
force-energy using the linear low frequency sweep than using
any length of the customized low frequency sweep.
Approximately 10 dB more force-energy is seen in the bandwidth
between 1 Hz and 10 Hz with the linear sweep, even though its
length is only 10 s. Again, this is because the low frequency
limits control allows the vibrator to output high force
amplitudes while remaining within its physical limits.
The bottom graph shows the ground force phase spectra
produced by all sweeps. The phase spectra almost overlap
each other. All peak phases happen at the beginning of the
sweep (1 Hz). All peak phases except that of the 10-s
customized low frequency sweep stay at approximately 15
degrees; the peak phase of the 10-s customized sweep is 18


degrees. At 2 Hz all phases are close to 10 degrees.


Approximately at 4.2 Hz, all phases go into the target phase
range. Then, the phases maintain very well in this target
range. There is no significant phase variation in the entire
phase spectra. In general, the vibrator controller controls
the phase spectra very well.

Figure 7. Comparisons of ground force amplitude spectra (top
graph) and phase spectra (bottom graph).
Conclusions
In order to use the technique of blended acquisition with
dispersed source arrays efficiently, it requires the vibrator
to produce the maximal force-energy in low frequency
band (< 10 Hz). Customized low frequency sweeps are usually
designed with low sweep amplitudes, resulting in a long sweep
length.
The low frequency limits control enables the vibrator to
shake high force amplitudes using conventional linear low
frequency sweeps without reaching any vibrator physical
limits. Experimental tests show that more force-energy in
the frequency band from 1 Hz to 10 Hz can be achieved
using a linear low frequency sweep than the customized
low frequency sweep. A 5-dB improvement is seen with the
prototype low frequency vibrator and a 10-dB improvement
is seen on an AHV-IV model 364 vibrator.


EDITED REFERENCES
Note: This reference list is a copyedited version of the reference list submitted by the author. Reference lists for the 2016
SEG Technical Program Expanded Abstracts have been copyedited so that references provided with the online
metadata for each paper will achieve a high degree of linking to cited sources that appear on the Web.
REFERENCES

Berkhout, A. J., 2008, Changing the mindset in seismic data acquisition: The Leading Edge, 27, 924–938,
http://dx.doi.org/10.1190/1.2954035.
Berkhout, A. J., G. Blacquière, and D. J. Verschuur, 2009, The concept of double blending: Combining
incoherent shooting with incoherent sensing: Geophysics, 74, no. 4, A59–A62,
http://dx.doi.org/10.1190/1.3141895.
Berkhout, A. J., 2012, Blended acquisition with dispersed source arrays: Geophysics, 77, no. 4,
A19–A23, http://dx.doi.org/10.1190/geo2011-0480.1.
Tsingas, C., Y. Kim, and J. Yoo, 2015, Broadband acquisition, deblending and imaging employing
dispersed source arrays: EAGE Workshop on Broadband Seismic - A Broader View for the
Middle East, EAGE, BS27.
Phillips, T., Z. Wei, and R. Chen, 2013, Method of seismic vibratory limits control at low frequencies: US
Patent application, US 2013/0201789.
Wei, Z., 2015a, A new generation low frequency seismic vibrator: 85th Annual International Meeting,
SEG, Expanded Abstracts, 211–215.
Wei, Z., 2015b, Extending the vibroseis acquisition bandwidth with a newly designed low frequency
seismic vibrator: EAGE Workshop on Broadband Seismic - A Broader View for the Middle East,
EAGE, BS06.


Microseismic surface patch array: Modeling and velocity estimation using ambient noise
Tianxia Jia1*, Carl Regone2, Jianhua Yu2, Abhijit Gangopadhyay1, Robert Pool1, Colin Melvin1, Scott Michell1
1: BP; 2: Formerly with BP
Summary
Microseismic events have been widely used for the
monitoring of hydraulic fracturing (Duncan & Eisner,
2010). Common methods of microseismic monitoring use
downhole or buried arrays, both of which are expensive and
can be operationally difficult to deploy. Surface arrays,
therefore, have the potential to mitigate such issues. In this
paper, we used finite difference modeling based on the SEAM
II model to test a surface patch array's effectiveness in
locating and imaging microseismic events. We detected the
modeled microseismic events in both noise-free and noisy data
using different layout geometries of the patches. The results
show that surface patch arrays can generate reasonably good
locations of the modeled microseismic events. Based on the
modeling results, BP acquired its first microseismic dataset
for fracture monitoring using a surface patch array, which
recorded the microseismic events reasonably well. Beyond the
traditional event locations, we also turned the ambient noise
recorded by the surface patch array into signal using the
Spatial Auto-Coherency (SPAC) technique (Jia, 2011). We noted
that the Rayleigh wave phase velocity calculated from ambient
noise correlates very well with the shallow subsurface
geology.

Introduction
The industry widely uses microseismic events from
hydraulic fracture stimulations to understand the
effectiveness of stimulation (Duncan & Eisner, 2010).
Traditionally, downhole and buried arrays are the two popular
ways to record microseismic data. Downhole arrays (Maxwell et
al., 2010), in which geophones are placed in a neighboring
borehole, are the least noisy, but are limited by the
locations of observation wells, and often by the distance from
the observation well(s) to the treatment well. A shallow,
near-surface buried array can mitigate some of these problems:
although noisier than a downhole array, it provides better
observations because of its wide azimuthal coverage. However,
buried arrays are quite expensive, require a longer-term
investment, and take longer to deploy.
A surface array is another alternative: it can have azimuthal
coverage similar to a buried array's and maintains the
observations without the long-term capital investment.
Although the data may be noisier than with either downhole or
buried arrays, surface arrays are quick to deploy. Surface
arrays come in different layout geometries. The star pattern,
a set of linearly grouped geophone lines centered on the
wellhead, is the most popular layout; it is effective if one
knows the surface noise direction and wavelength. Another
layout geometry uses patch arrays, which can be beneficial for
attenuating noise from multiple directions. We use finite
difference modeling based on the SEAM II model (Oristaglio,
2012) to test different geometries of a surface patch array
and locate the modeled microseismic events in the subsurface.
We also add different surface noise. The results show that
we could get reasonably good locations of the microseismic
events using a surface patch array. Following the modeling
results, BP acquired their first microseismic dataset using
surface patch arrays. We used the microseismic events to
understand the effectiveness of the stimulation in different
stages of two horizontal wells. Besides the events, we also
analyzed the ambient noise recorded from the surface patch
array and estimated the Rayleigh wave phase velocity using
the Spatial Auto-Coherency (SPAC) technique.
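The SPAC idea can be illustrated with a toy inversion: for a single surface-wave mode, the azimuthally averaged coherency between stations separated by distance r is J0(2*pi*f*r/c(f)), so the phase velocity c(f) can be recovered by fitting. This is a self-contained sketch with a hand-rolled Bessel function, not the processing flow used in the paper.

```python
import math

def j0(x, n=2000):
    """Bessel J0 via its integral representation (adequate accuracy here)."""
    h = math.pi / n
    return sum(math.cos(x * math.sin(k * h)) for k in range(n)) * h / math.pi

def spac_velocity(coherency, freq_hz, r_m, c_grid):
    """Grid-search the phase velocity c whose J0(2*pi*f*r/c) best fits
    the observed azimuthally averaged coherency at one frequency."""
    return min(c_grid, key=lambda c: (coherency - j0(2 * math.pi * freq_hz * r_m / c)) ** 2)

# Synthetic check: a 2 Hz Rayleigh wave at 500 m/s, station separation 100 m
true_c = 500.0
coh = j0(2 * math.pi * 2.0 * 100.0 / true_c)
grid = [300.0 + 10.0 * i for i in range(41)]  # candidate velocities, 300-700 m/s
print(spac_velocity(coh, 2.0, 100.0, grid))  # 500.0
```

Repeating the fit frequency by frequency yields the dispersion curve that the paper correlates with shallow geology.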

Modeling and Acquisition Design


We model three events in the subsurface at a depth of 1,825 m
over a total lateral extent of 1,524 m, recorded with
receivers spaced 6.25 m x 6.25 m over a 5 km x 5 km area. We
then decimate the receivers into different patches and
geometries and use those patches to image the three events
with the time-reverse imaging technique. Table 1 describes the
different patch geometries.
Patch 1: 125 m x 125 m, 6 x 6 stations, station interval 25 m

Patch geometry                          Total stations
10 x 10 patches, patch spacing 375 m    3600
7 x 7 patches, patch spacing 625 m      1764
5 x 5 patches, patch spacing 875 m      900
3 x 3 patches, patch spacing 1875 m     324
Circular-shape patches                  1400
Star-shape patches                      1224

Patch 2: 87.5 m x 87.5 m, 8 x 8 stations, station interval 12.5 m

Patch geometry                          Total stations
10 x 10 patches, patch spacing 513 m    6400
7 x 7 patches, patch spacing 815 m      3136
5 x 5 patches, patch spacing 1120 m     1600
3 x 3 patches, patch spacing 2325 m     576

Patch 3: 250 m x 250 m, 6 x 6 stations, station interval 50 m

Patch geometry                          Total stations
5 x 5 patches, patch spacing 750 m      900

Table 1: Different patch geometries
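The station totals in Table 1 are simply (patches per side)^2 x (stations per side)^2 for the square layouts, which a quick check reproduces (the 7 x 7 case of 8 x 8-station patches works out to 3136):

```python
def total_stations(patches_per_side, stations_per_side):
    """Total station count for a square grid of square patches."""
    return (patches_per_side ** 2) * (stations_per_side ** 2)

# Patch 1: 6 x 6 stations per patch
print([total_stations(n, 6) for n in (10, 7, 5, 3)])  # [3600, 1764, 900, 324]
# Patch 2: 8 x 8 stations per patch
print([total_stations(n, 8) for n in (10, 7, 5, 3)])  # [6400, 3136, 1600, 576]
```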


Figure 1a is an example map view of the single 125 m x 125 m
patch layout with 6 x 6 stations. Figure 1b is a map view of
the layout of 7 x 7 such patches, with patch spacing of 625 m.
The three red stars in the middle represent the locations of
the three microseismic events.

Figure 3: a) The five noise sources we modeled: a dump truck,
electric motor, generator, transfer truck and truck start. b)
Locations of the five noise sources at the surface with respect to
the three red stars representing the three microseismic events.

Figure 1: a) Layout of a single patch, 125 m x 125 m in size, with
6 x 6 stations 25 m apart. b) Layout of all 7 x 7 patches. The red
stars are the three modeled microseismic events at a depth of 1,825
m, 762 m apart, with a total lateral extent of 1,524 m.

We then test the different patch geometries shown in
Table 1 to locate the events with the added noise. Figure 4
is an example of the 3D view of the time-reverse imaging
of the three events we modeled with added noise, using the
surface patch array in Figure 1.

Figure 2 is the 3D view of the time-reverse imaging of the
three modeled events using the surface patch array in
Figure 1. Without noise, the surface patch array focuses the
three events correctly, and we found that it located the
events more accurately laterally than vertically.
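Full time-reverse imaging back-propagates the recorded wavefield through a velocity model, but the focusing idea can be sketched with a simpler delay-and-sum (diffraction-stack) locator: traces are sampled at the traveltime predicted for each candidate gridpoint and stacked, and only the true event location stacks coherently. The constant velocity, geometry and wavelet below are illustrative assumptions, not the survey parameters.

```python
import numpy as np

# Delay-and-sum source location: a simplified stand-in for time-reverse
# imaging. For each candidate gridpoint, traces are sampled at the
# predicted traveltime and stacked; the true event location stacks
# coherently and produces the image maximum.

v = 2000.0                    # constant P velocity, m/s (assumption)
dt = 0.002                    # sample interval, s
nt = 1000
t = np.arange(nt) * dt

rx = np.linspace(-2000.0, 2000.0, 41)   # surface receiver x-positions, m
true_src = (300.0, 1500.0)              # true event (x, z), m (assumption)

def ricker(t0, f0=15.0):
    """Ricker wavelet centered at time t0."""
    a = np.pi * f0 * (t - t0)
    return (1.0 - 2.0 * a**2) * np.exp(-(a**2))

# forward model: each receiver records the wavelet delayed by traveltime
dists = np.hypot(rx - true_src[0], true_src[1])
data = np.stack([ricker(d / v) for d in dists])   # (n_receivers, nt)

# imaging: stack trace amplitudes at the predicted traveltimes
xs = np.linspace(-1000.0, 1000.0, 41)
zs = np.linspace(500.0, 2500.0, 41)
image = np.zeros((zs.size, xs.size))
for iz, z in enumerate(zs):
    for ix, x in enumerate(xs):
        tt = np.hypot(rx - x, z) / v
        samp = np.clip((tt / dt).astype(int), 0, nt - 1)
        image[iz, ix] = data[np.arange(rx.size), samp].sum()

best = np.unravel_index(image.argmax(), image.shape)
print(xs[best[1]], zs[best[0]])   # close to the true event at (300, 1500)
```

The lateral-versus-vertical accuracy difference noted above also shows up in such sketches: surface-only receivers constrain x better than z.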

Figure 4: 3D view of the imaging of the three events with added
noise using the surface patch array in Figure 1.

Figure 2: 3D view of the time-reverse imaging of the three
microseismic events using the surface patch array geometry in
Figure 1.

To further mimic the real world, we modeled different
kinds of noise sources, including a truck start, dump truck,
generator, electric motor and transfer truck (Figure 3a, left
to right). We then placed them at the surface as noise
sources (Figure 3b).

2016 SEG
SEG International Exposition and 86th Annual Meeting

We observed that using both noise-free and noisy data, the
surface patch array could generate reasonably good
microseismic event locations. Imaging is more sensitive to
patch spacing: for the same number of patches, the closer
the spacing between patches, the better the results. Closer
station spacing within patches is also preferred. The
modeling favors either the 10 x 10 or the 7 x 7 patch layout.

Data Acquisition and Results


Microseismic monitoring was required during hydraulic
stimulation of two horizontal wells conducted by BP.
Different microseismic data acquisition methods were
considered in advance. A surface patch array was chosen
because it was easier to access and permit, and relatively
cheaper and faster to deploy.


Figure 5a is the layout of a single patch, which has 49
vertical geophones. The center lines through the patch,
shown in yellow, are 3C geophones. Figure 5b is the layout
of all the patches, each spaced between 2,000 and 5,000 ft
apart. Two horizontal wells, with target depths of ~10,000 ft
and laterals of ~5,000 ft, lie in the middle of the 32 patches.

Figure 7 is an example of 300 seconds of ambient noise
from one patch, recorded before any perforation shots or
hydraulic fracturing.

Figure 5: a) Layout of a single patch: 7 x 7 stations, with 49
vertical geophones (red boxes) and 13 3C geophones (yellow
circles) at 16 ft station spacing. b) Layout of the whole experiment.
Two horizontal wells at 10,000 ft depth with 5,000 ft lateral offset
are located in the middle, and 32 patch arrays are spread out with
patch spacing ranging from 2,000 to 5,000 ft.

The surface patch array successfully located the perforation
shots and the microseismic events. Figures 6a and 6b show
the imaged microseismic events in map view for each well.
Figure 6c is a depth view of the same events. In all figures,
the colors represent the various stages of stimulation, and
the dot sizes indicate the magnitudes of the events.

Figure 6: a) Map view of imaged microseismic events for one well.
b) Map view of imaged microseismic events for the other well. c)
3D view of imaged microseismic events; different colors of the
dots represent different stages of stimulation, and different sizes of
the dots indicate the magnitudes of the events.

Velocity Analysis Using Ambient Noise


Beyond the microseismic events that were recorded, we
considered whether any other information could be
extracted from the ~2,000 stations on the ground. We use
the Spatial Auto-Coherency (SPAC) technique (Jia, 2011)
to estimate the Rayleigh wave phase velocity from ambient
noise.


Figure 7: 300-second recording of ambient noise for the 49
stations in one single patch.

By calculating an azimuthal average of the station-pair
correlation, or coherency (Aki, 1957), a Rayleigh wave
phase velocity dispersion curve was estimated by fitting the
coherency with a Bessel function. For a detailed derivation
and the steps of the SPAC method, see Chapter 3 of Jia's
doctoral thesis (Jia, 2011).
Figure 8 is the dispersion curve calculated from one
individual patch.
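The fitting step can be sketched as follows. Under SPAC, the azimuthally averaged coherency of stations a distance r apart follows rho(f) = J0(2*pi*f*r / c(f)), so the phase velocity at each frequency is found by inverting the Bessel relation. The separation, frequency band and dispersion trend below are synthetic assumptions used only to verify the round trip:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import j0

# SPAC relation: azimuthally averaged coherency between stations a distance
# r apart is rho(f) = J0(2*pi*f*r / c(f)). For each frequency we invert the
# Bessel relation for phase velocity c on the first branch of J0.

def spac_phase_velocity(freqs, coherency, r):
    c_est = []
    for f, rho in zip(freqs, coherency):
        # solve J0(x) = rho for x in (0, first zero of J0 ~= 2.405)
        x = brentq(lambda x: j0(x) - rho, 1e-6, 2.404)
        c_est.append(2.0 * np.pi * f * r / x)
    return np.array(c_est)

# round-trip check: forward-model coherency from a known dispersion curve
r = 30.0                               # station separation, m (assumed)
freqs = np.linspace(2.0, 10.0, 9)      # Hz
c_true = 1200.0 - 40.0 * freqs         # simple dispersive trend (assumed)
rho = j0(2.0 * np.pi * freqs * r / c_true)

c_est = spac_phase_velocity(freqs, rho, r)
print(np.max(np.abs(c_est - c_true)) < 1e-6)  # the inversion recovers c(f)
```

In practice the picked coherencies are noisy, so the dispersion curve is fitted rather than inverted point by point, as in Figure 8.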

Figure 8: Dispersion curve calculated from one individual patch.
Dots are the phase velocities picked for each frequency using the
SPAC method; the curve is the fitted dispersion curve.

We calculate the dispersion curve using the SPAC
technique for all the patches, pick a 7 Hz frequency, and
use kriging to spatially interpolate a phase velocity map at
that frequency (Figure 9a). We then compare the phase
velocity map to the elevation map of the area (Figure 9b).
The phase velocity correlates very well with elevation: low
phase velocity corresponds to high elevation and slower-velocity
sediments, while high phase velocity corresponds to low
elevation, because the low-frequency surface waves penetrate
deeper, implying higher-velocity sediments.
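The kriging interpolation can be sketched with a minimal ordinary-kriging routine. The exponential variogram, its parameters, and the patch locations and velocities below are illustrative assumptions, not values fitted to this survey:

```python
import numpy as np

# Minimal ordinary kriging with a fixed exponential variogram. Weights are
# found by solving the kriging system with a Lagrange multiplier so they
# sum to one; the estimate is the weighted sum of observed velocities.

def variogram(h, sill=1.0, corr_range=2000.0):
    return sill * (1.0 - np.exp(-h / corr_range))

def krige(xy_obs, v_obs, xy_new):
    n = len(xy_obs)
    d = np.linalg.norm(xy_obs[:, None, :] - xy_obs[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = variogram(d)       # gamma(0) = 0 on the diagonal
    A[n, n] = 0.0                  # Lagrange-multiplier corner
    est = np.empty(len(xy_new))
    for k, p in enumerate(xy_new):
        b = np.ones(n + 1)
        b[:n] = variogram(np.linalg.norm(xy_obs - p, axis=-1))
        w = np.linalg.solve(A, b)
        est[k] = w[:n] @ v_obs
    return est

# hypothetical patch locations (m) and 7 Hz phase velocities (m/s)
xy = np.array([[0.0, 0.0], [3000.0, 0.0], [0.0, 3000.0], [3000.0, 3000.0]])
v7 = np.array([950.0, 1020.0, 880.0, 1000.0])
print(krige(xy, v7, np.array([[1500.0, 1500.0]])))  # [962.5]: by symmetry,
# the equidistant center gets equal weights of 0.25 on each observation
```

Ordinary kriging is an exact interpolator: evaluating at an observation point returns the observed value, which is why the map honors each patch pick.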

Acknowledgments
We thank BP for the permission to publish this work,
Dawson Geophysical for the acquisition and Schlumberger
for the data processing.

Figure 9: a) Phase velocity map generated by analyzing the
ambient noise recorded by the surface patch array using the SPAC
technique. b) Elevation map of the area.

Conclusions
Using finite difference modeling, we demonstrate that
surface patch arrays can be used to image and locate
microseismic events in the subsurface. The main advantage
of the surface patch array is the flexibility of location of
each individual array. We were able to move the patches
around their planned locations in response to permitting and
land-access constraints. This allowed us to significantly
reduce the cycle time and costs associated with obtaining
permits and potential damage claims, and to gain access to
areas that are not easily permitted, thus dramatically
improving the overall efficiency of the acquisition. We
successfully imaged
microseismic events using surface patches to monitor the
hydraulic fracturing of two wells. We were also able to
calculate the Rayleigh wave phase velocity from ambient
noise recorded by the surface patch array using the SPAC
technique. The phase velocity map correlates well with the
shallow subsurface geology. The Rayleigh wave phase
velocity can be further inverted for shear-wave velocity,
which could be used for velocity model building. Ground-roll
attenuation, a critical step in land data processing, could
benefit greatly from accurate estimation of the Rayleigh
wave phase velocity. The SPAC technique can also be
applied to traditional active-source 3D land seismic
acquisition, where a great amount of ambient noise data
recorded by the geophones is discarded or treated as noise
and therefore attenuated during pre-processing.


EDITED REFERENCES
Note: This reference list is a copyedited version of the reference list submitted by the author. Reference lists for the 2016
SEG Technical Program Expanded Abstracts have been copyedited so that references provided with the online
metadata for each paper will achieve a high degree of linking to cited sources that appear on the Web.
REFERENCES

Aki, K., 1957, Space and time spectra of stationary stochastic waves, with special reference to
microtremors: Bulletin of the Earthquake Research Institute, 35, 415–456.
Duncan, P. M., and L. Eisner, 2010, Reservoir characterization using surface microseismic monitoring:
Geophysics, 75, no. 5, 75A139–75A146, http://dx.doi.org/10.1190/1.3467760.
Jia, T., 2011, Advanced analysis of complex seismic waveforms to characterize the subsurface earth
structure: Ph.D. thesis, Columbia University.
Maxwell, S. C., J. Rutledge, R. Jones, and M. Fehler, 2010, Petroleum reservoir characterization using
downhole microseismic monitoring: Geophysics, 75, no. 5, 75A129–75A137,
http://dx.doi.org/10.1190/1.3477966.
Oristaglio, M., 2012, SEAM Phase II–Land seismic challenges: The Leading Edge, 31, 264–266,
http://dx.doi.org/10.1190/1.3694893.


Comparing Seismic Data Recorded with Three-Geophone and Six-Geophone Groups


Kevin L. Woller*, Pioneer Natural Resources
Summary
An experiment conducted in Reagan County, Texas,
compares pre-stack and post-stack data recorded on
geophone groups that had three and six geophones per
string. I found that the reflection signal of data recorded
with six geophones was marginally better than that
recorded on three geophones on pre-stack data, but showed
a more substantial improvement of reflection signal to
random noise ratio on stacked data. I further measured data
quality in an interpretive sense by autopicking horizons on
3D volumes for each data type.
Introduction
Not much attention has been paid in recently published
papers to field measurement options and techniques for
land 3D recording. This is largely because land recording
techniques have converged on orthogonal source and
receiver lines, with point and line spacings as small as the
project budget allows. Other variables are the number of
vibrators and the sweep frequencies, but these too have
become less variable as broadband, multi-vibrator fleet
recording has become the norm in most areas.
A search for recent articles on geophones, noise reduction
and arrays yielded only a few, such as Beresford and
Johnston's (2007) work on shot arrays for 2D lines and
Watts et al.'s (2005) evaluation of 3C sensor coupling.
There are many more studies addressing the use of
geophone arrays for noise reduction, but they usually focus
on reducing organized noise such as ground roll and are
usually applied to 2D recording, where the sources and
receivers are in line with each other.
In most areas of North America the number of geophones
per group has stabilized at six. Some contractors have used
and offered three geophones, but the cost savings have been
small and the signal advantages less certain. Conventional
thinking about the number of geophones and the
improvement in signal-to-noise ratio may not apply as well
to modern 3D. The geophones in a group are often laid out
close together, more like a point receiver. This closer layout
may make larger geophone groups less advantageous
relative to smaller ones.
In this analysis I use comparative quantitative measures to
evaluate whether 3D data recorded on six-geophone groups
is superior to that recorded on three-geophone groups. I
compare the data pre-stack and post-stack and also compare
statistics from auto-picking horizons in 3D stacked
volumes.

2016 SEG
SEG International Exposition and 86th Annual Meeting

Theory

Signal theory states that the improvement in signal-to-noise
ratio is a function of the square root of the number n of
geophones whose signals are being summed (signal/noise
improvement = f(n^(1/2))) (Sheriff, 2002). Traditionally, in
areas with a lot of ambient or environmental noise, as many
as 48 geophones were used on 2D seismic lines. With the
advent of higher-channel-count 3D surveys, the number of
geophones per station has dropped to six in most cases as
a happy medium for signal-to-noise improvement. A six-geophone
group should have a signal-to-noise improvement
of the square root of 6, or about 2.45, over a single
geophone. Similarly, a three-geophone string should have an
improvement of 1.73. The six-geophone group should
therefore have about a 41% signal-to-noise improvement over
the three-geophone group for random noise. However, this
improvement may not prove out when the geophones are
closer together as they often are for modern 3D surveys.
Noise affecting one geophone may also affect the other
nearby geophones negating the expected advantage of a
larger number of geophones. Further, more modern
recording using higher channel count and denser source and
receiver locations provide improved signal-to-noise from
higher fold stacking and advanced migration techniques
that may reduce the effectiveness or need for improving the
signal at the geophone group level.
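The sqrt(n) argument above is easy to verify numerically: summing n traces that share the same signal but carry uncorrelated noise scales the signal by n and the noise by sqrt(n). A Monte Carlo sketch with synthetic numbers, plus a second case showing that fully correlated noise (nearby geophones seeing the same disturbance) cancels the benefit:

```python
import numpy as np

# Empirical check of the sqrt(n) stacking rule: sum n geophone outputs that
# share the same unit signal but carry independent unit-variance noise,
# then measure the SNR of the summed trace.
rng = np.random.default_rng(0)

def stacked_snr(n_phones, n_trials=200_000):
    signal = np.ones((n_trials, n_phones))          # unit signal on each phone
    noise = rng.normal(size=(n_trials, n_phones))   # uncorrelated unit noise
    summed = (signal + noise).sum(axis=1)
    return summed.mean() / summed.std()

snr3, snr6 = stacked_snr(3), stacked_snr(6)
print(snr3, snr6, snr6 / snr3)  # ~1.73, ~2.45, ratio ~1.41 (a ~41% gain)

# if the noise is fully correlated across the group, stacking gains nothing:
shared = rng.normal(size=200_000)
summed = 6 * (1.0 + shared)     # six phones recording identical noise
print(summed.mean() / summed.std())  # ~1.0, same as a single geophone
```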
In addition, signal arriving at geophones on the wider
recording patches used in modern 3D surveys arrives at an
increased angle to any geophone array that is in line with
the receiver line. This reduces the signal-to-noise
improvement for organized noise at receiver stations with
more geophones, because the array effects are reduced.
Reflection and noise signals crossing a linear array along a
receiver line, or a circular array at a receiver point, are in
essence recorded by at most a two-element geophone array
at any azimuth other than those near the azimuth of the
receiver line. This geometrical effect may further reduce
the efficacy of having more geophones per station.
Procedure
While executing a 3D survey program in 2012, I instructed
the crew to record an experimental receiver line with three
geophones per group next to a production receiver line with
six geophones per group. The experimental receiver line
was 20 feet to the east of the production receiver line.
Geophones for both lines were laid out in approximately
10-foot-diameter circles around the surveyed station points.


Both lines recorded data from sources all around the
receiver line, resulting in raw field records, common-midpoint
(CMP) gathers and stacked volumes for each line.
All the data had the same processing flow, including the
steps shown below.
- Refraction statics
- Trace edit, auto spike edit
- Gain function, long-window AGC, trace-gap deconvolution (40 ms gap, 200 ms operator length)
- Two passes of velocity analysis and residual statics
- NMO mute, final datum statics application
Figure 1 is a map showing the fold for the layout of the
experiment. The actual receiver line is shown in blue; the
experimental receiver line is nearly identical in location at
this scale. Using this configuration, the average CMP fold
in the center of the analysis area is about 16-18.

Figure 1: Receiver line in blue and active source points for the
experiment in red.

Analysis Methods

The first level of analysis focuses on ambient data
recordings made over six different days. Comparing the
spectra of the two geophone group types, we might expect
to see smoother spectra for the groups with more
geophones. However, the spectra of the ambient noise tests
are very similar, with frequency spikes occurring in the
same spots, and the spectra for each data type are similarly
convoluted. The only difference seems to be that the
six-geophone spectra often have 2-4 dB higher power in the
5-25 Hz range.

Data analysis then examines both pre-stack and post-stack
signal and noise characteristics of the two types of
geophone layouts. I compare amplitudes and spectra for
CMP gathers and stacks. Then, frequency-wavenumber
(FK) filtering is employed to define and separate reflection
signal, random noise and organized noise. Finally, the RMS
amplitude is compared for each signal or noise type for the
three- and six-geophone group data.

Amplitude and frequency comparisons - Pre-stack

The pre-stack gather amplitudes are essentially identical for
the six-geophone and three-geophone data. However, the
frequency spectra show that the three-geophone data has
1-4 dB higher power in the higher frequency range of about
40-90 Hz. The question is whether this higher power is
extra signal or extra noise in the data.

Using FK filtering, I defined and separated NMO-corrected
shot gather data (sorted from CMP gathers) into reflection
signal and random noise, as illustrated in Figure 2. I then
measured the RMS amplitude of the signal and noise for
each data type and compared them. In this case, the
signal-to-noise measurements were essentially the same, at
around 0.50. Examples of the NMO-corrected common-source
gather data FK filtered to pass reflections are shown
in Figures 3 and 4. Note the higher power values at higher
frequencies for the three-geophone data.

Amplitude and frequency comparisons - Post-stack

Analysis for post-stack data proceeded in a similar fashion
to the pre-stack analysis. I looked at a single stacked line in
the middle of the study area and at 21 stacked lines
centered on the middle of the study area. I compared the
RMS amplitudes of three data types: reflection signal,
random noise and coherent noise. On the single stacked
line and the 21 stacked lines, the signal-to-random-noise
ratio was 17% and 19% higher, respectively, for the
six-geophone data than for the three-geophone data. This is
somewhat less than the theoretical improvement of 41% but
is in the direction the theory suggests. See Figures 5 and 6
for examples of FK-passed reflection data for several
inlines. Comparing the reflection signal to coherent noise
showed a slightly higher ratio for the six-geophone data,
but the difference may not be significant.
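The FK separation used in both comparisons can be sketched as a velocity mask in the frequency-wavenumber domain: energy above a chosen apparent-velocity cutoff is kept as "signal" and the remainder treated as noise. The random gather and the cutoff below are placeholders, not the survey data:

```python
import numpy as np

# Sketch of FK (frequency-wavenumber) separation: 2-D FFT a gather, pass
# energy whose apparent velocity |f|/|k| exceeds a cutoff, and call the
# remainder noise. The two parts always sum back to the input exactly.

def fk_split(gather, dt, dx, v_cut=1500.0):
    """Return (signal, noise); gather is (nt samples, nx traces)."""
    nt, nx = gather.shape
    F = np.fft.fft2(gather)
    f = np.abs(np.fft.fftfreq(nt, dt))[:, None]   # temporal frequency, Hz
    k = np.abs(np.fft.fftfreq(nx, dx))[None, :]   # spatial wavenumber, 1/m
    v_app = f / np.maximum(k, 1e-12)              # apparent velocity, m/s
    mask = v_app >= v_cut                         # fast (steep) events pass
    signal = np.fft.ifft2(F * mask).real          # mask is symmetric, so real
    return signal, gather - signal

rng = np.random.default_rng(0)
gather = rng.normal(size=(512, 64))               # stand-in for a real gather
sig, noi = fk_split(gather, dt=0.004, dx=25.0)
print(np.allclose(sig + noi, gather))             # True: exact decomposition
```

In the study itself the pass region was drawn as a polygon around the reflection cone (Figure 2) rather than a single velocity cutoff, but the masking principle is the same.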


There are some subtle differences between the resulting
picks and their attributes. For Horizons 1 and 2, the time
structure maps are nearly identical. The amplitude values
for the six-geophone data show somewhat wider variation
than those for the three-geophone data, perhaps resulting
from better reflection signal. The pick confidence maps and
histograms are very similar, but the six-geophone mean is
slightly higher: 0.70 versus 0.67 for the three-geophone
data (~4% improvement) for the shallower Horizon 1, and
0.69 versus 0.64 for the deeper Horizon 2 (~7% improvement).


Figure 2: NMO-corrected shot gather and FK polygon.

Figure 4: Three-geophone NMO-corrected shot gathers - reflection signal.

Figure 3: Six-geophone NMO-corrected shot gathers - reflection signal.

Horizon Autopicking Comparison

Since the six-geophone stacked data indicated a higher
signal-to-noise ratio for reflections versus random noise, I
devised tests to see whether the improved signal would
result in any differences when autopicking horizons. I
performed autopicking on the two horizons indicated in
Figure 7. For the first test, I interpreted the horizons on an
inline and a crossline in each volume of data. Then I used
the autopicking feature in the Kingdom software to extend
the horizon through the volume, constraining the picks to a
polygon where the fold and data quality were high.

Figure 5: Six-geophone stacked inlines.

As an additional test, I reran the autopick test on Horizon 1
using a few seed points in the middle of the study area
rather than seed points from an inline and a crossline. The
results were very similar to those of the experiment using a
full inline and crossline as seed points. The time structure
maps are nearly identical, and the amplitude ranges are
wider for the six-geophone data. This time, the mean pick
confidence values are lower overall. However, once again
the six-geophone data has a higher mean confidence value,
0.58 versus 0.55 for the three-geophone data, or about a 5%
improvement (Figure 9).

Figure 9: Pick confidence for Horizon 1: seed points.

Figure 6: Three-geophone stacked inlines.

Conclusions

Data recorded on groups with six geophones in the
southern Midland Basin, Texas, have a higher
reflection-signal-to-random-noise ratio than groups with three
geophones, as measured on FK-filtered stacked data and by
autopicking confidence values on stacked data volumes.
The improvements in signal-to-noise ratio were in the
17-19% range for the stacked lines with six geophones
versus the three-geophone stacks. The improvements in
pick confidence in the interpretation tests were in the 4-7%
range.

Figure 7: Six-geophone section showing interpreted horizons

Given the relatively low cost of deploying more geophones
per group, the uplift in data quality in some of our poorer
data recording areas justifies using six-geophone strings
instead of three-geophone strings.

Acknowledgements
Thanks to Pioneer Natural Resources management for
letting me conduct the experiment. Dawson Geophysical
carried out the experiment in the field, and GeoEnergy
provided data processing services.

Figure 8: Pick confidence for Horizon 1: Seed lines


EDITED REFERENCES
REFERENCES

Beresford, G., and P. Johnston, 2007, Slant arrays to reduce ambient noise in southern Bangladesh: 77th
Annual International Meeting, SEG, Expanded Abstracts, 16–20,
http://dx.doi.org/10.1190/1.2792373.
Sheriff, R. E., 2002, Encyclopedic dictionary of applied geophysics, 4th ed.: SEG,
http://dx.doi.org/10.1190/1.9781560802969.
Watts, H. J., J. Gibson, R. Burnett, and S. Ronen, 2005, Evaluation of 3C sensor coupling using ambient
noise measurements: 75th Annual International Meeting, SEG, Expanded Abstracts, 912–919,
http://dx.doi.org/10.1190/1.2148309.


Assessing marine 3D seismic acquisition with new technology: A case history from Suriname
Peter Aaron*, Grant Byerley and David Monk, Apache Corporation
Summary
Before acquiring data offshore Suriname in 2016, a survey
assessment was performed to compare various contractor
methods of de-blending and de-ghosting. In this paper we
present the model data that were used to make this
assessment and the results of the test comparisons.
Introduction
Over the last 10 years, broadband acquisition and
processing techniques and simultaneous source acquisition
have played a major role in advancing both the data quality
and the efficiency of 3D marine seismic surveys.
The aim of broadband acquisition and processing is to
remove the unwanted effects of the source and receiver
ghosts on the phase and amplitude spectra, often referred to
as de-ghosting. This is obtained either through a
combination of acquisition and processing solutions, such
as multi-sensor streamers (Carlson et al., 2007) and
multi-level source acquisition and variable-depth streamers
(Soubaras, 2010), or through processing solutions alone
(King et al., 2015).
Simultaneous source acquisition typically refers to data
acquired with several sources that overlap in time. Shots
may be fired within a known and usually relatively short
firing window of each other (Beasley et al., 2012), or shot
records may be extracted from a continuously recorded
dataset, where the extracted record length is longer than the
interval between shots. Allowing overlap between the
sources is typically done to improve acquisition efficiency
and to obtain a denser dataset that may be better sampled in
midpoint, offset and azimuth. The improved sampling
comes with the trade-off of unwanted interference from one
or more other sources, which must be handled in
processing using an appropriate de-blending technique (for
example, Akerberg et al., 2008). Therefore, the ability to
separate out, or de-blend, the seismic shots in processing
must be taken into account as part of any acquisition
design.
This paper uses both real and simulated model datasets to
assess current industry receiver de-ghosting and
de-blending capabilities, to aid acquisition design offshore
Suriname.
Background
In 2016 Apache began an assessment of acquisition
methodologies for a survey to be acquired in Suriname.


Seismic acquisition offshore Suriname can be challenging.
For many parts of the year the weather is poor, and
dramatic eddy currents can prove difficult to cope with
when deploying or towing long streamers.
A previous survey had been acquired in an adjacent block
in 2013. Testing prior to the 2013 survey had suggested
that, at the time, relying on processing methods to achieve
de-ghosting would have been a significant compromise, so
the survey was acquired with a conventional acquisition
method and streamers towed at 8 m depth. However, at the
end of the survey, test data were acquired with streamer
depths of 8, 12 and 18 m. Additionally, the 2013 survey
was acquired with two sources operating in flip-flop mode,
with a shot interval of 18.75 m. This meant that the
extracted 10-second shot records had data from an
interfering shot, which typically occurred at around 8
seconds but occasionally as early as 6 seconds (when the
vessel's speed over the ground was high due to movement
with the current). On occasion no interfering shot was
visible, as the time between shots exceeded 10 seconds
when the vessel was progressing against the current and
speed over the ground was slow.
For the 2016 survey the intent was to match the previous
bin size, but with the hope of increased efficiency through
the use of a triple source and a 12.5 m shot interval (each
array would therefore be activated at the same 37.5 m
spacing as in 2013), and to improve data quality through
the use of a deeper-towed streamer and subsequent
de-ghosting during processing of the data.
Methodology and Model building

The data acquired offshore Suriname in 2013 proved
invaluable in building a realistic model of the data that
might be acquired in 2016 in an adjacent block. The
objective of the testing was two-fold. Firstly, to assess
whether a deeper deployment of streamers could be
adequately de-ghosted and, more specifically, whether data
at two different depths (8 and 18 m) could be de-ghosted
separately to the extent that the two datasets could be
matched. If successful, new data could be acquired with a
deeper streamer system, which would minimize noise in the
data and potentially reduce downtime during the survey,
while the data could still be tied to the previous survey
through reprocessing to create a seamless dataset.
Secondly, the efficiency of the acquisition could be
significantly improved through the use of triple sources
firing sequentially. With 50 m between sources in the
crossline direction, the streamers could be separated by
150 m rather than the 100 m used in the previous survey,
and a large efficiency gain could be achieved. Such wide
spreads in marine seismic acquisition have been criticized
by some, and they are certainly not appropriate for
acquisition in shallow water with very shallow targets.
However, this efficiency gain comes at the expense of
interference in the shot records if the inline shot interval is
preserved. With shots fired every 12.5 m, it would be
anticipated that under no-current conditions and a vessel
speed of just over 4 knots, an interfering shot would be
seen with a fire time at about 6 seconds; given the potential
for adverse currents in this area, however, the interference
could occur as early as 4 seconds, at the same time as the
prospective geological interval. It is therefore critical to
assess whether this noise can be properly attenuated, or
de-blended, during processing.
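The interference times quoted above follow directly from the shot interval and the speed over the ground; a quick check of the arithmetic (illustrative only):

```python
# Shot-time arithmetic behind the interference estimates: with a fixed
# shot interval in meters, the time between shots is interval divided by
# the vessel's speed over the ground.
KNOT = 0.514444  # meters per second per knot

def shot_time_s(interval_m, speed_knots):
    return interval_m / (speed_knots * KNOT)

print(round(shot_time_s(12.5, 4.0), 1))  # ~6.1 s under no-current conditions
print(round(shot_time_s(12.5, 6.0), 1))  # ~4.0 s with a strong following current
```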
De-ghosting Model

The model data for the comparison and evaluation of
receiver de-ghosting methods were simply data from the
same vessel pass recorded with streamers at three different
depths: 8, 12 and 18 m. The receiver ghosts can easily be
seen in the stacked sections shown in Figure 1 and Figure
2, which compare a stack of data acquired with the 8 m
streamer to one acquired with the 18 m streamer. Note the
strong black (trough) of some events, followed by the ghost
white (peak). The difference in ghost delay time between
the 8 m and 18 m streamer deployments is obvious.

Figure 3 shows the obvious receiver ghost notches
associated with the 8, 12 and 18 m streamer depths, derived
from the average spectra of the stack lines.

The ideal result of de-ghosting would be an identical
section with no ghost, regardless of streamer deployment
depth. An example of a receiver de-ghosted section is
shown in Figure 4.
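The notch positions in Figure 3 follow from the ghost delay: at vertical incidence a streamer at depth d in water of velocity v produces spectral notches at multiples of v/(2d). A small sketch of that relation (v = 1500 m/s assumed):

```python
# Receiver ghost notch frequencies for a towed streamer: at vertical
# incidence the ghost lags the primary by 2*d/v, giving spectral notches
# at integer multiples of v/(2*d).
v_water = 1500.0  # m/s, assumed water velocity

def ghost_notches(depth_m, f_max=125.0):
    """Non-zero ghost notch frequencies up to f_max for a given tow depth."""
    f0 = v_water / (2.0 * depth_m)
    return [n * f0 for n in range(1, int(f_max / f0) + 1)]

for d in (8.0, 12.0, 18.0):
    print(d, [round(f, 1) for f in ghost_notches(d)])
# 8 m  -> first notch at 93.8 Hz
# 12 m -> first notch at 62.5 Hz
# 18 m -> first notch at 41.7 Hz
```

The deeper tow pushes the first notch down in frequency, which is why the 18 m data must be de-ghosted in processing to recover the mid-band.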
De-blending Model

Construction of the model data for the de-blending test was
more involved, as no real triple-source data were available
in Suriname. However, some data acquired during the 2013
survey had no interfering source, owing to the strong head
current and the consequently long time between shots.
Using data from this line it was possible to synthesize a
triple-source acquisition. Of course, this simulated dataset
must contain data from three sources, whereas only two
sources were recorded in 2013. To introduce a pseudo third
source, energy recorded from one of the sources, but on a
neighboring streamer, was used. Since the 2013 acquisition
had used 18.75 m flip-flop shooting, the spacing per
individual source in the simulated triple-source dataset was
honored. This is an important point because it meant that
the various domains, such as common receiver and
common midpoint, which may be utilized as part of the
de-blending approach, all had the correct trace sampling.

Figure 1: Data zoom from stack of 8 m streamer data.

Figure 3: Spectra from data acquired at 8, 12 and 18 m streamer
tow depths showing the respective receiver ghost notches.

Figure 2: Data zoom from stack of 18 m streamer data.

Figure 4: Data zoom from stack after de-ghosting.



The 2016 survey will be recorded continuously, with the
intention of extracting shot gathers with a 9-second trace
length. While it may seem straightforward to impose a
time-delayed shot onto each primary shot record to test
de-blending, as has apparently been done in other papers
(Baardman et al., 2013; Wolfarth et al., 2016), this would
not adequately simulate the continuous recording and shot
interference expected from a triple-source design. Based on
the largest boat speed over the ground of about 6 knots
observed during the 2013 acquisition, it was estimated that
a worst-case situation would see an average time between
shots of approximately 4 seconds. Therefore, in order to
simulate a worst-case overlapping shot record, it is
necessary to construct a synthetic involving 5 shots for
each output 9-second record: the two previous shots, which
may have energy that extends into the primary shot, the
primary shot itself, and the two subsequent shots, which
may occur at around 4 and 8 seconds, respectively.
Figure 5 shows the construction of each synthetic record
using 5 consecutive records. After blending, the primary
shot record (in the center of the figure) must contain energy
from the two previous records, and the two subsequent
records. Note that the shots to be overlapped have been
shifted up or down by their respective firing time delays
relative to the primary shot. The 5 records were summed
and the trace length of the output record cut to 9 seconds.
Perfect de-blending would separate out all the interfering
data and allocate it back to its correct shot. Figure 6 shows
a synthetic shot record created from this model.
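The shot-overlap construction just described can be sketched in a few lines of NumPy. The sampling rate, record length, dither values and random traces below are illustrative stand-ins, not the survey parameters; only the mechanics (shift each neighboring shot by its firing-time delay relative to the primary, sum, cut to 9 s) follow the text.

```python
import numpy as np

# 5-shot blending sketch: primary record plus two previous and two
# subsequent shots, each shifted by its relative firing time, then summed.
# All numbers are illustrative assumptions, not the survey's parameters.

FS = 500          # samples per second (2 ms sampling, assumed)
OUT_LEN = 9 * FS  # 9 s output record

def blend(records, delays_s, fs=FS, out_len=OUT_LEN):
    """Sum records shifted by their firing delays relative to the primary.

    records  : list of 1-D arrays, ordered shot -2, -1, 0 (primary), +1, +2
    delays_s : firing time of each shot minus that of the primary (s);
               negative for previous shots, positive for subsequent ones.
    """
    out = np.zeros(out_len)
    for rec, delay in zip(records, delays_s):
        shift = int(round(delay * fs))
        # sample i of rec lands at sample i + shift of the primary time axis
        src_lo, src_hi = max(0, -shift), min(len(rec), out_len - shift)
        if src_lo < src_hi:
            out[src_lo + shift:src_hi + shift] += rec[src_lo:src_hi]
    return out

rng = np.random.default_rng(0)
shots = [rng.standard_normal(OUT_LEN) for _ in range(5)]
# nominal 4 s between shots, with small natural timing variations
delays = [-8.1, -3.95, 0.0, 4.05, 7.9]
blended = blend(shots, delays)
```

With this convention the tail of each previous shot bleeds into the start of the primary record, and each subsequent shot appears at roughly 4 s and 8 s, matching the worst-case geometry described above.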

Figure 5: Construction of simulated triple source shot from 5 sequential shots. The primary shot to be output is in the center.

Figure 6: Individual shot record. Perfect de-blending (left) and simulated triple source (right).


While many simultaneous source acquisitions rely on deliberately introduced timing variations, or dither, in the shot times to enhance the ability to de-blend, the intent in the 2016 planning is to shoot on a fixed set of pre-plot locations. The 2013 acquisition, which was shot on a regular 18.75m flip-flop pre-plot, had already shown that natural shot-to-shot timing variations existed as a result of vessel speed variations between shot locations, and that these were adequate for de-blending. Figure 7 shows a plot of relative firing times taken from a 2013 sequence. Note that the average absolute shot-to-shot variation in timing along this line was around 120ms. Similar timing variations have been observed in other surveys with shot spacing as small as 6.25m.
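One plausible way to compute a figure like the quoted 120 ms is as the mean absolute deviation of the shot intervals from their mean; the exact statistic used by the authors is not stated, and the shot times below are synthetic, for illustration only.

```python
import numpy as np

# Mean absolute shot-to-shot timing variation (one plausible definition of
# the statistic quoted in the text). Shot times here are synthetic.

def mean_abs_timing_variation(shot_times_s):
    """Mean absolute deviation of consecutive shot intervals from their mean."""
    intervals = np.diff(np.asarray(shot_times_s, dtype=float))
    return float(np.mean(np.abs(intervals - intervals.mean())))

rng = np.random.default_rng(1)
times = np.cumsum(4.0 + 0.12 * rng.standard_normal(200))  # ~4 s nominal interval
print(mean_abs_timing_variation(times))
```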

Figure 7: Relative shot-to-shot timing variations from a line acquired in 2013.

In order to make the simulated triple source dataset more realistic, actual shot-to-shot timing variations were extracted from real lines acquired in 2013 and subsequently used when deriving the relative firing times of the interfering records on each primary record. Since the 2013 shot-to-shot variations were observed to change in their overall magnitude from line to line over the survey, three different sets of relative firing times were used. This can be clearly seen on the stack in Figure 8. Note that in the left third of the section the spread of timing variations is tighter than in the middle of the section. More typical variations are reflected in the interference shown in the center and on the right of the section. Note that on the left of the section there is interference from the water bottom reflection, which arrives from the second subsequent shot just before the end of the record in the shallower water.

Figure 8: Stack of simulated triple source data


Data Results and Analysis


For both sets of validation tests, the requirement was to
deliver the shot gathers immediately after de-ghosting and
de-blending respectively. No other processing was to be
performed. A consistent post-processing and 2D pre-stack
time migration (2D PSTM) flow was applied to each result
to allow for a direct comparison. The de-ghosting results
were analyzed by comparing both pre-stack and stacked
sections for the various streamer depths over several
frequency bands to assess phase and amplitude response.
De-blending results were analyzed by studying the residual
energy between the ideal de-blended control dataset and the
actual de-blended data. Analysis was carried out over
several time windows and frequency bands and looked at the
impact on both the pre-stack data quality and on stacked
images. Figure 9 looks at the residual interfering energy
levels after one set of de-blending results on 2D PSTM
angle gathers covering 2–40°. Figure 10 shows a raw 2D PSTM stacked section before and after the same pass of de-blending.
When assessing the impact of any residual interfering energy on the pre-stack gathers and the stacked image, the target levels and bandwidth of interest must be taken into account. How much residual energy is allowable is likely to depend on the final goal. For example, if amplitudes are key then any residual must be negligible at the pre-stack level. The ability to apply further residual noise attenuation must also be taken into account. Furthermore, it is important to remember that the simulated triple source dataset described here was based on a worst-case example. Additional simulated triple source datasets were generated using speeds of 5.2 and 4.5 knots (representing P10 and P50 probabilities respectively), which better represent an extreme and a likely case.
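A band-limited residual-energy measure of the kind described, comparing the actual de-blended data against the ideal control, might look as follows. The gather, sampling rate and band edges are placeholders for illustration, not the actual analysis parameters.

```python
import numpy as np

# Residual-energy grading of de-blending: energy of (actual - ideal),
# normalised by the energy of the ideal control, within a frequency band.
# Data and parameters below are synthetic, illustrative stand-ins.

def band_energy(traces, fs, f_lo, f_hi):
    """Total spectral energy of a 2-D gather (trace, time) in [f_lo, f_hi] Hz."""
    spec = np.fft.rfft(traces, axis=-1)
    freqs = np.fft.rfftfreq(traces.shape[-1], d=1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return float(np.sum(np.abs(spec[..., band]) ** 2))

def residual_ratio_db(ideal, actual, fs, f_lo, f_hi):
    """Residual energy relative to the ideal control, in dB (lower is better)."""
    e_res = band_energy(actual - ideal, fs, f_lo, f_hi)
    e_ref = band_energy(ideal, fs, f_lo, f_hi)
    return 10.0 * np.log10(e_res / e_ref)

fs = 500.0
rng = np.random.default_rng(2)
ideal = rng.standard_normal((48, 4500))
actual = ideal + 0.01 * rng.standard_normal(ideal.shape)  # small residual
print(residual_ratio_db(ideal, actual, fs, 5.0, 80.0))
```

Repeating the measurement per time window and per band gives the kind of level-versus-target comparison described in the text.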

Figure 10: 2D PSTM raw stacked image before de-blending (left) and after in-house de-blending (right).

Conclusions
In this paper we have described how a test of both de-blending and de-ghosting was conducted prior to acquisition of new data offshore Suriname.
While it is easy to accept that new technology can improve
both the quality and efficiency of acquisition of marine
seismic data, it is necessary to validate the potential
improvement and understand any potential risk before
embarking on acquisition. The tests carried out here
provided us with an insight into current industry
capabilities for both de-blending and de-ghosting using
models which were directly applicable to our new survey.
In the case of de-ghosting, we were able to see how well
the 8 and 18m tow depths could be matched in processing.
The de-blending results provided a quantitative assessment
of any potential trade-off in data quality when acquiring
with a triple source configuration and an insight into how
this could affect our main target levels and goals.
The models described were given to a number of
contractors, and as part of the presentation of the paper,
results will be shown.
Acknowledgements
At Apache Corporation we would like to thank the New
Venture group for pushing the envelope in how they
acquired data, and Polarcus, who acquired the test data in
2013, and the production data in 2016. As a consideration
to those companies who participated in the test evaluation,
we have not identified the individual contributions.

Figure 9: 2D PSTM raw angle gathers before de-blending (left), after in-house de-blending (center) and residual versus ideal (right).


EDITED REFERENCES
Note: This reference list is a copyedited version of the reference list submitted by the author. Reference lists for the 2016
SEG Technical Program Expanded Abstracts have been copyedited so that references provided with the online
metadata for each paper will achieve a high degree of linking to cited sources that appear on the Web.
REFERENCES

Akerberg, P., G. Hampson, J. Rickett, H. Martin, and J. Cole, 2008, Simultaneous source separation by sparse Radon transform: 78th Annual International Meeting, SEG, Expanded Abstracts, 2801–2805, http://dx.doi.org/10.1190/1.3063927.
Baardman, R., and R. Borselen, 2013, Separating sources in marine simultaneous shooting acquisition - Method & Applications: 83rd Annual International Meeting, SEG, Expanded Abstracts, 1–5, http://dx.doi.org/10.1190/segam2012-0992.1.
Beasley, C., I. Moore, D. Monk, and L. Hansen, 2012, Simultaneous sources: The inaugural full-field, marine seismic case: 82nd Annual International Meeting, SEG, Expanded Abstracts, 1–5, http://dx.doi.org/10.1190/segam2012-0834.1.
Carlson, D., A. Long, W. Söllner, H. Tabti, R. Tenghamn, and N. Lunde, 2007, Increased resolution and penetration from a towed dual-sensor streamer: First Break, 25, 71–77.
King, S., and G. Poole, 2015, Hydrophone-only receiver deghosting using variable sea surface datum: 85th Annual International Meeting, SEG, Expanded Abstracts, 4610–4614, http://dx.doi.org/10.1190/segam2015-5891123.1.
Soubaras, R., 2010, Deghosting by joint deconvolution of a migration and a mirror migration: 80th Annual International Meeting, SEG, Expanded Abstracts, 3406–3410, http://dx.doi.org/10.1190/1.3513556.
Wolfarth, S., D. Priyambodo, T. Manning, T. Septyana, and S. Putri, 2016, Where are we today with ISS de-blending processing capability? Results from shallow water OBC data, Indonesia: 78th Annual International Conference and Exhibition, EAGE, Extended Abstracts.


On the anatomy of the air-gun signature


Halvor Groenaas, Ola Pramm Larsen, and David Gerez*, WesternGeco; Matthew Padula, Teledyne Bolt
Summary
In the context of designing a new air-gun with reduced high-frequency emissions, we developed computational fluid dynamics models that provide new insight into the dynamic behavior of air-guns. We combined these simulations with video images and high-fidelity acoustic measurements from physical tests to investigate the physical mechanisms contributing to the air-gun signature. We considered the dynamic output in the time domain, and the energy components in the frequency domain.

To address growing concerns about the potential environmental effects of high-frequency noise, Landrø et al. (2011) investigated mechanisms that could generate such noise, such as ghost cavitation and mechanical clicking.
We used advanced computational fluid dynamics (CFD) models to describe the physical mechanisms behind the air-gun signature, and validated these models against experimental results. We performed this work in the context of designing a new air-gun with reduced high-frequency emissions, described by Coste et al. (2014) and Gerez et al. (2015).

Introduction
Figure 1 shows a cross section of a typical air-gun. Before
firing, the forces exerted by the pressurized air in the
operating chamber and the fire chamber act on the two faces
of the shuttle to keep it in the sealed position. When the
solenoid valve is actuated by an electrical signal it initiates
an air flow that shifts the net force, accelerating the shuttle
to the right. This exposes a progressively larger area of the
ports, allowing air to escape from the fire chamber into the
surrounding water.

Figure 1: Internal components of an air-gun. The fire chamber is sealed with the shuttle in its left-most position.

Rayleigh's (1917) description of the period of collapse of a cavity in water is the foundation of early analytical models of air-gun signatures, which were validated with experimental results. Giles and Johnston (1973) described the output of arrays composed of air-guns that individually produced spherical and non-interacting bubbles. Ziolkowski et al. (1982) and Vaage et al. (1984) considered the interactions among multiple spherical bubbles. Laws et al. (1990) generalized the equations to include the behavior of clusters. Cox et al. (2004) then considered the effects of non-spherical bubble geometries.


Models and measurements

We first developed 3D CFD models to understand the dynamic interactions across multiple domains: fluid flows
and pressures within the air-gun, mechanical displacements
and forces, the stochastic oscillation of the air bubble, and
the propagation of acoustic waves in the near-field. This
presents a number of modeling challenges, including severe
pressure gradients, sub-microsecond transient time steps,
multiphase flows, and hexahedral-dominant meshes of up to
five million elements. We improved simulation efficiency by
taking advantage of symmetry, emulating specific paths,
configuring dynamic meshes, and optimizing time steps.
Modeling inevitably involves a compromise between
fidelity and execution time, so we developed a range of
models tailored to specific tasks, with execution times
ranging from 12 hours for our 1/8-3D production model to
several weeks for our full-3D maximum-fidelity model.
We then performed physical experiments with two
objectives: calibration and validation. We used specially
made air-guns fitted with Hall-effect motion sensors and
multiple pressure transducers to measure internal parameters
that we applied to calibrate the CFD model. We then
acquired near- and far-field acoustic signatures with a
calibrated high-frequency acquisition system, taking special
measures to eliminate spurious noise and vibrations. We
used these acoustic measurements to validate the model's predictions of the air-gun itself and the bubble's interaction with the surrounding water. We calibrated and validated on a variety of designs, providing added confidence in the model's predictions across the entire design space.
We also recorded video imagery of a limited set of firing
events with a high-speed camera capable of up to 3,000
frames per second. We used these images to qualitatively
confirm that the model produces the expected bubble.


Figure 2 shows an example of validation for a single design point. There is excellent agreement between the experimental (blue) and CFD (cyan) results for the new air-gun, in both the time and frequency domains. There is only a small difference in the precursor that precedes the main peak, caused by a difference in the internal geometries, but this does not have a significant effect on output energy. The standard air-gun (black) has a very different pulse shape and spectrum.

Figure 3: Measurements from sensors positioned inside an instrumented standard air-gun: Hall-effect shuttle-motion sensor (blue), fire-chamber pressure (black), and solenoid current (brown). Two secondary shuttle strokes (arrows) can be seen.

Decomposing the air-gun signature

Having validated the CFD model, we now decompose the air-gun signature into three main components: precursor, main peak, and free-bubble oscillation. We use CFD simulations and video images to understand the physical mechanisms, and examine their contributions to the measured output. We compare two air-guns: a standard air-gun, and the new air-gun in the most restrictive of its three bandwidth configurations.
Precursor

Figure 2: Signature of the new air-gun (configured for its most restrictive bandwidth and 150 cu. in.) as predicted by CFD (cyan) and as measured over 10 shots (blue); and measured signature of a standard air-gun (black) over 10 shots. Upper panel: time domain. Lower panel: frequency domain calculated over main peak.

The precursor is a small-amplitude broadband event that precedes the main peak of the acoustic signature. We confirm that air flows represent the basic mechanism. As the shuttle starts to move, but before it reaches the ports, the fire-chamber seal is opened and pressurized air escapes through the small annular gap between the shuttle and the surrounding housing. The radial gap size of thousandths of an inch represents a tradeoff between minimizing acoustic noise and preventing mechanical contact between moving parts that would cause wear and compromise reliability. This escaping air forms the precursor bubble, shown in the CFD simulation of Figure 4. The left inset of Figure 5 shows the precursor (red) produced by a standard air-gun in a physical experiment.

The instrumented air-guns also allowed us to observe an interesting effect, unrelated to the validation objective. The small oscillations in the bubble train of the standard air-gun (black) are caused by the re-injection of air into the water as the shuttle repeatedly bounces before coming to rest and sealing the fire chamber. The re-injection may be directly visible as extra peaks in the signature, as in this case, or it may modulate the bubble train as the air either reinforces or damps the bubble. Figure 3 (acquired during a different test) proves that the shuttle does indeed bounce in a standard air-gun. The new air-gun, by completely emptying the air in the fire chamber on the first stroke, prevents this phenomenon.
Figure 4: CFD simulation showing the precursor bubble
immediately before the main release of air through the ports. At this
stage the shuttle has just reached the beginning of the ports.


Main peak
The main peak is formed when the shuttle moves past the
ports, allowing most of the air in the fire chamber to escape
into the surrounding water. In a standard air-gun, the shuttle
accelerates to a high speed before reaching the ports,
resulting in the steeply rising flank of Figure 5 (green line,
right inset). In contrast, the new air-gun releases air more
gradually, resulting in a signature with a gentler slope and
reduced peak amplitude (Figure 6). As the bubble expands,
its motion is opposed by the local hydrostatic pressure and
by the inertia of the surrounding water, reducing the rate of
volume growth and the resulting acoustic pressure. The
existing precursor bubble also affects the injection of the
main bubble, typically damping the primary peak and
reducing the amount of high-frequency energy.

The bubble retains a memory in the sense that the conditions during its initial release affect the subsequent series of
compression and rarefaction cycles. The period of an
instantaneously released bubble is essentially determined by
two factors: the mass of air in the bubble, with more air
producing a longer period; and the hydrostatic pressure, with
higher pressure producing a shorter period. In addition, the
state of the precursor bubble at the time of the primary
release, and the rate of release, both directly affect the
amplitude of the bubble train. Figure 5 (blue) shows the
standard air-gun's bubble train. The new air-gun (Figure 6,
blue) empties the fire chamber more effectively, resulting in
bubble oscillations with a slightly longer period and higher
relative amplitude. If the air-gun fails to release all of the air
in the fire chamber on the first shuttle stroke, re-injection of
air might affect the amplitude (reducing or increasing it
depending on the timing) and period (with the larger mass of
air producing a longer period).
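The stated pressure dependence echoes Rayleigh's (1917) result for the collapse of an empty spherical cavity, tau ≈ 0.915 R0 sqrt(rho/p): higher ambient pressure gives a shorter collapse time, and hence a shorter bubble period. A minimal sketch, with illustrative values not tied to any particular air-gun:

```python
import math

# Rayleigh (1917) collapse time of an empty spherical cavity of radius R0:
# tau = 0.915 * R0 * sqrt(rho / p), with rho the water density and p the
# ambient pressure; a full bubble oscillation is roughly twice this.
# The radius and depths below are illustrative assumptions.

RHO_WATER = 1025.0  # kg/m^3, sea water

def rayleigh_collapse_time(r0_m, depth_m, rho=RHO_WATER):
    p_ambient = 101325.0 + rho * 9.81 * depth_m  # Pa: atmospheric + hydrostatic
    return 0.915 * r0_m * math.sqrt(rho / p_ambient)

# deeper water (higher ambient pressure) -> shorter period, as the text states
print(rayleigh_collapse_time(0.5, 6.0), rayleigh_collapse_time(0.5, 60.0))
```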
Contributions to the amplitude spectrum
Next, we consider the frequency content of the signatures of
the standard air-gun (Figure 7) and the new air-gun (Figure
8).
For both air-guns, we see that the precursor emits far less
energy in the seismic band than the other two components.
Its contribution is more pronounced at higher frequencies,
albeit at a lower absolute level; these higher frequencies
account for a negligible share of the energy emitted by an
air-gun (Gerez et al., 2015).

Figure 5: Measured signature of a standard 155 cu. in. air-gun, with close-up of the precursor (red) and main peak (green).

The difference between the two air-guns is most pronounced in the main peak. The main peak clearly dominates the output of the standard air-gun above 40 Hz. For the new air-gun, it dominates over a narrow band from 40 to 150 Hz, above which the precursor is highest.
The freely oscillating bubble is the main source of energy
below 40 Hz for both air-guns. Below the fundamental
bubble frequency of approximately 10 Hz, the main peak and
bubble appear to have more energy than the combined
signature, but this is the result of signal-processing artifacts:
DC spectral leakage artificially boosts main-peak energy in
this region, and removing the destructive interference
between the main peak and the bubble troughs boosts the
bubble energy.

Figure 6: As Figure 5, but for the new air-gun at 150 cu. in.

Free-bubble oscillation
After the shuttle has re-sealed the ports, gun dynamics no longer directly influence the bubble.

In Figure 9, we see that the output is very stable in the seismic band. Absolute amplitudes are far lower at higher frequencies, so the relative variability is higher. This is a signal-to-noise issue: components of the signature that are inherently less stable (the precursor and high-frequency incoherent energy from the main peak) are relatively more pronounced in the output spectrum. Moreover, long-term wear increases the precursor size.

Figure 7: Decomposed amplitude spectrum of Figure 5 showing the contributions of the precursor (red), main peak (green) and free bubble (blue). The spectrum of the full signature is shown in black. The curves were generated by zero-padding the signature components, performing an FFT, and applying a one-third octave smoother.
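The spectral processing named in the Figure 7 caption (zero-padding, FFT, one-third-octave smoothing) can be sketched as follows. The toy pulse, sampling rate and padding factor are assumptions for illustration, not the measured signature or the authors' exact parameters.

```python
import numpy as np

# Zero-pad a signature component, FFT it, then smooth the amplitude
# spectrum over one-third-octave bands (mean over +/- 1/6 octave around
# each frequency). All parameters here are illustrative assumptions.

def amplitude_spectrum(component, fs, nfft=None):
    """Zero-padded amplitude spectrum of a 1-D signature component."""
    nfft = nfft or 8 * len(component)
    spec = np.abs(np.fft.rfft(component, n=nfft))
    freqs = np.fft.rfftfreq(nfft, d=1.0 / fs)
    return freqs, spec

def third_octave_smooth(freqs, spec):
    """Replace each value by the mean over a 1/3-octave band centred on f."""
    out = np.empty_like(spec)
    for i, f in enumerate(freqs):
        if f <= 0.0:
            out[i] = spec[i]  # no octave band around DC
            continue
        lo, hi = f * 2 ** (-1 / 6), f * 2 ** (1 / 6)
        sel = (freqs >= lo) & (freqs <= hi)
        out[i] = spec[sel].mean()
    return out

fs = 4000.0
t = np.arange(0, 0.25, 1.0 / fs)
pulse = np.exp(-60 * t) * np.sin(2 * np.pi * 40 * t)  # toy "main peak"
freqs, spec = amplitude_spectrum(pulse, fs)
smooth = third_octave_smooth(freqs, spec)
```

Fractional-octave smoothing widens with frequency, which is why the high-frequency ends of the curves in Figures 7 and 8 appear smooth while the low-frequency detail is preserved.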

Figure 9: Expected range bands from shot variation for a standard air-gun (dark gray) and the new air-gun (dark blue); from manufacturing tolerances for a standard air-gun (light gray); and from the combination of manufacturing tolerances and long-term wear for the new air-gun (light blue).

Figure 8: Decomposed amplitude spectrum of the new air-gun.

Signature stability
Although air-gun output is remarkably stable in the seismic
band, it is still subject to three main sources of variability:
short-term shot variation, manufacturing tolerances, and
normal long-term wear. We measured shot variation
physically, with the high-frequency system described
above. It would be impractical to physically characterize
the remaining two sources of variation to a reasonable
statistical confidence, or to simulate in CFD due to the long
computation times. We therefore developed a statistical
meta-model that can replace the CFD model in predicting
the output spectrum, training the Kriging model (Sacks et
al., 1989) by performing CFD simulations over the design
space. We then used Monte Carlo methods to predict the
expected range of the output as a result of random inputs.
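The surrogate workflow described above can be sketched with a minimal pure-NumPy Kriging (Gaussian-process) interpolator trained on a handful of "simulator" runs and then sampled by Monte Carlo. The 1-D toy function below stands in for the CFD model; it is not the actual air-gun design space, and the kernel and its length scale are assumptions.

```python
import numpy as np

# Minimal Kriging / Gaussian-process surrogate: fit an RBF-kernel
# interpolator to a few expensive "simulator" runs, then Monte Carlo the
# cheap surrogate over random inputs to estimate the expected output range.
# The toy sine function and all parameters are illustrative assumptions.

def rbf(a, b, length=0.3):
    d2 = (a[:, None] - b[None, :]) ** 2
    return np.exp(-0.5 * d2 / length ** 2)

def fit_gp(x_train, y_train, noise=1e-6):
    k = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    alpha = np.linalg.solve(k, y_train)
    return lambda x_new: rbf(x_new, x_train) @ alpha

simulator = lambda x: np.sin(2 * np.pi * x)   # stand-in for the CFD model
x_train = np.linspace(0.0, 1.0, 9)            # a few "CFD runs"
predict = fit_gp(x_train, simulator(x_train))

rng = np.random.default_rng(3)
x_mc = rng.uniform(0.0, 1.0, 10_000)          # random input variations
y_mc = predict(x_mc)
lo, hi = np.percentile(y_mc, [5, 95])         # expected output range
```

The expensive model is evaluated only at the training points; the Monte Carlo loop runs entirely on the surrogate, which is what makes sweeping tolerances and wear scenarios tractable.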


Conclusions

We applied advanced CFD models and physical experiments to decompose the air-gun signature into its primary
components. The air escaping through a mechanical gap
produces a small precursor peak that contributes little overall
energy. By increasing the rise time of the main pulse, we
reduced the unwanted high-frequency noise. By
simultaneously maximizing the amount of air released into
the bubble, we increased the signal in the seismic band. Like
any mechanical device, air-guns are subject to variability,
and we characterized the output levels that can be expected
for any given air-gun over its operating life.
Acknowledgments
In addition to the authors, many people have contributed to
these results. Among them are: Emmanuel Coste, Jostein
Farstad, Svein Huseboe, Jon-Fredrik Hopperstad, Robert
Laws, Anneli Soppi, and Michel Wolfstirn.


EDITED REFERENCES
Note: This reference list is a copyedited version of the reference list submitted by the author. Reference lists for the 2016
SEG Technical Program Expanded Abstracts have been copyedited so that references provided with the online
metadata for each paper will achieve a high degree of linking to cited sources that appear on the Web.
REFERENCES

Coste, E., D. Gerez, H. Groenaas, J.-F. Hopperstad, O.-P. Larsen, R. Laws, J. Norton, M. Padula, and M. Wolfstirn, 2014, Attenuated high-frequency emission from a new design of air-gun: SEG Technical Program Expanded Abstracts, 132–137, http://dx.doi.org/10.1190/segam2014-0445.1.
Cox, E., A. Pearson, J. R. Blake, and S. R. Otto, 2004, Comparison of methods for modelling the behavior of bubbles produced by marine seismic airguns: Geophysical Prospecting, 52, 461–477, http://dx.doi.org/10.1111/j.1365-2478.2004.00425.x.
Gerez, D., H. Groenaas, O. P. Larsen, M. Wolfstirn, and M. Padula, 2015, Controlling air-gun output to optimize seismic content while reducing unnecessary high-frequency emissions: 85th Annual International Meeting, SEG, Expanded Abstracts, 154–158, http://dx.doi.org/10.1190/segam2015-5843413.1.
Giles, B. F., and R. C. Johnston, 1973, System approach to air-gun array design: Geophysical Prospecting, 21, 77–101, http://dx.doi.org/10.1111/j.1365-2478.1973.tb00016.x.
Landrø, M., L. Amundsen, and D. Barker, 2011, High-frequency signals from air-gun arrays: Geophysics, 76, no. 4, Q19–Q27, http://dx.doi.org/10.1190/1.3590215.
Laws, R. M., L. Hatton, and M. Haartsen, 1990, Computer modelling of clustered airguns: First Break, 8, 331–338, http://dx.doi.org/10.3997/1365-2397.1990017.
Rayleigh, O. M., 1917, On the pressure developed in a liquid during the collapse of a spherical cavity: Philosophical Magazine, 34, 94–98, http://dx.doi.org/10.1080/14786440808635681.
Sacks, J., W. J. Welch, T. J. Mitchell, and H. P. Wynn, 1989, Design and analysis of computer experiments: Statistical Science, 4, 409–423, http://dx.doi.org/10.1214/ss/1177012413.
Vaage, S., B. Ursin, and K. Haugland, 1984, Interaction between airguns: Geophysical Prospecting, 32, 676–689, http://dx.doi.org/10.1111/j.1365-2478.1984.tb01713.x.
Ziolkowski, A., G. Parkes, L. Hatton, and T. Haugland, 1982, The signature of an air gun array: Computation from near-field measurements including interactions: Geophysics, 47, 1413–1421, http://dx.doi.org/10.1190/1.1441289.


In situ test results obtained in the Mediterranean Sea with a novel marine seismic acquisition system (FreeCable™)
Luc Haumont*, Laurent Velay, and Michel Manin, Kietta
Summary
A new patented marine acquisition system was tested in the
Mediterranean Sea in October 2014. The purpose of the
paper is to explain the test setup, to describe the acquisition
parameters and to present the tests results.
Introduction
The new marine acquisition system under test is an innovative system based on classical principles of reflection seismology: one or several source vessels are used as seismic sources, and a large number of seismic sensors fastened to a set of submerged cables measure the reflected waves (receivers). The main originality of the system lies in the following points:

Cables are stationary or quasi-stationary with respect


to seabed (named SMAC for Stationary Midwater
Autonomous Cable).

The position of each SMAC is controlled by a pair of


unmanned surface vehicles (Autonomous Recording
Vehicle - ARV).

The typical immersion depth is up to 100m,


significantly larger than in towed streamers (~ 10m).

The separation between receiver lines (400m) is


significantly larger than in towed streamers (~ 50 to
100 m).

The source vessels are not physically linked to the


receiver array. Their position is independent of the
receiver array position.
The nominal definition of the system is as follows. The array is composed of 20 cables, 8 km long, spaced by 400m, leading to a receiving area of 64 km². Each SMAC integrates 4C stations (point receivers) with a 25m spacing. The shooting is done perpendicular to the SMACs, with a distance between shot points of 25m and a spacing between shooting lines of 400m.
The key advantages of the technology over existing techniques are improved seismic data quality (signal-to-noise ratio, high fold, full azimuth, full offset, full bandwidth) with high productivity.

- 4 ARVs
- a control room installed onboard a master vessel
- launch and recovery equipment

Each SMAC is fitted with 4C stations (a hydrophone and three geophones) regularly spaced at a 25m interval. Each 4C station embeds a 3-axis inclinometer to derive the orientation of the geophones with respect to verticality.
Since the spread of the system under test was limited to one or two SMACs, the high fold with full azimuth and full offset was not expected during the trials. The interest of the test was to assess the quality of the data (low noise, full bandwidth), to estimate the system performance in terms of controllability and productivity, and to validate its operability (launch and recovery, sea keeping, fuel autonomy).
All of the equipment was brought to the survey site
onboard the master vessel except three ARVs that were
towed to the site.
Acquisition parameters
The test was performed offshore in the Mediterranean Sea in the vicinity of the GPS point located at 41°45'57.6"N 4°59'58.2"E (Figure 1). The water depth at this location is about 2400m. The geological structure is relatively flat in the first layers, with salt at about 10 km depth (Figure 2).
The source consists of two relatively small air guns: two GI guns of 45 cu. in. and 105 cu. in. The gun depth was set at 7 meters.
Several configurations of the SMAC were tested: lengths from 1.5 km to 2.5 km and depths between 40m and 100m. A variety of shooting lines were executed: inline shooting, cross-line shooting and a 3D survey.

Test setup
The test campaign was the first in situ survey for this new
acquisition method. All system components were tested in
real conditions.
The system under test consists of:

- two SMACs


Figure 1: survey location offshore Mediterranean Sea


During the 3D survey a single SMAC of 1.5 km was used (60 x 4C = 240 channels). The shooting geometry was square: it consisted of 29 cross-lines of 5 km spaced by 200m, yielding a span of 5.6 km with about a 2 km margin on each side (Figure 3). The distance between shot points was 25m. The resulting fold was 8 in the natural bin (12.5 x 12.5m) (Figure 4).
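The fold in the natural bin comes from histogramming source-receiver midpoints into 12.5 m x 12.5 m cells. A sketch of the mechanics follows; the toy geometry (one short receiver line, a few cross-lines of shots) is an illustration, not the actual 29-line layout.

```python
import numpy as np

# Midpoint fold map: for every source-receiver pair, compute the midpoint
# and count pairs falling in each natural bin. Toy geometry, for illustration.

BIN = 12.5  # m, natural bin size

def fold_map(src_xy, rcv_xy, bin_size=BIN):
    """Midpoint fold for all source-receiver pairs, binned on a regular grid."""
    mid = (src_xy[:, None, :] + rcv_xy[None, :, :]) / 2.0   # (nsrc, nrcv, 2)
    mid = mid.reshape(-1, 2)
    ix = np.floor(mid[:, 0] / bin_size).astype(int)
    iy = np.floor(mid[:, 1] / bin_size).astype(int)
    ix -= ix.min(); iy -= iy.min()                          # shift to origin
    fold = np.zeros((ix.max() + 1, iy.max() + 1), dtype=int)
    np.add.at(fold, (ix, iy), 1)                            # unbuffered count
    return fold

rcv = np.column_stack([np.arange(0.0, 500.0, 25.0), np.zeros(20)])  # 20 stations
# cross-lines of shots perpendicular to the cable (toy numbers)
sx, sy = np.meshgrid(np.arange(0.0, 501.0, 100.0), np.arange(-250.0, 251.0, 25.0))
src = np.column_stack([sx.ravel(), sy.ravel()])
fold = fold_map(src, rcv)
```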

Figure 2: geological structure in survey point vicinity


Seismic data quality
The raw seismic data collected were of good quality on all four components: Figure 12 displays a receiver gather of unfiltered data during a crossline shooting. It is noteworthy that even the inline geophones exhibit some useful signal. In this figure the geophone signals are not rotated: the two transverse geophones have no specific orientation and are called transverse geophone #1 and #2. In order to recover verticality, the geophone signals are rotated using the 3-D inclinometer data. Figure 13 displays the geophone signals before rotation during an inline shooting, and Figure 14 displays the geophone signals after rotation for the same sequence. The absence of signal on the crossline geophone proves that the rotation is correctly achieved. During this sequence it is also clearly visible that the inline geophone recorded some usable signal.
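The rotation step described above can be sketched numerically. This is an illustrative 2-D rotation only; the function name, roll-angle convention, and synthetic check below are assumptions, not the actual onboard processing.

```python
import numpy as np

def rotate_to_vertical(t1, t2, roll):
    """Rotate the two unoriented transverse geophone traces into crossline
    and vertical components, given the cable roll angle (radians) measured
    by the 3-D inclinometer. Illustrative 2-D rotation sketch."""
    c, s = np.cos(roll), np.sin(roll)
    crossline = c * t1 - s * t2
    vertical = s * t1 + c * t2
    return crossline, vertical

# Synthetic check: a purely vertical arrival on a cable rolled by 30 degrees
roll = np.deg2rad(30.0)
true_vertical = np.sin(2 * np.pi * 10.0 * np.linspace(0.0, 1.0, 200))
t1 = np.sin(roll) * true_vertical      # projections onto the two
t2 = np.cos(roll) * true_vertical      # unoriented transverse axes
cross, vert = rotate_to_vertical(t1, t2, roll)
```

After rotation the crossline trace is (numerically) zero and the vertical trace is recovered, which mirrors the absence of signal on the crossline geophone seen in Figure 14.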
A 2D stack of the unfiltered raw hydrophone data is presented in Figure 5: useful signal is obtained down to 5.5 seconds, even though the source used is more than 10 times smaller than a typical air-gun array used for traditional marine acquisition.
Adding the hydrophone signal to the vertical geophone signal multiplied by the water's characteristic impedance immediately allows for the separation of down-going and up-going waves (Figure 6), hence removing the receiver ghost. Note that the combination used to generate this figure is a simple addition with a scalar; the outcome is quite remarkable.
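The combination described is the classic PZ summation. A minimal sketch with a synthetic receiver ghost follows; the depth, wavelet, and sign conventions are illustrative assumptions, not the survey's actual processing.

```python
import numpy as np

fs, depth, c = 1000.0, 100.0, 1500.0   # sample rate (Hz), receiver depth (m), water velocity (m/s)
tau = 2 * depth / c                    # two-way ghost delay at vertical incidence (s)
t = np.arange(2048) / fs

def ricker(t, t0, f0=20.0):
    a = (np.pi * f0 * (t - t0)) ** 2
    return (1 - 2 * a) * np.exp(-a)

up = ricker(t, 0.3)                    # up-going arrival at the receiver
ghost = ricker(t, 0.3 + tau)           # surface ghost, polarity reversed on pressure
p = up - ghost                         # hydrophone record
z = up + ghost                         # rho*c*vz: impedance-scaled vertical geophone

up_est = 0.5 * (p + z)                 # simple scaled addition removes the receiver ghost
down_est = 0.5 * (p - z)               # the down-going (ghost) part
```

The scaled addition cancels the ghost exactly in this idealized vertical-incidence model, which is why a simple addition with a scalar suffices in Figure 6.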


Figure 3: 3D survey pre-planned geometry (shooting lines in black and receivers in red)
Figure 4: 3D pre-planned fold map in natural bin (12.5 m x 12.5 m)

Figure 5: stack with hydrophone only (unfiltered raw data)


Figure 6: receiver ghost removal at 100 m; from left to right: hydrophone, vertical geophone, up-going wave, down-going wave


A stack of the hydrophone and of the vertical geophone is provided in Figure 7, along with stacks of the up-going and down-going signals. The receiver ghost is entirely removed with a simple addition at 100 m depth. Figure 8 displays the same conclusion in the frequency domain and proves that the method leads to a full-bandwidth receiver spectrum.

The SMAC assembly tasks, similar to streamer deployment, were executed flawlessly, including auto-test phases. The launch and recovery of an ARV on a master vessel was validated, as well as the ability of the ARVs to be towed to the survey site. The bunkering operation of the drones was also tested.

Figure 9: 3D survey pre-plot vs. actual fold

Figure 7: from left to right: hydrophone stack, vertical geophone stack, up-going wave stack, and down-going wave stack

Figure 10: depth stability around target depth

Figure 8: full bandwidth at 100 m depth: spectrum of hydrophone stack (red), vertical geophone stack (black), and recombined signal stack (green)

System controllability and productivity
The system was tested with respect to its ability to control the position of the array and to be productive. Depth control was achieved with an accuracy of +/- 25 cm around the target depth, after a convergence phase of 30 minutes from the sea surface down to 100 m depth (Figure 10). The 2-D position of the cable center was controlled with an accuracy of several tens of meters (Figure 11).
In terms of coverage, Figure 9 compares the pre-plot fold with the actual fold obtained during the 3D survey. The resulting coverage shows that the system is able to produce efficiently as pre-planned. Some extra coverage is obtained in the top-right corner due to in-fill shooting.
System operability
All main operational modes were tested during the trials. The launch and recovery of the system was performed without any difficulty in sea state 4.


Figure 11: cable center distance to target point

Conclusions
The first at-sea tests of a new marine acquisition system were conducted in the Mediterranean Sea in 2014. The tests proved that the performance of the system was in line with the preliminary simulations. The key aspects of the system were validated: the operability of the system, the capacity to produce data efficiently in a controlled manner, and the capability to deliver high-quality 4C seismic data. The seismic data exhibit high signal-to-noise ratios and low noise on all components, full bandwidth is obtained at any depth between 5 and 100 m, and the technology is able to offer true 4C data with usable inline geophone signals. All technical aspects were qualified by this test campaign. A subsequent paper will cover the first fully operational survey performed in the fall of 2015.

Page 53

Downloaded 12/16/16 to 79.62.242.250. Redistribution subject to SEG license or copyright; see Terms of Use at http://library.seg.org/

Figure 12: raw 4C data receiver gather (crossline shooting); from left to right: hydrophone, inline geophone, transverse geophone #1, and
transverse geophone #2.

Figure 13: geophone records before rotation (receiver gather during inline shooting); from left to right: inline geophone, transverse geophone #1,
and transverse geophone #2.

Figure 14: geophone records after rotation (receiver gather during inline shooting); from left to right: inline geophone, crossline geophone, and
vertical geophone.


Seismic apparition dealiasing using directionality regularization

Fredrik Andersson, Lund University; Kurt Eggenberger, Seismic Apparition GmbH; Dirk-Jan van Manen and Johan O. A. Robertsson, ETH-Zurich / Seismic Apparition GmbH; Lasse Amundsen, NTNU
SUMMARY
Seismic apparition is a technology for decoding simultaneous-source acquisition, where a periodic or aperiodic filter is applied to the different sources in such a way that they can be distinguished by means of aliasing effects. In the case of dense sampling on the source side, this allows for direct separation of the sources. As the sampling is reduced, aliasing effects can appear in the reconstructions unless additional measures are taken. In this paper we make use of directionality penalties to dealias the decoded data. The directionality of the wavefield is measured using the lower frequency components, where the aliasing effects are absent. The decoding procedure can then be formulated as a least-squares problem using the estimated directionality structure.

INTRODUCTION
Simultaneous shooting is a major trend within the seismic acquisition industry, with the potential to increase the rate at which seismic data can be acquired as well as to improve subsurface sampling by increased shot density (Ellis, 2013). Originally established for land seismic data acquisition (Howe et al., 2008; Bouska, 2009), the idea is to trigger two or more (encoded) sources sufficiently close together in time so that the recorded signal energy interferes. The interference of signals is handled in data processing to decode or separate the information generated by each source. To date, the main principle in marine seismic multishooting has been to shoot with random dithering for one or several sources acquiring seismic data simultaneously. The random dithers are known and can be removed in processing to generate seismic data where all reflections generated by that source are coherent (e.g., in the common-offset domain), whereas the signals from the other source(s) have a random time distribution. One popular method to decode such simultaneous-source data is to consider the data separation as an underdetermined inverse problem, which can be solved through an iterative procedure under additional constraints such as sparsity and coherency. Moore et al. (2012) reported a narrow-azimuth survey with two source arrays firing simultaneously, one source array being time-dithered. The sources were separated using a modeling and inversion algorithm (Ji et al., 2012). Langhammer and Bennion (2015) reported on triple-source simultaneous shooting to achieve higher-density seismic; they used an adaptive subtraction method for source separation.

Wavefield signal apparition, or seismic apparition, is a new method to sample time-discrete signals that allows for the separation of interfering signals from multiple sources. The theory of wavefield signal apparition is discussed in Robertsson et al. (2016). In essence, by changing a well-sampled conventional source sequence, where the wavefield in the spectral f-k domain is present in a cone around the spatial wavenumber k = 0, to a source sequence where every other measurement is exposed to a filtering (e.g., a time shift), the wavefield in the spectral f-k domain will be present in two cones: one around the central spatial frequency, and the other around the Nyquist wavenumber. Whereas the separation works perfectly in the non-aliased bandwidth (Robertsson et al., 2016), wavefield dealiasing techniques need to be employed for sparser source sampling to precondition the data appropriately for the wavefield signal apparition effect.

In this paper the dealiasing is performed in two steps. First, we make use of the fact that since the seismic data have cone support in the frequency domain, the apparition will work accurately up to a certain temporal frequency (or, more generally, in some bounded subdomain of the original cone). This allows for perfect separation of filtered versions of the sources. Using these filtered versions, we can estimate the local directionality, and in this way we can regularize the full separation problem by penalizing energy that goes into the wrong directions. The local directionality can be estimated using structure tensors (Knutsson, 1989). Structure tensors have been used in seismic data analysis, for instance, in Hale (2011); Andersson et al. (2015); Andersson and Duchkov (2013); Ramirez et al. (2015). Given the local directionality structure, we regularize the source separation problem by imposing both a cone constraint on the frequency support of the sources and a penalty on deviations from the estimated local directions. In this way we can express the separation problem in terms of a linear system that we solve using an iterative solver. We illustrate the potential of the proposed method on a synthetic seismic data set from the North Sea.
METHODS AND THEORY

Seismic apparition background

Let us begin by introducing notation and recapitulating the theory for regular seismic apparition. We will use the notation

    \hat{f}(\omega) = \int f(x)\, e^{-2\pi i x\omega}\, dx

for the Fourier transform in one variable, and consequently f̂(ω, ξ) for the Fourier transform of a two-dimensional function f(t, x) with a time (t) and spatial (x) dependence.

Suppose that f₁ = f₁(t, x) and f₂ = f₂(t, x) are two functions with frequency support in two cones of the form

    \xi^2 \le \frac{\omega^2}{c^2}.    (1)

The constraint comes from assuming that the functions f₁ and f₂ represent the recording of a wavefield at time t, at a fixed receiver coordinate, source coordinate x, and fixed depth, where the recorded wavefield at this depth is a solution to the homogeneous wave equation with a velocity c. The wavefields are generated at discrete values of x which we assume to be equally spaced, i.e., of the form x = k∆x.

We now assume that the two sources are used simultaneously, in such a way that their combination takes the form

    d(t, k) = f_1(t, k\Delta_x) + f_2\big(t - \Delta_t(-1)^k,\, k\Delta_x\big)
            = \mathcal{F}(\hat{f}_1)(t, k\Delta_x) + \mathcal{F}\big(\hat{f}_2(\omega, \xi)\, e^{-2\pi i (-1)^k \Delta_t \omega}\big)(t, k\Delta_x),

where F denotes the inverse Fourier transform; i.e., the recorded data are modeled as a sum of the two functions, where one of them has been subject to a periodic time shift. In a more general version, filtering operations more general than time shifts can be applied. Let a_k be filter operators (acting on the time variable) where the k dependence is such that it only depends on whether k is odd or even, i.e., a_k = a_{k (mod 2)}.
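The effect of the alternating time shift can be verified numerically. The sketch below (all parameters illustrative) shows that apparition encoding creates a replica of the wavefield energy at the Nyquist wavenumber in the f-k domain, which the un-encoded data do not have.

```python
import numpy as np

nt, nx = 256, 64
dt, delta_t = 0.004, 0.008      # time sampling and apparition shift (s), illustrative
t = np.arange(nt) * dt
f0 = 25.0                       # a single temporal frequency (Hz) for clarity

# A wavefield identical on every trace (spatial wavenumber xi = 0)
f = np.tile(np.cos(2 * np.pi * f0 * t), (nx, 1))

# Apparition encoding: pure time shift of +/- delta_t on alternating traces
d = np.array([np.cos(2 * np.pi * f0 * (t - (-1) ** k * delta_t))
              for k in range(nx)])

# FFT over the trace index k: the encoded data acquire a replica at the
# Nyquist wavenumber (index nx // 2) in addition to the original wavenumber
F = np.fft.fft(f, axis=0)
D = np.fft.fft(d, axis=0)
e0, eNy = np.abs(D[0]).max(), np.abs(D[nx // 2]).max()
```

The original data f have energy only at wavenumber index 0, while the encoded data d split their energy between index 0 (weighted by cos(2π∆tω)) and the Nyquist index (weighted by sin(2π∆tω)).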

Figure 1: The function D₁(ω, ξ) from (3) for a set of apparition simultaneous-source synthetic data with noise. The right panel shows in green the region described by (1), while the central green part of the right panel shows the diamond-shaped condition (4); the aliasing/mixing effects from (3) and (5) are highlighted in red.

    d(t, k) = f_1(t, k\Delta_x) + a_k \ast f_2\big(t - \Delta_t(-1)^k,\, k\Delta_x\big)
            = \mathcal{F}(\hat{f}_1)(t, k\Delta_x) + \int \Big( \int \hat{a}_k(\omega)\, \hat{f}_2(\omega, \xi)\, e^{2\pi i k \Delta_x \xi}\, d\xi \Big)\, e^{2\pi i (t - (-1)^k \Delta_t)\omega}\, d\omega
            = \mathcal{F}(\hat{f}_1)(t, k\Delta_x) + T\mathcal{F}(\hat{f}_2)(t, k\Delta_x).    (2)

Now, due to the assumption of conic support of f̂₁ and f̂₂, it holds that if (ω, ξ) satisfies the (diamond-shaped) condition

    |\xi| \le \frac{|\omega|}{c}, \qquad |\xi| < \frac{1}{2\Delta_x} - \frac{|\omega|}{c},    (4)

then only the terms with k = 0 contribute to the sums in relation (3) derived below.

The Poisson sum formula

    \sum_{k=-\infty}^{\infty} f(k) = \sum_{k=-\infty}^{\infty} \hat{f}(k)

can be modified (by using the Fourier modulation rule) to

    \sum_{k=-\infty}^{\infty} f(k)\, e^{-2\pi i k \xi} = \sum_{k=-\infty}^{\infty} \hat{f}(\xi + k).

By the standard properties of the Fourier transform it is straightforward to show that

    \sum_{k=-\infty}^{\infty} f(k\Delta_x + x_0)\, e^{-2\pi i (k\Delta_x + x_0)\xi} = \frac{1}{\Delta_x} \sum_{k=-\infty}^{\infty} \hat{f}\Big(\xi + \frac{k}{\Delta_x}\Big)\, e^{2\pi i x_0 \frac{k}{\Delta_x}}

holds for the spatial sampling parameter ∆x and some fixed spatial shift x₀. Using this fact we can now derive the relation

    D_1(\omega, \xi) = \int \sum_{k=-\infty}^{\infty} d(t, k)\, e^{-2\pi i (k\Delta_x \xi + t\omega)}\, dt
    = \sum_{k=-\infty}^{\infty} \hat{f}_1\Big(\omega, \xi + \frac{k}{\Delta_x}\Big) + \sum_{k=-\infty}^{\infty} \hat{f}_2\Big(\omega, \xi + \frac{k}{2\Delta_x}\Big)\, \frac{\hat{a}_0(\omega)\, e^{-2\pi i \Delta_t \omega} + (-1)^k\, \hat{a}_1(\omega)\, e^{2\pi i \Delta_t \omega}}{2}.    (3)


If (4) holds, then only the terms with k = 0 contribute. This produces the simplified relation

    D_1(\omega, \xi) = \hat{f}_1(\omega, \xi) + \hat{f}_2(\omega, \xi)\, \frac{\hat{a}_0(\omega)\, e^{-2\pi i \Delta_t \omega} + \hat{a}_1(\omega)\, e^{2\pi i \Delta_t \omega}}{2}.

In a similar fashion this holds true for

    D_2(\omega, \xi) = \int \sum_{k} d\big(t + \Delta_t(-1)^k,\, k\big)\, e^{-2\pi i (k\Delta_x \xi + t\omega)}\, dt
    = \hat{f}_1(\omega, \xi)\, \cos(2\pi \Delta_t \omega) + \hat{f}_2(\omega, \xi)\, \frac{\hat{a}_0(\omega) + \hat{a}_1(\omega)}{2}.    (5)

This implies that for each pair (ω, ξ) satisfying (4), the values of f̂₁(ω, ξ) and f̂₂(ω, ξ) can be obtained by solving the linear system of equations

    \begin{pmatrix} 1 & \frac{\hat{a}_0(\omega)\, e^{-2\pi i \Delta_t \omega} + \hat{a}_1(\omega)\, e^{2\pi i \Delta_t \omega}}{2} \\ \cos(2\pi \Delta_t \omega) & \frac{\hat{a}_0(\omega) + \hat{a}_1(\omega)}{2} \end{pmatrix} \begin{pmatrix} \hat{f}_1(\omega, \xi) \\ \hat{f}_2(\omega, \xi) \end{pmatrix} = \begin{pmatrix} D_1(\omega, \xi) \\ D_2(\omega, \xi) \end{pmatrix}.    (6)
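In the special case of pure time shifts (a₀ = a₁ = identity, so â₀ = â₁ = 1), the system (6) becomes real-valued and can be solved pointwise per frequency. A minimal sketch under that assumption (the parameter values are illustrative):

```python
import numpy as np

def decode_pair(D1, D2, omega, delta_t):
    """Solve the 2x2 system (6) at one (omega, xi) point for the special
    case of pure time shifts (a0 = a1 = identity, so a0_hat = a1_hat = 1).
    The matrix is singular at the notch frequencies where
    sin(2*pi*delta_t*omega) = 0."""
    c = np.cos(2 * np.pi * delta_t * omega)
    A = np.array([[1.0, c], [c, 1.0]])
    return np.linalg.solve(A, np.array([D1, D2]))

# Round-trip check away from the notches
delta_t, omega = 0.008, 31.25       # shift (s) and frequency (Hz), illustrative
f1_true, f2_true = 0.7, -1.3
c = np.cos(2 * np.pi * delta_t * omega)
D1 = f1_true + c * f2_true
D2 = c * f1_true + f2_true
f1_hat, f2_hat = decode_pair(D1, D2, omega, delta_t)
```

The determinant of this matrix is sin²(2π∆tω), so the decoding is well conditioned away from the notch frequencies, motivating the general filters a_k in the text.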

This provides information on how to recover the wavefields f₁ and f₂ for frequencies either up to the limit c/(4∆x) or, more generally, satisfying the (diamond-shaped) condition (4). The overlaps are illustrated in Figure 1, where the right panel shows in color the cone given by (1), with its central part showing the diamond-shaped region described by (4). In this approach the decoding takes place by considering the data available in the central cone of Figure 1.
An alternative approach for reconstruction is to note that if either of the support constraints is satisfied, then for the values of (ω, ξ) of interest, (3) reduces to

    D_1\Big(\omega, \xi - \frac{1}{2\Delta_x}\Big) = \hat{f}_2(\omega, \xi)\, \frac{\hat{a}_0(\omega)\, e^{-2\pi i \Delta_t \omega} - \hat{a}_1(\omega)\, e^{2\pi i \Delta_t \omega}}{2},
implying that f̂₂(ω, ξ) is recoverable from D₁(ω, ξ − 1/(2∆x)). In a similar fashion, D₂(ω, ξ − 1/(2∆x)) can be used to recover f̂₁(ω, ξ). In this way, the decoding can be achieved by direct consideration of the data in the shifted cones illustrated in Figure 1.
High frequency dealiasing using directional penalties

From (6) it is possible to recover the functions f₁ and f₂ partially. Let w be a filter such that ŵ has support inside the region described by (4). It is then possible to recover the functions

    h_1 = w \ast f_1, \qquad h_2 = w \ast f_2.    (7)

For values of (ω, ξ) outside the region described by (4), it is not possible to determine f̂₁(ω, ξ) and f̂₂(ω, ξ) uniquely without imposing additional constraints. Typically, seismic data can locally be well described by sums of plane waves with different directions. The plane waves carry the imprint of the source wavelet, and according to ray theory the data from such a plane event should have the same directionality over the frequency range that covers the source wavelet. We will use this information to construct a directionality penalty that we can use for the separation of the two wavefields f₁ and f₂ from the simultaneous-source data d.
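One simple choice of the filter w is a sharp mask over the diamond-shaped region (4) in the (ω, ξ) plane. The sketch below (grid parameters illustrative, and a sharp rather than tapered mask for brevity) builds such a mask and applies it as a projection:

```python
import numpy as np

nt, nx = 256, 64
dt, dx = 0.004, 25.0      # time sampling (s) and source sampling (m), illustrative
c = 1500.0                # water velocity (m/s)

omega = np.fft.fftfreq(nt, d=dt)      # temporal frequencies (Hz)
xi = np.fft.fftfreq(nx, d=dx)         # spatial wavenumbers (1/m)
W, XI = np.meshgrid(omega, xi, indexing="ij")

# Diamond-shaped region (4): inside the signal cone (1) and clear of the
# replica cone centered at the Nyquist wavenumber 1/(2*dx)
mask = (np.abs(XI) <= np.abs(W) / c) & (np.abs(XI) < 1.0 / (2 * dx) - np.abs(W) / c)

def bandlimit(f):
    """h = w * f: keep only the part of f supported in the diamond (4)."""
    return np.real(np.fft.ifft2(mask * np.fft.fft2(f)))

rng = np.random.default_rng(0)
f1 = rng.standard_normal((nt, nx))
h1 = bandlimit(f1)
```

Because the mask is an orthogonal projection in the Fourier domain, applying it twice changes nothing; the h₁, h₂ obtained this way are the alias-free bandpassed versions used below to estimate directionality.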
One way of estimating local directionality is by means of structure tensors. For the two known wavefields h₁ and h₂ the corresponding structure tensors are defined as

    T_1(t, x) = \begin{pmatrix} K \ast \big(\tfrac{\partial h_1}{\partial t}\big)^2 (t, x) & K \ast \big(\tfrac{\partial h_1}{\partial t}\tfrac{\partial h_1}{\partial x}\big)(t, x) \\ K \ast \big(\tfrac{\partial h_1}{\partial t}\tfrac{\partial h_1}{\partial x}\big)(t, x) & K \ast \big(\tfrac{\partial h_1}{\partial x}\big)^2 (t, x) \end{pmatrix},

and similarly for T₂ and h₂. Above, the function K describes a smooth, localizing window, for instance a Gaussian. The eigenvalues of T₁ and T₂ will point-wise describe the local energy in the directions of maximum and minimum variation, and the associated eigenvectors contain the corresponding directions. The tensors are computed as elementwise convolutions of the outer product of the gradient of the underlying function, and this directly defines the generalization to higher dimensions. For the sake of simplicity we describe here the two-dimensional case.

Let λ¹₁(t, x) and λ¹₂(t, x) be the eigenvalues of T₁(t, x), and let e¹₁(t, x) and e¹₂(t, x) denote the corresponding eigenvectors. If the wavefield f₁ only has energy in one direction in the vicinity of (t, x) covered by K, then this implies that

    \lambda^1_2(t, x) = 0,    (8)

which in turn means that ∇f₁ · e¹₂ = 0. This property is clearly not always satisfied (although counterparts in higher dimensions hold more frequently with increased dimensionality), but it is a property that can be used as a penalty from which the simultaneous-source data can be decoded. Even if (8) is not satisfied, the relation can be used to minimize the energy of the decoded data in the directions carried from h₁ and h₂, respectively.

From (8) we have a condition for when one of the eigenvalues vanishes. However, for the more general case we need a penalty function that can deal with cases where the components gradually change; at places where the eigenvalues are equal in size, an equal amount of penalty should be applied to the two directions. One such choice is to define

    S(T_m) = \sum_{j=1}^{2} s_j(\lambda)\, e^m_j (e^m_j)^T, \qquad \text{with} \qquad T_m = \sum_{j=1}^{2} \lambda_j\, e^m_j (e^m_j)^T,

and

    s_1(\lambda) = \frac{1}{2}\,\frac{\lambda_2}{\lambda_1}\, \exp\Big(\frac{1}{2}\Big(1 - \frac{\lambda_2^2}{\lambda_1^2}\Big)\Big), \qquad s_2(\lambda) = 1 - s_1(\lambda).

These functions have the property that

    \lim_{\lambda_2/\lambda_1 \to 0} \big(s_1, s_2\big) = (0, 1), \qquad \text{and} \qquad s_1(\lambda_1, \lambda_1) = s_2(\lambda_1, \lambda_1),

implying that (8) will be satisfied in the case where there is locally only energy in one direction, and that an equal amount of penalty will be applied in the case where there is the same amount of energy in both directions.

This definition now allows for the generalization of (8) to the penalty functionals

    \iint \big(\nabla f_1\big)^T S(T_1)\, \nabla f_1\, (t, x)\, dt\, dx \qquad \text{and} \qquad \iint \big(\nabla f_2\big)^T S(T_2)\, \nabla f_2\, (t, x)\, dt\, dx,

respectively, for the two wavefields. The expressions above describe the energy in the undesirable directions, given the knowledge of the bandpassed versions h₁ and h₂, respectively.

Before we use these expressions to define a minimization problem that describes the decoding procedure, we incorporate the original cone condition (1) in the formulation. To this end, we will now work with sampled representations of f̂₁ and f̂₂. By abuse of notation, we will use f̂₁ and f̂₂ to denote these sampled values. We let Fc denote the inverse Fourier operator restricted to functions supported on the cone defined by (1), and recall the definition of the apparition operator T from (2). The relationship (2) is then satisfied for the (non-unique) solutions to

    \min_{\hat{f}_1, \hat{f}_2} \big\| \mathcal{F}_c \hat{f}_1 + T \mathcal{F}_c \hat{f}_2 - d \big\|^2,

with the additional constraint that f̂₁ and f̂₂ have support on the cone defined by (1). To obtain a unique approximate solution, we now add the directionality penalties and consider

    \min_{\hat{f}_1, \hat{f}_2} \big\| \mathcal{F}_c \hat{f}_1 + T \mathcal{F}_c \hat{f}_2 - d \big\|^2 + \iint \big(\nabla \mathcal{F}_c \hat{f}_1\big)^T S(T_1) \big(\nabla \mathcal{F}_c \hat{f}_1\big)(t, x)\, dt\, dx + \iint \big(\nabla \mathcal{F}_c \hat{f}_2\big)^T S(T_2) \big(\nabla \mathcal{F}_c \hat{f}_2\big)(t, x)\, dt\, dx,    (9)
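The structure-tensor construction above can be sketched compactly. In this sketch the window K is a periodic Gaussian implemented with FFTs, and all parameters are illustrative; for a single local plane wave the smallest eigenvalue is close to zero, which is exactly the property (8) that the penalty exploits.

```python
import numpy as np

def smooth(a, sigma=3.0):
    """K*: elementwise smoothing with a (periodic) Gaussian window via FFT."""
    nt, nx = a.shape
    t = np.minimum(np.arange(nt), nt - np.arange(nt))
    x = np.minimum(np.arange(nx), nx - np.arange(nx))
    g = np.exp(-0.5 * (t[:, None] ** 2 + x[None, :] ** 2) / sigma ** 2)
    g /= g.sum()
    return np.real(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(g)))

def structure_tensor(h, sigma=3.0):
    """Smoothed outer products of the gradient of h (2-D case)."""
    ht, hx = np.gradient(h)
    return smooth(ht * ht, sigma), smooth(ht * hx, sigma), smooth(hx * hx, sigma)

# A single local plane wave: the tensor should be numerically rank one
nt, nx = 64, 64
tt, xx = np.meshgrid(np.arange(nt), np.arange(nx), indexing="ij")
h1 = np.cos(2 * np.pi * (0.1 * tt + 0.05 * xx))

Ttt, Ttx, Txx = structure_tensor(h1)
T = np.array([[Ttt[32, 32], Ttx[32, 32]],
              [Ttx[32, 32], Txx[32, 32]]])
lam, vec = np.linalg.eigh(T)      # ascending: lam[0] is the small eigenvalue
ratio = lam[0] / lam[1]           # close to 0 when only one local direction
```

The dominant eigenvector aligns with the local gradient direction of the plane wave, and the eigenvalue ratio plays the role of the λ₂/λ₁ argument of the weight functions s₁ and s₂.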

with the same cone constraint. To find the minima of (9), we compute the Fréchet derivatives of the objective function with respect to the functions f̂₁ and f̂₂ and equate them to zero. The first term in (9) is straightforward to derive; concerning the other two terms, it is readily verified using partial integration that their Fréchet derivatives are described by the elliptic operators

    \mathcal{D}_m(f) = -\nabla \cdot \big( S(T_m)\, \nabla f \big).

To formulate the solution to (9), let b₁ = Fc* d and b₂ = Fc* T* d. Furthermore, let us introduce

    A_F = \begin{pmatrix} \mathcal{F}_c^* \mathcal{F}_c & \mathcal{F}_c^* T \mathcal{F}_c \\ \mathcal{F}_c^* T^* \mathcal{F}_c & \mathcal{F}_c^* T^* T \mathcal{F}_c \end{pmatrix} \qquad \text{and} \qquad A_D = \begin{pmatrix} \mathcal{F}_c^* \mathcal{D}_1 \mathcal{F}_c & 0 \\ 0 & \mathcal{F}_c^* \mathcal{D}_2 \mathcal{F}_c \end{pmatrix}.

Equating the Fréchet derivatives of (9) with respect to f̂₁ and f̂₂ to zero then yields the linear relationship

    \big( A_F + A_D \big) \begin{pmatrix} \hat{f}_1 \\ \hat{f}_2 \end{pmatrix} = \begin{pmatrix} b_1 \\ b_2 \end{pmatrix}    (10)

for the solution of (9). This equation can be solved using an iterative solver for linear equations, for instance the conjugate gradient method. The operators in A_F are realized using standard FFTs, and the operators in A_D are computed using a combination of Fourier transforms and differential schemes, which may also be implemented using FFTs.

RESULTS

We have tested the proposed method on a synthetic data set representative of North Sea measurements. This data set is blended using apparition shifts according to (2), and it is depicted in the top row of Figure 2, where the left panel shows d (where f₁ is non-apparated) and the right panel T d (where f₂ is non-apparated). The original data are shown in the middle row of Figure 2, with f₁ in the left panel and f₂ in the right panel. Given the blended data, filtered versions h₁ and h₂ are computed using (6) and (7). Structure tensors are then computed from these, and from here the penalty tensors S(T₁) and S(T₂) are obtained. The minimization problem (9) is then solved by applying the conjugate gradient method to (10), and the results are shown in the bottom row of Figure 2, with the reconstruction of f₁ in the left panel and the reconstruction of f₂ in the right panel. Comparing the reference data in Figure 2 (middle row) with the separated data in Figure 2 (bottom row), we see that the two sources are well separated. The parts that are not captured are mainly those that are aliased already in the reference data set.

Figure 2: Top row: blended data, left panel d and right panel T d. Middle row: original data, left panel f₁ and right panel f₂. Bottom row: decoded data, left panel f₁rec and right panel f₂rec. See text for further explanations.

CONCLUSIONS

The current main principle in seismic multishooting has been to shoot with random dithering for one of the sources. We challenge this thinking by demonstrating that the methodology of seismic apparition in combination with directional dealiasing is a powerful combination that makes it possible to rely on periodic shooting for accurate separation of the data. The proposed dealiasing method using directional regularization was applied to a synthetic North Sea data set to appropriately precondition the data to exploit the wavefield seismic apparition effects. The numerical example shows a low-residual simultaneous-source separation in a realistic acquisition scenario. This multishooting, dealiasing, and decoding method can be generalized to more than two sources.

ACKNOWLEDGEMENTS

We thank Seismic Apparition GmbH for permission to publish proprietary, patent-pending work.
REFERENCES

Andersson, F., and A. A. Duchkov, 2013, Extended structure tensors for multiple directionality estimation: Geophysical Prospecting, 61, 1135-1149, http://dx.doi.org/10.1111/1365-2478.12067.
Andersson, F., Y. Morimoto, and J. Wittsten, 2015, A variational formulation for interpolation of seismic traces with derivative information: Inverse Problems, 31, 055002, http://dx.doi.org/10.1088/0266-5611/31/5/055002.
Bouska, J., 2009, Distance separated simultaneous sweeping: Efficient 3D vibroseis acquisition in Oman: 79th Annual International Meeting, SEG, Expanded Abstracts, 1-5, http://dx.doi.org/10.1190/1.3255248.
Ellis, D., 2013, Simultaneous source acquisition: Achievements and challenges: Presented at the 75th EAGE Annual International Conference & Exhibition, SPE EUROPEC 2013, http://dx.doi.org/10.3997/2214-4609.20130080.
Hale, D., 2011, Structure-oriented bilateral filtering of seismic images: 81st Annual International Meeting, SEG, Expanded Abstracts, http://dx.doi.org/10.1190/1.3627947.
Howe, D., A. Allen, M. Foster, I. Jack, and B. Taylor, 2008, Independent simultaneous sweeping: Presented at the 70th EAGE Annual International Conference and Exhibition, SPE EUROPEC 2008, http://dx.doi.org/10.3997/2214-4609.20147588.
Ji, Y., E. Kragh, and P. Christie, 2012, A new simultaneous source separation algorithm using frequency-diverse filtering: 82nd Annual International Meeting, SEG, Expanded Abstracts, http://dx.doi.org/10.1190/segam2012-0259.1.
Knutsson, H., 1989, Representing local structure using tensors: 6th Scandinavian Conference on Image Analysis, Linköping University Electronic Press, 244-251.
Langhammer, J., and P. Bennion, 2015, Triple-source simultaneous shooting (TS3), a future for higher density seismic?: Presented at the 77th EAGE Annual International Conference and Exhibition, Extended Abstracts.
Moore, I., D. Monk, L. Hansen, and C. Beasley, 2012, Simultaneous sources: The inaugural full-field, marine seismic case history from Australia: ASEG Extended Abstracts, 2012, 1-4, http://dx.doi.org/10.1071/ASEG2012ab181.
Ramirez, A., F. Andersson, T. Wiik, and P. Riste, 2015, Data-driven interpolation of multicomponent data by directionality tensors: Presented at the 77th EAGE Annual International Conference and Exhibition, Extended Abstracts.
Robertsson, J. O. A., L. Amundsen, and A. S. Pedersen, 2016, Wavefield signal apparition, part I: Theory: Presented at the 78th EAGE Annual International Conference and Exhibition, Extended Abstracts.


Which WATS? Design and evaluation of an efficient Bi-WATS geometry


J. Colonge, L. Di Pietro*, R. Lencrerot (TOTAL S.A.)
Summary
Wide Azimuth Towed Streamer (WATS) acquisition plays an increasingly important role in salt-affected areas to improve subsalt image quality. A thorough feasibility study is necessary to optimize the required design and maximize the benefit-to-cost ratio. With today's High Performance Computers (HPC) capability, full-wave modeling is suitable for feasibility studies involving complex acquisition designs and elaborate geological models. The purpose of this paper is to describe the process that leads to an efficient WATS geometry using an innovative model building. The originality of the employed model lies in small density contrasts inserted at target level that allow a qualitative assessment of the imaging ability of each tested acquisition design in this specific geological context.

Introduction
Wide Azimuth (WAZ) geometries have become a standard technique for acquiring seismic data in subsalt areas such as the Gulf of Mexico or the Gulf of Guinea. Compared to Narrow Azimuth (NAZ) geometries, WAZ geometries provide a major uplift in terms of illumination and imaging, as they are characterized by a higher fold and a richer azimuth distribution. However, considering the investment that such an acquisition represents, optimizing the design is paramount, both to ensure that the acquisition meets the imaging objectives and to minimize the survey costs. Due to their limitations in properly handling complex geology, classic ray-tracing modeling tools have been found insufficient to provide robust support for the optimization of advanced survey geometries. Looking for more reliability, 3D acoustic full-wave modeling has therefore been used to design and optimize wide-azimuth marine acquisition in a complex geological context. Thanks to the ever-growing HPC capacity, several cumbersome marine geometries could be evaluated within a reasonable time frame.

Model building
The acoustic model is composed of a 3D velocity cube and a 3D density cube. The velocity field contains all the information related to the geological complexity of the area, whereas the density field is a layer-cake model with 3D structures inserted at target level to evaluate the illumination capacity of different acquisition designs. The velocity cube is typically built with two main elements: a background PSDM model, which brings the kinematic information, and perturbations aiming at representing the overburden geological heterogeneity and introducing scattering noise in the synthetic data. Those perturbations can typically be high-frequency perturbations related to the actual seismic data, a reservoir grid integrating sonic logs kriged to the model, or a heterogeneous salt body (Lencrerot et al., 2016).

Figure 1: interpreted turbiditic channels inserted into the FD model.
The density field is a layer-cake model built from regional horizons. The aim is to highlight the main regional geological boundaries well marked on the seismic data. On top of that, small-scale objects are inserted at target level for illumination assessment; these can be, for instance, a barcode structure or localized density contrasts representing stratigraphic elements such as channels or fairways. These elements are used to qualitatively evaluate the imaging and detection capability of each assessed acquisition design (Figure 1).
Modeling results and acquisition design building
The first step of this study was to assess the added value of a WAZ acquisition compared to the legacy NAZ. The results showed a significant illumination uplift with any Wide Azimuth Towed Streamer (WATS) geometry compared to the existing Narrow Azimuth Towed
Figure 2: schematic drawing of the Octopus-type geometry (left) and the Mad-Dog-type geometry (right) with four tiles. Tiles are built up by successive passes of the streamer vessel away from the source sail line. Both source vessels sail on the same line as many times as the required number of tiles.

Streamers (NATS) geometry. The analysis was then focused on optimizing the WATS design, so several scenarios were investigated with different parameters (i.e., number of tiles, different sail line intervals (SLI), and different acquisition azimuths). For each scenario, the streamer configuration (or tile) remains the same: 12 streamers of 8000 m in length with 100 m separation between streamers. Two designs were chosen as starting points for the modeling (Figure 2):
The Mad-Dog type geometry
The Octopus type geometry
These two geometries were compared for similar fold through FW modeling. The results show similar illumination but, in this specific case, a significant improvement in signal-to-noise ratio for the Mad-Dog type geometry. Furthermore, the need for two streamer vessels with the Octopus type layout, for the same fold effort, is a considerable drawback from an operational and economical point of view. This is why the Mad-Dog type geometry was retained for the rest of the study.
Hence, comparisons between Mad-Dog geometries with 3 and 4 Tiles and SLI of 300 m and 600 m were carried out.
Results of the comparison between 300 m and 600 m SLI show a significant enhancement of the signal-to-noise ratio with a SLI of 300 m, mainly related to the increase in cross-line fold from 3 to 6 for a Mad-Dog 3-Tile geometry.
This improvement leads to better event continuity, with a clearer base of salt and sub-salt image as well as better delimited channels.
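As a rough sanity check on these numbers (not the authors' actual fold computation), the cross-line fold of a WATS geometry can be approximated from the tile width and the sail line interval; under the simplifying assumptions commented below, the quoted uplift from 3 to 6 falls out directly:

```python
# Illustrative cross-line fold estimate for a WATS geometry. Simplified:
# assumes uniform midpoint coverage and ignores taper at the spread edges.
def crossline_fold(n_tiles, n_streamers, streamer_sep, sli):
    tile_width = n_streamers * streamer_sep      # cross-line extent of one tile (m)
    midpoint_width = n_tiles * tile_width / 2.0  # midpoints sit halfway to the source line
    return midpoint_width / sli                  # sail lines stacked into each midpoint bin

# Mad-Dog 3-Tile case from the text: 12 streamers, 100 m separation.
print(crossline_fold(3, 12, 100, 600))  # 600 m SLI -> fold 3
print(crossline_fold(3, 12, 100, 300))  # 300 m SLI -> fold 6
```

Halving the SLI doubles the number of sail lines contributing to each cross-line midpoint bin, hence the doubled fold.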
The geometry with 4 Tiles was then compared to the one with 3 Tiles. The two geometries were evaluated in two orthogonal directions.

2016 SEG
SEG International Exposition and 86th Annual Meeting

Results show that adding one tile increases the range of illuminated azimuths in the cross-line direction and therefore improves the imaging of structures oriented perpendicular to the acquisition direction. Our initial base case was therefore a dense Mad-Dog type acquisition with 4 Tiles (MD4T) and 300 m SLI.
Another key parameter is the selection of the acquisition direction. Two headings corresponding to the dip and strike directions of the main geological objects of the area were tested. As shown in Figure 3, each direction gives better illumination than the other, but in different areas, i.e., there is no preferential direction. This observation highlights the fact that the polarization of the azimuth distribution, which only depends on the acquisition direction, has a strong impact on the illumination and imaging of subsalt targets, especially in such an area without any predominant acquisition direction.
This polarization issue was overcome by designing a Bi-WATS geometry, composed of two orthogonal WAZ acquisitions. The non-symmetry of the acquisition due to the inherent geometry of WATS designs (sparse sampling in the cross-line direction compared to dense sampling in the inline direction) is indeed reduced in two directions and is therefore a new step towards the perfect design: a full-azimuth, isometrically sampled acquisition. To validate this assumption, four different cases were modeled and evaluated on amplitude maps extracted at channel level:
MD3T with 300 m SLI, acquired in N57.5°
MD4T with 300 m SLI, acquired in N57.5°
Bi-MD3T with 600 m SLI
Bi-MD4T with 600 m SLI



Figure 3: FD modeling results and model. Upper left: time slice for a dense Mad-Dog type acquisition in one direction. Upper right: same layout for the orthogonal direction. Lower left: density model. Lower right: time slice for the Bi-WAZ configuration. Green circles highlight areas with better imaging compared to orange circles. The Bi-WAZ geometry combines positive results from both directions.

As expected, FD modeling clearly demonstrates that the bi-directional acquisition combines the benefits from each azimuth. It also shows that a sparse bi-directional Mad-Dog acquisition with 3 Tiles returns an imaging quality equivalent to our initial base case. Indeed, most of the seismic information recorded by the 4th Tile of a mono-directional WATS is captured by the orthogonal direction of a Bi-WATS design. Also, Bi-WATS with 3 and 4 Tiles return similar target illumination, proving that the 4th Tile is optional.
Even though Bi-WATS improves the imaging quality in this complex geological environment, its cost has to be carefully evaluated and benchmarked against other designs. Such a challenging acquisition layout has to be justified not only on the technical side, but also from the cost-effectiveness aspect. Hence, simulations taking into account several operational parameters, such as the amount of stand-by, line change duration, vessel speed, run-in and run-out length, and infill rate, have been carried out. The results of those simulations show that even with two directions of acquisition, the Bi-WATS design reduces the acquisition


cost, duration, and thus HSE exposure by approximately 20% compared to our initial base case layout.
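The operational simulations themselves are not detailed in the abstract; a toy estimator in the same spirit might combine the listed parameters as below. The function name and every numeric value are illustrative placeholders, not the study's figures:

```python
# Toy acquisition-duration estimator (illustrative only; parameters and
# values are hypothetical, not those used in the Bi-WATS study).
def survey_days(n_lines, line_km, vessel_kmh, runin_km, runout_km,
                line_change_h, standby_frac, infill_frac):
    sail_h = n_lines * (line_km + runin_km + runout_km) / vessel_kmh
    turn_h = (n_lines - 1) * line_change_h         # line changes between sail lines
    prod_h = (sail_h + turn_h) * (1.0 + infill_frac)  # extra passes for infill
    total_h = prod_h / (1.0 - standby_frac)        # stand-by inflates calendar time
    return total_h / 24.0

# e.g. 100 lines of 40 km at 8 km/h, 3 km run-in/run-out, 3 h line changes,
# 15% stand-by and 10% infill:
print(round(survey_days(100, 40, 8, 3, 3, 3, 0.15, 0.10), 1))
```

Comparing two candidate geometries then amounts to calling such an estimator with each design's line count and length, which is how a cost delta of the order quoted above could be screened before detailed planning.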
The Bi-Azimuthal MD3T with 600 m SLI was therefore the selected final design. This design was later improved with the help of the seismic acquisition contractor in order to design an equivalent geometry adapted to broadband acquisition technologies.
Modeled vs. acquired data
Following the acquisition, a post-mortem was performed by comparing the fast-track PSDM processing of the real data with the synthetic data.
As shown in Figure 5, the recommended acquisition parameters improved the illumination of sub-salt areas previously poorly imaged. Beyond the image uplift already obtained, the new acquisition allowed a better model update in the depth imaging workflow, which in the end provided a much better image quality. The velocity model could hardly be further updated with the vintage data.
Conclusion



Full-wave modeling is a suitable method to design wide-azimuth surveys in complex geological areas where ray tracing reaches its limitations. Thanks to the development of HPC capability, a full-wave modeling workflow is affordable for assessing acquisition parameters. In particular, Total's Pangea HPC makes it easy to apply this workflow systematically for most acquisition feasibility studies.
In this offshore West Africa example, a thorough survey design, supported by advanced model building and full-wave techniques, allowed us to predict the image uplift actually obtained after the acquisition. Furthermore, this feasibility study significantly reduced the acquisition costs by selecting the sparse Bi-azimuth WATS over a denser WATS. The final data obtained are suitable to remarkably reduce uncertainties related to the sub-salt environment.

Figure 4: Summary table outlining the evaluated geometries.

Figure 5: Comparison between the synthetic Bi-WATS (upper right), the legacy NATS (lower left), and the real data image obtained after fast-track intermediate processing of the recommended acquisition. The corresponding density model is displayed in the upper left panel.




Challenges to extending the usable seismic bandwidth at the seafloor in the deep water GoM
Joe Dellinger*, BP America
Summary

We desire increased useful bandwidth in our seismic surveys: higher frequencies to increase resolution, and lower frequencies for deeper penetration and, in particular, to enable better velocity-model building by full-waveform inversion. To design an acquisition strategy to achieve these goals, we must understand both the signal produced by our sources and the noise to be overcome. What matters is signal-to-noise, not signal or noise alone. We examine two ocean-bottom node datasets from the Atlantis field in the deep water Gulf of Mexico to learn more.

We find that the primary source of noise above about 4 Hz is seismic interference. The signal and noise are of similar character, and the signal-to-noise ratio is thus approximately constant with frequency. Instead of increasing our signal, we may instead use the predictable nature of the noise to better attenuate it. Below 2 Hz the noise is dominated by the natural microseismic background of the earth, which rapidly increases in amplitude with decreasing frequency. In contrast, below about 7 Hz airguns decrease in amplitude at about 16 dB / octave. Below 2 Hz the signal-to-noise ratio thus declines precipitously with decreasing frequency. New acquisition paradigms may be required to meet the low-frequency challenge.

Introduction

Atlantis is a deep water Gulf of Mexico field located along the Sigsbee escarpment offshore Louisiana. BP has performed three ocean-bottom-node seismic surveys there, in 2005-2006, 2009, and 2014-2015. The active node patch at any one time in 2005-2006 contained about 500 nodes in a rectangular 11 by 6 km patch, with the nodes about 426 meters apart on a triangular grid (Clarke et al., 2006). The complex salt geology at Atlantis makes seismic imaging difficult. We performed a study to learn about the signal our airgun arrays emit versus the noise they must overcome, in order to understand how the bandwidth of our usable signal might be improved. Note we only consider ambient noise here, not source-generated noise or noise associated with the receivers such as streamer noise.

To measure what our airgun array radiates we take the direct approach of comparing data recorded when our airgun array was sourcing nearby, far away, and not at all. We then attempt to classify all the sound sources in the data that we can, and draw conclusions as to what the challenges are for increasing the signal-to-noise and bandwidth of our seismic data at both low and high frequencies.

Repeating signals at Atlantis

Figure 1 shows acoustic power spectral density levels averaged across the Atlantis array for three cases. The black curve, on top, is for a minute when the Atlantis airgun array was centered over the ocean-bottom node array. The red curve (about 8 dB lower) is for a minute at the end of a shot line, when the airgun array was maximally distant, about 7 km away from the Atlantis node array. The blue curve corresponds to a minute when the Atlantis array was not sounding, and thus represents the background noise level. Note that we use the acousticians' preferred units of power spectral density, not the SEG's usual units of "energy in one source excitation", because there is no concept of a single "source excitation" for background noise and we want to put the three cases on an equal footing for comparison purposes. In these graphs the units of the data have been maintained and the frequency response of the nodes corrected for.
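Power spectral density estimates of the kind plotted in Figure 1 can be reproduced in spirit with a Welch-style segment average; a minimal NumPy sketch on a synthetic one-minute trace follows (the sampling rate, window length, and test signal are assumptions for illustration, not the node recordings):

```python
import numpy as np

# Welch-style power spectral density (unit^2/Hz) over a one-minute window,
# averaged across 50%-overlapping Hann-tapered segments. Toy data only.
def psd_db(x, fs, nseg=256):
    win = np.hanning(nseg)
    norm = fs * np.sum(win**2)               # scales periodogram to unit^2/Hz
    segs = [x[i:i + nseg] * win
            for i in range(0, len(x) - nseg + 1, nseg // 2)]
    p = np.mean([np.abs(np.fft.rfft(s))**2 for s in segs], axis=0) / norm
    p[1:-1] *= 2                             # one-sided: double interior bins
    return np.fft.rfftfreq(nseg, 1.0 / fs), 10 * np.log10(p)

fs = 250.0                                   # 4 ms sampling (assumed)
t = np.arange(int(60 * fs)) / fs             # one minute of samples
x = np.sin(2 * np.pi * 5.0 * t)              # 5 Hz test tone
f, pdb = psd_db(x, fs)
print(f[np.argmax(pdb)])                     # peak lands at the bin nearest 5 Hz
```

Averaging many node spectra, as done for Figure 1, would simply repeat this per node and take the mean of the resulting densities.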

The first thing to notice is that the spectra are all "thick" or "spikey" except at the lowest frequencies. Close inspection of the top two curves reveals that the thickness is the result of a uniformly spaced spike comb with a spacing of 12 spikes per Hertz. This is because the Atlantis shot interval was 12 seconds (= 12 spikes / Hz). A repeating signal can be modeled mathematically as a single instance of the signal convolved with a Dirac comb in the time domain. In the frequency domain, the convolution becomes multiplication. The effect of repeating the signal is thus to multiply its spectrum by the Fourier transform of a Dirac comb, which is also a Dirac comb. A thick, spikey spectrum thus indicates a repeating signal in the time domain. Note that above 20 Hz the airgun spectra are not as thick, because the Atlantis airgun signature was less repeatable at these frequencies.
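The Dirac-comb argument is easy to verify numerically with a toy repeating wavelet (a synthetic signal with an assumed 4 ms sample rate, not the Atlantis data):

```python
import numpy as np

# A pulse repeated every T = 12 s has a spectrum confined to a comb of
# lines spaced 1/T Hz apart (12 lines per Hz), per the Dirac-comb argument.
dt, T, n_rep = 0.004, 12.0, 16
n_per = int(round(T / dt))                  # samples per shot interval
pulse = np.zeros(n_per)
pulse[:100] = np.hanning(100)               # arbitrary toy wavelet
sig = np.tile(pulse, n_rep)                 # = pulse convolved with a Dirac comb
spec = np.abs(np.fft.rfft(sig))
# The DFT of an exactly periodic signal is nonzero only every n_rep-th bin,
# i.e. at frequencies k/T -- the "spike comb" seen in the field spectra.
comb = np.zeros(spec.size, dtype=bool)
comb[::n_rep] = True
print(spec[~comb].max() / spec.max())       # ~0: all energy sits on the comb
```

Making the wavelet slightly different from shot to shot (as real airgun signatures are above 20 Hz) smears energy off the comb, which is exactly the thinning of the comb noted above.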
Figure 2 demonstrates that there is another strong repeating signal in the background noise above 35 Hz. The wide interval between the prominent spectral spikes indicates a repeat rate of about 7.5 per second. This is likely the signature of a ship's propeller or a pump. What about the rest of the background noise? It rolls off below 7 Hz with the same slope of about 16 dB / octave as the Atlantis airgun signatures do, and has a thick, spikey spectrum indicating a repeating source. Is it another airgun? In this case the spikes are not uniformly spaced. Figure 3 demonstrates that this noise is the combined spectrum of three distant airguns, each with a different repeat interval. With the "giant ear" of our 500+ ocean-bottom nodes we can monitor all the seismic activity in the Gulf.


Seismic bandwidth in the deep water GoM

Figure 1: Three power spectral density plots calculated over a one-minute time window, averaged over all the ocean-bottom nodes in
the Atlantis array on 10 February 2006. The black curve is for a minute when the Atlantis airgun array was centered over the array. The
red curve is for a minute when the airgun array was 7 km away, at the far end of a shot line. The blue curve is for a minute when the
Atlantis airgun array was not firing, and thus represents the baseline noise level to be overcome. The frequency axis has a log scale.

Figure 2: The background noise from Figure 1, this time plotted with a linear frequency scale to better show the prominent regularly spaced spectral lines above 35 Hz, corresponding to a signal repeating about 7.5 times a second, most likely from a propeller or a pump.



Figure 3: Left: Beam-forming the envelope of the seismic noise in a time window with no Atlantis airgun shots reveals that it contains three planar wavefronts. Right: the arrival azimuths of the wavefronts correspond to the known directions to the three other seismic surveys active in the Gulf that day (C: CGG Symphony, 82 km; F: Fugro Geoarctic, 172 km; W: WesternGeco Regent, 161 km). The "A" marks the location of the Atlantis array.

Figure 4: Beam-forming the low-frequency seismic noise at frequencies below 2 Hz reveals that the microseismic background travels with phase velocities of about 2000 m/s and predominantly arrives from within 90 degrees of a southeasterly direction at Atlantis. The amplitude of this energy varies with the sea state.

Figure 5: Smoothed power spectral densities recorded at a single node on the abyssal plain for fourteen different 3-hour windows from a 2009 Atlantis dataset. Above 4 Hz, the acoustic power level primarily depends on the proximity and repeat interval of the nearest operating airgun array. Below 2 Hz, the acoustic power level depends on the sea state.


Non-repeating signals at Atlantis


At low frequencies the spikes in Figure 1 damp out and the
three spectra converge, indicating that at these frequencies
repeating signals no longer dominate and the Atlantis
airguns are no longer a significant source of acoustic
energy. (A faint trace of the overhead airgun is discernible
down to about 1.5 Hz.) Instead at these frequencies we see
a noise ramp steadily increasing at about 16 dB / octave
as the frequency lowers. In a dataset from shallow water, Reilly (2016) reports a steeper slope of 30 dB / octave.
This ramp is the microseismic noise background. It is
created by waves at beaches and non-linear interactions
between waves at sea (Longuet-Higgins, 1950). It reaches a
peak at about 0.15 Hz and then steeply declines at even
lower frequencies (Crawford and Webb, 2002). At these
frequencies the nominal Atlantis node spacing of 426
meters becomes less than the Nyquist spacing, and we can
thus beamform the data to characterize the phase velocity
and azimuth of this energy crossing the array (Regone,
1997). Figure 4 shows that the energy below 2 Hz
predominantly comes from the southeast, from the direction
of the open deep sea. It moves across the array at about
2000 m/s. Comparison of data from days with differing
meteorological conditions confirms that the noise level
below 2 Hz strongly correlates with the ocean wave state.
The larger the waves, the louder the microseismic
background noise.
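Beam-forming of the kind behind Figures 3 and 4 can be sketched as a frequency-domain delay-and-sum scan over trial slownesses. The receiver layout, frequency, and plane wave below are invented for illustration, not the Atlantis geometry:

```python
import numpy as np

# Delay-and-sum beamforming at a single frequency: scan trial slowness
# vectors and find where the steered sum of the recorded phases peaks.
rng = np.random.default_rng(0)
xy = rng.uniform(-3000, 3000, size=(40, 2))       # receiver positions (m), toy array
f = 1.0                                           # beamforming frequency (Hz)
s_true = np.array([-1.0, 1.0]) / np.sqrt(2.0) / 2000.0  # slowness of a 2000 m/s wave
d = np.exp(-2j * np.pi * f * (xy @ s_true))       # recorded phases at frequency f

sgrid = np.linspace(-1 / 1500, 1 / 1500, 61)      # trial slownesses out to 1500 m/s
power = np.zeros((61, 61))
for i, a in enumerate(sgrid):
    for j, b in enumerate(sgrid):
        steer = np.exp(2j * np.pi * f * (xy @ np.array([a, b])))
        power[i, j] = abs(np.sum(steer * d)) ** 2  # coherent when trial == true
i, j = np.unravel_index(power.argmax(), power.shape)
v_est = 1.0 / np.hypot(sgrid[i], sgrid[j])        # apparent velocity at the peak
print(v_est)                                      # close to 2000 m/s
```

A plot of `power` over the slowness grid is exactly the kind of display summarized in Figure 4: a peak whose radius gives the phase velocity and whose azimuth gives the arrival direction.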
There is also noise from the recording system in the data,
from instrumental thermal noise at low frequencies, but it is
much smaller than the microseismic background here. We
have also edited out of the spectra prominent spikes at 1
and 2 Hz caused by electronic crosstalk within the nodes, a
problem unique to the 2006 dataset.
Comparison with data from 2009
Figure 5 shows spectra over 14 different 3-hour windows
from a 2009 reshoot of the 2006 acquisition. The spectra
have been smoothed to make it easier to compare a large
number of curves. We can recognize the same
characteristic features of the seismic background at Atlantis
in this 2009 dataset as we already saw in the 2006 data: a
minimum around 2-3 Hz, with airguns dominating for
higher frequencies and the microseismic noise dominating
for lower frequencies.
Precisely where the tradeoff occurs depends on the proximity of the nearest airgun array and the sea state. In Figure 5, we can see that a range of sea states causes the microseismic noise to vary in amplitude below 1.5 Hz by about 7 dB across different time windows. Above 7 Hz, the topmost curves (around 133 dB) are for the Atlantis airguns shooting a shot line near the node. The single curve at


125 dB is from shots during a line turn. These shots are both more widely spaced and farther away than production shots, hence the average power is reduced. The curves
around 115 dB are seismic interference from a survey about
50 km away. The lowest curve is for the quietest 3-hour
time window in the 2009 data, when neither Atlantis nor
the nearest interfering survey appear to have been
operating. In this dataset the propeller or pump noise that
we saw in 2006 was absent.
Discussion
There has been much discussion about how to improve the signal from airguns below about 7 Hz, where they start to roll off with decreasing frequency (Hegna and Parkes, 2011). Here we see that for data acquired in a popular exploration basin like the Gulf of Mexico and recorded into ocean-bottom nodes, in practice the signal-to-noise ratio will be about constant down to 4 Hz despite the rolloff. The noise, interference from other airguns (Estabrook et al., 2016), is rolling off at the same rate as the signal.
Somewhere below 2-4 Hz, the situation completely changes. The airguns presumably continue to roll off at about the same rate, but the microseismic background noise rapidly increases and quickly dominates the noise floor. I.e., somewhere below 4 Hz a significant "noise wall" rather abruptly appears. One strategy for getting through this noise wall is to design a more efficient seismic source that produces only the acoustic energy needed for processing. Another strategy is to attenuate the microseismic noise using receiver arrays.
Above 4 Hz the dominant noise sources in ocean-bottom data may be manmade. These are not truly random. Instead of boosting signal, then, a better way to improve the signal-to-noise ratio might be to take advantage of the non-random nature of such noise. For example, we could treat the dataset as an independent simultaneous-source acquisition, with noise from seismic interference treated as independent sources and removed by processing.
Conclusions
Seismic acquisition into ocean-bottom nodes at frequencies
below about 4 Hz has fundamentally different challenges
from what we have been used to at conventional seismic
frequencies. There is a significant noise problem that will
not be easily solved by minor modifications to our existing
airgun toolkit.
Acknowledgements
Thanks to BP America and BHP Billiton for approval to present this work. Thanks to CGG, WesternGeco, and Fugro for providing the locations of their seismic sources.


EDITED REFERENCES
Note: This reference list is a copyedited version of the reference list submitted by the author. Reference lists for the 2016
SEG Technical Program Expanded Abstracts have been copyedited so that references provided with the online
metadata for each paper will achieve a high degree of linking to cited sources that appear on the Web.
REFERENCES

Clarke, R., G. Xia, N. Kabir, L. Sirgue, and S. Michell, 2006, Case study: A large 3D wide-azimuth ocean-bottom-node survey in deepwater GoM: 76th Annual International Meeting, SEG, Expanded Abstracts, 1128–1132, http://dx.doi.org/10.1190/1.2369717.
Crawford, W. C., and S. C. Webb, 2002, Variations in the distribution of magma in the lower crust and at the Moho beneath the East Pacific Rise at 9°–10°N: Earth and Planetary Science Letters, 203, 117–130, http://dx.doi.org/10.1016/S0012-821X(02)00831-2.
Dellinger, J., and J. Ehlers, 2007, Low frequencies with a dense OBS array: The Atlantis-Green Canyon earthquake dataset: 77th Annual International Meeting, SEG, Expanded Abstracts, 36–40, http://dx.doi.org/10.1190/1.2792377.
Estabrook, B. J., D. W. Ponirakis, C. W. Clark, and A. N. Rice, 2016, Widespread spatial and temporal extent of anthropogenic noise across the Gulf of Mexico: Implications of ocean noise and the marine ecosystem: Endangered Species Research.
Hegna, S., and G. Parkes, 2011, The low frequency output of marine air-gun arrays: 81st Annual International Meeting, SEG, Expanded Abstracts, 77–81, http://dx.doi.org/10.1190/1.3628192.
Longuet-Higgins, M. S., 1950, A theory of the origin of microseisms: Philosophical Transactions of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, 243, 1–35, http://dx.doi.org/10.1098/rsta.1950.0012.
Regone, C. J., 1997, Measurement and identification of 3-D coherent noise generated from irregular surface carbonates, in K. J. Marfurt, ed., Carbonate seismology: SEG, 281–306, http://dx.doi.org/10.1190/1.9781560802099.ch11.
Reilly, J. M., 2016, Marine broadband technology: History and remaining challenges from an end-user perspective: The Leading Edge, 35, 936–941, http://dx.doi.org/10.1190/tle35040316.1.


Source deghosting
Johan O. A. Robertsson*, Dirk-Jan van Manen, ETH-Zurich, Seismic Apparition GmbH; Fredrik Andersson,
Lund University; Kurt Eggenberger, Seismic Apparition GmbH; Lasse Amundsen, NTNU, Statoil

Summary

Marine seismic data are distorted by ghosts as waves propagating upwards reflect downwards from the sea
surface. Ghosts appear both on the source-side as well as
on the receiver-side. However, whereas the receiver-side
ghost problem has been studied in detail and many different
solutions have been proposed and implemented
commercially, the source-side ghost problem has remained
largely unsolved with few satisfactory commercial
solutions available. In this paper we propose a radically
new and simple method to remove sea-surface ghosts that
relies on using sources at different depths but not at the
same lateral positions. The new method promises to be
particularly suitable for 3D applications on sparse or
incomplete acquisition geometries.
Introduction
Seismic ghosts are a long-standing issue in the marine
seismic exploration industry. A source ghost is an event
starting its propagation upward from the seismic source,
and a receiver ghost ends its propagation moving
downward at the receiver. They both have a reflection at
the sea surface, which leads to a reduction of the useful
frequency bandwidth and therefore damages seismic
resolution. The ghost removal process is known as
deghosting and can be applied both on the receiver side and
on the source side. The literature for receiver-side
deghosting techniques is vast (e.g., Amundsen, 1993;
Fokkema and van den Berg, 1993; Robertsson and Kragh,
2002; Weglein et al., 2002; Amundsen et al., 2005;
Amundsen et al., 2013; Beasley et al., 2013; Robertsson
and Amundsen, 2014; Amundsen and Robertsson, 2014).
Source-side deghosting was first described by van Melle and Weatherburn (1953), who dubbed the reflections from energy initially reflected above the level of the source, by optical analogy, "ghosts". Source-side deghosting has largely been an unsolved problem, although some authors have suggested using techniques that bear resemblance to receiver-side deghosting techniques. In particular, deghosting techniques developed for over/under streamers have been utilized for over/under sources (Moldoveanu, 2000) by applying the Posthumus method (Posthumus, 1993) in the receiver gather domain. Furthermore, Halliday et al. (2012) and Robertsson et al. (2012) describe a solution to this problem relying on the computation of source-side vertical gradients (dipole sources). They proposed to do this from finite-difference approximations

Figure 1: Left: In a conventional marine seismic survey, all signal energy of sources typically sits inside a "signal cone" bounded by the propagation velocity of the recording medium. Right: the proposed acquisition geometry splits the response from physical and virtual ghost sources into two separated signal cones.

to over/under sources. Aliasing is likely to be a major problem in acquiring such datasets, which cannot be deghosted over the entire frequency band without requiring an extremely dense carpet of shot points beyond what is economically and practically feasible. Tian et al. (2015) describe how to separate over/under sources using a simultaneous-source separation method and then use conventional receiver-side deghosting methods to source-side deghost the data.
Source-side deghosting continues to be an area of active research. We refer the reader to the low-frequency deghosting technique of Amundsen and Zhou (2013) and the echo-deblending technique of Berkhout and Blacquière (2016).
In this abstract we present a radically different approach for
source-side deghosting. The method requires using sources
at different depths although not at the same lateral
positions. The method bears a vague resemblance to the
method of seismic signal apparition presented by
Robertsson et al. (2016a, 2016b) and Pedersen et al. (2016)
as it relies on the separate mapping of physical source and
ghost source contributions in different mixes into different
parts of frequency-wavenumber space after transformation.
Theory
The slowest observable (apparent) velocity of a signal
along a line or carpet of recordings in any kind of wave
experimentation is identical to the slowest physical
propagation velocity in the medium where the recordings


are made. As a result, after a spatial and temporal Fourier transform, large parts of the frequency-wavenumber (ω-k) spectrum inside the Nyquist frequency and wavenumber tend to be empty. Figure 1 (left) illustrates how all signal energy sits inside a "signal cone" centered around k = 0 and bounded by the propagation velocity in water (1500 m/s).
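The empty-cone property is easy to verify numerically: a tapered linear event whose apparent velocity exceeds the water velocity leaves only a negligible fraction of its 2-D spectrum outside |k_x| ≤ f/c. The sampling parameters and wavelet below are assumptions chosen for the toy example, not taken from the paper:

```python
import numpy as np

# A recorded wavefield contains no apparent velocity slower than the medium
# velocity c, so its f-k spectrum is confined to the cone |k_x| <= f / c.
# Toy check: a Hann-tapered linear event at v = 3000 m/s > c = 1500 m/s.
c, v, dx, dt = 1500.0, 3000.0, 12.5, 0.004
nx, nt = 128, 512
t = np.arange(nt) * dt
x = np.arange(nx) * dx
taper = np.hanning(nx)                     # spatial taper limits wavenumber leakage
fc = 15.0                                  # Ricker wavelet centre frequency (Hz)
data = np.empty((nx, nt))
for ix in range(nx):
    arg = np.pi * fc * (t - 0.3 - x[ix] / v)
    data[ix] = taper[ix] * (1.0 - 2.0 * arg**2) * np.exp(-arg**2)  # Ricker
spec = np.abs(np.fft.fft2(data))
kx = np.fft.fftfreq(nx, dx)[:, None]       # cycles per metre
f = np.fft.fftfreq(nt, dt)[None, :]        # Hz
outside = np.abs(kx) > np.abs(f) / c
print(spec[outside].sum() / spec.sum())    # small: energy stays inside the cone
```

It is precisely this guaranteed emptiness outside the cone that the staggered-depth geometry exploits: the partitioned ghost energy can be parked there without overlapping the physical signal.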

where = ! ! .
In the frequency-wavenumber
domain we therefore obtain the following expression for
the ghost wavefield using capital letters for the variables in
the frequency-wavenumber domain:
! , ! , ! =
!
!

In the following, let us consider an acquisition geometry


where every second shot along a shot line is fired at a
source depth ! and every other second shot is fired at a
source depth ! > ! . In a marine seismic survey, the free
surface reflector acts as a mirror with negative reflection
coefficient. The source ghost can therefore be modelled as
virtual ghost sources with opposite polarity mirrored across
the sea surface in a situation where the sea surface is
removed and replaced with an infinite water layer as
illustrated in Figure. 2. Note that two sources are
activated simultaneously at each shot point, i.e., both the
physical source (solid black) as well as the ghost source
(open white) with opposite polarity.

The line of ghost sources (white stars in Figure 2) with
alternating depths $-z_1, -z_2$ can be thought of as being
constructed from ghost sources $g(n_1, n_2)$ at a uniform distance from
the original sea surface location ($-z_1$, i.e., the level closest
to the original sea surface location) through the
convolution of a modulation function:

$$g_v(n_1, n_2) = m_1(n_1, n_2) * g(n_1, n_2), \qquad (1)$$

where the lower-case variables denote expressions in the
time-space domain, and $n_1$ and $n_2$ are the shot numbers in the
$x$-direction and $y$-direction, respectively. Without loss of
generality, we consider in the following the case of
uniformly distributed parallel shot lines in the $x$-direction
where the source depth profiles along each shot line are
identical (no variation in the $y$-direction), for which the
modulating function $m_1(n_1, n_2)$ in equation (1) takes the
form:

$$m_1(n_1, n_2) = \tfrac{1}{2}\left[1 + (-1)^{n_1}\right] + \tfrac{1}{2}\left[1 - (-1)^{n_1}\right] a. \qquad (2)$$

The function $a$ is a redatuming operator, which is both a
temporal and a spatial filter [hence the convolution in space
in equation (1)]; it can be written in the frequency-wavenumber domain as

$$A(\omega, k_1, k_2) = e^{i k_z \Delta z}, \qquad k_z = \sqrt{\omega^2/c^2 - k_1^2 - k_2^2}, \qquad (3)$$

where $\Delta z = z_2 - z_1$ is the depth difference between the two source
levels and $c$ is the water velocity. In the frequency-wavenumber
domain, equation (1) therefore becomes

$$G_v(\omega, k_1, k_2) = \tfrac{1}{2}\left(1 + A\right) G(\omega, k_1, k_2) + \tfrac{1}{2}\left(1 - A\right) G(\omega, k_1 - k_N, k_2), \qquad (4)$$

which follows from a standard Fourier transform result
(wavenumber shift; Bracewell, 1999). Equation (4) shows
that, similar to the seismic signal apparition method
described by Robertsson et al. (2016a, 2016b) and Pedersen
et al. (2016), the data from the (staggered) ghost sources
$g_v(n_1, n_2)$ will be mapped or partitioned into two places in
the spectral domain. Part of the data will remain in a cone
centred around $k_1 = 0$ (denoted by $H_0$ in the right part of
Figure 1), with the limits of the cone defined by the slowest
propagation velocity of the wavefield in the medium, and
part of the data will be mapped or partitioned into a signal
cone centred around $k_1 = k_N$ (denoted by $H_N$ in the right
part of Figure 1) along the $k_1$-axis, with $k_N$ denoting the
Nyquist wavenumber corresponding to the spatial sampling of the data.

Figure 2: The effect of the source ghost can be thought of as ghost
sources (white stars) excited at mirror locations across the sea
surface with opposite polarity compared to the physical sources
(black stars).

The carpet of desired (non-ghost) sources (solid black stars
in Figure 2) with alternating depths $z_1, z_2$ can also be
constructed from desired sources $d(n_1, n_2)$ at a uniform
depth $z_1$ through the convolution of another modulation
function:

$$d_s(n_1, n_2) = m_2(n_1, n_2) * d(n_1, n_2), \qquad (5)$$

where $m_2(n_1, n_2)$ is the modulating function:

$$m_2(n_1, n_2) = \tfrac{1}{2}\left[1 + (-1)^{n_1}\right] + \tfrac{1}{2}\left[1 - (-1)^{n_1}\right] \bar{a}. \qquad (6)$$

Again, the function $\bar{a}$ is a redatuming operator, in the
opposite direction compared to $a$, such that:

$$\bar{A}(\omega, k_1, k_2) = A^{-1}. \qquad (7)$$
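The spectral partitioning implied by the modulation functions in equations (2) and (6) rests on the Fourier wavenumber-shift property. A minimal numerical check of that property (our illustration, not the authors' code): modulating a shot-line sequence by $(-1)^{n}$ shifts its discrete spectrum by the Nyquist wavenumber, i.e. by half the FFT length.

```python
import numpy as np

# Modulating a sequence by (-1)^n shifts its spectrum by the Nyquist
# wavenumber k_N, which for an N-point FFT is a circular shift by N/2.
n = np.arange(64)
g = np.exp(-0.5 * ((n - 32) / 4.0) ** 2)   # smooth "shot line" signal
g_mod = ((-1.0) ** n) * g                  # alternating-sign modulation

G = np.fft.fft(g)
G_mod = np.fft.fft(g_mod)

# The modulated spectrum equals the original spectrum rolled by N/2 samples.
shift_err = np.max(np.abs(G_mod - np.roll(G, 32)))
print(shift_err)  # effectively zero (floating-point noise)
```

This is exactly why half of the energy of the staggered sources appears in a second cone centred at $k_1 = k_N$.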

In the frequency-wavenumber domain we therefore obtain
the following expression:

$$D_s(\omega, k_1, k_2) = \tfrac{1}{2}\left(1 + A^{-1}\right) D(\omega, k_1, k_2) + \tfrac{1}{2}\left(1 - A^{-1}\right) D(\omega, k_1 - k_N, k_2). \qquad (8)$$

Again, equation (8) shows that the data from the
(staggered) desired sources $d_s(n_1, n_2)$ will also be mapped
or partitioned into the two separate cones in the spectral
domain, but with different weighting functions. Equations
(4) and (8) tell us the mix of desired and ghost sources
that occurs around $k_1 = 0$ (in the following denoted by $H_0$):

$$H_0 = \tfrac{1}{2}\left(1 + A\right) G(\omega, k_1, k_2) + \tfrac{1}{2}\left(1 + A^{-1}\right) D(\omega, k_1, k_2), \qquad (9)$$

and around $k_1 = k_N$ (in the following denoted by $H_N$):

$$H_N = \tfrac{1}{2}\left(1 - A\right) G(\omega, k_1, k_2) + \tfrac{1}{2}\left(1 - A^{-1}\right) D(\omega, k_1, k_2). \qquad (10)$$

Equations (9) and (10) can be combined to give an
expression for the wavefield of interest emitted from the
desired source (i.e., the source-side deghosted wavefield):

$$D = \frac{A\left(H_0 + H_N\right) - \left(H_0 - H_N\right)}{A - A^{-1}}. \qquad (11)$$

Note, however, that so far we have not included any
information about the reflection coefficient of the sea surface.
Assuming that the sea-surface reflection coefficient is $-1$, we
can make further use of one more equation that
relates the ghost sources to the desired sources:

$$G = -A^{2 z_1 / \Delta z} D. \qquad (12)$$

Equations (4), (8) and (12) can be further simplified to
the following two equations with two unknowns:

$$H_0 + H_N = \left(1 - A^{2 z_1 / \Delta z}\right) D, \qquad (13)$$

and

$$H_0 - H_N = \left(A^{-1} - A^{1 + 2 z_1 / \Delta z}\right) D. \qquad (14)$$

Equations (13) and (14) therefore allow us to isolate the
wavefields due to the (virtual) ghost sources and the
desired (physical) sources separately, without knowing the
redatuming operator. In fact, the system of equations also
allows us to solve for the redatuming operator itself.
Knowledge of such redatuming operators is important in
many applications in seismic data processing, for instance
in imaging of seismic data or in removing multiples from the
data.
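The algebra behind equations (9)-(11) can be verified numerically for a single frequency-wavenumber sample. The sketch below is our own illustration (the specific frequency, wavenumbers, and depth separation are arbitrary choices, not values from the paper):

```python
import numpy as np

# Verify equation (11): given H0 and HN built from equations (9)-(10),
# the desired wavefield D is recovered exactly when A is known.
rng = np.random.default_rng(0)
omega, c, k1, k2, dz = 2 * np.pi * 30.0, 1500.0, 0.02, 0.01, 5.0
kz = np.sqrt((omega / c) ** 2 - k1 ** 2 - k2 ** 2)  # vertical wavenumber
A = np.exp(1j * kz * dz)                            # redatuming operator

G = rng.normal() + 1j * rng.normal()   # ghost-source spectral sample
D = rng.normal() + 1j * rng.normal()   # desired-source spectral sample

H0 = 0.5 * (1 + A) * G + 0.5 * (1 + 1 / A) * D   # equation (9)
HN = 0.5 * (1 - A) * G + 0.5 * (1 - 1 / A) * D   # equation (10)

D_rec = (A * (H0 + HN) - (H0 - HN)) / (A - 1 / A)  # equation (11)
print(abs(D_rec - D))  # effectively zero
```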
Example
Synthetic data were computed using a finite-difference
solution of the wave equation for a typical North Sea
subsurface velocity and density model consisting of
subhorizontal sediment layers overlying rotated fault blocks.
The model is invariant in the cross-line direction, and we
therefore illustrate the result for a common-receiver gather
of one line of shots only.
The data consist of the waveforms recorded at a single
stationary receiver (located on the seafloor) for a line of
regularly spaced shots, where two sources at $z_1 = 10$ m and
$z_2 = 15$ m depth are fired alternately, as illustrated in Figure
2. The data, which include source- and receiver-side ghost
reflections from the free surface (located at $z = 0$ m), are
shown in Figure 3 (top). We then apply our source-side
deghosting technique, with the result shown in Figure 3 (middle). Figure
3 (bottom) shows the difference between the source-side
deghosted result (Figure 3, middle) and a reference
solution. Note that, apart from the horizontally propagating
direct waves and some steeply dipping artifacts, the
separation can be considered very accurate.
Discussion and conclusions
We have presented a method for source-side deghosting
that is applicable to both 2D and 3D seismic data. The
method requires sources at different depths, but
does not require sources to be fired at different depths
at the same lateral locations. The method bears some
resemblance to the method of seismic signal apparition
presented by Robertsson et al. (2016a, 2016b) and Pedersen
et al. (2016), and relies on the separate mapping of physical-source
and ghost-source contributions, in different mixes,
into different parts of frequency-wavenumber space after
transformation.
Like many other seismic data processing methods, the
new source-side deghosting requires unaliased data. We
note, however, that in contrast to most other deghosting
methods, our method is fully compatible with, for instance,
applying a normal-moveout transform before deghosting, as
long as the NMO process is incorporated in the apparition
process (van Manen et al., 2016). This represents a major
advance over state-of-the-art deghosting methods, as it
enables the application of the method to towed marine
streamer data with missing near offsets and sparse
acquisition geometries.
van Manen et al. (2016) also describe how the signal
apparition method can be generalized to irregular
acquisition geometries, for instance to accommodate
realistic perturbations in acquisition parameters. The
source-side deghosting method presented here can be
extended in a similar manner to accommodate
perturbations in the positions of sources as well as depth
variations due to, for instance, a moving rough sea.
In this abstract we applied our source-side deghosting
method to a simple synthetic data set resembling North
Sea geology. Apart from the direct wave arrival (which in
principle can be better handled by taking more care during
the Fourier transform and separation operations), the results
were excellent, demonstrating the accuracy and robustness
of the new source-side deghosting method.
Acknowledgments
We thank Seismic Apparition GmbH for permission to
publish the results of proprietary, patent-pending seismic
data processing techniques.

Figure 3: Top: Modelled seismic data for a stationary receiver and
a line of regularly spaced shots, where two sources at 10 and 15 m
depth were fired alternately. Middle: Source-side deghosted result
at 15 m depth. Bottom: Difference between the deghosted result
(middle) and a reference solution.


Uses of near-field hydrophone measurements for shallow-water imaging and demultiple


Haiyang Wang*, John Anderson, Michael Norris, Young Ho Cha
ExxonMobil Upstream Research Company
Summary
Near-field hydrophone (NFH) sensors record pressure data
from locations in close proximity to air gun sources in a
marine seismic survey. In a shallow-water environment,
high-resolution pre-critical reflection data can be estimated
from NFH data to improve estimates of the near surface and
to create shallow earth-model parameter maps. Moreover,
NFH data provide some of the small-offset, small-traveltime
data missing from streamer data that are needed for
demultiple. Improved estimates of the missing near-offset data
can be derived and used to greatly improve multiple
prediction capabilities. These NFH data can be combined
with streamer data to improve traditional seismic processing
methods for de-multiple, wavelet processing, and de-bubble.

Introduction

Two airgun sub-arrays are often used for a 3D flip-flop
shooting pattern in marine streamer acquisition. Each sub-array
consists of 2 or 3 strings of airguns. For NFHs above
the active air-gun array (the active NFHs), each
hydrophone initially records mainly the energy emitted from
the active air gun just below it. The signals observed
at the NFHs above the passive air-gun array (the passive
NFHs) are more similar to conventional streamer data due to
the increased distance to the active air-gun array. If NFH data
are carefully calibrated and appropriately pre-processed, the
recorded reflection data are equivalent to streamer data and
can be combined with streamer data for seismic
processing and imaging (Norris et al., 2011; Kragh et al.,
2015). Since the NFH data are almost at zero offset, the
primary reflection can be easily identified, and the
reconstruction of missing offset data becomes an
interpolation problem instead of an extrapolation problem.

Shallow-water streamer seismic data processing is very
challenging because of (1) the strong wavelet overlap
of multiple and primary due to the short multiple period, and
(2) the lack of near-offset data (< ~100 m) due to the
complex logistics of towing streamers in modern 3D seismic
acquisition. In extremely shallow water environments (water-bottom
depth < 100 m), the primary shallow sediment
reflections and the direct arrival are often overwhelmed by
multiples on the first available streamer offsets, making
direct-wave removal and demultiple very difficult. In
practice, an extrapolation of the available data to zero offset is
required prior to demultiple, which is usually very
problematic because the pre-critical primary reflection
might not be recorded. One way to alleviate these difficulties
is to take full advantage of the extra information carried by
near-field hydrophone (NFH) data, available in many marine
seismic acquisitions nowadays.

The biggest challenge of shallow-water NFH data


processing is the separation of direct wave and reflection
data on both active and passive array. Reflection events are
usually overshadowed by direct wave bubbles. If the water
bottom depth is less than 25 m, the direct wave peak is also
overlapped with reflection data. Ni et al. (2015) proposed to
use spectrum separation or statistical features to perform the
separation. However, if the geological structure is relatively
flat and the airgun signature varies substantially from shot to
shot, none of the proposed methods will work properly.
Aiming at this issue, we propose a more general workflow
to separate the direct arrival and reflectivity information
from NFH data. In addition, we show the benefits of
including NFH data into conventional seismic data
processing via a synthetic test.

Ziolkowski et al. (1982) proposed placing an array of
near-field hydrophones in close proximity to the air guns in an
air-gun array (usually roughly 1 m above each air gun), with
the initial purpose of measuring the near-field wavelets from
all the air guns. By treating the air guns as notional sources,
a recursive inversion process was proposed
(Ziolkowski et al., 1982; Parkes et al., 1984) to separate the
NFH data into individual point sources located at the air-gun
locations. With these point sources, a direct water arrival at
any location, including a far-field source signature, can be
obtained by various modeling methods. Many studies have
been carried out to improve the far-field source-signature
estimation (Kragh et al., 2000; Hopperstad and Laws, 2006; Niang
et al., 2013; Ni et al., 2014; Ni et al., 2015).

Method

Near-field hydrophones record the signals from all the active
air guns. Based on this idea, Ziolkowski et al. (1982) and
Parkes et al. (1984) proposed that the superposition of the
direct and sea-surface-reflected waves from the $N$ air-gun
notional sources ($s_j$) can be combined to create the predicted
pressure field ($p_i$) at any near-field hydrophone (equation 1):

$$p_i(t) = \sum_{j=1}^{N} \left[ \frac{1}{r_{ij}}\, s_j\!\left(t - \frac{r_{ij}}{c}\right) + \frac{R}{r'_{ij}}\, s_j\!\left(t - \frac{r'_{ij}}{c}\right) \right], \qquad (1)$$
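As an illustration of the superposition in equation (1), the sketch below sums delayed, 1/r-scaled notional sources and their ghost images at a hydrophone position. The function names and the test geometry are our own assumptions, not the authors' code:

```python
import numpy as np

# Predicted NFH pressure as the sum of direct and ghost arrivals from
# all notional sources, as in equation (1).
def predicted_pressure(t, sources, guns, ghosts, hydro, R=-1.0, c=1500.0):
    """t: time axis (s); sources: callables s_j(t); guns/ghosts: (N, 3)
    positions; hydro: (3,) hydrophone position; R: sea-surface reflectivity."""
    p = np.zeros_like(t)
    for s_j, g, m in zip(sources, guns, ghosts):
        r = np.linalg.norm(hydro - g)    # direct-path distance
        rm = np.linalg.norm(hydro - m)   # ghost-path distance
        p += s_j(t - r / c) / r + R * s_j(t - rm / c) / rm
    return p

# Single gun with the ghost suppressed (R=0): p(t) = s(t - r/c) / r.
t = np.linspace(0, 0.1, 1001)

def s(tau):
    return np.where(tau > 0, np.exp(-1e5 * (tau - 0.01) ** 2), 0.0)

gun = np.array([0.0, 0.0, 5.0])      # gun at 5 m depth
mirror = np.array([0.0, 0.0, -5.0])  # mirror image across the sea surface
nfh = np.array([0.0, 0.0, 4.0])      # hydrophone 1 m above the gun
p = predicted_pressure(t, [s], [gun], [mirror], nfh, R=0.0)
```

With the 1 m gun-to-hydrophone distance, the output is simply the source wavelet delayed by r/c.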

where $r_{ij}$ is the distance between the $i$-th near-field hydrophone
and the $j$-th notional source, and $r'_{ij}$ is the distance between the $i$-th
near-field hydrophone and the mirror image of the $j$-th notional
source. Bubble motions are included in the computation of the
distances. $R$ is a constant sea-surface reflectivity (usually $-1$)
and $c$ is the water velocity. After rearranging equation 1,
a recursive equation can be derived to invert for the notional
sources (equation 2):

$$s_i(t) = r_{ii} \left\{ p_i\!\left(t + \frac{r_{ii}}{c}\right) - \frac{R}{r'_{ii}}\, s_i\!\left(t + \frac{r_{ii} - r'_{ii}}{c}\right) - \sum_{j=1,\, j \neq i}^{N} \left[ \frac{1}{r_{ij}}\, s_j\!\left(t + \frac{r_{ii} - r_{ij}}{c}\right) + \frac{R}{r'_{ij}}\, s_j\!\left(t + \frac{r_{ii} - r'_{ij}}{c}\right) \right] \right\}. \qquad (2)$$
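The recursion in equation (2) can be checked end-to-end on a toy two-gun configuration. The geometry, wavelets, and whole-sample delay rounding below are our own simplifications; the recursion stays causal because every cross and ghost path is longer than the 1 m direct path:

```python
import numpy as np

# Forward-model NFH pressures with equation (1), then invert them back
# into notional sources with the recursion of equation (2).
c, dt, nt = 1500.0, 1e-4, 4000
guns = np.array([[0.0, 0.0, 5.0], [10.0, 0.0, 5.0]])
nfhs = guns - [0.0, 0.0, 1.0]          # hydrophones 1 m above the guns
mirrors = guns * [1.0, 1.0, -1.0]      # ghost (mirror) positions
R = -1.0

def lag(a, b):                          # travel time in whole samples
    return int(round(np.linalg.norm(a - b) / c / dt))

def delayed(s, k):                      # s(t - k*dt), zero-padded
    out = np.zeros_like(s)
    out[k:] = s[:nt - k]
    return out

t = np.arange(nt) * dt
s_true = [np.exp(-1e6 * (t - 0.02) ** 2), 0.5 * np.exp(-1e6 * (t - 0.025) ** 2)]

# Forward model: equation (1)
p = []
for i in range(2):
    acc = np.zeros(nt)
    for j in range(2):
        r = np.linalg.norm(nfhs[i] - guns[j])
        rm = np.linalg.norm(nfhs[i] - mirrors[j])
        acc += delayed(s_true[j], lag(nfhs[i], guns[j])) / r \
             + R * delayed(s_true[j], lag(nfhs[i], mirrors[j])) / rm
    p.append(acc)

# Recursive inversion: equation (2). Ghost and cross terms are built from
# already-inverted earlier samples, then the 1/r_ii scaling is undone.
s_inv = [np.zeros(nt), np.zeros(nt)]
for n in range(nt):
    for i in range(2):
        r_ii = np.linalg.norm(nfhs[i] - guns[i])
        k_ii = lag(nfhs[i], guns[i])
        if n + k_ii >= nt:
            continue
        est = p[i][n + k_ii]
        for j in range(2):
            rm = np.linalg.norm(nfhs[i] - mirrors[j])
            km = lag(nfhs[i], mirrors[j])
            if n + k_ii - km >= 0:
                est -= R * s_inv[j][n + k_ii - km] / rm   # ghost terms
            if j != i:
                r = np.linalg.norm(nfhs[i] - guns[j])
                k = lag(nfhs[i], guns[j])
                if n + k_ii - k >= 0:
                    est -= s_inv[j][n + k_ii - k] / r     # cross direct term
        s_inv[i][n] = est * r_ii
```

Because the same integer delays are used in both directions, the recovered notional sources match the true ones to machine precision.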

Ziolkowski's method enjoys some success in fully calibrated
and well-controlled experimental cases
(Ziolkowski, 1987). However, real data exhibit uncertainties
in amplitude and airgun geometry, as well as angle- and
frequency-dependent sea-surface reflectivity. Therefore, we
improve the notional source inversion method by including
extra terms to tackle the amplitude uncertainty and sea-surface
reflectivity, and an extra step to update the geometry. The
updated equation is as follows:

$$p_i(t) = \kappa \sum_{j=1}^{N} \left[ \frac{1}{r_{ij}}\, s_j\!\left(t - \frac{r_{ij}}{c}\right) + \frac{R(h, \omega, \alpha)}{r'_{ij}}\, s_j\!\left(t - \frac{r'_{ij}}{c}\right) \right] + n_i(t), \qquad (3)$$

where $R(h, \omega, \alpha) = -\exp\!\left[-2\left(h \omega \alpha / (2.83\, c)\right)^2\right]$
and $n_i(t)$ is a residual noise term. This common
model for the sea-surface reflectivity is a function of
the sea state $h$ (which is specified in meters), the angular
frequency $\omega$, and the cosine of the reflection angle relative
to the sea surface, $\alpha$. $\kappa$ is a damping factor that absorbs the
amplitude uncertainty; usually a number between 0.75 and
1 is used.
The airgun geometry is even harder to control. Due
to the uncertain sea state and significant tow-vessel motion,
the towed air-gun strings often deviate from their nominal
geometrical locations. We propose to use travel-time
information to update the airgun geometry. Each NFH
receiver records the energy emitted from all the active air
guns for a single shot. For a given NFH, picking the arrival times
from individual active air-gun sources is virtually impossible,
except for the hydrophone that is typically one meter above
an active air gun. But each string of active air guns
produces a peak in each NFH record, which is easy to pick.
By minimizing the misfit between the picked peak times and the
modeled peak times, the geometric information for the
source-string separation distances can be updated. An assumption
can be made to further simplify the problem: air guns deviate only
in the cross-string direction from their nominal locations. This
assumption is mostly true because of the mechanical
configuration of the air-gun array, which fixes the distance
between air guns in the in-string direction.

Workflow

With the notional source inversion information, one can
model the direct wave at any location, including the passive
NFH array locations, given the available geometry information.
Clean reflection data on the passive NFH array can then
be obtained by removing the modeled direct waves.
However, in a shallow-water environment, the direct arrivals
on the active NFH array are contaminated by relatively
strong reflection data from the water bottom and below. The
reflection-contaminated active NFH data result in inaccurate
notional sources that include reflection energy.
Consequently, removing the modeled direct arrivals on the
passive NFH array will also remove some coherent
reflection data.
In order to improve the above process, we propose a
workflow that separates the direct wave and reflections
iteratively (Figure 1).
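The peak-time geometry update described above can be sketched with a simple grid search; the point-source simplification of a gun string and all numbers below are our own illustrative assumptions, and a grid search stands in for the least-squares update in the text:

```python
import numpy as np

# Estimate the cross-string offset of a gun string by matching the picked
# string-peak arrival times at the NFHs of the other string.
c = 1500.0
nfh_x = np.array([-7.5, 0.0, 7.5])   # in-string NFH positions (m)
true_dy = 11.3                        # actual cross-string separation (m)

def peak_times(dy):
    # arrival time of the (point-like) string peak at each NFH
    return np.sqrt(nfh_x ** 2 + dy ** 2) / c

picked = peak_times(true_dy)          # stand-in for picked peak times

trials = np.arange(8.0, 14.0, 0.01)
misfit = [np.sum((peak_times(dy) - picked) ** 2) for dy in trials]
dy_est = trials[int(np.argmin(misfit))]
print(dy_est)  # ~11.3
```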

Figure 1. Iterative workflow to separate direct-arrival
information and reflectivity information using both active-array
and passive-array NFH data. Updated geometry
information for the source-array separations is estimated.
Initialize. Set the direct arrival estimate for the active NFH
data equal to the active NFH data.
Step 1. From the direct arrival estimate for the active NFH
data, run notional source separation and model the direct
wave on the passive NFH array.
Step 2. Adaptively subtract the modeled direct wave from
the data on passive NFH array and mute the early residual of
the direct wave. The output from this step is ideally
reflection data on the passive NFH array.


Step 3. Stack the reflection data on the passive NFH array on
a string-by-string basis or on a combination-of-strings basis
to obtain nearly zero-offset traces. Then apply NMO and
stacking to get a zero-offset reflectivity estimate.
Step 4. Adaptively subtract the zero-offset reflectivity
estimate from the data on active NFH array. The output from
this step is a new estimate for the direct wave on the active
NFH array. Return to step 1 and iterate the process.
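The NMO correction used in step 3 can be sketched as follows. This is our illustration, not the authors' implementation; the time axis, velocity, offset, and linear interpolation are all assumptions:

```python
import numpy as np

# NMO correction: map each zero-offset time t0 to the hyperbolic arrival
# time t(t0) = sqrt(t0^2 + x^2 / v^2) and resample the trace there.
def nmo_correct(trace, t, x, v):
    t_src = np.sqrt(t ** 2 + (x / v) ** 2)   # where each t0 sample lives
    return np.interp(t_src, t, trace, left=0.0, right=0.0)

dt = 0.002
t = np.arange(0, 1.0, dt)
v, x, t0 = 1500.0, 30.0, 0.2                  # water velocity, offset, event
event_time = np.sqrt(t0 ** 2 + (x / v) ** 2)   # hyperbolic arrival at offset x
trace = np.exp(-((t - event_time) / 0.01) ** 2)

corrected = nmo_correct(trace, t, x, v)
print(t[int(np.argmax(corrected))])  # ~0.2, the zero-offset time
```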
Note that least-squares adaptive subtraction is used throughout the
proposed workflow to compensate for small errors in the
modeling. After the iterations, one can further model and
subtract the direct wave on the active NFH array data; the
output is ideally the zero-offset reflection data on the active
NFH array. However, the processing residuals on the active
NFHs might still be large enough to overshadow the target
reflection data. Therefore, we propose to calculate the
cross-correlation between the zero-offset reflection data and the
derived zero-offset image from step 3, and then perform
optimal stacking based on the cross-correlation coefficient,
outputting the final zero-offset reflection data.
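The least-squares adaptive subtraction used throughout the workflow can be sketched as a short matching-filter estimate. This is our own minimal construction (filter length, trace lengths, and the normal-equations solver are assumptions):

```python
import numpy as np

# Estimate a short matching filter f minimizing ||d - f * m||^2,
# then subtract the matched model m from the data d.
def adaptive_subtract(d, m, nf=11):
    # Convolution matrix of the model trace m (wrapped samples zeroed)
    M = np.column_stack([np.roll(m, k) for k in range(nf)])
    for k in range(1, nf):
        M[:k, k] = 0.0
    f, *_ = np.linalg.lstsq(M, d, rcond=None)
    return d - M @ f                   # residual after subtraction

rng = np.random.default_rng(1)
m = rng.normal(size=200)                          # modeled direct wave
d = 0.7 * np.convolve(m, [0.0, 1.0, 0.5])[:200]   # data: filtered copy of m
res = adaptive_subtract(d, m, nf=5)
print(np.max(np.abs(res)))  # effectively zero: the model matches the data
```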
Synthetic example
We modify the Marmousi-II elastic models to generate a set
of 2.5D shallow-water marine models (Vp, Vs, and density)
with a 25 m water layer on top (Figure 2). Grid
spacing is 1.25 m in all three dimensions, which enables
correct modeling of seismic wavefields up to 120 Hz.

Figure 2. Modified shallow-water Marmousi-II Vp model. A 25 m
water layer is used for the top of the elastic models. The dashed line
shows the shot location for Figures 3 and 4. The dashed white
rectangle is the target area for the demultiple result shown in Figure 5.

The source array consists of 2 sub-arrays with a 25 m
nominal separation, each including two strings of
air guns separated by a 10 m distance. Each string has 6
mounted near-field hydrophones (NFHs). Source depth is 5
m and NFH depth is 3.75 m. A flip-flop shooting
fashion is adopted, with a source spacing of 25 m. A total
of 160 shots are fired. For each shot, 24 traces of
NFH data, as well as single-streamer data, are available.
The streamer length is 3 km and its depth is 7.5 m. Source
wavelets are derived from a real survey, consisting of peaks
and bubbles. The frequency span of the sources is 1–100
Hz.

A shot gather is shown in Figure 3a. The active array (left) and
passive array (right) exhibit a large amplitude contrast. Due
to the 25 m water layer, the water-bottom reflection is very
close to the direct arrival. Following our workflow, the
final modeled NFH data with clean notional sources are
shown in Figure 3b. Figure 3c is the adaptive subtraction
result of the true NFH data and the modeled data, which
ideally contains only reflection data. From the results in Figure 3c, we
can derive 3 NFH near-offset traces: 0 m, 20 m, and 30 m.
Note that in practice, these near-offset data can provide valuable
information on the geological structure of the shallow seafloor,
which cannot be recorded by the streamer.

A far-field source wavelet can also be derived from the
clean notional sources. A de-bubble operator can then be
designed based on the modeled wavelet and applied to the
streamer data.

Figure 3. (a) True NFH data. (b) Modeled direct wave. (c)
Adaptive subtraction result of (a) and (b). Ideally (c) only contains
reflection data.
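The quoted link between the 1.25 m grid and the 120 Hz bandwidth follows from a points-per-wavelength dispersion rule of thumb. A quick check (our sketch; the value of roughly 10 grid points per minimum wavelength is an assumption about the finite-difference scheme):

```python
# Maximum unaliased frequency supported by a finite-difference grid:
# f_max = v_min / (n * dx) for the slowest medium (water, ~1500 m/s).
v_min = 1500.0               # m/s, water velocity
dx = 1.25                    # m, grid spacing
points_per_wavelength = 10   # typical FD accuracy rule of thumb (assumption)
f_max = v_min / (points_per_wavelength * dx)
print(f_max)  # 120.0
```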

Using the NFH near-offset traces, a set of near-offset
streamer traces can be derived after de-ghosting, stacking,
spectrum shaping, and interpolation to the streamer bin grid.
The near-offset traces are then merged with streamer data that
have undergone de-bubble, de-noise, and de-ghosting. The
true full-offset streamer data are shown in Figure 4a. For
testing purposes, 120 m of near-offset traces from the streamer data
are muted prior to interpolation. Near-offset trace
reconstruction is achieved via high-resolution parabolic
Radon transformation. The final interpolation result is
displayed in Figure 4b. Figure 4c illustrates the conventional
near-offset extrapolation result using only the near-offset-truncated
streamer data. Comparing the results, the
interpolated result with NFH data clearly matches the true
wavelet much better and shows better phase consistency
than the extrapolation result without NFH data. The
extrapolation result without NFH data contains
many artificial arrivals and severely stretched shallow
reflection data.

Figure 4. (a) True full-offset streamer data. (b) Interpolation result
using NFH data and streamer data. (c) Extrapolation result using
only streamer data. Prior to interpolation and extrapolation, 120 m
of near-offset streamer data are removed from the true data.

Finally, the interpolated data are injected into a two-pass
shallow-water de-multiple workflow that consists of one pass of
shallow-water demultiple targeting only the peg-legs in the water
layer and one pass of SRME targeting the long-period multiples. The
de-multiple results are shown in Figure 5. As a comparison,
the full-stack true data are shown in Figure 5a. The demultiple
result using the full-offset streamer data is shown in Figure
5b. It can be observed that the de-multiple result with the
help of NFH data (Figure 5c) outperforms the de-multiple
result without NFH data (Figure 5d), and is very close to the result
obtained using the original full-offset data.

Figure 5. (a) True stack. (b) Demultiple result using the full-offset
data. (c) Demultiple result using interpolated data with NFH data.
(d) Demultiple result using extrapolated data. All stacks use only
offsets beyond 120 m.

Conclusion

We propose new algorithms and workflows to better take
advantage of near-field hydrophone data for signal
enhancement, including separating the direct wave and
reflections on both active and passive NFH arrays, and
deriving zero- and near-zero-offset traces for shallow-water
de-multiple processing. The synthetic results shown in this
paper cover the use of NFH data for deriving the far-field
source signature, mitigating bubble effects, and performing
shallow-water de-multiple. The shallow-water de-multiple
results show that with the help of NFH data, the missing
near-offset data reconstructions are much more accurate, and
hence the stacked results after de-multiple processing are
compatible with the results obtained using the true full-offset data.

Acknowledgment

The authors thank the University of Houston Allied Geophysical
Laboratory for the Marmousi-II elastic earth model. We
thank Norm Allegar and Simon Dewing for their insights.
The authors also thank ExxonMobil for
permission to publish this paper.


EDITED REFERENCES
Note: This reference list is a copyedited version of the reference list submitted by the author. Reference lists for the 2016
SEG Technical Program Expanded Abstracts have been copyedited so that references provided with the online
metadata for each paper will achieve a high degree of linking to cited sources that appear on the Web.
REFERENCES

Hopperstad, J. F., and R. Laws, 2006, Source signature estimation – Attenuation of the
seafloor reflection error in shallow water: 68th Annual International Conference and Exhibition,
EAGE, Extended Abstracts, http://dx.doi.org/10.3997/2214-4609.201402115.
Kragh, E., R. Laws, and A. Özbek, 2000, Source signature estimation – Attenuation of the sea-bottom
reflection error from near-field measurements: 62nd Annual International Conference and
Exhibition, EAGE, Extended Abstracts.
Kragh, J. E., J. O. A. Robertsson, E. J. Muyzert, and C. Kostov, 2015, Zero-offset seismic trace
construction: U.S. Patent 8,958,266 B2.
Martin, G. S., K. J. Marfurt, and S. Larsen, 2002, Marmousi-2: An updated model for the investigation of
AVO in structurally complex areas: 72nd Annual International Meeting, SEG, Expanded
Abstracts, 21, 1979–1982, http://dx.doi.org/10.1190/1.1817083.
Martin, G. S., R. Wiley, and K. J. Marfurt, 2006, Marmousi-2: An elastic upgrade for Marmousi: The
Leading Edge, 25, 156–166, http://dx.doi.org/10.1190/1.2172306.
Ni, Y., F. Haouam, and R. Siliqi, 2015, Source signature estimation in shallow water surveys: 85th
Annual International Meeting, SEG, Expanded Abstracts, 71–75,
http://dx.doi.org/10.1190/segam2015-5844754.1.
Ni, Y., T. Payen, and A. Vesin, 2014, Joint inversion of near-field and far-field hydrophone data for
source signature estimation: 84th Annual International Meeting, SEG, Expanded Abstracts,
57–61, http://dx.doi.org/10.1190/segam2014-1193.1.
Niang, C., Y. Ni, and J. Svay, 2013, Monitoring of air-gun source signature directivity: 83rd Annual
International Meeting, SEG, Expanded Abstracts, 41–45,
http://dx.doi.org/10.1190/segam2013-0940.1.
Norris, M. W., R. E. Lory, and Y. A. Paisant, 2011, Zero-offset profile from near-field hydrophone data:
73rd Annual International Conference and Exhibition, EAGE, Extended Abstracts,
http://dx.doi.org/10.3997/2214-4609.20149255.
Parkes, G. E., A. Ziolkowski, L. Hatton, and T. Haugland, 1984, The signature of an airgun array:
Computation from near-field measurements including interactions – Practical considerations:
Geophysics, 49, 105–111, http://dx.doi.org/10.1190/1.1441640.
Ziolkowski, A., G. Parkes, L. Hatton, and T. Haugland, 1982, The signature of an air gun array:
Computation from near-field measurements including interactions: Geophysics, 47, 1413–1421,
http://dx.doi.org/10.1190/1.1441289.


REFERENCES

Amundsen, L., 1993, Wavenumber-based filtering of marine point-source data: Geophysics, 58,
1335–1348, http://dx.doi.org/10.1190/1.1443516.
Amundsen, L., and J. O. A. Robertsson, 2014, Wave equation processing using finite-difference
propagators, Part 1: Wavefield dissection and imaging of marine multicomponent seismic data:
Geophysics, 79, no. 6, T287–T300, http://dx.doi.org/10.1190/geo2014-0151.1.
Amundsen, L., and H. Zhou, 2013, Low-frequency seismic deghosting: Geophysics, 78, no. 2,
WA15–WA20, http://dx.doi.org/10.1190/geo2012-0276.1.
Amundsen, L., T. Røsten, J. O. A. Robertsson, and E. Kragh, 2005, On rough-sea deghosting of streamer
seismic data using pressure gradient approximations: Geophysics, 70, no. 1, V1–V9,
http://dx.doi.org/10.1190/1.1852892.
Amundsen, L., H. Zhou, A. Reitan, and A. B. Weglein, 2013, On seismic deghosting by spatial
deconvolution: Geophysics, 78, no. 6, V267–V271, http://dx.doi.org/10.1190/geo2013-0198.1.
Beasley, C. J., R. T. Coates, Y. Ji, and J. Perdomo, 2013, Wave equation receiver deghosting: A
provocative example: 83rd Annual International Meeting, SEG, Expanded Abstracts, 4226–4230,
http://dx.doi.org/10.1190/segam2013-0693.1.
Berkhout, A. J., and G. Blacquière, 2016, Deghosting by echo-deblending: Geophysical Prospecting, 64,
406–420, http://dx.doi.org/10.1111/1365-2478.12293.
Bracewell, R., 1999, The Fourier transform and its applications (3rd ed.): McGraw-Hill Science.
Fokkema, J. T., and P. M. van den Berg, 1993, Seismic applications of acoustic reciprocity: Elsevier
Science, http://dx.doi.org/10.3997/2214-4609.201411552.
Halliday, D., J. O. A. Robertsson, I. Vasconcelos, D. J. van Manen, R. Laws, K. Özdemir, and H.
Grønaas, 2012, Full-wavefield, towed-marine seismic acquisition and applications: 82nd Annual
International Meeting, SEG, Expanded Abstracts, http://dx.doi.org/10.1190/segam2012-0446.1.
Moldoveanu, N., 2000, Vertical source array in marine seismic exploration: 70th Annual International
Meeting, SEG, Expanded Abstracts, 53–56, http://dx.doi.org/10.1190/1.1816117.
Posthumus, B. J., 1993, Deghosting using a twin streamer configuration: Geophysical Prospecting, 41,
267–286, http://dx.doi.org/10.1111/j.1365-2478.1993.tb00570.x.
Robertsson, J. O. A., and L. Amundsen, 2014, Wave equation processing using finite-difference
propagators, Part 2: Deghosting of marine hydrophone seismic data: Geophysics, 79, no. 6,
T301–T312, http://dx.doi.org/10.1190/geo2014-0152.1.
Robertsson, J. O. A., and E. Kragh, 2002, Rough sea deghosting using a single streamer and a pressure
gradient approximation: Geophysics, 67, 2005–2011, http://dx.doi.org/10.1190/1.1527100.
Robertsson, J. O. A., L. Amundsen, and Å. Sjøen Pedersen, 2016a, A seismic shift: Wavefield signal
apparition: Physical Review Letters, submitted.
Robertsson, J. O. A., L. Amundsen, and Å. Sjøen Pedersen, 2016b, Wavefield signal apparition, Part I:
Theory: EAGE.
Robertsson, J. O. A., D. F. Halliday, D. J. van Manen, I. Vasconcelos, R. M. Laws, and H. Grønaas, 2012,
Full-wavefield, towed-marine seismic acquisition and applications: 74th Annual International
Conference and Exhibition, EAGE, Extended Abstracts,
http://dx.doi.org/10.3997/2214-4609.20148849.
Sjøen Pedersen, Å., L. Amundsen, and J. O. A. Robertsson, 2016, Wavefield signal apparition, Part II:
Applications: EAGE.
Tian, Y., G. Hampson, K. Davies, J. Cocker, and A. Strudley, 2015, GeoSource de-blending, de-ghosting,
and designature: EAGE.
van Manen, D. J., J. O. A. Robertsson, F. Andersson, and K. Eggenberger, 2016, Aperiodic wavefield
signal apparition: Dealiased simultaneous source separation: 86th Annual International Meeting,
SEG, Expanded Abstracts, submitted.
van Melle, F. A., and K. R. Weatherburn, 1953, Ghost reflections caused by energy initially reflected
above the level of the shot: Geophysics, 18, 793–804, http://dx.doi.org/10.1190/1.1437929.
Weglein, A. B., S. A. Shaw, K. H. Matson, J. L. Sheiman, R. H. Stolt, T. H. Tan, A. Osen, G. P. Correa,
K. A. Innanen, Z. Guo, and J. Zhang, 2002, New approaches to deghosting towed-streamer and
ocean-bottom pressure measurements: 72nd Annual International Meeting, SEG, Expanded
Abstracts, 1016–1019, http://dx.doi.org/10.1190/1.1817121.


Accelerating deep water seismic acquisition through continuous recording


Tim Seher* (Spectrum Geo Ltd.) and Richard Clarke (Spectrum Geo Inc)
Summary
Deblending and continuous recording promise more
economical seismic experiments but, by design, cannot
deliver an interference-free seismic record. In this study,
we explore how to leverage the ideas behind simultaneous
source experiments to design more efficient surveys
without jeopardizing the data in a target window. This
requires some modifications to standard acquisition and
processing procedures. During blended acquisition we
prescribe a minimum shot delay to protect the survey
target. During processing we combine wavefield estimation
and subtraction with random noise elimination to attenuate
the energy from the blended secondary seismic sources.
Applying our approach to both a synthetic example and real
field data from the Gulf of Mexico demonstrates that our
approach can successfully attenuate seismic interference
due to blended acquisition in deep water environments.
Most importantly, our approach does not require special
acquisition (multiple vessels or random time delays). The
attenuation of the blended direct wave by 20 dB is an
important benchmark for the evaluation of more elaborate
deblending methods.

Introduction
In the current economic climate cost and efficiency of a
new marine seismic experiment are more crucial than ever.
However, current economic conditions also lead to
increased risk aversion towards novel acquisition or
processing techniques. In consequence, seismic explorers
find themselves in a situation where they are tasked with
improving survey performance without jeopardizing the
survey target.
The last decade has seen the emergence of blended and
continuous acquisition techniques that promise to deliver
more data in less time by near-simultaneously triggering
multiple seismic sources (cf. Beasley, 2008; Berkhout,
2008; Hampson et al., 2008). However, deploying these
techniques in the field usually entails specialized
acquisition with multiple source vessels and randomized
time delays (cf. van Borselen et al., 2012; Maraschini et al.,
2012). These specialized acquisition schemes inevitably
increase the cost and complexity of seismic operations.
While deblending techniques based on random noise
attenuation and dip separation achieve a good separation of
the blended sources, they cannot deliver the unperturbed
shot records in the target interval.
2016 SEG
SEG International Exposition and 86th Annual Meeting

In this abstract, we will present a novel acquisition and processing solution that exploits concepts from blended and continuous acquisition to improve survey efficiency. To
date we have successfully applied our new approach during
two seismic surveys offshore Somalia and in the Gulf of
Mexico. Our new methodology has allowed us to increase
the shot density and record length at limited additional
survey cost. Crucially, our new method does not require
specialized acquisition schemes. In addition, our approach
does deliver an unperturbed seismic record in a predefined
target interval. Deblending only affects the seismic image
for the deeper part of the seismic section, which is more
acceptable in exploration or frontier environments.
In the following sections, we first describe our new
deblending solution. We then apply our method to a
synthetic dataset and demonstrate its efficiency compared
to another deblending method. Last, we present a field data
example from the deep water Gulf of Mexico.

Method
During blended acquisition seismic sources are triggered at
intervals shorter than the total record length. In
consequence, each seismic record contains signal from
multiple sources. To protect the primary survey target from
seismic interference, we define a minimum unperturbed
record length. The next blended source is triggered no
earlier than the resulting time delay. This allows us to
deliver unperturbed seismic data over the primary target
window, while still having access to longer records.
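The shot-timing logic described above can be sketched as follows (a minimal sketch in NumPy; all numbers are illustrative, not the survey parameters):

```python
import numpy as np

rng = np.random.default_rng(1)

record_length = 15.0    # illustrative total record length (s)
target_window = 9.0     # minimum unperturbed record length (s)
n_shots = 10

# Nominal shot interval plus small perturbations (e.g. vessel speed).
delays = 10.0 + rng.normal(0.0, 0.4, n_shots)

# Enforce the minimum delay that keeps the target window interference free.
delays = np.maximum(delays, target_window)
shot_times = np.concatenate(([0.0], np.cumsum(delays)))

# Every record is clean down to target_window; only the deeper part of the
# record overlaps the next shot and needs deblending.
assert np.all(np.diff(shot_times) >= target_window)
assert np.all(np.diff(shot_times) < record_length)
```

Randomization is not required here; the protection of the target comes solely from the clipped minimum delay.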
To separate the blended seismic sources, we combine two complementary processing approaches. In a first step, we estimate and remove parts of the seismic wavefield, effectively eliminating the predictable part of the blended seismic signal. This signal estimation and subtraction (SES) approach is inspired by demultiple processing sequences. In
a second step, we attenuate residual seismic interference
using erratic noise attenuation and/or time-variant low-pass
filtering. We note that we treat linear events (direct wave)
and hyperbolic events (seafloor reflection) separately.
To estimate the blended direct wave, we carry out a median
stack of all blended shot records in the common channel
domain. This approach honors the linear moveout of the
direct wave. Some attention is required for near-offset
traces to avoid wavelet truncation effects and filter roll-on.
This method of direct wave estimation makes two
assumptions: stability of the source signature along the
profile and sufficient variations in shot time delay to
remove wrap-around signal. The estimated blended direct wave is then removed using adaptive subtraction in the shot domain, which compensates for small shot-to-shot variations in source signature.
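This two-stage idea can be illustrated on toy numbers (the per-shot scalar below is a simplified stand-in for an adaptive matching filter; none of these values come from the survey):

```python
import numpy as np

rng = np.random.default_rng(0)
n_shots, n_t = 50, 400

# Toy common-channel gather: a repeatable "direct wave" arrival plus
# shot-varying energy standing in for reflections and blended noise.
t = np.arange(n_t)
direct = np.exp(-0.5 * ((t - 120.0) / 4.0) ** 2)
data = np.outer(1.0 + 0.1 * rng.normal(size=n_shots), direct)
data += 0.3 * rng.normal(size=(n_shots, n_t))

# Median stack over shots: a robust estimate of the repeatable arrival.
direct_est = np.median(data, axis=0)

# Adaptive subtraction: a per-shot least-squares scalar absorbs small
# shot-to-shot variations in source signature.
residual = np.empty_like(data)
for i in range(n_shots):
    a = np.dot(data[i], direct_est) / np.dot(direct_est, direct_est)
    residual[i] = data[i] - a * direct_est

win = slice(110, 130)
before = np.sqrt(np.mean(data[:, win] ** 2))
after = np.sqrt(np.mean(residual[:, win] ** 2))
assert after < before   # the direct-wave window is attenuated
```

In a production flow the scalar would be replaced by a short matching filter estimated in windows, but the estimate-then-subtract structure is the same.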
The blended reflected energy is estimated using a parabolic
Radon transform after application of a normal moveout
correction. The estimate of the blended reflection is
removed from the seismic records using direct subtraction.
Application of the Radon transform has two effects on the
blended data. First, the transform acts as a coherency filter
that attenuates random interference. Second, the transform
acts as a dip filter and exploits the moveout differences
between the target and blended signals.
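The following self-contained sketch illustrates the principle with a frequency-domain parabolic Radon transform solved by damped least squares per frequency (all geometry and parameter values are illustrative): after NMO, the flat target (q = 0) and a parabolic interferer occupy different regions of the Radon panel, so reconstructing and subtracting only the non-flat part attenuates the interference while preserving the target:

```python
import numpy as np

def ricker(t, f0=25.0):
    a = (np.pi * f0 * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

dt, n_t, n_x = 0.004, 256, 24
t = np.arange(n_t) * dt
h = np.linspace(0.1, 1.0, n_x)               # normalised offset

# After NMO the target is flat (q = 0) while blended energy retains
# residual parabolic moveout t(h) = t0 + q * h^2 (q in seconds).
t_tgt, t_int, q_int = 0.30, 0.20, 0.25
data = np.array([ricker(t - t_tgt) + ricker(t - (t_int + q_int * hi ** 2))
                 for hi in h])

# Parabolic Radon panel by damped least squares, frequency by frequency.
qs = np.linspace(-0.1, 0.5, 61)
f = np.fft.rfftfreq(n_t, dt)
D = np.fft.rfft(data, axis=1)
M = np.zeros((len(qs), len(f)), dtype=complex)
for k, fk in enumerate(f):
    L = np.exp(-2j * np.pi * fk * np.outer(h ** 2, qs))   # model -> data
    A = L.conj().T @ L + 0.1 * n_x * np.eye(len(qs))
    M[:, k] = np.linalg.solve(A, L.conj().T @ D[:, k])

# Reconstruct only the non-flat part (the interference) and subtract it.
mask = qs > 0.1
D_int = np.zeros_like(D)
for k, fk in enumerate(f):
    L = np.exp(-2j * np.pi * fk * np.outer(h ** 2, qs))
    D_int[:, k] = L[:, mask] @ M[mask, k]
residual = data - np.fft.irfft(D_int, n=n_t, axis=1)

# Interference attenuated at the far trace, target largely preserved.
w_int = slice(int(0.45 / dt) - 8, int(0.45 / dt) + 8)
w_tgt = slice(int(0.30 / dt) - 8, int(0.30 / dt) + 8)
rms = lambda x: np.sqrt(np.mean(x ** 2))
assert rms(residual[-1, w_int]) < 0.5 * rms(data[-1, w_int])
assert rms(residual[-1, w_tgt]) > 0.5 * rms(data[-1, w_tgt])
```

The damping term plays the role of the coherency filter: weak, unfocused energy in the Radon panel is suppressed, while focused moveouts are modeled and can be subtracted.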
After prediction and subtraction of the direct and reflected
waves, some residual seismic interference due to the
blended shots is still present in the seismic records. This
blended energy is further attenuated using erratic noise attenuation techniques such as the Karhunen-Loève (KL) transform (Hemon and Mace, 1978), vector median filtering (Huo et al., 2012), and time-variant frequency filtering. We
note that the use of a time-variant low-pass filter exploits
frequency differences between the low frequency target
energy at late times in the seismic record and the high
frequency blended signal. This characteristic of the seismic
wavefield is rarely exploited in seismic deblending.

Synthetic Example
The following synthetic example compares two deblending methods: multi-directional vector median filtering (MD-VMF) (Huo et al., 2012) and our SES method. This example demonstrates that the SES method works at least as well as other deblending methods.

To evaluate our new deblending method, we first created synthetic shot gathers using a velocity model representative of the deep water Gulf of Mexico (Figure 1). The synthetic data were modelled using a finite difference method; a minimum phase wavelet with a dominant frequency of 12 Hz was used as a source, and the free surface was included in the modelling. Each synthetic shot was modelled for 601 channels spaced 12.5 m apart. The synthetic shot spacing was 100 m.

Figure 1: Velocity model used for synthetic data creation representative of the abyssal plain of the Gulf of Mexico.

The synthetic data were then blended using variable time delays between subsequent shots. The total time delay for each shot had a systematic component (10.5 s to 12.5 s time delay varied by 0.5 s between subsequent shots) and a random component with a variance of 166 ms. Each blended shot gather (Figure 2a) shows the blended direct arrival from the following shot at the bottom of the record and contains wrap-around reflected energy from the preceding shot at the beginning of the record. The blended direct wave masks part of the target interval between 10 s and 13 s, but the primary target interval between 4 s and 10 s is unperturbed (Figure 2d).

The data are first deblended using a MD-VMF applied in the common channel domain, where the blended sources appear random. The filter effectively removes the interfering blended sources (Figures 2b&e). Next, the seismic data were deblended in the common shot domain using our SES method. Again, the interfering sources are effectively attenuated (Figures 2c&f). For both approaches the energy in the water column was removed using a top mute. Comparing the two results, we observe that the median filter has removed some of the steeply dipping, aliased energy in the seismic section. The dipping energy was not attenuated by our SES method.

This synthetic example has demonstrated the effectiveness of our deblending method compared with MD-VMF. In particular, our method is more conservative, as only the predicted component of the wavefield is removed. Crucially, our SES method does not require random time delays, since it operates in the shot domain.

Figure 2: Synthetic shot gathers (a-c) and stacked seismic sections (d-f) before and after deblending. Only the near-offset traces (< 1 km) were used in the stack. Flow 1 included a multi-directional vector median filter in the channel domain. Flow 2 applied our SES method.

Field Data Example

Each new data processing method should be applied to real seismic data to assess its efficiency. In this section, we apply our SES method to a 2D seismic profile acquired in the Gulf of Mexico and demonstrate that our method effectively removes the interfering signal from the blended shot records.

The 2D seismic profile presented here has a total length of 96 km and is part of a large 2D survey with a total length of 30,989 km acquired in the Gulf of Mexico. The data were acquired with a shot point interval of 25 m and a receiver spacing of 12.5 m. Each shot was recorded on 960 channels, and the total record length is 15 s (Figure 3a). However, the time delay to the next shot is only 9-12 s, which creates significant seismic interference. We stress that the shot times were not randomized and the variations we see are solely due to variations in vessel speed (Figure 3b). Crucially, the first 9 s of the seismic record do not suffer from seismic interference, which protects the primary target interval.
In a first step, we removed the direct wave (Figures 3b&e)
using a median stack in the common channel domain
followed by adaptive subtraction of the model in the
common shot domain. After adaptive subtraction some
residual direct wave energy on the very near traces was
attenuated using a KL transform in the common shot
domain. The data were first flattened with respect to the
direct wave travel time. The KL transform was then used to
remove the coherent component of the residual direct wave.
Comparing the RMS amplitudes before and after direct wave attenuation shows a ~20 dB decrease. A comparison of RMS amplitudes before and after the arrival of the direct wave shows similar amplitude levels after attenuation of the direct wave. Deblending has effectively removed the high amplitude direct wave energy and decreased the average energy to the background level.
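The quoted ~20 dB figure follows the standard RMS amplitude-ratio definition; a one-line check:

```python
import numpy as np

def attenuation_db(rms_before, rms_after):
    """Attenuation in dB from RMS amplitudes before and after processing."""
    return 20.0 * np.log10(rms_before / rms_after)

# A factor-of-ten drop in RMS amplitude corresponds to 20 dB.
assert abs(attenuation_db(1.0, 0.1) - 20.0) < 1e-9
```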
In a second step, we targeted seismic interference caused
by seafloor and other shallow reflections (Figures 3c&f).
The interfering signal was estimated using a parabolic
Radon transform in the common mid-point domain after
applying a normal moveout correction. After subtraction of
the Radon transform model most low frequency, long
wavelength interference has been removed. Most of the
remaining interference appears erratic and can efficiently
be attenuated using median filtering or other random noise
attenuation techniques.
In a third and final step, we apply a time-variant low-pass filter to attenuate high frequency energy starting at 13 s. While the blending noise has a significant high frequency component, the subsurface reflections at late times will be low frequency. This frequency difference allows attenuation of a significant part of the remaining blending noise.

Figure 3: Observed shot gathers (a-c) and near-offset stacked seismic sections (d-f) before and after attenuation of the direct wave and near seafloor reflections. Only the near-offset traces (< 1 km) were used in the stack. The direct wave was estimated using a median stack and the near seafloor energy was estimated using a Radon transform.

This real data example has demonstrated that our deblending method allows attenuating seismic interference due to multiple sources in the common shot domain without randomization of shot times. We stress that deblending of the direct wave is almost lossless. To be able to attenuate the seafloor reflection, processing parameters had to be more aggressive, and some primary energy was removed by random noise attenuation and time-variant low-pass filtering.

Discussion and Conclusions

This manuscript has presented our new method for processing blended seismic shot records acquired using a single vessel without randomization of time delays. By setting a minimum time delay between subsequent shots, we can deliver both unperturbed seismic records in the primary target window and longer records, which are of great value for exploration and regional seismic studies.

We have successfully applied this new method to both synthetic and real seismic data, which demonstrates that careful processing can deblend seismic data in deep water settings. Since our approach requires estimation of the blended wavefield, we are currently evaluating whether a similar methodology can be used in shallow water environments or for complex wavefields. Most importantly, this pure processing solution establishes an important baseline to benchmark more advanced deblending methods.

Acknowledgements

We are grateful to our colleagues, in particular to Milos Cvetkovic for sharing his velocity model. We are thankful to Spectrum management for permission to publish this abstract.


EDITED REFERENCES
Note: This reference list is a copyedited version of the reference list submitted by the author. Reference lists for the 2016
SEG Technical Program Expanded Abstracts have been copyedited so that references provided with the online
metadata for each paper will achieve a high degree of linking to cited sources that appear on the Web.
REFERENCES

Beasley, C., 2008, A new look at marine simultaneous sources: The Leading Edge, 27, 914–917, http://dx.doi.org/10.1190/1.2954033.
Berkhout, A., 2008, Changing the mindset in seismic data acquisition: The Leading Edge, 27, 924–938, http://dx.doi.org/10.1190/1.2954035.
Hampson, G., J. Stefani, and F. Herkenhoff, 2008, Acquisition using simultaneous sources: The Leading Edge, 27, 918–923, http://dx.doi.org/10.1190/1.2954034.
Hemon, C., and D. Mace, 1978, Essai d'une application de la transformation de Karhunen-Loève au traitement sismique: Geophysical Prospecting, 26, 600–626, http://dx.doi.org/10.1111/j.1365-2478.1978.tb01620.x.
Huo, S., Y. Luo, and P. Kelamis, 2012, Simultaneous sources separation via multidirectional vector-median filtering: Geophysics, 77, V123–V131, http://dx.doi.org/10.1190/geo2011-0254.1.
Maraschini, M., R. Dyer, K. Stevens, and D. Bird, 2012, Source separation by iterative rank reduction - Theory and applications: 74th Conference and Exhibition, EAGE, Extended Abstracts, A044, http://dx.doi.org/10.3997/2214-4609.20148370.
van Borselen, R., R. Baardman, T. Martin, B. Goswami, and E. Fromyr, 2012, An inversion approach to separating sources in marine simultaneous shooting acquisition - Application to a Gulf of Mexico data set: Geophysical Prospecting, 60, 640–647, http://dx.doi.org/10.1111/j.1365-2478.2012.01076.x.


Using source ghost in blended marine acquisition


Sixue Wu*, Gert-Jan van Groenestijn, Gerrit Blacquière
Summary
In a blended acquisition, source encoding is needed for the
separation of the blended source responses. The source
ghost introduced by the strong sea surface reflection can be
considered as a virtual source located at the mirror position
of the actual source. In this abstract, we propose an
acquisition concept that includes the source ghost as a
natural source encoding such that it can be used for
deblending, where the end result is deblended as well as
deghosted. This acquisition method is easy to combine with man-made source encoding, and the concept of using the source ghost provides an interesting alternative to the current depth-distributed source for broadband seismic data.

Introduction
In a blended acquisition source encoding is needed for the
separation of the blended sources. In marine seismic
surveys, many approaches of temporal source encoding
have been employed (Abma et al., 2015; Mueller et al.,
2015; Wu et al., 2015; Vaage, 2002). In this work, we
consider the naturally blended source, i.e. source ghosts, as
part of the blending code (Berkhout and Blacquière, 2014).
With the help of this natural blending code in depth, it is
possible to use the source ghost for deblending.
Additionally, it is easy to combine with man-made source
codes and provides an interesting alternative to deal with
the current depth distributed broadband source.
In this abstract, we present three cases where source ghosts
are treated as signal and then separated from the source
response. In the first case, two sources are activated near
simultaneously at different lateral locations. They are
towed at different depths, and therefore these two sources
also have different source ghosts correspondingly. In the
second case, the blended source geometry is the same as in
the first case. However this time each physical source is
activated in a shot-repetition fashion, i.e. activated twice
with certain time delays (Wu et al., 2015). The third case
contains two sources situated at the same lateral position
but at different depths. This is an analog of the current
depth distributed source and a field data example will be
discussed.

Forward model
The forward model of blending with the source ghost is formulated based on the matrix representation described in Berkhout (1982). With the premise that the source sampling is sufficient, the monochromatic blended data can be formulated as:

P_bl(z_0; z_m) = P(z_0; z_0) G(z_0, z_m) Γ,   (1)

where P(z_0; z_0) represents the unblended data acquired with both source and receiver arrays at the sea surface z_0. G represents the source ghost operator that generates the real source response together with the ghost source response from all the sources present in P(z_0; z_0):

G(z_0, z_m) = W^+(z_0, z_m^+) + R^∩(z_0) W^-(z_0, z_m^-),   z_m = f(z_1, z_2, ...).   (2)

Each row and column of G corresponds to the lateral source location in the unblended and ghost-free data P(z_0; z_0). In the nonzero elements of the source ghost operator G(z_0, z_m), W^+(z_0, z_m^+) downward extrapolates the wavefield to the actual source depth z_m^+, while R^∩(z_0) W^-(z_0, z_m^-) upward extrapolates the wavefield to the source ghost depth z_m^- and applies the sea surface reflectivity R^∩(z_0), which generates the source ghost response. The depth level used in extrapolation is denoted by the function f, which is a function of the depth level of each blended source. After applying G(z_0, z_m), all the sources have been extrapolated from the sea surface level to their respective depths below and above the sea surface. The abovementioned extrapolation process can be implemented in the wavenumber-frequency domain in the case of laterally invariant parameter values (speed of sound in water and sea surface reflectivity).

In equation 1, Γ is the blending matrix that contains the temporal source encoding. Each column of Γ corresponds to one blended seismic experiment, and each row corresponds to a source location. In this work, each nonzero element of Γ is formulated as a sum of phase terms, based on the source time delays t_{n,m,k}:

Γ_{n,m} = Σ_{k=1}^{N} exp(-jω t_{n,m,k}).   (3)

In the case where no temporal source encoding is applied, each nonzero element of Γ equals one. Figure 1a shows an example of blending two sources at different tow depths. In the case where only one shot is fired at each source position (N = 1 in equation 3), the blending matrix represents the dithering blending code. In the case of shot repetition (N ≥ 2 in equation 3), each nonzero element of Γ becomes a sum of time shifts, which adds the benefit of deblending within a single common shot gather (Wu et al., 2015). In Figure 1d, an example of two sources at different tow depths blended in a shot-repetition fashion is shown.

Figure 1: Blending two sources of different depths: a) blended data, b) pseudo-deblended left source, c) deblended and deghosted left source; blending two sources of different depths and shot repetition encoding: d) blended data, e) pseudo-deblended left source, f) deblended and deghosted left source; blending multi-level sources: g) blended data, h) pseudo-deblended data, i) deblended and deghosted source.
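A small sketch of how the nonzero elements of the blending matrix in equation 3 could be assembled for a single temporal frequency (the index layout and numbers below are illustrative; the operator notation in the text may group sources differently):

```python
import numpy as np

n_src, n_exp = 4, 2
freq = 8.0                        # one temporal frequency (Hz), illustrative
omega = 2.0 * np.pi * freq

# delays[n][m] lists the firing times (s) of source n in blended
# experiment m; an empty list means the source takes no part in it.
delays = [
    [[0.0, 0.25], []],            # shot repetition: fired twice
    [[0.1], []],                  # single dithered shot
    [[], [0.0]],
    [[], [0.05, 0.30]],
]

Gamma = np.zeros((n_src, n_exp), dtype=complex)
for n in range(n_src):
    for m in range(n_exp):
        Gamma[n, m] = sum(np.exp(-1j * omega * t) for t in delays[n][m])

# A single shot with zero delay reduces the element to one.
assert Gamma[2, 1] == 1.0 + 0.0j
```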
Deblending method
By minimizing the objective function ||P_bl - P(z_0; z_0) G Γ||^2, a least-squares solution is obtained for P(z_0; z_0) and used as the start of an iterative process. This solution, <P(z_0; z_0)>, is often referred to as the pseudo-deblended data:

<P(z_0; z_0)> = P_bl Γ^H (Γ Γ^H)^{-1} G^H (G G^H)^{-1}.   (4)

In the pseudo-deblending process described by the above equation, the blended data is correlated with the source encoding, which is a combination of time and depth.

To use the source ghost, the blended sources are required to be towed at different depths such that their corresponding source ghosts are different (Figure 1a). No temporal source encoding has been applied. It can be clearly observed that
the ghosts of the blended sources have different phase
shifts. Figure 1b shows a pseudo-deblended shot gather
related to the left source. The amplitude of the left source
signal is twice as strong compared with both its side lobes
(ghosts) and the dispersed right source. Note that the
pseudo-deblended data has a lower amplitude as a result of the amplitude shaping terms (Γ Γ^H)^{-1} and (G G^H)^{-1} in equation 4. By using the fact that the desired signal has the
strongest amplitude in the shot gather, an iterative scheme
which is similar to the one by Mahdad et al. (2011) is
applied to obtain the deblended and deghosted shot record
of the left source (Figure 1c). The notches in the f-k
spectrum of the pseudo-deblended data are recovered
(Berkhout and Blacquière, 2014), though one can still
observe a small imprint of blending in the f-k spectrum of
the deblended shot. For the right source which is not
displayed, the pseudo-deblended data and the deblending
result have the same behaviour. This example shows that it
is possible to deblend by using only the source ghost,
without involving any temporal source code.
To illustrate the concept of using the source ghost in
conjunction with temporal source encoding, we present an
example where two blended sources are both activated
twice with certain time delays. The blended shot record is
displayed in Figure 1d. After pseudo-deblending, again the
left source signal has a higher amplitude than the dispersed
right source (Figure 1e), and the side lobes of the signal are
dispersed due to the shot-repetition code. Compared with
the previous case, the signal to blending noise ratio in the
pseudo-deblended data is higher and also the result in
Figure 1f shows a better recovered f-k spectrum. This
example demonstrates that with a more sophisticated
temporal source code, it is possible to improve the
deblending and deghosting result. Note that the inclusion of
the temporal source code can be easily implemented by
modifying the blending matrix in the forward model.
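The estimate-and-subtract iteration can be illustrated on a toy common-channel problem with integer-sample delays and a median as the coherency pass (a simplified analogue of, not the actual scheme of, Mahdad et al., 2011):

```python
import numpy as np

rng = np.random.default_rng(4)
n_exp, n_t = 60, 256

# Every blended record holds source A at zero delay plus source B fired
# with a random dither (in samples); both records are to be recovered.
a_true = np.zeros(n_t)
a_true[50], a_true[120] = 1.0, 0.5
b_true = np.zeros(n_t)
b_true[30] = 0.8
dither = rng.integers(5, 60, size=n_exp)
records = np.stack([a_true + np.roll(b_true, d) for d in dither])

a_est = np.zeros(n_t)
b_est = np.zeros(n_t)
for _ in range(10):
    # Subtract the current estimate of the other source, align on the
    # source of interest, and take a median over experiments: coherent
    # energy survives, dispersed blending crosstalk does not.
    a_est = np.median(records - np.stack([np.roll(b_est, d) for d in dither]), axis=0)
    b_est = np.median(np.stack([np.roll(r - a_est, -d) for r, d in zip(records, dither)]), axis=0)

assert abs(a_est[50] - 1.0) < 1e-12 and abs(b_est[30] - 0.8) < 1e-12
```

Because the crosstalk is incoherent after alignment, each pass tightens the estimates; with field data the median would be replaced by a thresholded coherency estimate in a suitable transform domain.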

The third example consists of sources at the same lateral location but different depths (Figure 1g). This means that instead of manually activating the source twice, an extra physical source is added and allocated at a different depth. In Figure 1h, the signal in the pseudo-deblended data contains the contribution of four responses, i.e., two sources and their corresponding ghosts. The deblended and deghosted shot is shown in Figure 1i. The f-k spectrum of the result is again notch free. This example demonstrates an alternative method to deal with the current depth-distributed sources. A field data example of this case is discussed in the following section.

Example on field data

We tested the method on field data acquired at the Møre margin high, Norwegian Sea. Two identical sub-sources were deployed with random time delays, located at the same lateral position and at different depths of 10 m and 14 m. The streamers are located at a depth of 25 m with a receiver spacing of 12.5 m. The data is interpolated to a denser spatial sampling, and a simple receiver deghosting process has been applied. Additionally, a time window which mutes both direct waves is applied. Figure 2a illustrates the blended shot gather with two overlapping shots that both contain source ghosts, and Figure 2c shows its f-k spectrum with frequencies up to 60 Hz. With a source at a depth of 14 m we normally expect the first source ghost notch at around 53 Hz. The reason that we don't observe this source ghost notch is that it is filled in by the overlapping shot at a depth of 10 m. The horizontal notches are a result of vertical blending.

Figure 2b shows the deblended and deghosted data. From offset 0 to 6000 m, the shot is separated quite well, including the later weak events around 10 s. From offset 6000 m to 10000 m, some parts of the interfering shot are not completely removed; see, e.g., the erroneous events that appear to be faster than the water bottom reflection. This is likely because of the small phase difference between the 10 m and the 14 m source and both their ghosts for high-angle events. After deblending, the source ghost has been removed, which results in a higher resolution (Figure 2b). The f-k spectrum of the deblended shot has no ghost notch at around 53 Hz (Figure 2d). The horizontal notches have been mostly recovered, though this shows again the difficulty of recovering the events at high angles.


Conclusions and discussion

In a blended acquisition, source encoding is needed. The ghost, as a result of the strong sea surface reflectivity, has a phase difference with respect to the real source which depends on the tow depth. Therefore, it can be considered a source code and benefit the deblending process. It is possible to use the source ghost as a type of source encoding for deblending purposes. In addition, the combination of a more sophisticated temporal source code and the source code in depth can improve the results and can be easily implemented.

For the case where physical sources are situated at different depths, we consider our method an interesting alternative to obtain a broadband solution. The test on field data shows promising results. Under the same source geometry, it should also be possible to treat the two real sources, which have not reflected at the free surface, and the two ghost sources, which have reflected at the free surface, separately. In this way, the deblending and deghosting scheme will not require an assumption of the free sea surface.

Acknowledgements

The authors would like to acknowledge the members of the DELPHI consortium at TU Delft for their support, and PGS for permission to publish the field data.

Figure 2: a) Blended field data and d) its f-k spectrum; b) deblended and deghosted data and e) its f-k spectrum.


EDITED REFERENCES
REFERENCES

Abma, R., D. Howe, M. Foster, I. Ahmed, M. Tanis, Q. Zhang, A. Arogunmati, and G. Alexander, 2015, Independent simultaneous source acquisition and processing: Geophysics, 80, WD37–WD44, http://dx.doi.org/10.1190/geo2015-0078.1.
Berkhout, A. J., 1982, Seismic migration, imaging of acoustic energy by wave field extrapolation, A: Theoretical aspects: Elsevier.
Berkhout, A. J., and G. Blacquière, 2014, Combining deblending with source deghosting: 76th Conference and Exhibition, EAGE, Extended Abstracts.
Mahdad, A., P. Doulgeris, and G. Blacquière, 2011, Separation of blended data by iterative estimation and subtraction of blending interference noise: Geophysics, 76, Q9–Q17, http://dx.doi.org/10.1190/1.3556597.
Mueller, M. B., D. F. Halliday, D. J. van Manen, and J. O. A. Robertsson, 2015, The benefit of encoded source sequences for simultaneous source separation: Geophysics, 80, V133–V143, http://dx.doi.org/10.1190/geo2015-0015.1.
Vaage, S. T., 2002, Method and system for acquiring marine seismic data by using multiple sources: US patent 6,906,981.
Wu, S., G. Blacquière, and G. van Groenestijn, 2015, Shot repetition: An alternative approach to blending in marine seismic: 85th Annual International Meeting, SEG, Expanded Abstracts, http://dx.doi.org/10.1190/segam2015-5858788.1.


Efficient acquisition of deep-water node surveys


Benjamin Lewis, Michael Pfister*, Chris Brooks, Georgiy Astvatsaturov, Scott Michell, BP
Summary

Efficient deep-water nodal acquisition designs take advantage of large node inventories, expanded battery life, node reliability and improved efficiency to acquire surveys containing long offsets and diverse azimuth distributions. This provides a better understanding for complex subsalt imaging and time-lapse analysis. The purpose of this study is to analyze the underlying efficiency of such surveys and to suggest how the next wave of efficiency gains may be achieved.
Introduction - Survey description
From December 2014 to October 2015, two back-to-back deep-water ocean bottom node (OBN) surveys were acquired at the Atlantis and Thunder Horse fields in the deep-water Gulf of Mexico (Figure 1). Both set new records for the number of nodes deployed for time-lapse deep-water OBN surveys. The two surveys were acquired using a single node-handling vessel. The vessel could deploy two work-class remote operating vehicles (ROVs) simultaneously, combined with a high-speed loader (HSL) that is able to transport up to 12 nodes from the surface to the seabed.
Having two ROVs available enabled the crew to deploy 2
node lines at a time. The ROVs distribute the nodes as per
pre-plot which was on a regular grid measuring 426m inline and 369m in cross-line (6.36 nodes per km2). The
source vessel acquired lines parallel to the node lines using
a dual source array separated by 46.53m with interleaved
shot points every 53.73m (400 shot points per km2).
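As a quick arithmetic check, both quoted densities follow directly from the grid spacings; the one-line helper below is ours, not from the paper:

```python
def points_per_km2(dx_m, dy_m):
    """Density of a regular grid with spacings dx_m by dy_m (in meters)."""
    return 1.0e6 / (dx_m * dy_m)

node_density = points_per_km2(426.0, 369.0)    # -> ~6.36 nodes per km^2
shot_density = points_per_km2(46.53, 53.73)    # -> ~400 shot points per km^2
```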
The objective of the Atlantis survey was to repeat the 2005 (Beaudoin and Ross, 2007) and 2009 (Reasnor et al., 2010) surveys to obtain time-lapse information. The increased node inventory expanded the survey beyond 4D to improve imaging below complex salt (Van Gestel et al., 2015).
A total of 1912 nodes were deployed and retrieved over an area of 289 km² at the Atlantis site in a period of 151 days (including source acquisition), in water depths ranging from 1300 m to 2200 m. The deployment area straddles the Sigsbee Escarpment, which slowed node deployment operations. The minimum acceptable maximum inline and crossline offsets were set to 10 km. The far-offset traces were obtained to fill illumination gaps and to improve the velocity model using full waveform inversion (FWI) and reflection tomography to better resolve the velocity field.


Figure 1: Maps showing the 289 km² Atlantis (top) and 316 km² Thunder Horse (bottom) bathymetry, including the 1912 and 2031 node locations. Node spacing: 426 m × 369 m. Shot spacing: 46.53 m × 53.73 m. The green shaded area represents the source area, extending roughly 10 km beyond the node area. Surveys were acquired from east to west whilst maintaining 10 km and 14 km offset, respectively, in the crossline direction.


During the Thunder Horse survey, 2031 nodes were deployed and retrieved over an area of 316 km² in 138 days (including source acquisition). The survey was acquired in 1600 m to 2000 m water depths, with a minimum acceptable maximum offset of 14 km in the crossline direction and 10 km in the shooting direction.


Acquisition Efficiency
Efficiently acquiring a survey of these dimensions depends
to a large extent on the node inventory available, the
capability of deploying a large receiver patch and reliably
recording data during the entire time the node is on the
seabed. This has been enabled by advances made in node
battery endurance helped by reduced power requirements
of modern atomic clocks. The 2015 surveys set a new record for the largest live deep-water OBN patch. Nodes were on the seabed for an average of 85 and 95 days on the Atlantis and Thunder Horse surveys, respectively, compared to only 30 days during the 2005 survey. That low recording endurance forced the 2005 acquisition into two patches, introducing a zipper with a large percentage of shot points repeated as a consequence. The limited battery life also restricted the crossline offset to 6 km. Node recording locations were increased from 1628 on the Atlantis baseline survey to 1912 stations on the 2014/15 monitor. The additional nodes
were included to extend the image. The introduction of two
ROVs and the HSL to the operation contributed to
significant improvement in efficiency compared to previous
operations.
We can visualize the Thunder Horse acquisition productivity by extracting the daily production profiles (Figure 2). The figure demonstrates how sensitive node deployment and retrieval are to extending the source patch. Extending the crossline offset requirement from 10 km to 14 km caused significant waiting time for the node vessel, as it waited for the source vessel to move a sufficient distance to clear nodes for retrieval.

Figure 2: Production profile for the Thunder Horse node survey. (Top) Node deployment chart with daily deployment rates and cumulative node deployment planned (orange for the 10 km, purple for the 14 km crossline offset). Note that heavy loop-current activity caused delays in retrieval of the nodes. (Bottom) Source production chart, with almost no delays experienced.

Environmental conditions and mechanical failures may halt the operation, with the ROV deployment and retrieval system appearing most susceptible to breakdowns. This was especially clear during August 2015, when loop currents hit the Thunder Horse operation (Figure 3). Surface currents of up to 4 knots were experienced and prevented the crew from launching the ROVs. Atlantis also suffered from these loop currents, which decreased node deployment rates from a predicted 40 nodes per day to 31 nodes per day. Even without these delays, the bottleneck of any node operation is the speed of node deployment. At the start of the survey, source-vessel production is constrained until all nodes are deployed. Once nodes are outside the crossline offset requirement during the roll-off stage of the survey, they cannot be retrieved fast enough to keep up with the source vessel (Figure 4). In the next section we look at future improvements that attempt to tackle some of these issues.

Figure 3: Eddy Olympus location on August 31, 2015 with respect to the Thunder Horse field, causing operational downtime for the node retrieval operation (image courtesy Horizon Marine).


Figure 4: Cross-line offset progress throughout the Atlantis survey. During node deployment (blue solid curve) the source vessel (red curve) can easily maintain pace with the node deployment vessel at 10 km offset. Once all nodes are on the seabed, the source vessel moves to a more efficient racetrack acquisition. Node retrieval begins once the source vessel has achieved 10 km cross-line offset relative to the first lines of the node patch. The pattern-filled area represents the efficiency lag of node retrieval in relation to the source.

Future efficiencies in deep-water acquisition

Advancements in source and receiver technology have the potential to reduce the time and cost of future surveys. ROV operations at Atlantis and Thunder Horse were largely limited by node deployment and retrieval speed and were heavily affected by weather and loop currents, leading to considerable unwanted standby time. However, the ROV method has its merits when operating close to seabed infrastructure and hazards, and with respect to the accuracy of the 4D repeat positions that can be achieved. With incremental future improvements of the OBN method, increased deployment rates should be expected. Deep-water nodes on ropes are close to market and could yield significantly faster deployment and retrieval than current methods. Looking further ahead, large inventories of flying nodes may offer comparable solutions (Holloway et al., 2015), at the possible expense of 4D repeatability.

Onshore, simultaneous-source techniques such as independent simultaneous source (ISS) and distance separated simultaneous sweeping (DS3) have enabled an order-of-magnitude increase in productivity compared with conventional non-simultaneous source technologies (Ellis, 2013; Keggin and Abma, 2012), providing increased trace density and improved data quality for the same or lower cost per square kilometer. These methods have subsequently been extended offshore, with ISS technology now being applied in ocean-bottom seismic surveys. Simultaneous shooting allows the speed of a shooting vessel to increase and, as deblending techniques continue to improve, may also allow shot densities to be increased in the sail-line direction. We successfully conducted a triple-source test in Trinidad in 2012, in which three source arrays were towed behind a single vessel. In essence, the trend is to deploy a larger paintbrush to reduce cost by acquiring fewer sail lines. The industry is actively looking to improve source efficiency by utilizing more arrays per vessel rather than traditional flip-flop shooting (Hager et al., 2015), or by using wide-tow acquisition (Brice et al., 2015). However, node count and deployment and retrieval speeds need to improve to fully support these shooting-efficiency opportunities.

With the availability of the aforementioned high-productivity source methods, the focus should be on how to improve the receiver efficiency of such surveys and where the bottlenecks are. For instance, high-accuracy receiver positioning is fundamentally important for 4D repeatability, but the desire to lay or shoot as close as possible to the pre-plot impacts productivity, resulting in slow deployment speeds. With the gradual increase in receiver inventory, dense and efficient OBS patch geometry options could be exploited. For high-trace-density 3D surveys there is a case for relaxing the pre-plot to post-plot positioning requirements to enable faster deployment.
Time and motion studies
We have actively been conducting time and motion studies
to assess current and developing acquisition technologies
using a GOM design area. Using various ocean bottom
survey (OBS) technologies and known operating conditions
with benchmarked vessel and equipment rates, it is possible
to estimate survey durations and costs.
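The skeleton of such a time and motion model can be sketched as follows. All numbers are illustrative placeholders, not the benchmarked rates of the study, and the model deliberately ignores downtime and the roll-on/roll-off coupling discussed above:

```python
def survey_estimate(n_nodes, deploy_rate_per_day,
                    sail_line_km, shooting_rate_km_per_day,
                    node_vessel_dayrate, source_vessel_dayrate,
                    n_source_vessels=1):
    """Rough duration/cost model for an OBS survey.

    Receiver handling (deploy + retrieve) and shooting largely overlap,
    so total duration is driven by the slower of the two activities.
    """
    receiver_days = 2.0 * n_nodes / deploy_rate_per_day
    source_days = sail_line_km / (shooting_rate_km_per_day * n_source_vessels)
    duration_days = max(receiver_days, source_days)
    cost = duration_days * (node_vessel_dayrate
                            + n_source_vessels * source_vessel_dayrate)
    return duration_days, cost

# Illustrative placeholder rates and day rates
base    = survey_estimate(2000, 40, 6000, 80, 250e3, 150e3)
fast    = survey_estimate(2000, 100, 6000, 80, 250e3, 150e3)  # faster node handling
two_src = survey_estimate(2000, 40, 6000, 80, 250e3, 150e3, n_source_vessels=2)
```

With these placeholder numbers the survey is receiver-limited: a second source vessel shortens shooting but not the survey and only adds cost, whereas faster node handling reduces both duration and cost, mirroring the trade-offs described in the text.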
Figure 5 graphs the results of a time and motion study for deep-water (approximately 2000 m) OBS surveys using different deployment methods and source effort. Using a triple-source array reduces the total number of sail lines to acquire. For this particular survey extent and modelled survey geometry, adding a second source vessel further reduces survey duration. In certain shot-constrained scenarios, however, although survey duration is reduced, the additional cost of the second source vessel increases the overall survey cost; hence a single triple-source vessel could be more desirable. Faster receiver deployment reduces both survey duration and cost. The combined application of several technologies optimizes the duration of the survey and provides significant cost savings compared to a conventional ROV-guided node deployment
operation with a single dual-source vessel. These savings may allow higher-density acquisition at reasonable cost, in order to solve ever more complex imaging challenges.


Acknowledgments
The authors would like to thank BP, BHP Billiton, and ExxonMobil for permission to publish this work. Thanks also to FairfieldNodal's deep-water node crew, who acquired the surveys discussed.
ISS is a registered trademark of BP p.l.c.

Figure 5: Results of the time and motion study demonstrating the step-change potential of combining improvements in source and receiver production, normalized by % duration (top) and % cost (bottom). Patch deployment of 2000 nodes. Survey assumptions: 320 km² survey area (under receivers), 400 m receiver line interval, 400 m receiver point interval, 50 m × 50 m shot grid, and full azimuth out to 10 km. Benchmarked to a survey using ROV-guided deployment (40 nodes per day) and one dual-source vessel. Assumes no downtime and 24-hour shooting.

Conclusions
Advancements in deep-water OBN surveys have led to more operational efficiency and improved imaging over the Atlantis and Thunder Horse fields. However, more can be achieved. Through the combination of new, efficient node deployment methods for deep water, simultaneous-source techniques, and larger node inventories, more routine application of higher-density OBN surveys is expected, not only for reservoir management purposes but potentially also in exploration settings.


EDITED REFERENCES
Note: This reference list is a copyedited version of the reference list submitted by the author. Reference lists for the 2016
SEG Technical Program Expanded Abstracts have been copyedited so that references provided with the online
metadata for each paper will achieve a high degree of linking to cited sources that appear on the Web.
REFERENCES

Beaudoin, G., and A. A. Ross, 2007, Field design and operation of a novel deepwater, wide azimuth node
seismic survey: The Leading Edge, 26, 494, http://dx.doi.org/10.1190/1.2723213.
Ellis, D., 2013, Simultaneous source acquisition: Achievements and challenges: 75th Annual International Conference and Exhibition, EAGE, Extended Abstracts, Th-08-01.
Hager, E., M. Rocke, and P. Fontana, 2015, Efficient multi-source and multi-streamer configuration for
dense cross-line sampling: 85th Annual International Meeting, SEG, Expanded Abstracts,
100-104, http://dx.doi.org/10.1190/segam2015-5857262.1.
Keggin, J., and R. Abma, 2012, Simultaneous shooting, today and tomorrow: 74th Annual International Conference and Exhibition, EAGE, Extended Abstracts, http://dx.doi.org/10.3997/2214-4609.20149761.
Ourabah, A., J. Keggin, C. Brooks, D. Ellis, and J. Etgen, 2015, Seismic acquisition, what really matters?:
85th Annual International Meeting, SEG, Expanded Abstracts, 6-11,
http://dx.doi.org/10.1190/segam2015-5844787.1.
Reasnor, M., G. Beaudoin, M. Pfister, I. Ahmed, S. Davis, M. Roberts, J. Howie, G. Openshaw, and A. Longo, 2010, Atlantis time-lapse ocean bottom node survey: A project team's journey from acquisition through processing: 80th Annual International Meeting, SEG, Expanded Abstracts, 4155-4159, http://dx.doi.org/10.1190/1.3513730.
Van Gestel, J., E. L'Heureux, J. R. Sandschaper, P. O. Ariston, N. D. Bassett, and S. Dadi, 2015, Atlantis beyond 4D ocean bottom nodes acquisition design: 85th Annual International Meeting, SEG, Expanded Abstracts, 125-129, http://dx.doi.org/10.1190/segam2015-5847522.1.


Marine simultaneous-sourcing: experiences and opportunities


Shangli Ou, ExxonMobil Upstream Research Company; Andrew Shatilo, ExxonMobil Upstream Research
Company; Fuxian Song, ExxonMobil Upstream Research Company; Dennis Willen, ExxonMobil Upstream
Research Company; Xiujun Yang*, ExxonMobil Upstream Research Company
Summary
Fully exploiting the efficiency gains offered by simultaneous sourcing will require an integrated view of the geologic setting, survey design, operational constraints, data processing methods, and quantitative interpretation. In addition, a good understanding of the levels of accuracy inherent in seismic data and in processing is key to guiding our expectations of seismic interference and to understanding the results of simultaneous-source tests.
Seismic data today has become a commodity item and
carries with it a historical aversion to seismic interference.
For simultaneous sourcing to become widely embraced, it
is important to fully disclose how the interference is
managed, from the shot schedules through processing,
imaging, and interpretation. In this work, we used a
combination of controlled field tests, synthetics, and
interference simulations with field data to explore
simultaneous-source acquisition in different WAZ and
long-offset geometries and to understand the impact of shot
scheduling and processing flows on data quality.
Geometric repeatability plays a role in controlled tests such
as these and highlights the potential for 4D acquisition.
Introduction
Simultaneous-sourcing holds the promise of significantly increasing the efficiency of seismic acquisition by illuminating more of the earth per unit time. With the advent of deblending methods that exploit randomized shooting, signals that were previously considered seismic interference have become useful illumination. The potential impact on marine data is broad, encompassing the illumination of subsalt shadow zones, velocity resolution within FWI, lower unit cost of seismic surveys, and shorter acquisition times, which can sometimes result in the faster maturation of drillable prospects. Ultimately, simultaneous sourcing could give rise to more frequent time-lapse surveys and better reservoir management, provided the processing times and data quality can be made acceptable.
During the course of a time-lapse monitor survey, we
exploited the presence of an undershoot vessel to acquire
long-offset and WAZ data in various simultaneous-source
and single-source configurations. This combination of test
and control data lets us identify and characterize the
interference, test the efficacy of deblending and its
interaction with other data processing steps, and evaluate
its impact through PSTM and angle gathers. In particular,


we can show that the impact of simultaneous sourcing on data quality is small, being comparable to the impact of geometric repeatability. The availability of control data also lets us simulate additional datasets, such as alternative shooting schedules and three-vessel WAZ configurations.
The Simultaneous-Source Method
Various combinations of shot schedules and acquisition
geometries can be used to decrease the time required for
seismic acquisition through simultaneous sourcing. The
acquisition time could be purely cost-driven or be
necessary to fit within a weather or environmental window.
In the simplest NAZ scenarios, this could mean deploying
extra sources on the streamer vessel or on source-only
vessels in order to fill CMP bins faster. It could also mean
extending the listen time to recover deep information about
the basin while at the same time imaging the shallower
hydrocarbon traps with sufficient lateral resolution.
Depending on the target depths, FWI can benefit from the
additional angular illumination available in long-offset
diving waves. If long cables are not an attractive option, an
extra source-only vessel can generate long-offset data
without the need for an extra pass on the same sail line.
WAZ geometries as well can be simultaneous-sourced to
avoid the extra cost multipliers associated with repeated
passes over the same sail lines. The same strategies apply
to OBC and OBN surveys, although the relative time and
expense of receiver deployment vs. source vessels must
also be considered.
The operational aspects of simultaneous sourcing are in
transition as already-mature recording systems adjust to
accommodate shots that do not trigger recording. The need
to reliably measure and coordinate navigation data, shot
times, and thousands of channels of seismic data places
some limitations on how shots can be scheduled. As more
sources are deployed, these limitations will tend to conflict
with the desire to randomize shot times to facilitate
deblending. QC techniques appear to be up to the
challenge, but adding more source vessels to acquire WAZ
and long-offset data will likely force some rethinking of
how we specify and manage infill acquisition.
The interfering (blended) data generated by simultaneous-sourcing is most commonly disentangled, or deblended, into gathers associated with individual shots prior to processing. If the shot times are all known and there are no time gaps in the recorded data, deblending can be formulated as an inverse problem where the separate shot gathers must sum


to match the recorded data (Mahdad et al., 2011). To prevent the inversion from being underdetermined, further conditions must be included, such as requiring that the data be sparse or coherent in other domains. Sparsity can be enhanced by time-dithering or encoding the interfering shots, preferably avoiding accidental shot-to-shot correlations among the seismic sources. Additional methods have been developed to deblend data by adaptive cancellation of the interference (Liu et al., 2014).
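The inverse-problem formulation can be made concrete with a toy forward operator: shot records, delayed by their firing times, sum into a single continuous record, and deblending seeks individual gathers that reproduce that sum under a sparsity or coherency constraint. A minimal numpy sketch with synthetic traces (record lengths and firing times are illustrative, not survey parameters):

```python
import numpy as np

def blend(shots, fire_times, nt_total):
    """Forward blending operator: sum individual shot records into one
    continuous record, each delayed to its firing sample."""
    record = np.zeros(nt_total)
    for shot, t0 in zip(shots, fire_times):
        record[t0:t0 + len(shot)] += shot
    return record

rng = np.random.default_rng(0)
shots = [rng.standard_normal(200) for _ in range(3)]
fire_times = [0, 150, 310]   # dithered firing samples; overlaps are intended
blended = blend(shots, fire_times, 600)
```

The randomized dithers make interfering energy incoherent when the data are re-sorted to other domains, which is what sparsity-promoting deblending exploits.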
The deblending used in this paper is based on the SPGL1 approach (Van den Berg and Friedlander, 2008), searching for sparse shot gathers that fit the acquired data. Curvelet-based filtering is used to cross-predict the noise from interfering shots, and the process is parallelized over receiver channels. Imperfect deblending can be identified by searching the deblended shot gathers for coherent energy (crosstalk) associated with any of the other source vessels. To recover residual signal, we have sometimes found it useful to run the alternative signal-protection workflow described in Figure 1. Here, deblended data are subtracted from the acquired data to form an estimate of the seismic interference plus any residual signal that was not correctly captured by the deblending. The full suite of processing tools (sorting, f-x or tau-p filtering, NMO, partial migration, muting, etc.) can then be applied to suppress the residual signal, leaving an improved estimate of the interference which can, in turn, be subtracted from the original, blended data.
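Expressed as array operations, one pass of this loop can be sketched as below. This is our sketch, not the authors' implementation; the suppression callable is a placeholder for the processing tools listed above:

```python
import numpy as np

def signal_protect(blended, deblended, suppress_residual_signal):
    """One pass of the signal-protection loop of Figure 1:
    1) interference estimate = acquired (blended) - deblended signal estimate;
       this holds the interference plus any residual signal lost in deblending
    2) suppress the residual signal in that estimate (the callable stands in
       for sorting, f-x / tau-p filtering, NMO, partial migration, muting, ...)
    3) subtract the cleaned interference estimate from the acquired data,
       returning an improved signal estimate."""
    interference_est = blended - deblended
    interference_clean = suppress_residual_signal(interference_est)
    return blended - interference_clean
```

Note the limiting cases: a suppression step that passes everything through returns the original deblended estimate unchanged, while one that removes the whole estimate returns the raw blended data, so the practical value lies between the two.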

Figure 1: Signal-protection workflow that exploits imperfectly deblended data to develop an improved estimate of seismic interference, which can then be subtracted from the blended data.

Finally, our fundamental notions of data quality need to be carefully examined in light of how the data will be imaged and used in reservoir management. Simultaneous sourcing introduces three areas of possible concern: interference that finds its way through the processing flows into the final image or angle gathers, signal attenuation during deblending, and unplanned fluctuations in the source-receiver geometry stemming from the randomized shot schedule.

Residual interference from simultaneous-sourcing is fundamentally different from the historical problem of interference from seismic or engineering operations not associated with the survey. Historically, if interference could not be mitigated well enough by sorting, filtering, and the noise suppression inherent in migration, the corrupted data had to be replaced or deleted (Jenkerson et al., 1999). Both approaches were time-consuming and raised questions about the amplitude and phase accuracy of the resulting gathers. In the context of simultaneous-sourcing, interference is not removed but instead apportioned to the correct shots. Random noise tends to be spread among different shot gathers; because of its basis in image-denoising and data-compression technologies, deblending itself tends to mitigate other noise by scattering it among different shot gathers. Nevertheless, it is important to verify that the impact of residual crosstalk is not isolated at certain frequencies or in areas of low S/N such as subsalt. Furthermore, the shooting schedule can often be selected so as to locate the strongest interference away from the target intervals.

Because the deblended gathers must sum to match the recorded data, signal attenuation can usually be diagnosed by aligning shot gathers on the firing times of the other sources. The fluctuations in source-receiver geometries can have a more subtle impact. The problem is well known in 4D applications, where the signal of interest varies widely from field to field and it can be difficult to recreate the shooting geometry of previous surveys. For example, Davies and Ibram (2015) have used simulations to show that Nrms values can fall below 3% in a simultaneous-source OBC configuration where repeatability is less of an issue.

Test results

The recent emphasis on broadband acquisition and processing for resolution immediately raises questions about an acquisition technique that mixes data from different offsets, so one of the first areas to evaluate is the interaction of simultaneous sourcing with deghosting. Figure 2a refers to a simultaneous-source dataset acquired in an undershoot-type configuration with a source-only vessel offset 1.2 km crossline from the streamer vessel. This plot shows the FK spectra after deblending followed by both source- and receiver-side deghosting of the streamer vessel's data. It compares favorably to a conventionally acquired gather in the same location (Figure 2b), demonstrating that the additional offsets and azimuths acquired by the source-only vessel have been effectively separated during deblending. For reference, Figure 2c shows the conventionally acquired hydrophone data before deghosting. Our experience is much the same with two-component deghosting.


Figure 2. Hydrophone FK spectra: a) 2D-deghosted after deblending; b) 2D-deghosted single-source data; and, c) single-source hydrophone data
before deghosting.

Multiple suppression is another area where we want to make sure that deblending has minimal adverse impact on the data. Superficially, multiples are just another type of coherent data, and deblending should not be biased toward multiples as opposed to primary reflections. Multiples do, however, appear as conflicting dips and cast comparatively strong interference into the deeper section. A simple test is shown in Figure 3, where Radon demultiple reveals primaries that are preserved by deblending. It is also important to test the behavior of deblending in shallow water and its interaction with more complex multiples such as those requiring SRME.

Figure 3. Deblended CMP gather a) before and b) after Radon multiple suppression, showing how deblending has preserved the weak primary
reflectors below 3.5 seconds. The blue and green curves respectively indicate the water bottom and the start of its multiple train.

To the extent that velocity is known, migration itself is a very good noise filter and tends to disperse any residual crosstalk, since the crosstalk is not properly time-aligned with the velocity model. We illustrate this in Figure 4, where we exploit the velocity model known from a previous monitor survey to apply 3D PSTM to our 12-streamer sail lines. Figure 4a comes from the simultaneous-source data set after removing interference from the wide-azimuth vessel shooting at 1.2 km in the crossline direction. Figure 4b comes from the control data acquired in the same location


as part of the monitor survey. Both datasets have been demultipled with Radon but not SRME and show remnant multiples from the shallow channel in the center of the section. From an exploration point of view, the datasets are identical, and this is reflected by the amplitude spectra, which differ by less than 1 dB throughout the seismic band. To obtain such a close match did require a 4D binning step, which discarded approximately 28% of the data corresponding to DSDR values greater than 100 m. The remaining data have average DSDR values of about 45 m, owing to the fact that the control data had to be acquired to match the feathering on earlier monitor surveys. Prior to 4D binning, Nrms differences were in the range of 15-25%, with outliers exceeding 40%. After 4D binning, Nrms values are in the more acceptable range of 6-9% (Figure 5). This is in line with the results of Sabel et al. (2014), who examined a similar undershoot of the Snorre B platform and found comparable Nrms values. They found that simultaneous-sourcing would not alter conclusions with regard to production history matching and reservoir management for their field. Statements such as these clearly depend on the expected magnitude of the time-lapse signal. In particular, for weak 4D responses, it might be necessary to acquire the baseline and monitor surveys with similar shooting schedules.
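The repeatability numbers above use the Nrms metric in its usual normalized-RMS form: the RMS of the trace difference, scaled by the average of the two individual RMS amplitudes. A small sketch (the helper and the synthetic traces are ours):

```python
import numpy as np

def nrms(a, b):
    """Nrms repeatability (in percent): rms of the trace difference,
    normalized by the average of the individual rms amplitudes."""
    rms = lambda x: np.sqrt(np.mean(np.square(x)))
    return 200.0 * rms(a - b) / (rms(a) + rms(b))

t = np.linspace(0.0, 1.0, 500)
base_trace = np.sin(2.0 * np.pi * 30.0 * t)
monitor_trace = np.sin(2.0 * np.pi * 30.0 * t + 0.05)  # small residual phase error

print(nrms(base_trace, base_trace))   # 0.0 for identical traces
```

Even this tiny 0.05 rad phase error alone produces an Nrms of roughly 5%, which illustrates why geometric repeatability dominates the differences quoted above.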

Figure 4. PSTM of a 12-streamer swath: (a) simultaneous-source data after removing the undershoot vessel with deblending; and, (b) control
data acquired with only the streamer vessel shooting.

Figure 5. Bin-by-bin RMS differences between PSTM images of


single-source control data and simultaneous-source data deblended
through the flow depicted in Figure 4. 4D binning has been
applied to reject source and receiver outliers.

Conclusions
Advances in 3D acquisition and processing over the past twenty-five years have produced high-quality data and made a tremendous impact on our business: from derisking exploration wells to reservoir management and reserves estimates. With that change has come a natural reluctance to tamper with success, particularly with a technology such as simultaneous sourcing, which cuts across so many steps in the seismic workflow. We can accelerate the broad acceptance of simultaneous-source methods by carefully exposing all the implications, from vessel operation through to AVA analysis and well ties. This will help all stakeholders understand the impact on data quality and cost while distinguishing the key steps that must be tightly controlled from those specifications and parameters that have no material impact on the result.
Acknowledgements
The authors would like to thank Nigerian National
Petroleum Corporation, Esso Exploration & Production
Nigeria Limited, Shell Nigeria Exploration and Production
Limited, and ExxonMobil Upstream Research Company
for permission to publish this paper. We would also like to
thank Natalie Hutzley, Gary Szurek, and Ramesh
Neelamani (ExxonMobil Upstream Research Company) for
their contributions to this project.


EDITED REFERENCES
REFERENCES

Davies, D., and M. Ibram, 2015, Evaluating the impact of ISS HD-OBC acquisition on 4D data: 77th
Annual International Conference and Exhibition, EAGE, Extended Abstracts, N101 05.
Jenkerson, M., H. Clark, R. Houck, S. Seyb, and M. Walsh, 1999, Effects of seismic interference on 3D
data: 69th Annual International meeting, SEG, Expanded Abstracts, 1208-1211,
http://dx.doi.org/10.1190/1.1820722.
Liu, Z., B. Wang, J. Specht, J. Sposato, and Y. Zhai, 2014, Enhanced adaptive subtraction method for
simultaneous source separation: 84th Annual International meeting, SEG, Expanded Abstracts,
115-119, http://dx.doi.org/10.1190/segam2014-1510.1.
Mahdad, A., P. Doulgeris, and G. Blacquiere, 2011, Separation of blended data by iterative estimation and subtraction of blending interference noise: Geophysics, 76, no. 3, Q9-Q17, http://dx.doi.org/10.1190/1.3556597.
Sabel, P., F. Bker, C. Otterbein, B. Szydlik, I. Moore, and C. Beasley, 2014, An interpreter's view on 4D with simultaneous-source acquisition: 76th Annual International Conference and Exhibition, EAGE, Extended Abstracts, ELI2 07.
Van den Berg, E., and M. P. Friedlander, 2008, Probing the Pareto frontier for basis pursuit solutions: SIAM Journal on Scientific Computing, 31, 890-912, http://dx.doi.org/10.1137/080714488.


Aperiodic wavefield signal apparition: De-aliased simultaneous source separation


Dirk-Jan van Manen, Johan O. A. Robertsson, ETH Zurich, Seismic Apparition GmbH; Kurt Eggenberger, Seismic Apparition GmbH; Fredrik Andersson, Lund University; Lasse Amundsen, NTNU Trondheim
SUMMARY
In wavefield signal apparition, acquisition with periodic variations in source activation parameters shifts all or part of a signal cone out to, e.g., the Nyquist wavenumber, enabling perfect separation of simultaneous-source (sim-source) data below a certain temporal frequency. Cyclic convolution in the spatial frequency domain is used to generalise the signal apparition concept to the case of aperiodic variations in the source activation parameters. The separation is formulated as a least-squares reconstruction in the frequency-wavenumber (f-k) domain. The methodology is used in combination with normal-moveout correction to de-alias the sim-source separation and enables shooting on position in sim-source acquisition.

Figure 1: Left (a): Conventional acquisition: all signal energy sits inside a signal cone (yellow) bounded by the propagation velocity of the recording medium. Right (b): Acquisition
following the principles of periodic signal apparition. After
Robertsson et al. (2016).

INTRODUCTION
In simultaneous-source acquisition, two (or more) sources interfere during acquisition, as when, for instance, a second source is triggered simultaneously with, or close in time to, the first. The wavefields interfere, and at the receiver location the sum of the wavefields is measured. The interference of signals is handled in data processing to decode the information generated from each source. The literature on the subject is vast; key references can be found in Pedersen et al. (2016). Recently, the concept of wavefield signal apparition was introduced to enable perfect sim-source separation below a certain temporal frequency by shifting all or part of a wavefield signal cone out to the Nyquist wavenumber through acquiring the data with periodic variations in a source (activation) parameter (Robertsson et al., 2016). In the following, we briefly review the concept of signal apparition before extending it to the case of aperiodic variations in activation parameters and showing how the corresponding reconstruction methods can be used to improve sim-source separation in the presence of aliasing.

WAVEFIELD SIGNAL APPARITION


An alternative way of acquiring waveform data, called wavefield signal apparition, was recently proposed by Robertsson et al. (2016), whereby N consecutive shots (enumerated by n) are activated using a periodic sequence of source signatures. Varying the signature of every second shot using a filter, a(t), with transform A(\omega), can then be understood as modulating a conventionally acquired wavefield, f(n), with the function

m(n) = \frac{1}{2}\left[1 + (-1)^n\right] + \frac{1}{2} A \left[1 - (-1)^n\right],  (1)

and an expression for the wavenumber spectrum of the resulting wavefield, H(k), in terms of the spectrum of the conventionally acquired wavefield, F(k), was obtained as:

H(k) = \frac{1}{N} \sum_{n=0}^{N-1} m(n)\, f(n)\, e^{-i 2\pi k n / N} = \frac{1}{2}\left[1 + A\right] F(k) + \frac{1}{2}\left[1 - A\right] F(k - k_N).  (2)

Equation 2 shows that the recorded data f will be mapped into two places in the spectral domain, as illustrated in Figure 1. Part of the data will remain at the signal cone centered around k = 0 (denoted by H+ in panel b), and part of the data will be mapped to a signal cone centered around the Nyquist wavenumber k_N (denoted by H-). Note that by knowing only one of these parts of the data it is possible to predict the other using equation 2. Following Robertsson et al. (2016), we refer to this process as "wavefield apparition" or "signal apparition", in the meaning of the act of becoming visible.
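As a quick numerical check of equations 1 and 2, the sketch below (assuming a scalar stand-in for A(\omega) at a single temporal frequency) modulates a random wavefield with m(n) and verifies that its spectrum splits into the two predicted cones, one at k = 0 and one shifted to the Nyquist wavenumber:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64                           # number of shots (even)
f = rng.standard_normal(N)       # conventionally acquired wavefield f(n)
A = 0.5                          # scalar stand-in for the filter transform A(w)

n = np.arange(N)
m = 0.5 * (1 + (-1.0) ** n) + 0.5 * A * (1 - (-1.0) ** n)   # equation 1

F = np.fft.fft(f) / N            # F(k), normalised as in equation 2
H = np.fft.fft(m * f) / N        # spectrum of the modulated wavefield

# Equation 2: H(k) = 1/2 (1+A) F(k) + 1/2 (1-A) F(k - k_N);
# np.roll by N//2 implements the shift to the Nyquist wavenumber
H_pred = 0.5 * (1 + A) * F + 0.5 * (1 - A) * np.roll(F, N // 2)
assert np.allclose(H, H_pred)
```

The even shots keep their nominal signature (m = 1) while the odd shots are scaled by A, which is exactly what splits the spectrum into the two cones.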
A particular application of interest that is solved by wavefield apparition is that of sim-source separation (Robertsson et al., 2016). Assume a first source with constant signature is moved along a straight line with uniform sampling of the shots, generating the wavefield g. Along another straight line a second source is also moved with uniform sampling. Its signature is varied for every second shot according to the deterministic modulating sequence m(n), generating the wavefield h. The summed, interfering data f = g + h are recorded at a receiver location. In the f-k domain, where the recorded data are denoted by F = G + H, the H-part is partitioned into two components, H+ and H-, with H = H+ + H-, where the H- component is "ghostly apparent" and isolated around the Nyquist wavenumber (Figure 1b). Furthermore, H- is a known, scaled function of H. The scaling depends on the chosen A(\omega) function [equation 2] and can be deterministically removed, thereby producing the full appearance of the transformed wavefield H. When H is found, then G = F - H. Inverse Fourier


transforms yield the separate wavefields g and h in the time-space domain. A practical choice for A(\omega), as pointed out by Robertsson et al. (2016), is the case where every second source is excited a time T later than the neighboring recordings, A(\omega) = e^{i\omega T} or a(t) = \delta(t - T), as it does not require the ability to flip the polarity of the source signal. The apparition concept can be extended to simultaneous acquisition of more than two source lines by choosing different modulation functions.

The description above focuses on simple periodic modulating sequences m(n). In practice, however, it may be difficult to obtain perfectly periodic time shifts from a measurement setup. In the following, we generalize the theory of seismic apparition to cases where the modulation sequence includes variations, deviations, signal dither, or general aperiodic time shifts.

APERIODIC WAVEFIELD SIGNAL APPARITION

Let s(n) represent some source wavefield acquired as a function of a spatial coordinate. Here we exploit the lesser-known dual of the convolution theorem, which states that multiplication in the space domain of s(n) with a so-called modulation function m(n) corresponds to cyclic convolution of the discrete Fourier transform of the source wavefield, S(k), with the discrete Fourier transform of the modulation function, M(k) = \mathcal{F}\{m(n)\}, followed by an inverse discrete Fourier transform:

m(n)\, s(n) = \mathcal{F}^{-1}\{M(k) \circledast_c S(k)\}.  (3)

Furthermore, we exploit the fact that cyclic convolution, \circledast_c, can be conveniently expressed as a matrix multiplication: let s = [s(0), s(1), \ldots, s(N-1)]^T, m = [m(0), m(1), \ldots, m(N-1)]^T, and S = [S(0), S(1), \ldots, S(N-1)]^T; then

m \odot s = \mathcal{F}^{-1}\{C_M S\},  (4)

where \odot denotes element-wise multiplication, and

C_M = \begin{pmatrix}
M(0) & M(N-1) & \cdots & M(2) & M(1) \\
M(1) & M(0) & \cdots & M(3) & M(2) \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
M(N-2) & M(N-3) & \cdots & M(0) & M(N-1) \\
M(N-1) & M(N-2) & \cdots & M(1) & M(0)
\end{pmatrix}.  (5)

From the above description it is clear that the effect of a general non-periodic modulation function will be to introduce additional, scaled replications of the signal cones of the source(s) along the wavenumber axis (or axes in 3D). Both the position and the factor scaling the replications are determined by the discrete Fourier transform of the aperiodic modulation function, M(k), which cyclically convolves the signal cones.

Cyclic convolution is illustrated in Figure 2. In panel (a), a signal cone in a normalized f-k domain is shown. A real and symmetric aperiodic amplitude modulation function and its discrete Fourier transform are shown in the bottom and top parts of panel (d), respectively. The corresponding cyclic convolution matrix is shown in panel (c). The effect of applying the


Figure 2: Cyclic convolution example (see text for details).

convolution matrix on the signal cone is shown in panel (b).


Note the correspondence between the peaks in the wavenumber transform of the aperiodic modulation function and the location and amplitude of the shifted signal cones. We use this
formulation to set up a forward model that can be solved to separate the sources.
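The dual convolution theorem and the circulant form of equations 3-5 can be checked numerically. The sketch below is a minimal illustration; the 1/N factor arises from NumPy's unnormalised DFT convention:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 16
s = rng.standard_normal(N)        # source wavefield s(n)
m = rng.standard_normal(N)        # aperiodic modulation function m(n)

S = np.fft.fft(s)                 # S(k)
M = np.fft.fft(m)                 # M(k)

# Circulant matrix (C_M)_{i,j} = M((i - j) mod N), cf. equation 5; the 1/N
# accounts for the unnormalised DFT used here
i, j = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
C_M = M[(i - j) % N] / N

# Equations 3-4: multiplication in space <-> cyclic convolution in wavenumber
assert np.allclose(m * s, np.fft.ifft(C_M @ S).real)
```

For a periodic m(n), M(k) has only two non-zero entries and C_M collapses to the two-term mapping of equation 2; a general m(n) populates more rows and hence produces more shifted, scaled cone replicas.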
Let the sim-source data recorded in s1 times be denoted by d_1. Furthermore, let T(n) denote the time delay or advance of the s2 activation relative to the s1 activation. Then the effective modulation function for s2 is m_{12}(\omega, n) = e^{i\omega T(n)}, with M_{12}(\omega, k) = \mathcal{F}\{m_{12}(\omega, n)\}, and the part of the simultaneous-source data d_1 due to s2 can be modeled using the cyclic convolution matrix:

(C_{M_{12}})_{i,j} = M_{12}(\omega, (i - j) \bmod N), \quad \forall\, i, j \in \{0, \ldots, N-1\}.  (6)

For d_1, since the time shifts are formulated as relative to the s1 times, the effective modulation function m_{11} for the part of the simultaneous data due to s1 is constant, equal to one for each shot and frequency [i.e., m_{11}(\omega, n) = 1]. Thus, the transform of this implied modulation function is a discrete delta function in wavenumber at k = 0, and the corresponding cyclic convolution matrix is the identity matrix, i.e.,

C_{M_{11}} = I.  (7)

The data in s2 times, d_2, can be obtained by applying the inverse of the applied time shifts, -T(n), to the data in s1 times:

d_2(t, n) = \mathcal{F}^{-1}\{\mathcal{F}\{d_1(t, n)\}\, e^{-i\omega T(n)}\},  (8)

and the effective modulation function for the s1 contribution to the data in s2 times is the complex conjugate of the effective modulation function for the s2 contribution to the data in s1 times: m_{21}(\omega, n) = (m_{12}(\omega, n))^*.

Thus, at each temporal frequency, the wavenumber transforms of the unknown source wavefields, S_1 and S_2, can be appended into a vector, S = [S_1(0), \ldots, S_1(N-1), S_2(0), \ldots, S_2(N-1)]^T, and related to the vector of measured data in s1 and s2 times:




Figure 3: Sim-source data in the t-x and f-k domains (a and b) and after reversible NMO correction (c and d). See text for details.

D = [D_1(0), \ldots, D_1(N-1), D_2(0), \ldots, D_2(N-1)]^T, through the forward modeling operator G:

G = \begin{pmatrix} C_{M_{11}} & C_{M_{12}} \\ C_{M_{21}} & C_{M_{22}} \end{pmatrix} = \begin{pmatrix} I & C_{M_{12}} \\ C_{M_{21}} & I \end{pmatrix}.  (9)

Thus, the desired forward modeling expression is found as:

D = G S.  (10)

We use the cone-shaped support constraint to restrict the range of the unknowns to K_{min} and K_{max}, corresponding to the indices of the discrete wavenumbers closest to the minimum and maximum wavenumbers to be inverted:

\tilde{S} = [S_1(K_{min}), \ldots, S_1(K_{max}), S_2(K_{min}), \ldots, S_2(K_{max})]^T.  (11)

Note that, because of the modulation function, the range of observed wavenumbers is not restricted to K_{min} to K_{max} nor, in general, to any particular wavenumber range. The cyclic convolution and forward modeling operators should be similarly restricted along the columns. So for, e.g., C_{M_{12}} we have:

(\tilde{C}_{M_{12}})_{i,j} = M_{12}(\omega, (i - (j + K_{min})) \bmod N), \quad i \in \{0, \ldots, N-1\}, \; j \in \{0, \ldots, K_{max} - K_{min}\}.

Applying these restrictions to G, i.e.,

\tilde{G} = \begin{pmatrix} I & \tilde{C}_{M_{12}} \\ \tilde{C}_{M_{21}} & I \end{pmatrix},  (12)

we can finally write the least-squares solution:

\tilde{S}_{LSQ} = (\tilde{G}^H \tilde{G} + \epsilon^2 I)^{-1} \tilde{G}^H D,  (13)

where the stabilisation \epsilon^2 is usually chosen to be a percentage (e.g., 0.1%) of the maximum of \tilde{G}^H \tilde{G}, and ^H denotes the complex-conjugate (Hermitian) transpose.
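The restricted forward model and damped least-squares separation (equations 6-13) can be sketched for a single temporal frequency as below. This is a hedged toy, not the authors' implementation: the shift range, cone width, and damping level are illustrative choices, and the damping is set well below the suggested 0.1% so that the noise-free recovery is near-exact:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 32                                  # shots per source line
w = 2 * np.pi * 40.0                    # one temporal frequency (rad/s)
dT = rng.uniform(-0.01, 0.01, N)        # aperiodic shifts of s2 vs s1 (s)

def cyc(M):
    # Circulant convolution matrix (C_M)_{i,j} = M((i - j) mod N), cf. eq. 6
    i, j = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    return M[(i - j) % N] / N

m12 = np.exp(1j * w * dT)               # effective modulation of s2 in s1 times
C12 = cyc(np.fft.fft(m12))
C21 = cyc(np.fft.fft(np.conj(m12)))     # m21 = conj(m12)

# Band-limited truth: both spectra live inside a cone |k| <= Kmax (eq. 11)
Kmax = 6
keep = np.r_[0:Kmax + 1, N - Kmax:N]    # positive and negative wavenumbers
S1 = np.zeros(N, complex)
S2 = np.zeros(N, complex)
S1[keep] = rng.standard_normal(keep.size) + 1j * rng.standard_normal(keep.size)
S2[keep] = rng.standard_normal(keep.size) + 1j * rng.standard_normal(keep.size)

G = np.block([[np.eye(N), C12], [C21, np.eye(N)]])   # equation 9
D = G @ np.concatenate([S1, S2])                     # equation 10

# Column restriction to the cone (eq. 12) and damped least squares (eq. 13)
Gt = G[:, np.concatenate([keep, N + keep])]
GhG = Gt.conj().T @ Gt
eps2 = 1e-8 * np.abs(GhG).max()
S_est = np.linalg.solve(GhG + eps2 * np.eye(2 * keep.size), Gt.conj().T @ D)

assert np.allclose(S_est[:keep.size], S1[keep], atol=1e-3)
assert np.allclose(S_est[keep.size:], S2[keep], atol=1e-3)
```

Note that without the column restriction the operator of equation 9 is rank-deficient (the two unit-modulus modulations cancel), which is precisely why the cone-shaped support constraint is needed.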

APPLICATION: DE-ALIASED SOURCE SEPARATION


We illustrate the reconstruction methodology using synthetic data generated for a North Sea subsurface velocity model using finite differences. In Figure 3a, a common receiver gather is shown where every 2nd shot of a 2nd simultaneous source is encoded with a time shift of T = 5 ms. The shot spacing is 50 m for both sources, and the sources are separated by 100 m inline. In Figure 3b, the f-k spectrum of the input data is shown. The effects of the periodic encoding are only visible around k = 0 m^{-1} above 22 Hz, the aliasing frequency of the data. Separation according to periodic apparition principles can only be applied in the diamond-shaped region below 22 Hz. Clearly, some other methodology is needed to separate the sources without suffering from aliasing effects. In Figure 3c, the sim-source data are shown with a reversible normal-moveout (NMO) correction applied. The f-k transform of the NMO-corrected data is shown in Figure 3d. As can be seen, the NMO correction de-aliases the data, and the apparated s2 data are now clearly visible at low frequencies around k_N. However, due to the space- and time-variant NMO stretch, the encoding time shift is now also space- and time-variant. We model the effective time shift using an exact expression for NMO stretch (Barnes, 1992). To deal with the time-variant time shift, the NMO-corrected data are transformed to the time-frequency domain using the S-transform (Stockwell et al., 1996). The S-transforms of the separated, NMO-corrected sources are then obtained by applying the aperiodic reconstruction method.

In Figure 4 (central column), reconstructed s1 data are shown in various domains and compared to the reference solution (left column) by differencing (right column). In Figures 4a-4c, one slice of the inverted S-transform, i.e., the f-x spectrum of s1 at t = 0.92 s, is evaluated. In Figures 4d-4f, the estimated NMO-corrected s1 wavefield after inverse S-transform is evaluated. In Figures 4g-4i, the f-k spectrum of the NMO-corrected s1 wavefield is evaluated. In Figures 4j-4l, the reconstructed s1 data after removing the NMO correction are evaluated. Finally, in Figures 4m-4o, the f-k spectrum of the s1 wavefield is evaluated. Note that the wavefield has been reconstructed successfully, mostly without suffering from aliasing.

CONCLUSIONS
Cyclic convolution in the spatial frequency domain was used to generalise wavefield signal apparition to aperiodic encoding functions. This makes it possible, for example, to deal with perturbations or deviations from nominally periodic encoding, or to acquire sim-source data on position. Alternatively, it can be used when processing makes the encoding effectively aperiodic. It was shown how this can be exploited to perform anti-aliased source separation when used in conjunction with an NMO correction and a time-frequency decomposition.

ACKNOWLEDGEMENTS
We thank Seismic Apparition GmbH for permission to publish
proprietary, patent-pending work.



Figure 4: Source 1 reconstructions in the (a-c) S-transform f-x, (d-f) NMO-corrected t-x, (g-i) NMO-corrected f-k, (j-l) t-x, and (m-o) f-k domains. The reference, reconstructed, and difference data are shown in the left, centre, and right panels, respectively.



EDITED REFERENCES
Note: This reference list is a copyedited version of the reference list submitted by the author. Reference lists for the 2016
SEG Technical Program Expanded Abstracts have been copyedited so that references provided with the online
metadata for each paper will achieve a high degree of linking to cited sources that appear on the Web.
REFERENCES

Barnes, A. E., 1992, Another look at NMO stretch: Geophysics, 57, no. 5, 749-751, http://dx.doi.org/10.1190/1.1443289.
Pedersen, Ø. S., L. Amundsen, and J. O. A. Robertsson, 2016, Wavefield signal apparition, Part II: Application to simultaneous sources and their separation: 78th Conference & Exhibition, EAGE, Extended Abstracts.
Robertsson, J. O. A., L. Amundsen, and Ø. S. Pedersen, 2016, Wavefield signal apparition, Part I: Theory: 78th Conference & Exhibition, EAGE, Extended Abstracts.
Stockwell, R. G., L. Mansinha, and R. P. Lowe, 1996, Localization of the complex spectrum: The S transform: IEEE Transactions on Signal Processing, 44, no. 4, 998-1001, http://dx.doi.org/10.1109/78.492555.



Simultaneous-source seismic acquisitions: do they allow reservoir characterization? A feasibility study with blended onshore real data

Ekaterina Shipilova, Ilaria Barone, Jean-Luc Boelle, Matteo Giboli, Jean-Luc Piazza, Pierre Hugonnet, and Cyrille Dupinet; Total E&P, CGG
SUMMARY

Simultaneous-source (or blended) seismic acquisitions allow reducing exploration costs and are thus now gaining greater importance. This paper presents a feasibility study based on a simulation of a simultaneous-source acquisition for reservoir characterization purposes. A dense single-vibrator land data set was taken as reference. The shooting times of the sources were mixed up according to a blended acquisition design. The blending fold reaches 8, and the overall blending strategy was ensured to be as unfavorable for processing as possible. In order to provide reliable conclusions, an identical processing flow was applied in parallel to the non-blended and the deblended data. The results were evaluated not only at the PreSTM stage, but also taking into account the possibility of amplitude interpretation. The quality-control tests gave positive conclusions: independent simultaneous shooting proved to be compatible with further AVO/AVA studies and other reservoir characterization techniques.

DEBLENDING METHOD

As already mentioned above, the processing of the artificially blended data set was initialized by a deblending step: the signals from different sources are first separated. The separation is performed in an iterative estimation-and-subtraction manner: at each iteration and for each source, the algorithm estimates, in the receiver gathers, the coherent signal and the incoherent contributions of the other sources. These contributions are then subtracted from the data, and the loop continues until the remaining pollutions become negligible. This process is depicted schematically in Figure 1 and has similarities with the one described by Abma and Yan (2009). The coherent-incoherent discrimination procedure is close to that proposed by Hugonnet and Boelle (2007) for linear noise attenuation. Indeed, most seismic events can be described as a superposition of plane events and thus identified by an intelligent picking in the linear Radon transform domain (Hugonnet et al., 2012). Each of the selected events is characterized by several parameters, which are further refined within an optimization routine. The procedure is applied simultaneously to all sources, so in the end the whole data set is explained and divided into single-source subsets. Some cross-talk is inevitably left, though it is weak enough to be suppressed during further conventional processing and migration.
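A toy illustration of this estimation-and-subtraction idea (a crude median-based stand-in, not the authors' Radon-domain picker) is sketched below: in a common receiver gather aligned to one source's firing times, that source's event is flat and coherent, while the other source's event is scattered by its time dither, so a trace-wise median separates the two:

```python
import numpy as np

rng = np.random.default_rng(4)
ntr, nt = 21, 128                        # shots (traces) and time samples
dither = rng.integers(-20, 21, ntr)      # source-2 time dither per shot (samples)

# Common receiver gather in source-1 time: s1 is a flat, coherent spike at
# sample 50; the s2 spike (nominally at sample 80) is scattered by the dither
d1 = np.zeros((ntr, nt))
d1[:, 50] += 1.0
d1[np.arange(ntr), 80 + dither] += 1.0

# Estimation: a trace-wise median keeps the coherent event and rejects the
# incoherent contributions; subtraction then isolates the other source
s1_est = np.median(d1, axis=0)
residual = d1 - s1_est[None, :]

# Undo the dither (move to source-2 time) and estimate s2 the same way
aligned = np.stack([np.roll(residual[k], -dither[k]) for k in range(ntr)])
s2_est = np.median(aligned, axis=0)

true_s1 = np.zeros(nt); true_s1[50] = 1.0
true_s2 = np.zeros(nt); true_s2[80] = 1.0
assert np.allclose(s1_est, true_s1) and np.allclose(s2_est, true_s2)
```

Real deblending iterates this estimate-subtract cycle with far more robust coherency measures, but the domain argument is the same: each source is only coherent in its own time frame.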

INTRODUCTION
Simultaneous-source acquisitions (Beasley et al., 1998) are already familiar to the majority of exploration geophysicists. The decreasing oil prices on the market are forcing oil and gas companies ever more strongly to rethink their expenses, including the ones related to seismic data acquisition. Furthermore, we are now facing the necessity to speed up not only exploration-stage acquisitions, but also the ones serving reservoir characterization and monitoring purposes. Therefore, new efficient acquisition technologies are encouraged to be brought up to an operational level of maturity. Simultaneous-source acquisitions allow saving time and money but introduce certain complexity for data processing. Indeed, sources emitting simultaneously interfere with each other, which leads to cross-talk in the data (Berkhout, 2008). A certain number of solutions to this problem have already been proposed by several researchers in this area. There are two main approaches: direct imaging with inversion of the blended data (Berkhout et al., 2008; Tang and Biondi, 2009; Xue et al., 2016), and preliminary signal separation (deblending) followed by more or less conventional processing (Spitz et al., 2008; Cheng and Sacchi, 2015; Chen, 2015). In this paper, we present results obtained using a method based on the second approach. Several authors have shown that the existing deblending methods allow acceptable data quality for imaging and structural interpretation (Dai et al., 2011; Verschuur and Berkhout, 2011; Henin et al., 2015). There are fewer publications on 4D and reservoir applications of simultaneous-source data (Ayeni et al., 2009; Haacke et al., 2015). The main goal of this study is to confirm the possibility of using blended data in reservoir characterization.

Figure 1: Deblending algorithm principle.

CASE STUDY
Initial data
A 3D land data set was chosen for the study. The data were
acquired in a non-simultaneous mode. The acquisition was
performed using high density single-vibrator (V1) technology



(Meunier et al., 2007). The survey surface is 60 km², hosting a fixed spread of 32 lines with 311 receivers each, oriented in the North-East/South-West direction. A dense vibrating-point pattern (50 m interval in the inline direction and 25 m in the cross-line direction) is realized by 12 vibrator seismic sources emitting in a slip-sweep mode. An 18 s vibrator sweep with a 3-72 Hz frequency range results in 5 s correlated traces with a 4 ms sampling interval. The bin size is 12.5 m × 12.5 m, and the nominal fold is 960.
Artificial blending strategy
In order to perform the deblending test, we first had to artificially blend the data which were at our disposal. Several blending strategies (acquisition geometries) of independent simultaneous sweeping (ISS) (Abma et al., 2015) were considered before arriving at the final one, seen as both the most realistic and the least favorable for deblending (Figure 2). Two main factors were taken into account: the number of pollutions encountered within a shot, and the intensity of the polluting events, which is clearly related to the distance between the sources. Eight vibrators travel within two swaths (10 shot lines per swath), with the spread of size 5 km × 7.75 km in the middle. The time interval between sweeps differs from one vibrator to another (see Figure 2). Moreover, a time shift of 25 seconds is introduced to the starting times of several vibrators. The model shows a blending fold of 8 (maximum 7 pollutions per shot point), and the pollutions are distributed in a sufficiently uniform manner. The smallest distance between a pair of simultaneously emitting vibrators is about 50 m.
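The behaviour of such a dithered design can be mimicked with a minimal continuous-recording simulation; all numbers below are loose stand-ins, not the actual survey geometry. Combing the continuous record at one vibrator's firing times aligns that vibrator's energy, while the other vibrator's energy lands incoherently, so even a plain stack already attenuates the blending noise:

```python
import numpy as np

rng = np.random.default_rng(3)
fs = 250                                  # 4 ms sampling, as in the text
trace_len = 5 * fs                        # 5 s correlated traces
nshots = 10

# Hypothetical firing times: nominal slip-sweep intervals plus random dither
t1 = 10.0 + 40.0 * np.arange(nshots) + rng.uniform(-5, 5, nshots)
t2 = 12.0 + 40.0 * np.arange(nshots) + rng.uniform(-5, 5, nshots)

wav1 = rng.standard_normal(trace_len)     # stand-in response of vibrator 1
wav2 = rng.standard_normal(trace_len)     # stand-in response of vibrator 2

record = np.zeros(int((max(t1.max(), t2.max()) + 6) * fs))  # one receiver
for t in t1:
    i = int(round(t * fs)); record[i:i + trace_len] += wav1
for t in t2:
    i = int(round(t * fs)); record[i:i + trace_len] += wav2

# Pseudo-deblending: comb the record at vibrator 1's firing times; vibrator
# 2's energy lands at pseudo-random positions in the resulting gather
gather1 = np.stack([record[int(round(t * fs)):int(round(t * fs)) + trace_len]
                    for t in t1])
stack1 = gather1.mean(axis=0)

rel = lambda x: np.linalg.norm(x - wav1) / np.linalg.norm(wav1)
worst_trace = max(rel(tr) for tr in gather1)
assert rel(stack1) <= worst_trace + 1e-12  # stacking never worsens the noise
```

In the common shot domain the roles reverse: every trace of a shot sees the same interfering sweep, which is why the pollutions there form coherent hyperbolas.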

As the deblending approach is based on the local coherency of one source's signal, while the other sources' contributions appear random, the processing was performed in the common receiver domain.
Processing results
The processing sequence began with a deblending step. Deblending was performed using the iterative method described above. The deblending results are shown in Figures 3e and 3f. Even though some cross-talk persists in the deblended raw gathers, it is not fatal, as it will be suppressed during further processing. One can also note that some of the de-correlated noise, not related to blending, was attenuated after the deblending step. Afterwards, in order to provide reliable conclusions, an identical processing flow was applied in parallel to the initial non-blended and the deblended data. The processing sequence included linear noise attenuation through a 3D f-k filter in the cross-spread domain; anomalous trace removal; surface-consistent amplitude correction (SCAC); 3D surface-consistent deconvolution; surface-consistent residual statics correction; high-resolution de-aliased multiple attenuation in the Radon domain; pre-stack Kirchhoff time migration in the common-offset-vector (COV) domain; and azimuthal residual move-out (RMO) correction. Finally, a full-azimuth full stack was created, as well as four sets of azimuthal and angle stacks. We intentionally did not apply the same corrections to the two data sets, but repeated all processing steps; namely, we re-estimated the SCAC coefficients, residual statics, and azimuthal RMO corrections, as our goal was to process the non-blended and the deblended data independently and draw conclusions for a case where only a blended data set is available without any sample for comparison. Nevertheless, the choice of parameters for all processing steps was guided by the ones chosen within the first operational processing of the non-blended data set. Resulting PreSTM sections of both data sets are shown in Figure 4. Negligible differences can be observed in the overburden part of the sections; however, they do not affect the depths of interest.
Amplitudes interpretation

Figure 2: Blending strategy. Four different shot intervals, Δt = 40 s + δt, 42 s + δt, 45 s + δt, and 48 s + δt, with a random delay −5 s < δt < 5 s; two different starting times, T = Ti and T = Ti + 25 s; and two different starting directions (arrows).
Blended seismic gathers are shown in Figure 3. One can note that the pollutions appear quite sparse and random in the common receiver domain (Figure 3c), whereas in the common shot domain (Figure 3d) all pollutions are coherent and form interfering hyperbolas which overload the gathers.

Multiple seismic attributes were computed on stacks and main horizon slices, primarily amplitude- and coherency-based ones, such as the Gradient-Coherency Cube™ (GCC) proposed by Massala et al. (2013), the correlation between the near- and far-offset sub-stacks, and others. Most of them show good matching of the non-blended and the deblended cubes, since differences on the major discontinuity horizon (Figure 5) and within the depth interval of interest (Figure 6) are hardly visible. Some minute differences, highlighted by arrows in Figure 6, occur only in areas with the lowest signal-to-noise ratio. Moreover, several 4D attributes were computed in order to measure the similarity between the two data sets. These attributes show some minor differences, namely in time shift and NRMS (Figure 7). The weak anomalies, seen especially in Figures 7a and 7b in the northwestern area of the survey, are related to the RMO correction step, where an automated horizon picking is applied. Even very slight amplitude variations may result in different picking, which can significantly affect the final horizon interpretation. Nevertheless, the final amplitude interpretation is not impacted by this minor misfit. The 4D attribute analysis is out of the scope of this paper;


Figure 3: Non-blended (a and b), blended (c and d) and deblended (e and f) raw common receiver and common shot gathers.

Figure 4: PreSTM results for non-blended (a) and deblended (b) seismic data.


one should, however, note that these kinds of anomalies would not have appeared if a real 4D processing flow had been used. For our study, we intended to process these two data sets in a completely independent manner.
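The NRMS attribute mentioned above is the standard 4D repeatability metric; a common definition (sketched here without windowing details) is 200 · RMS(a − b) / (RMS(a) + RMS(b)), in percent:

```python
import numpy as np

def nrms(a, b):
    """NRMS repeatability (percent): 0% for identical traces, about 141% for
    uncorrelated ones, and 200% for a pure polarity flip."""
    rms = lambda x: np.sqrt(np.mean(np.square(x)))
    return 200.0 * rms(np.asarray(a) - np.asarray(b)) / (rms(a) + rms(b))

rng = np.random.default_rng(6)
base = rng.standard_normal(500)                     # non-blended trace stand-in
monitor = base + 0.05 * rng.standard_normal(500)    # deblended trace, small residual

assert nrms(base, base) == 0.0
assert nrms(base, monitor) < 20.0                   # small residual -> low NRMS
assert abs(nrms(base, -base) - 200.0) < 1e-9        # polarity flip -> maximum
```

Low NRMS between the non-blended and deblended volumes is what supports the claim that the residual blending noise does not bias the amplitude interpretation.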

Figure 5: Horizon maps on major discontinuity at approximately 600 ms through the GCC (a) and the amplitude cube
(b) of the non-blended data set; same horizon maps through
the GCC (c) and the amplitude cube (d) of the deblended data
set. Hardly any differences are seen between the two data sets.

Figure 6: Horizon maps within the depth interval of interest


through the GCC (a) and the correlation between the near and
the far offsets attribute (b) of the non-blended data set; same
horizon maps through the GCC (c) and the correlation between
the near and the far offsets attribute (d) of the deblended data
set. A few minute differences, occurring in areas with the lowest signal-to-noise ratio, are highlighted by arrows.

CONCLUSIONS
The advantages of simultaneous-source seismic acquisitions are evident: the same (or even bigger) amounts of data can be acquired in shorter time, which turns out to be beneficial for marine acquisitions as well as for land ones. The only possible drawback of such acquisition designs, cross-talk, imposes challenges for processing and imaging, especially when passing on to highly precise and delicate production surveys. We presented a study which confirms that simultaneous sources are not only applicable to exploration surveys aimed at structural interpretation, but can also be used for reservoir characterization without any loss in data quality. The overall conclusion of the amplitude-interpretation quality-control test may be formulated as follows: even though the non-blended and the deblended data sets are not exactly identical after processing, their level of semblance allows making the same amplitude interpretation for the zones of interest. Consequently, we have shown that ISS blended acquisitions are fully compatible with inversion and reservoir characterization.
ACKNOWLEDGMENTS
We are particularly grateful to Sonatrach, Cepsa, and Total for permission to publish the field data example. We would like to thank Total and CGG for permission to publish this work.

Figure 7: Horizon maps above the depth interval of interest through the time-shift 4D attribute (a) and the relative RMS amplitude difference cube (b); horizon maps within the depth interval of interest through the time-shift 4D attribute (c) and the relative RMS amplitude difference cube (d). The weak anomaly visible in the northwestern area of plots (a) and (b) is due to the automated picking used to compute the azimuthal RMO corrections. These kinds of anomalies would not have appeared if a real 4D processing flow had been used.


EDITED REFERENCES
REFERENCES

Abma, R., D. Howe, M. Foster, I. Ahmed, M. Tanis, Q. Zhang, A. Arogunmati, and G. Alexander, 2015, Independent simultaneous source acquisition and processing: Geophysics, 80, no. 6, WD37-WD44, http://dx.doi.org/10.1190/geo2015-0078.1.
Abma, R., and J. Yan, 2009, Separating simultaneous sources by inversion: Presented at the 71st European Association of Geoscientists and Engineers Conference and Exhibition, http://dx.doi.org/10.3997/2214-4609.201400403.
Ayeni, G., Y. Tang, and B. Biondi, 2009, Joint preconditioned least-squares inversion of simultaneous source time-lapse seismic data sets: 79th Annual International Meeting, SEG, Expanded Abstracts, 3914-3918, http://dx.doi.org/10.1190/1.3255685.
Beasley, C. J., R. E. Chambers, and Z. Jiang, 1998, A new look at simultaneous sources: 68th Annual International Meeting, SEG, Expanded Abstracts, 133-135, http://dx.doi.org/10.1190/1.1820149.
Berkhout, A. J., 2008, Changing the mindset in seismic data acquisition: The Leading Edge, 27, 924-938, http://dx.doi.org/10.1190/1.2954035.
Berkhout, A. J., G. Blaquiere, and D. J. Verschuur, 2008, From simultaneous shooting to blended acquisition: 78th Annual International Meeting, SEG, Expanded Abstracts, 2831-2838, http://dx.doi.org/10.1190/1.3063933.
Chen, Y., 2015, Deblending by iterative orthogonalization and seislet thresholding: 85th Annual International Meeting, SEG, Expanded Abstracts, 53-58, http://dx.doi.org/10.1190/segam2015-5835651.1.
Cheng, J., and M. Sacchi, 2015, Separation and reconstruction of simultaneous source data via iterative rank reduction: Geophysics, 80, no. 4, V57-V66, http://dx.doi.org/10.1190/geo2014-0385.1.
Dai, W., X. Wang, and G. T. Schuster, 2011, Least-squares migration of multisource data with deblurring filter: Geophysics, 76, no. 5, R135-R146, http://dx.doi.org/10.1190/geo2010-0159.1.
Haacke, R. R., G. Hampson, and B. Golebiowski, 2015, Simultaneous shooting for sparse OBN 4D surveys and deblending using modified Radon operators: Presented at the 77th European Association of Geoscientists and Engineers Conference and Exhibition, http://dx.doi.org/10.3997/2214-4609.201412873.
Henin, G., D. Marin, S. Maitra, A. Rollet, S. K. Chandola, S. Kumar, N. E. Kady, and L. C. Foo, 2015, Deblending 4-component simultaneous-source data – A 2D OBC case study in Malaysia: 85th Annual International Meeting, SEG, Expanded Abstracts, 43–47, http://dx.doi.org/10.1190/segam2015-5899893.1.
Hugonnet, P., and J.-L. Boelle, 2007, Beyond aliasing regularisation by plane event extraction: Presented
at the 69th European Association of Geoscientists and Engineers Conference and Exhibition,
http://dx.doi.org/10.3997/2214-4609.201401830.
Hugonnet, P., J.-L. Boelle, and F. Prat, 2012, Local linear events extraction and filtering in the presence
of time-shifts: Presented at the 74th European Association of Geoscientists and Engineers
Conference and Exhibition, http://dx.doi.org/10.3997/2214-4609.20148307.
Massala, A., N. Keskes, L. Goncalves-Ferreira, C. Onu, S. Cordier, and P. Cordier, 2013, An innovative attribute for enhancing faults lineaments and sedimentary features during 2G&R interpretation: Presented at the Society of Petroleum Engineers Annual Technical Conference and Exhibition, http://dx.doi.org/10.2118/166122-MS.

2016 SEG
SEG International Exposition and 86th Annual Meeting

Page 111


Meunier, J., T. Bianchi, and S. Ragab, 2007, Experimenting single vibrator seismic acquisition: Presented
at the 69th European Association of Geoscientists and Engineers Conference and Exhibition,
http://dx.doi.org/10.3997/2214-4609.201401981.
Spitz, S., G. Hampson, and A. Pica, 2008, Simultaneous source separation: A prediction-subtraction approach: 78th Annual International Meeting, SEG, Expanded Abstracts, 2811–2815, http://dx.doi.org/10.1190/1.3063929.
Tang, Y., and B. Biondi, 2009, Least-squares migration/inversion of blended data: 79th Annual International Meeting, SEG, Expanded Abstracts, 2859–2863, http://dx.doi.org/10.1190/1.3255444.
Verschuur, D. J., and A. J. Berkhout, 2011, Seismic migration of blended shot records with surface-related multiple scattering: Geophysics, 76, A7–A13, http://dx.doi.org/10.1190/1.3521658.
Xue, Z., Y. Chen, S. Fomel, and J. Sun, 2016, Seismic imaging of incomplete data and simultaneous-source data using least-squares reverse time migration with shaping regularization: Geophysics, 81, no. 1, S11–S20, http://dx.doi.org/10.1190/geo2014-0524.1.



A marine field trial for iterative deblending of simultaneous sources

Shaohuan Zu and Hui Zhou, China University of Petroleum Beijing; Yangkang Chen, The University of Texas at
Austin; Haolin Chen and Mingqiang Cao, Dagang of BGP; Chunlin Xie, E.&D. Research Institute, Daqing Oilfield
Company
SUMMARY
Simultaneous-source acquisition (continuous recording, significant overlap in time) has many advantages over traditional seismic acquisition (discontinuous recording, no overlap in time). In terms of data quality, blended (simultaneous-source) acquisition allows significantly denser spatial source sampling and a much wider range of azimuths, which can improve the quality of subsurface illumination. In economic terms, blended acquisition can greatly shorten the survey time. However, simultaneous-source acquisition raises many challenges, such as continuous-recording equipment, the availability of boat units and the operation of high-speed cable vessels. The Dagang Geophysical Prospecting Branch of BGP, CNPC has carried out two field trials to explore the advantages of simultaneous sources and has gained practical experience. The goal of this paper is to give an overview of the latest field trial and to display the very successful deblending results, which may offer some encouragement in a time of depressed oil prices.

INTRODUCTION
Currently, with the drop in oil prices keeping the oil and gas industry on its toes, the advantages of simultaneous sources, which can improve the quality of illumination with denser shot coverage or shorten the survey time with wider shot coverage, have attracted much attention from industry (Berkhout, 1982; Abma and Yan, 2009; Abma et al., 2012; Abma, 2014; Zhang et al., 2013) and academia (Bagaini, 2006; Beasley et al., 1998; Li et al., 2013; Chen et al., 2014; Chen, 2014; Chen et al., 2015; Chen, 2015; Gan et al., 2015). In land seismics, simultaneous-source acquisition is well known from vibroseis acquisition. The slip-sweep technique can tremendously reduce the cycle time and significantly increase productivity. This method is based on transmitting specially encoded source sweeps such that the interfering source responses can be separated in a preprocessing step.
In marine seismics, however, impulsive sources do not lend themselves easily to signal-encoding methods. Beasley et al. (1998) proposes a method that does not require source-signature encoding, but relies on spatial source positioning to allow for separation of the signals in subsequent data processing. Hampson et al. (2008) utilizes a technique in which two or more shots with small random time delays are acquired during the survey. They demonstrate on three-dimensional (3D) field data that for deep water with modest water-bottom reflectivity no special processing is required, whereas in shallow water with stronger water-bottom reflectivity the use of shot-separation techniques is necessary. Berkhout (2008) introduces blended acquisition from simultaneous shooting and proposes a theoretical framework that enables the design of blended seismic acquisition with a focus on quality and economics.
Blacquiere et al. (2009) proposes two quantitative quality measures for the survey design of blended acquisition. Jiang and
Abma (2010) gives a limit superimposed on a plot of signal
separation residuals from a series of separation experiments.
The lowest frequency of interest to be separated decreases, the
random shifts required between shots increases. The reason is
that a small time delay for a very low frequency event will have
little effect on the coherency. Abma (2014) points out that the
delay time is important to the quality of shot separation. The
lowest frequency of interest determines the minimum range
of the random time differences required. The minimum time
range determines the maximum number of sources that can be
used simultaneously. Zu et al. (2016) introduces a periodically
varying delay code which can easily control the incoherency of
the interference in the common receiver domain and demonstrates the better deblending performance than random time
delay within the same dithering range by numerically blended
field dataset.
The first marine simultaneous-source trial in China was carried out in 2014 in the Caofeidian area. Unfortunately, this trial failed because of a flawed dithering code, so the Dagang Geophysical Prospecting Branch of BGP, CNPC carried out a second marine trial in the same area in 2015. In this latest trial, the dithering code is the periodically varying one (Zu et al., 2016). To assess the deblending performance accurately, traditional seismic data were also gathered. The goal of this paper is to give an overview of this trial and to display some successful results from comparisons of shot gathers, receiver gathers and stacked profiles, which may offer some encouragement during the current period of low oil prices.
METHOD
In this field trial, two shooting boats are used. The blending of the two sources in a common receiver gather (CRG) is expressed as (Chen et al., 2014; Zu et al., 2016)

d = d1 + Γd2,   (1)

where d denotes the blended record, and d1 and d2 represent the seismic records from the first boat and the second boat, respectively. Γ is a periodically varying dithering operator (Zu et al., 2016):

Γ = F diag(e^{-jωΔt1}, ..., e^{-jωΔtp}, e^{-jωΔt1}, ..., e^{-jωΔtp}) F⁻¹,   (2)

where F and F⁻¹ are a pair of forward and inverse Fourier transforms, and p, denoting the period, is equal to 5 in this trial.
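To make Equation 2 concrete: applying Γ to a trace amounts to a forward Fourier transform, multiplication by a linear phase e^{-jωΔt}, and an inverse transform. The following NumPy sketch, in which the trace, delay value and sampling interval are hypothetical rather than the trial's actual values, illustrates this frequency-domain time shift:

```python
import numpy as np

def apply_delay(trace, delay_s, dt):
    """Time-shift a trace by delay_s seconds via a frequency-domain phase shift."""
    n = len(trace)
    freqs = np.fft.fftfreq(n, d=dt)                 # frequency axis in Hz
    phase = np.exp(-2j * np.pi * freqs * delay_s)   # linear phase = time delay
    return np.real(np.fft.ifft(np.fft.fft(trace) * phase))

# a hypothetical spike at 1.0 s, 1 ms sampling as in the trial
dt = 0.001
trace = np.zeros(4000)
trace[1000] = 1.0                      # spike at t = 1.0 s
shifted = apply_delay(trace, 0.25, dt) # delay by 250 ms
print(np.argmax(shifted))              # spike moves to sample 1250
```

Applying the conjugate phase implements Γ⁻¹, i.e., undoing the delay.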
Note that in the blended record d, the record from the first source (d1) is coherent, but the record from the second source (Γd2) is incoherent due to the dithering operator Γ. We then apply the inverse of Γ to Equation 1 and obtain the second blended equation:

Γ⁻¹d = Γ⁻¹d1 + d2,   (3)
where Γ⁻¹ stands for the inverse of Γ. In the record Γ⁻¹d, the record from the second source (d2) is coherent; however, the record from the first source (Γ⁻¹d1) is incoherent. We process the blended record in the common receiver domain because one shot record is coherent and the others are incoherent (Hampson et al., 2008). Combining Equations 1 and 3, we obtain

Fm = d̃,   (4)

where d̃ = [d; Γ⁻¹d], F = [I, Γ; Γ⁻¹, I], and m = [d1; d2], with semicolons separating block rows.
Equation 4 is an under-determined system (the number of equations is less than the number of unknowns). One way to overcome the ill-posedness of Equation 4 is to solve the minimization problem with the cost function

m̂ = arg min_m ‖Fm − d̃‖₂² + λ‖Sm‖₁,   (5)

where ‖·‖₂² stands for the square of the L2 norm of the data fitting, which measures the difference between the observed and the modeled data, and ‖·‖₁ stands for the L1 norm of the model shaping, which is used to penalize nonsparse solutions. S is a sparsity-promoting transform and λ is a penalty coefficient which keeps the balance between the data fitting and the model shaping. Equation 5 can be minimized using the iterative shrinkage-thresholding algorithm (ISTA) (Beck and Teboulle, 2009):

m_{n+1} = C⁻¹ T_λ [ C ( m_n + Fᵀ(d̃ − Fm_n) ) ],   (6)

where C and C⁻¹ are a pair of forward and inverse curvelet transforms, T_λ denotes the threshold function with threshold parameter λ, and Fᵀ is the adjoint of F. Threshold functions can be divided into two types: soft and hard thresholding. Soft thresholding removes coefficients whose values are smaller than a certain level and shrinks the remaining values by this level (Donoho, 1995). Hard thresholding only removes coefficients whose values are smaller than a certain level. In this paper, we choose the hard threshold function because, according to many numerical tests, it achieves a higher signal-to-noise ratio (S/N) than soft thresholding, although hard thresholding tends to produce larger oscillations due to the discontinuity of the thresholding function.
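The two thresholding rules can be stated compactly. This small NumPy sketch, with arbitrary coefficient values, shows how soft thresholding shrinks the surviving coefficients while hard thresholding leaves them untouched:

```python
import numpy as np

def soft_threshold(x, tau):
    """Soft thresholding: zero values below tau, shrink the rest toward zero by tau."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def hard_threshold(x, tau):
    """Hard thresholding: zero values below tau, keep the rest unchanged."""
    return np.where(np.abs(x) >= tau, x, 0.0)

coeffs = np.array([-3.0, -0.4, 0.1, 0.8, 2.5])
print(soft_threshold(coeffs, 0.5))  # large values shrink by 0.5, small ones vanish
print(hard_threshold(coeffs, 0.5))  # large values survive unchanged
```

The bias introduced by the shrinkage is what soft thresholding trades for continuity; hard thresholding is unbiased on the kept coefficients, which is consistent with the higher S/N reported above.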
EXAMPLE
The marine test is carried out in shallow water, and ocean-bottom nodes are used (Figure 1). There are four node lines (the red dotted lines in Figure 1) at the water bottom to receive the blended seismic record, and each node line has 400 nodes. The blue and green dotted lines represent the two shot lines, respectively, and each source vessel fires 301 shots. The sailing direction of the first source boat is A-B-C-D (the blue dots in Figure 1) and that of the second source boat is E-B-C-F (the green dots in Figure 1). The spacing of the node lines is 200 m and that of the source lines is 400 m. The air-gun parameters of the two source vessels are the same. In the record, the temporal sampling interval is 1 ms, the record length is 4 s, and the dithering range is from -500 ms to 500 ms. The code scheme is the periodically varying one proposed by Zu et al. (2016).
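A periodically varying code simply cycles through a fixed set of p delay values over the shot sequence. The sketch below uses hypothetical delay values within the stated ±500 ms range; the actual code values of the trial are those of Zu et al. (2016):

```python
import numpy as np

def periodic_dither(n_shots, base_delays_ms):
    """Repeat a fixed period of delay times over the whole shot sequence."""
    p = len(base_delays_ms)
    return np.array([base_delays_ms[i % p] for i in range(n_shots)])

base = [-500, -250, 0, 250, 500]     # hypothetical period-5 code (ms)
delays = periodic_dither(301, base)  # 301 shots per line, as in this trial
print(delays[:7])                    # the pattern repeats every 5 shots
```

Because the pattern is fixed, the interference in the common receiver domain is incoherent in a controlled, predictable way, which is the property the deblending relies on.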
Figure 1: The geometry of the survey.


The blended common receiver gather is shown in Figure 2a. From Figure 2a, we can see that the record from the first source boat is coherent but, due to the dithering operator, the record of the second boat is incoherent. The incoherent interference seriously pollutes the useful signal, and we cannot see signals at the position of the ground roll. Note that the shot lines are not straight, so there is a discontinuity, as indicated by the black arrow in Figure 2a. Figure 2b illustrates the dithering code. The dithering range is from -500 ms to 500 ms and the period is 5. Readers are referred to Zu et al. (2016) for more details about the periodically varying code. After solving Equation 5, we obtain the deblending results. Figure 3 shows the deblending results of the two source boats. Because the frequency of the ground roll is very low, the dithering operator has little effect on it, so the deblending method has difficulty separating the ground roll. Fortunately, this problem is easily handled by attenuating the ground roll, exploiting its low velocity and low frequency. Figure 3a is the deblended result of the first source, in which the incoherent interference of the second source boat has been separated out well. Similarly, Figure 3b shows the deblended result of the second source boat; likewise, we hardly see any incoherent interference left in Figure 3b.
After processing all the common receiver gathers, we can also demonstrate the deblending performance in the common shot gather. Figure 4 displays the blended common shot gather. From Figure 4, we can see that both the useful signals and the interference are coherent in the shot domain, which is the main reason why it is difficult to separate blended data in the common shot domain. In Figure 4, the direct wave of record B seriously pollutes the useful signal of record A, because the direct wave is much stronger than the reflected signal. Figure 5a shows the deblended result of record A after filtering the residual ground roll of record B. The deblending performance shown in the common shot gather is excellent: the coherent record of B has been separated out cleanly and we cannot find any damage to the signals. Figure 5b shows the deblended result of record B, which is also very good; hardly any interference from record A is left in Figure 5b.


Figure 2: (a) The blended common receiver gather; (b) the dithering code, with a dithering range of -500 ms to 500 ms and a period of five.

Figure 5: The deblended common shot gather of (a) the first source and (b) the second source, respectively.

To quantitatively analyze the damage to the signal, we select two unblended sections from Figure 4, marked by the red and blue rectangles, and analyze their frequency spectra. In Figure 6a, the red line represents the amplitude spectrum of the red rectangle shown in Figure 4, and the blue line denotes that of the same section shown in Figure 5a. The coincidence of the two lines demonstrates that the damage to the useful signal is very weak. Figure 6b is the corresponding comparison between the two blue rectangles shown in Figure 4 and Figure 5b. Similarly, the blue line is very close to the red line. From Figure 6, it is evident that the deblending performance is very good and the damage to the useful signal is hardly detectable.
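The quality-control measure behind Figure 6 is straightforward to compute: average the amplitude spectra over the traces of each rectangle and overlay them. A sketch follows, with hypothetical window arrays standing in for the rectangle selections:

```python
import numpy as np

def amplitude_spectrum(window, dt):
    """Average amplitude spectrum over the traces of a (time x trace) window."""
    spec = np.abs(np.fft.rfft(window, axis=0)).mean(axis=1)
    freqs = np.fft.rfftfreq(window.shape[0], d=dt)
    return freqs, spec

dt = 0.001
rng = np.random.default_rng(1)
win_before = rng.standard_normal((512, 20))               # stand-in for a rectangle
win_after = win_before + 0.01 * rng.standard_normal((512, 20))
f, s1 = amplitude_spectrum(win_before, dt)
_, s2 = amplitude_spectrum(win_after, dt)
rel = np.max(np.abs(s1 - s2)) / s1.max()
print(rel)  # a small value indicates negligible signal damage
```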

Figure 3: The deblended common receiver gather of (a) the first source and (b) the second source, respectively.

Figure 4: The blended common shot gather, in which the records of the two source boats are coherent.

Figure 6: The comparison of the frequency spectra of the two rectangles shown in Figure 4 with those of the deblended results shown in Figures 5a and 5b.


After conventional seismic data processing, we obtain the stacked sections shown in Figure 7. Figure 7a is the stacked section of the blended record and Figure 7b shows the stacked section of the deblended record. As the red rectangles show, the events in Figure 7b are much clearer than those in Figure 7a, and the S/N of Figure 7b is higher. In this marine field trial, conventional seismic data were also collected to benchmark the deblending performance. However, we only have a partial stacked profile of the conventional seismic data. Figure 8a is the partial stacked profile of the conventional seismic data, and we choose the same part of the stacked profile of the deblended record, shown in Figure 8b, for comparison. The two stacked profiles shown in Figure 8 are almost the same, which indicates that the deblending performance is very successful.

Figure 8: The comparison of stacked profiles between (a) the deblended record and (b) the conventional record.

Figure 7: The stacked profile of (a) the blended record and (b) the deblended record. The differences are shown in the red rectangles.

CONCLUSIONS

Most published deblending results are based on numerically blended synthetic examples or numerically blended field data. In this paper, the blended data were obtained from the marine trial directly. Without any preprocessing before deblending, the results of the simultaneous-source field trial are very encouraging. We have gained practical experience with simultaneous shooting from this trial. A comparison of the two marine trials, one successful and one failed, shows that the shot scheduling of simultaneous sources is an important factor in the success of the survey. Good shot scheduling can achieve high-quality separation with no impact from the blended acquisition. Further analysis of this trial, such as AVO and migration comparisons, will be discussed in the near future.

ACKNOWLEDGEMENTS

The authors would like to thank the Dagang Geophysical Prospecting Branch of BGP, CNPC for permission to present this paper. We would like to thank Shuwei Gan, Zhaoyu Jin, Xingye Liu and Lin Zhou for inspiring discussions about simultaneous sources. Zu would especially like to thank Ms. Shan Qu for teaching him about simultaneous sources. This research is partly supported by the 973 Program of China (2013CB228603) and the National Major Science and Technology Program (2016ZX05010001-002).



EDITED REFERENCES
Note: This reference list is a copyedited version of the reference list submitted by the author. Reference lists for the 2016
SEG Technical Program Expanded Abstracts have been copyedited so that references provided with the online
metadata for each paper will achieve a high degree of linking to cited sources that appear on the Web.
REFERENCES

Abma, R., 2014, Shot scheduling in simultaneous shooting: 84th Annual International Meeting, SEG, Expanded Abstracts, 94–98.
Abma, R., and J. Yan, 2009, Separating simultaneous sources by inversion: 71st Annual International Conference and Exhibition, EAGE, Extended Abstracts.
Abma, R., Q. Zhang, A. Arogunmati, and G. Beaudoin, 2012, An overview of BP's marine independent simultaneous source field trials: 82nd Annual International Meeting, SEG, Expanded Abstracts, 1–5.
Bagaini, C., 2006, Overview of simultaneous vibroseis acquisition methods: 76th Annual International Meeting, SEG, Expanded Abstracts, 70–74.
Beasley, C. J., R. E. Chambers, and Z. Jiang, 1998, A new look at simultaneous sources: 68th Annual International Meeting, SEG, Expanded Abstracts, 133–135.
Beck, A., and M. Teboulle, 2009, A fast iterative shrinkage-thresholding algorithm for linear inverse problems: SIAM Journal on Imaging Sciences, 2, 183–202, http://dx.doi.org/10.1137/080716542.
Berkhout, A. J., 1982, Seismic migration: Imaging of acoustic energy by wave field extrapolation, Part A: Theoretical aspects: Elsevier Scientific Publishing Company.
Berkhout, A. J., 2008, Changing the mindset in seismic data acquisition: The Leading Edge, 27, 924–938, http://dx.doi.org/10.1190/1.2954035.
Blacquiere, G., G. Berkhout, and E. Verschuur, 2009, Survey design for blended acquisition: 79th Annual International Meeting, SEG, Expanded Abstracts, 56–60.
Chen, Y., 2014, Deblending using a space-varying median filter: Exploration Geophysics, 51–83.
Chen, Y., 2015, Iterative deblending with multiple constraints based on shaping regularization: IEEE Geoscience and Remote Sensing Letters, 12, 2247–2251, http://dx.doi.org/10.1109/LGRS.2015.2463815.
Chen, Y., S. Fomel, and J. Hu, 2014, Iterative deblending of simultaneous-source seismic data using seislet-domain shaping regularization: Geophysics, 79, V179–V189, http://dx.doi.org/10.1190/geo2013-0449.1.
Chen, Y., J. Yuan, S. Zu, S. Qu, and S. Gan, 2015, Seismic imaging of simultaneous-source data using constrained least-squares reverse time migration: Journal of Applied Geophysics, 114, 32–35, http://dx.doi.org/10.1016/j.jappgeo.2015.01.004.
Donoho, D., 1995, De-noising by soft-thresholding: IEEE Transactions on Information Theory, 41, 613–627, http://dx.doi.org/10.1109/18.382009.
Gan, S., S. Wang, Y. Chen, S. Qu, and S. Zu, 2015, Velocity analysis of simultaneous-source data using high-resolution semblance – coping with the strong noise: Geophysical Journal International, 204, 768–779, http://dx.doi.org/10.1093/gji/ggv484.
Hampson, G., J. Stefani, and F. Herkenhoff, 2008, Acquisition using simultaneous sources: 78th Annual International Meeting, SEG, Expanded Abstracts, 2816–2820.
Jiang, Z., and R. Abma, 2010, An analysis on the simultaneous imaging of simultaneous source data: 80th Annual International Meeting, SEG, Expanded Abstracts, 3115–3119.
Li, C., C. C. Mosher, L. C. Morley, Y. Ji, and J. D. Brewer, 2013, Joint source deblending and reconstruction for seismic data: 83rd Annual International Meeting, SEG, Expanded Abstracts, 82–87.



Zhang, Q., R. Abma, and I. Ahmed, 2013, A marine node simultaneous source acquisition trial at Atlantis, Gulf of Mexico: 83rd Annual International Meeting, SEG, Expanded Abstracts, 52–58.
Zu, S., H. Zhou, Y. Chen, S. Qu, X. Zou, H. Chen, and R. Liu, 2016, A periodically varying code for improving deblending of simultaneous sources in marine acquisition: Geophysics, 81, 113, http://dx.doi.org/10.1190/geo2015-0447.1.


Time-jittered marine acquisition: A rank-minimization approach for 5D source separation


Rajiv Kumar*, Shashin Sharan, Haneet Wason, and Felix J. Herrmann


Seismic Laboratory for Imaging and Modeling (SLIM), University of British Columbia
SUMMARY
Simultaneous source marine acquisition has been recognized as an economic way of improving spatial sampling and speeding up acquisition, where a single- (or multiple-) source vessel fires at jittered source locations and time instances. The acquired simultaneous data volume is then processed to separate the overlapping shot records, resulting in a densely sampled data volume. It has been shown in the past that the simultaneous source acquisition design and source separation process can be set up as a compressed sensing problem, where conventional seismic data are reconstructed from simultaneous data via a sparsity-promoting optimization formulation. While the recovery quality of the separated data is reasonably good, the recovery process can be computationally expensive due to transform-domain redundancy. In this paper, we present a computationally tractable rank-minimization algorithm to separate simultaneous data volumes. The proposed algorithm is suitable for large-scale seismic data, since it avoids singular-value decompositions and uses a low-rank factorized formulation instead. Results are illustrated for simulations of simultaneous time-jittered continuous recording for a 3D ocean-bottom cable survey.

INTRODUCTION
Simultaneous source marine acquisition mitigates the challenges posed by conventional marine acquisition in terms of sampling and survey efficiency, since more than one shot can be fired at the same time (Beasley et al., 1998; de Kok and Gillespie, 2002; Berkhout, 2008; Beasley, 2008; Hampson et al., 2008). The final objective of source separation is to get interference-free shot records. Wason and Herrmann (2013) have shown that the challenge of separating simultaneous data can be addressed through a combination of tailored single- (or multiple-) source simultaneous acquisition design and curvelet-based sparsity-promoting recovery. The idea is to design a pragmatic time-jittered marine acquisition scheme in which acquisition time is reduced and spatial sampling is improved by separating overlapping shot records and interpolating jittered coarse source locations to a fine source sampling grid. While the proposed sparsity-promoting approach recovers densely sampled conventional data reasonably well, it poses computational challenges, since curvelet-based sparsity-promoting methods can become computationally intractable, in terms of speed and memory storage, especially for large-scale 5D seismic data volumes.
Recently, nuclear-norm minimization based methods have shown the potential to overcome this computational bottleneck (Kumar et al., 2015a); hence, these methods have been used successfully for source separation (Maraschini et al., 2012; Cheng and Sacchi, 2013; Kumar et al., 2015b). The general idea is that conventional seismic data can be well approximated in some rank-revealing transform domain where the data exhibit low-rank structure or fast decay of singular values. Therefore, in order to use nuclear-norm minimization based algorithms for source separation, the acquisition design should increase the rank or slow the decay of the singular values. In Kumar et al. (2015b) we used a nuclear-norm minimization formulation to separate simultaneous data acquired from an over/under acquisition design, where the separation is performed on each monochromatic data matrix independently. Here, by virtue of the design of the simultaneous time-jittered marine acquisition, we formulate a nuclear-norm minimization problem that works in the temporal-frequency domain, i.e., using all monochromatic data matrices together. One of the computational bottlenecks of working with the nuclear-norm minimization formulation is the computation of singular values. Therefore, in this paper we combine the modified nuclear-norm minimization approach with the factorization approach recently developed by Lee et al. (2010). The experimental results on a synthetic 5D data set demonstrate successful implementation of the proposed methodology.
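The computational point can be seen in a small example: for X = LRᵀ, the nuclear norm satisfies ‖X‖_* = min ½(‖L‖_F² + ‖R‖_F²), so optimizing over the factors L and R avoids SVDs altogether. The sketch below verifies the identity at the balanced SVD factors; the matrix sizes and rank are arbitrary, and the SVD here is only used to build a reference point, not in the factorized algorithm itself:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 3))
B = rng.standard_normal((3, 40))
X = A @ B                           # a rank-3 matrix

nuc = np.linalg.norm(X, 'nuc')      # nuclear norm, computed via an SVD

# balanced factorization X = L R^T built from the SVD (reference only;
# in practice L and R are optimized directly, so no SVD is ever formed)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
L = U[:, :3] * np.sqrt(s[:3])
R = Vt[:3, :].T * np.sqrt(s[:3])
bound = 0.5 * (np.linalg.norm(L, 'fro')**2 + np.linalg.norm(R, 'fro')**2)

print(np.allclose(L @ R.T, X), np.isclose(bound, nuc))  # True True
```

Any factor pair with X = LRᵀ gives an upper bound ½(‖L‖_F² + ‖R‖_F²) ≥ ‖X‖_*, with equality at balanced factors, which is why minimizing the factored penalty acts as a nuclear-norm surrogate.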
METHODOLOGY
The simultaneous source separation problem can be perceived as a rank-minimization problem. In this paper, we follow the time-jittered marine acquisition setting proposed by Wason and Herrmann (2013), where a single source vessel sails across an ocean-bottom array firing two airgun arrays at jittered source locations and time instances, with the receivers recording continuously (Figure 1). This results in a continuous time-jittered simultaneous data volume.

Figure 1: Aerial view of the 3D time-jittered marine acquisition.


Here, we consider one source vessel with two airgun arrays
firing at jittered times and locations. Starting from point a,
the source vessel follows the acquisition path shown by black
lines and ends at point b. The receivers are placed at the ocean
bottom (red dashed lines).
Page 119

Downloaded 12/16/16 to 79.62.242.250. Redistribution subject to SEG license or copyright; see Terms of Use at http://library.seg.org/

5D time-jittered source separation

A conventional 5D seismic data volume can be represented as a tensor D ∈ C^{n_f × n_rx × n_sx × n_ry × n_sy}, where (n_sx, n_sy) and (n_rx, n_ry) denote the numbers of sources and receivers along the x and y coordinates, and n_f denotes the number of frequencies. The aim is to recover the data volume D from the continuous time-domain simultaneous data volume b ∈ C^{n_T × n_rx × n_ry} by finding a minimum-rank solution D that satisfies the system of equations A(D) = b. Here, A represents a linear sampling-transformation operator, n_T < n_t n_sx n_sy is the total number of time samples in the continuous time-domain simultaneous data volume, and n_t is the total number of time samples in the conventional seismic data. Note that the operator A maps D to a lower-dimensional simultaneous data volume b, since the acquisition process superimposes shot records shifted with respect to their firing times. The sampling-transformation operator A is defined as A = M R S, where the operator S permutes the tensor coordinates from (n_rx, n_sx, n_ry, n_sy) (the rank-revealing domain, i.e., Figure 2a) to (n_rx, n_ry, n_sx, n_sy) (the standard acquisition ordering, i.e., Figure 2b), and its adjoint reverses this permutation. The restriction operator R subsamples the conventional data volume at jittered source locations (Figure 2c), and the sampling operator M maps the conventional subsampled temporal-frequency-domain data to the simultaneous time-domain data (Figure 2d). Note that Figure 2d represents a time slice from the continuous (simultaneous) data volume, where the stars represent locations of jittered sources in the simultaneous acquisition.
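A minimal time-domain sketch of the restriction-plus-shifting idea behind A (all sizes and names below are our own, and the frequency-domain permutation S is omitted): R selects a jittered subset of source positions, and M superimposes the selected shot records into one continuous record at their firing times. The dot test at the end checks that the forward and adjoint codes really form an adjoint pair, which any gradient- or rank-minimization-based solver requires.

```python
import numpy as np

rng = np.random.default_rng(1)

n_src, n_t = 8, 50                 # fine source grid, samples per shot record
keep = np.array([1, 3, 4, 6])      # jittered subset of source positions (R)
fire = np.array([0, 40, 95, 130])  # firing-time sample of each kept shot (M)
n_T = fire.max() + n_t             # length of the continuous record

def forward(D):
    """A ~ M R: subsample shots, then superimpose them at firing times."""
    b = np.zeros(n_T)
    for s, t0 in zip(keep, fire):
        b[t0:t0 + n_t] += D[s]     # overlapping windows model blending
    return b

def adjoint(b):
    """A^T: window the continuous record back onto the kept shots."""
    D = np.zeros((n_src, n_t))
    for s, t0 in zip(keep, fire):
        D[s] += b[t0:t0 + n_t]
    return D

# Dot test: <A x, y> must equal <x, A^T y> for a true adjoint pair.
x, y = rng.standard_normal((n_src, n_t)), rng.standard_normal(n_T)
lhs, rhs = forward(x) @ y, np.vdot(x, adjoint(y))
assert abs(lhs - rhs) < 1e-9 * max(1.0, abs(lhs))
```

Note that the first two firing windows overlap (samples 40 to 49), which is exactly the interference that makes the separation problem nontrivial.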

(a)  (b)  (c)  (d)

Figure 2: Schematic representation of the sampling-transformation operator A during the forward operation. The adjoint of the operator A follows accordingly. (a, b, c) represent a monochromatic data slice from the conventional data volume and (d) represents a time slice from the continuous data volume.

Rank-minimization formulations require that the target data set exhibit a low-rank structure or a fast decay of singular values. Consequently, the sampling-restriction (M R) operation should increase the rank or slow the decay of the singular values. Since there is no unique notion of rank for tensors, we can choose the rank of different matricizations of D (Kreimer and Sacchi, 2012), where the idea is to create the matrix D^(i) by grouping the dimensions of D specified by i and vectorizing them along the rows, while vectorizing the other dimensions along the columns. In this work, we consider the matricizations proposed by Da Silva and Herrmann (2013), where i = (n_sx, n_sy), i.e., placing both source coordinates along the columns (Figure 3a), or i = (n_rx, n_sx), i.e., placing the receiver-x and source-x coordinates along the columns (Figure 3b). As we see in Figure 3e, the matricization i = (n_sx, n_sy) has higher rank, or slower decay of the singular values (solid red curve), compared to the matricization i = (n_rx, n_sx) (solid blue curve). The sampling-restriction operator removes random columns in the matricization i = (n_sx, n_sy) (Figure 3c); as a result, the overall singular values decay faster (dotted red curve), because the missing columns set singular values to zero, which is opposite to the requirement of rank-minimization algorithms. On the other hand, the sampling-restriction operator removes random blocks in the matricization i = (n_rx, n_sx) (Figure 3d), hence slowing down the decay of the singular values (dotted blue curve). This scenario is much closer to the matrix-completion problem (Recht et al., 2010), where samples are removed at random points in a matrix. Therefore, we address the source separation problem by exploiting the low-rank structure in the matricization i = (n_rx, n_sx).

2016 SEG
SEG International Exposition and 86th Annual Meeting

(e)

Figure 3: Monochromatic slice at 10.0 Hz. Fully sampled data volume and simultaneous data volume matricized as (a, c) i = (n_sx, n_sy), and (b, d) i = (n_rx, n_sx). (e) Decay of singular values. Notice that the fully sampled data organized as i = (n_sx, n_sy) has slower decay of the singular values (solid red curve) compared to the i = (n_rx, n_sx) organization (solid blue curve). However, the sampling-restriction operator slows the decay of the singular values in the i = (n_rx, n_sx) organization (dotted blue curve) compared to the i = (n_sx, n_sy) organization (dotted red curve), which is a favorable scenario for the rank-minimization formulation.

Since rank-minimization problems are NP-hard and therefore computationally intractable, Recht et al. (2010) showed that solutions to rank-minimization problems can be found by solving a nuclear-norm minimization problem. Da Silva and Herrmann (2013) showed that for seismic data interpolation the sampling operator M is separable; hence, the data can be interpolated by working on each monochromatic data tensor independently. In continuous time-jittered marine acquisition, however, the sampling operator M is nonseparable, since it is a combined time-shifting and shot-jittering operator, so we cannot perform source separation independently over different monochromatic data tensors. Therefore, we formulate the nuclear-norm minimization over the temporal-frequency domain as follows:

    min_D  Σ_{j=1}^{n_f} ||D_j^(i)||_*   subject to   ||A(D) − b||_2 ≤ ε,     (1)

where Σ_{j=1}^{n_f} ||D_j^(i)||_* = Σ_{j=1}^{n_f} ||s_j||_1 and s_j is the vector of singular values for each monochromatic data matricization. One of the main drawbacks of the nuclear-norm minimization problem is that it involves computation of the singular-value decomposition (SVD) of the matrices, which is prohibitively expensive for large-scale seismic data. Therefore, we avoid the direct approach to the nuclear-norm minimization problem and follow a factorization-based approach (Rennie and Srebro, 2005; Lee et al., 2010; Recht and Ré, 2011). The factorization-based approach parametrizes each monochromatic data matrix D_j^(i) as a product of two low-rank factors L_j^(i) ∈ C^{(n_rx n_sx) × k} and R_j^(i) ∈ C^{(n_ry n_sy) × k} such that D_j^(i) = L_j^(i) (R_j^(i))^H, where k represents the rank of the underlying matrix and H represents the Hermitian transpose. Note that the tensors L, R can be formed by concatenating the matrices L_j^(i), R_j^(i), respectively. The optimization scheme can then be carried out using the tensors L, R instead of D, thereby significantly reducing the size of the decision variable from n_rx n_ry n_sx n_sy n_f to 2 k n_rx n_sx n_f when k ≪ n_rx n_sx. Following Rennie and Srebro (2005), the sum of the nuclear norms obeys the relationship

    Σ_{j=1}^{n_f} ||D_j^(i)||_*  ≤  (1/2) Σ_{j=1}^{n_f} || [L_j^(i) ; R_j^(i)] ||_F^2,

where ||·||_F^2 is the squared Frobenius norm of the matrix (the sum of the squared entries).

EXPERIMENTS & RESULTS

We test the efficacy of our method by simulating a synthetic 5D data set using the BG Compass velocity model (provided by the BG Group), which is a geologically complex and realistic model. We also quantify the cost savings associated with simultaneous acquisition in terms of an improved spatial-sampling ratio, defined as the ratio between the spatial grid interval of the observed simultaneous time-jittered acquisition and the spatial grid interval of the recovered conventional acquisition. The speedup in acquisition is measured using the survey-time ratio (STR), proposed by Berkhout (2008), which measures the ratio of the time of conventional acquisition to that of simultaneous acquisition.

Using a time-stepping finite-difference modelling code provided by Chevron, we simulate a conventional 5D data set of dimensions 2501 × 101 × 101 × 40 × 40 (n_t × n_rx × n_ry × n_sx × n_sy) over a survey area of approximately 4 km × 4 km. The conventional time-sampling interval is 4.0 ms, and the source- and receiver-sampling interval is 6.25 m. We use a Ricker wavelet with a central frequency of 15.0 Hz as the source function. Figure 4a shows a conventional common-shot gather. Applying the sampling-transformation operator A to the conventional data generates approximately 65 minutes of 3D continuous time-domain simultaneous seismic data, 30 seconds of which is shown in Figure 4b. By virtue of the design of the simultaneous time-jittered acquisition, the simultaneous data volume b is 4-times subsampled compared to conventional acquisition. Consequently, the spatial sampling of the recovered data is improved by a factor of 4, and the acquisition time is reduced by the same factor.

Simply applying the adjoint of the sampling operator M to the simultaneous data b results in strong interference from the other sources, as shown in Figure 4c. Therefore, to recover the interference-free conventional seismic data volume from the simultaneous time-jittered data, we solve the factorization-based nuclear-norm minimization formulation. We perform the source separation for a range of rank values k of the two low-rank factors L_j^(i), R_j^(i) and find that k = 100 gives the best signal-to-noise ratio (SNR) of the recovered conventional data. Figure 4d shows the recovered shot gather, with an SNR of 20.8 dB, and the corresponding residual is shown in Figure 4e. As illustrated, we are able to separate the shots while also interpolating the data to the finer grid of 6.25 m. To establish that we lose very little coherent energy during source separation, we intensify the amplitudes of the residual plot by a factor of 8 (Figure 4e). The late-arriving events, which are often weak in energy, are also separated reasonably well. In terms of memory storage, the rank-minimization approach is approximately 7.2 times more efficient than the curvelet-based sparsity-promoting approach with 2D curvelets, and 24 times more efficient than with 3D curvelets.
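The factorization-based bound used in the formulation above (Rennie and Srebro, 2005) can be checked numerically on a toy example; the sizes below are arbitrary and only illustrate the relation between the nuclear norm and the Frobenius norms of the factors.

```python
import numpy as np

rng = np.random.default_rng(3)

# Random rank-k complex matrix X = L R^H, as in D = L R^H above.
m, n, k = 30, 20, 5
L = rng.standard_normal((m, k)) + 1j * rng.standard_normal((m, k))
R = rng.standard_normal((n, k)) + 1j * rng.standard_normal((n, k))
X = L @ R.conj().T

# Nuclear norm ||X||_* (this is the expensive SVD the factorization avoids).
nuc = np.linalg.svd(X, compute_uv=False).sum()

# Rennie-Srebro upper bound: ||X||_* <= (1/2)(||L||_F^2 + ||R||_F^2).
bound = 0.5 * (np.linalg.norm(L, 'fro') ** 2 + np.linalg.norm(R, 'fro') ** 2)
assert nuc <= bound + 1e-8

# Equality holds for the "balanced" factors built from the SVD of X itself.
U, s, Vh = np.linalg.svd(X, full_matrices=False)
Lb = U[:, :k] * np.sqrt(s[:k])
Rb = Vh[:k].conj().T * np.sqrt(s[:k])
assert np.allclose(Lb @ Rb.conj().T, X)
assert np.isclose(0.5 * (np.linalg.norm(Lb, 'fro') ** 2
                         + np.linalg.norm(Rb, 'fro') ** 2), nuc)
```

Minimizing the right-hand side over all factorizations therefore minimizes the nuclear norm itself, without ever forming an SVD of the full data matrix.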


CONCLUSIONS

We propose a factorization-based nuclear-norm minimization formulation for simultaneous source separation and interpolation of a 5D seismic data volume. Since the sampling-transformation operator is nonseparable in simultaneous time-jittered marine acquisition, we formulate the factorization-based nuclear-norm minimization problem over the entire temporal-frequency domain, contrary to solving each monochromatic data matrix independently. We show that the proposed methodology is able to separate the shots and interpolate the data to a fine underlying grid reasonably well. The proposed approach is also memory efficient in comparison to the curvelet-based sparsity-promoting approach.
ACKNOWLEDGEMENTS
We would like to thank the BG Group for permission to use the
synthetic Compass velocity model, Chevron for providing the
3D time-stepping finite-difference modelling code and Tristan
van Leeuwen for valuable discussions. The authors wish to
acknowledge the SENAI CIMATEC Supercomputing Center
for Industrial Innovation, with support from BG Brasil and
the Brazilian Authority for Oil, Gas and Biofuels (ANP), for
the provision and operation of computational facilities and
the commitment to invest in Research & Development. This
work was financially supported in part by the Natural Sciences
and Engineering Research Council of Canada Collaborative
Research and Development Grant DNOISE II (CRDPJ 375142-08). This research was carried out as part of the SINBAD II
project with the support of the member organizations of the
SINBAD Consortium.


(a)  (b)  (c)  (d)  (e)

Figure 4: Source separation recovery. (a) A shot gather from the conventional data; (b) a 30-second section of the continuous time-domain simultaneous data; (c) data recovered by applying the adjoint of the sampling operator M; (d) data recovered via the proposed formulation (SNR = 20.8 dB); (e) difference between (a) and (d), where amplitudes are magnified by a factor of 8 to illustrate the very small loss of coherent energy.
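The SNR values quoted for Figure 4 are, in the usual convention for recovery experiments, measured against the conventional (reference) data; a small helper in that convention (our own code, not the authors') is:

```python
import numpy as np

def recovery_snr(true, est):
    """SNR in dB of an estimate relative to reference data:
    20 * log10(||true|| / ||true - est||)."""
    true, est = np.asarray(true, float), np.asarray(est, float)
    return 20.0 * np.log10(np.linalg.norm(true) / np.linalg.norm(true - est))

# Example: an estimate whose error norm is exactly 10% of the signal norm
# has an SNR of 20 dB, the same order as the recovery in Figure 4d.
rng = np.random.default_rng(0)
d = rng.standard_normal(1000)
e = rng.standard_normal(1000)
e *= 0.10 * np.linalg.norm(d) / np.linalg.norm(e)
print(round(recovery_snr(d, d + e), 1))  # 20.0
```

Doubling the error norm lowers the SNR by about 6 dB, which gives a feel for how sensitive the quoted 20.8 dB figure is to residual interference.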

EDITED REFERENCES
Note: This reference list is a copyedited version of the reference list submitted by the author. Reference lists for the 2016
SEG Technical Program Expanded Abstracts have been copyedited so that references provided with the online
metadata for each paper will achieve a high degree of linking to cited sources that appear on the Web.
REFERENCES

Beasley, C. J., 2008, A new look at marine simultaneous sources: The Leading Edge, 27, 914–917, http://dx.doi.org/10.1190/1.2954033.
Beasley, C. J., R. E. Chambers, and Z. Jiang, 1998, A new look at simultaneous sources: 68th Annual International Meeting, SEG, Expanded Abstracts, 17, 133–135.
Berkhout, A., 2008, Changing the mindset in seismic data acquisition: The Leading Edge, 27, 924–938, http://dx.doi.org/10.1190/1.2954035.
Cheng, J., and M. D. Sacchi, 2013, Separation of simultaneous source data via iterative rank reduction: 83rd Annual International Meeting, SEG, Expanded Abstracts, 88–93.
Da Silva, C., and F. J. Herrmann, 2013, Hierarchical Tucker tensor optimization – applications to 4D seismic data interpolation: Presented at the EAGE Annual Conference Proceedings, http://dx.doi.org/10.3997/2214-4609.20130390.
de Kok, R., and D. Gillespie, 2002, A universal simultaneous shooting technique: 64th Annual International Conference and Exhibition, EAGE, Extended Abstracts.
Hampson, G., J. Stefani, and F. Herkenhoff, 2008, Acquisition using simultaneous sources: The Leading Edge, 27, 918–923, http://dx.doi.org/10.1190/1.2954034.
Kreimer, N., and M. D. Sacchi, 2012, A tensor higher-order singular value decomposition for prestack seismic data noise reduction and interpolation: Geophysics, 77, no. 3, V113–V122, http://dx.doi.org/10.1190/geo2011-0399.1.
Kumar, R., C. Da Silva, O. Akalin, A. Y. Aravkin, H. Mansour, B. Recht, and F. J. Herrmann, 2015a, Efficient matrix completion for seismic data reconstruction: Geophysics, 80, no. 5, V97–V114, http://dx.doi.org/10.1190/geo2014-0369.1.
Kumar, R., H. Wason, and F. J. Herrmann, 2015b, Source separation for simultaneous towed-streamer marine acquisition – a compressed sensing approach: Geophysics, 80, no. 6, WD73–WD88, http://dx.doi.org/10.1190/geo2015-0108.1.
Lee, J., B. Recht, R. Salakhutdinov, N. Srebro, and J. Tropp, 2010, Practical large-scale optimization for max-norm regularization: Presented at Advances in Neural Information Processing Systems.
Maraschini, M., R. Dyer, K. Stevens, and D. Bird, 2012, Source separation by iterative rank reduction – theory and applications: Presented at the 74th EAGE Conference and Exhibition, http://dx.doi.org/10.3997/2214-4609.20148370.
Recht, B., M. Fazel, and P. Parrilo, 2010, Guaranteed minimum rank solutions to linear matrix equations via nuclear norm minimization: SIAM Review, 52, 471–501, http://dx.doi.org/10.1137/070697835.
Recht, B., and C. Ré, 2011, Parallel stochastic gradient algorithms for large-scale matrix completion: Technical report, University of Wisconsin-Madison.
Rennie, J. D. M., and N. Srebro, 2005, Fast maximum margin matrix factorization for collaborative prediction: Proceedings of the 22nd International Conference on Machine Learning, ACM, 713–719, http://dx.doi.org/10.1145/1102351.1102441.
Wason, H., and F. J. Herrmann, 2013, Time-jittered ocean bottom seismic acquisition: 83rd Annual International Meeting, SEG, Expanded Abstracts, 32, 1–6, http://dx.doi.org/10.1190/segam2013-1391.1.


Results from a large field test using 2D ring arrays to address back-scattered surface noise in
land seismic acquisition
Christof Stork*, Carolyn Dingus, Nick Bernitsas, Dave Flentge, ION Geophysical
Summary
Back-scattered surface noise is often a large problem with
surface seismic data. This noise is badly aliased with
conventional 3D orthogonal geometry, which makes it very
difficult to remove in processing.
We performed a large field test of a new acquisition geometry using 2D ring arrays that properly sampled the back-scattered noise so it could be removed in processing. This test, with 4300 shots and 3300 receivers, is large enough to produce a 3D image. Since the test was performed concurrently with a conventional 3D acquisition survey using the same sources and a similar number of receivers, we can directly compare the processed image quality of the 3D ring array results with the conventional results. The results show that the new acquisition geometry using 2D ring arrays is very effective at addressing the back-scattered noise for a similar acquisition effort.

Introduction

Because the near surface is a very strong waveguide that traps much energy and is very heterogeneous, it often creates strong back-scattered surface noise. This surface noise can be 2x-10x greater than the reflection signal. The back-scattered noise is often not uniform in offset or azimuth. As a result, it will affect seismic interpretation attributes based on offset or azimuth, and it is often difficult to identify when noise artifacts influence attribute patterns.

Conventional 3D seismic acquisition using orthogonal source and receiver lines is effective at properly recording noise that travels in an approximately straight line from the source to the receiver. This straight-path noise is distinct from the reflection signal and can then be removed in processing. However, this conventional geometry strongly aliases the side-scattered noise, which can then have patterns similar to reflection energy. As a result, the back-scattered noise is very hard to remove in processing from conventional acquisition data.

We propose a method of properly recording the back-scattered noise using 2D ring arrays. When the noise is properly recorded, it can be removed in processing.

A key challenge of properly recording the back-scattered noise is keeping acquisition costs low enough despite the demands for adequate receiver sampling in X & Y. We propose 2D ring arrays with each outer ring having a greater distance from the next inner ring, as shown in Figure 1.

This geometry will likely create a very irregular acquisition pattern and an irregular fold footprint. Will this irregular illumination of the subsurface cause artifacts, and can modern regularization methods deal with the irregularities? The key questions with this new approach are:
1) How effective are the 2D ring arrays and associated processing at removing realistic back-scattered surface noise?
2) How significant are the irregular acquisition artifacts?
3) How do the results compare with conventional acquisition?
4) Can ring arrays be used in conjunction with conventional acquisition to improve signal quality near holes and in noisy regions?

To address these questions, we performed a large field test of ring array acquisition, which is shown in Figure 2a. These data were acquired concurrently with a conventional 3D acquisition, shown in Figure 2b. Both the new and the conventional acquisition had 4300 shots and 3300 receivers, which is large enough to produce 3D images for comparison.

Figure 1: Sample 2D hexagonal ring array with 60 receivers (1600 feet in size). Receivers are closer in the middle than on the outside. The sampling in X & Y records back-scattered noise so it can be later removed in processing.
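A hexagonal ring array of the kind sketched in Figure 1 can be generated in a few lines: ring m carries 6m receivers (so four rings give 6 + 12 + 18 + 24 = 60), and the radius grows geometrically so each outer ring sits farther from its inner neighbor. The radii and growth factor below are our own illustrative values, not the tested field design.

```python
import numpy as np

def hex_ring_array(n_rings=4, r0=50.0, growth=1.6):
    """Receiver (x, y) positions on concentric rings: ring m has 6*m
    receivers, and ring spacing increases outward (geometric radii)."""
    pts, r = [], r0
    for m in range(1, n_rings + 1):
        ang = 2.0 * np.pi * np.arange(6 * m) / (6 * m)
        pts.append(np.column_stack([r * np.cos(ang), r * np.sin(ang)]))
        r *= growth  # each outer ring is farther from the next inner ring
    return np.vstack(pts)

xy = hex_ring_array()
print(xy.shape[0])  # 60 receivers, densest near the center
```

Because the point count per ring grows only linearly while the radius grows geometrically, the spatial sampling is automatically densest in the middle, matching the stated design goal of sampling the noise well without a prohibitive receiver count.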

Figure 2a: Acquisition geometry using 2D ring arrays (55,000 feet across). The 4300 sources are in white; the 3300 receivers are in black. There are 54 ring arrays with about 60 receivers in each array.

Figure 2b: Conventional acquisition geometry that was acquired concurrently with the ring array acquisition (55,000 feet across). The sources are in white and are the same as those used for the ring array acquisition. The receivers are in black and are laid out in conventional lines. There are 3400 receivers in the conventional acquisition. The swath width of the conventional receivers is 1.4x wider than the ring array swath.

Several previous studies considered 2D arrays for resolving and removing back-scattered noise (Regone and Rethford, 1990; Regone, 1997; Regone, 1998; Petrochilos and Drew, 2014; Auger et al., 2013; Schissele-Rebel and Meunier, 2013). However, these studies used dense uniform arrays, which are expensive to scale to a full 3D survey. Moreover, this study investigates the use of ring array data in combination with 3D Radon inversion. The ARCO Button Patch (Barr, 1993; Biondi, 2006) was a full 3D survey geometry using 2D patches, but the geometry was designed for acquisition convenience, did not use a 2D spacing dense enough for noise filtering, and its processing did not perform special 3D Radon filtering of surface noise.

Filtering of ring array data

The data for each array for each source are filtered using an enhanced 3D Radon inversion filter. The filter must handle the irregular spacing of the ring arrays. The filter benefits greatly from sparsity constraints in handling the few receivers and the larger outer receiver spacing; the irregular spacing aids the application of sparsity constraints.

A challenge with this filter is precisely separating signal and noise so that the noise removal does not harm the signal. We seek a significant signal-to-noise improvement of > 4x.

We demonstrate the results of this filter by displaying a receiver gather before and after filtering in Figures 3a and 3b. Since the filter is applied in the shot domain, any coherence of reflection events on receiver gathers is an independent check of the signal improvement.

The left receiver gather (Figure 3a) shows a noisy raw receiver gather before filtering. The right gather (Figure 3b) is the same gather after filtering, scaled at the same level. Both gathers have NMO applied to help visual identification of reflections, which will be flat for this gentle geology.

The right gather shows a significant reduction of noise at the mid and longer offsets, and subtle reflections are now visible. The noise in the shorter offsets is much reduced, but since the initial noise there is stronger, the subtle reflections are still not visible.

The near-offset noise cone is an area of special concern. This area has particularly large-amplitude, slow-velocity back-scattered noise. If we don't address this noise well, the near-offset reflection amplitudes are suspect.

Figures 4a, 4b, and 4c compare the stack of the raw ring array data with no filtering, the stack of the ring array data with 3D Radon filtering, and the raw stack of the conventional data without filtering. The results show that the 3D Radon filtering improves the stack, but the improvement is not as large as on the raw gathers. We are working on performing standard 2D filtering of the conventional data to have a more complete comparison.

Cost comparison

Since the numbers of sources and receivers are identical between the two acquisition geometries, the costs are nominally similar.
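The enhanced 3D Radon inversion filter itself is not described in detail, but the role of its sparsity constraints can be sketched with a toy 2D linear Radon (tau-p) dictionary solved by iterative soft thresholding (ISTA). Everything below (gather size, integer slopes, threshold, event positions) is an illustrative assumption of ours, not the production filter: a gather built from two linear events is modeled sparsely in the tau-p domain, after which coherent events (signal or scattered noise) could be muted and subtracted.

```python
import numpy as np

nt, nx = 48, 8                       # time samples, traces in a toy gather
slopes = np.arange(-3, 4)            # integer moveouts (samples per trace)

# Explicit linear Radon dictionary: one column per (slope p, intercept tau),
# holding a unit spike on every trace at t = tau + p * x.
cols = []
for p in slopes:
    for tau in range(nt):
        c = np.zeros((nt, nx))
        for x in range(nx):
            t = tau + p * x
            if 0 <= t < nt:
                c[t, x] = 1.0
        cols.append(c.ravel())
A = np.array(cols).T                 # (nt*nx) x (n_slopes*nt)

# Toy gather: one flat reflection plus one steep (back-scattered-like) event.
m_true = np.zeros(A.shape[1])
m_true[3 * nt + 10] = 1.0            # slope p = 0, tau = 10
m_true[6 * nt + 5] = 0.8             # slope p = +3, tau = 5
d = A @ m_true

# ISTA: sparse tau-p coefficients m minimizing ||A m - d||^2 + lam ||m||_1.
lam = 0.05 * np.abs(A.T @ d).max()   # modest sparsity weight
lip = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
m = np.zeros_like(m_true)
for _ in range(300):
    m = m + A.T @ (d - A @ m) / lip            # gradient step
    m = np.sign(m) * np.maximum(np.abs(m) - lam / lip, 0.0)  # soft threshold
```

The sparsity penalty concentrates the model on the few true (tau, p) cells, which is what makes separation of overlapping events possible even with irregular, sparse receiver sampling (for an irregular array, the dictionary columns would simply be evaluated at the actual receiver coordinates).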

Figure 3a: Raw gather before filtering, showing much noise. The gather has NMO applied to make reflection events flat. Note the noise cone in the near offsets.

Figure 3b: Identical gather after filtering, showing significant improvement. Much noise is removed and subtle low-amplitude signal is now apparent. The gather has NMO applied to make reflection events flat.

The near-random nature of the ring array geometry did not add significant cost, since GPS was used for survey and layout.

Discussion and Conclusions

Results to date show that the signal-to-noise ratio of the stack of ring array data with 3D Radon inversion is about 2x better than that of the stack of conventional data. Prestack data quality is improved more than poststack data quality.

We consider these results promising, but we think there is potential for further improvement. The room for improvement comes from addressing the complexity of the back-scattered noise, particularly in the noise cone. Moreover, we expect the 3D Radon inversion can be improved.

It is still an open question whether ring arrays can generally replace conventional acquisition or will mainly be used in niche areas to improve signal quality near holes and in noisy regions.

Acknowledgements

The authors gratefully acknowledge the aid of an anonymous resource company that allowed ION to piggyback on their existing 3D seismic survey, which significantly reduced the cost of this test and provided the concurrent conventional acquisition.

The authors also greatly appreciate the foresight of ION's management in pursuing this project despite the challenging times for the industry.


Figure 4a: Stack of raw ring array data with no filtering. Signal quality is poor.

Figure 4b: Stack of ring array data after 3D Radon filtering. Signal quality is much improved.

Figure 4c: Stack of conventional acquisition data with no filtering. Signal quality is similar to the raw ring array stack.


REFERENCES

Auger, E., E. Schissele-Rebel, and J. Jia, 2013, Suppressing noise while preserving signal for surface microseismic monitoring: The case for the patch design: 83rd Annual International Meeting, SEG, Expanded Abstracts, 2024–2028, http://dx.doi.org/10.1190/segam2013-1396.1.
Barr, F. J., 1993, Seismic data acquisition: Recent advances and the road ahead: 63rd Annual International Meeting, SEG, Expanded Abstracts, 1201–1204, http://dx.doi.org/10.1190/1.1822334.
Biondi, B. L., 2006, 3D seismic imaging: SEG.
Petrochilos, N., and J. Drew, 2014, Noise reduction on microseismic data acquired using a patch monitoring configuration: A Fayetteville Formation example: 84th Annual International Meeting, SEG, Expanded Abstracts, 2314–2318, http://dx.doi.org/10.1190/segam2014-1269.1.
Regone, C. J., 1997, Measurement and identification of 3-D coherent noise generated from irregular surface carbonates, in I. Palaz and K. J. Marfurt, eds., Carbonate seismology: SEG, 281–306.
Regone, C. J., 1998, Suppression of coherent noise in 3-D seismology: The Leading Edge, 17, 1584–1589, http://dx.doi.org/10.1190/1.1437900.
Regone, C. J., and G. L. Rethford, 1990, Identifying, quantifying, and suppressing backscattered seismic noise: 60th Annual International Meeting, SEG, Expanded Abstracts, 748–751, http://dx.doi.org/10.1190/1.1890320.
Schissele-Rebel, E., and J. Meunier, 2013, Patch versus broadband networks for microseismic: A signal-to-noise ratio analysis: 83rd Annual International Meeting, SEG, Expanded Abstracts, 2104–2108, http://dx.doi.org/10.1190/segam2013-1285.1.


High Density 3D Nodal Seismic Acquisition in Sub-Andean Mountains: An Operational Challenge

J. Uribe* (Repsol), L. Rodriguez (Repsol), P. A. Munoz (Repsol), R. Parrado (Repsol), N. Sanabria (Repsol)

Summary

The need to improve the resolution and image quality of the complex structural geology in the Sub-Andean mountains requires flexible acquisition systems, due to the rough topography and limited access, where heliportable support is mandatory (Figure 1). The limitations of conventional cable systems in channel count and equipment weight impact the acquisition parameters and efficiency.

The sparse geometries used in similar terrains as the most practical approach provided coarse sampling, poor near-surface information, and irregular geophysical attributes in the seismic datasets.

Wireless nodal systems are valid alternatives for a higher channel count and better sampling because of their efficiency and operational flexibility over rough topography. These systems also provide safer conditions due to the reduction in equipment load (sensors, cable, and batteries), people, and exposure. Implementation of geophone arrays involves substantial operational effort and HSE exposure in these terrains, and their replacement with single-sensor acquisition is a desirable option.

Figure 1: General location of the projects (Google image).

Geological Objectives

The main geological and geophysical objectives of the Huacaya 3D (Bolivia) study lie between 3000 and 5000 m (C. Staff, 2014, SAE Exploration, final operations report). The seismic volume is intended to improve the structural interpretation of the imaged area through:
- Noise attenuation, by adequate sampling of the seismic wavefield.
- Statics, by recording support data for modeling of the complex near-surface geology.
- Improved imaging, by using a regular orthogonal geometry for better offset/azimuth distribution.

For Sagari 3D (Peru), the objectives are to obtain a more reliable stratigraphic and structural interpretation of the targets between 1900 and 4000 meters depth, compared with the legacy 2D data of the area (C. Staff, 2015, SAE Exploration, final operations report).

Results

The increased spatial sampling of the project produced improved seismic quality, not only at the depth of the targets but also in the near-surface data. Figures 2 and 3 below show examples of the in-field fast-track stack and post-stack migration.

Figure 2: Huacaya 3D in-field processing (stack section).


The advantages of the nodal recording equipment operation were:
- Several source and receiver offsets were allowed, and innovative recovery methodologies were applied in order to avoid footprint patterns created by sources recovered around the many obstacles (lakes, river horseshoes, and creeks) found within the survey area.
- Assurance of having a full spread recorded for any shot point.
- Low battery-energy consumption of the nodal units.
- The reduced weight of the equipment to be moved along this rough terrain.
- Good GPS (Global Positioning System) signal reception at the receiver nodes.
- Good radio communication from the base camp to the shooters' blasters.

This project was acquired as planned, within a very restricted timeframe and under extreme cold conditions on the North Slope of Alaska.

Figure 3: Huacaya 3D in-field processing (post-stack migration).

Conclusions and Recommendations

The recording parameters defined during the feasibility study and the seismic equipment used in the Peru and Bolivia projects during 2014 provided a better image of the objectives and increased the interpretability of the data compared with the legacy seismic. Besides better sampling, the NWS surveys gave more detailed information on the near surface.

The efficiency of the acquisition was improved by the reduction in recording equipment weight. The Sagari 3D was acquired in 47 days with around 90 million traces processed; the Huacaya 3D acquisition was completed in 54 days with around 560 million traces processed. This efficiency was reflected in the cost of the surveys.

The main challenges for the recording operation were:
- Keeping rotation of the nodes in order to avoid consumption of batteries.
- The learning curve for the line crew with the new equipment.
- The time required to generate the 3D SEG-D shot gathers after data download in the DTMs.

Points to improve in further projects are:
- Better environmental and cultural noise monitoring (S/N ratio).
- Alternatives for real-time data quality monitoring.
- Generation of SPS files in a new format considering the higher trace count contents.


REFERENCES

Cruz, C., J. Rodriguez, J. Hechem, and H. Villar, 2008, Sistemas petroleros de cuencas andinas:
Presented at the 7th Congreso de Exploracin y Desarrollo de Hidrocarburos, IAPG,
Abstracts, 159187.


Suppression and Amplification Effects of Array Pattern to Near-surface Scattered Waves


Hongxiao Ning*, Donglei Tang, Baohua Yu, Haili Wang and Yanhong Zhang, BGP, CNPC
Summary
Traditionally, array-response analysis addresses the suppression of near-surface scattering and its negative impact on the reflected waves. The theoretical analysis and experimental data study in this paper suggest that an array may instead enhance near-surface scattering in the direction perpendicular to the array. A directional array will therefore cause the S/N ratio of wide-azimuth 3D seismic data to change with azimuth, while a centrally symmetrical array, which has no directionality, will not have much interference-suppression effect. Seismic energy generated by a source array has a radiation pattern that significantly enhances the near-surface multiply scattered waves in the direction perpendicular to the array, seriously affecting the S/N ratio along that direction. Moreover, both receiver and source arrays add a directional effect that prevents us from extracting correct pre-stack attributes from wide-azimuth 3D seismic data. Our suggestion is therefore that in 2D or narrow-azimuth 3D seismic surveys, carefully designed arrays may improve the S/N ratio of the raw data, whereas in wide-azimuth 3D seismic surveys the array method should be avoided if possible. Comprehensively considering the shooting, receiving, and geometry parameters of an acquisition system will lay the foundation for high quality and high efficiency in a seismic survey project.
Introduction
Array shooting and receiving is an effective technology for improving the S/N ratio in seismic data acquisition, and it has been investigated for many years. There have been many discussions of array sources and receivers, but most focused on how to suppress interference and reduce the negative impact of the array (Li et al., 1984, 2008; Tang et al., 2014). Along with the development of high-density and high-precision 3D seismic surveying, there are increasing demands for wide-azimuth, high-density 3D geometries, which require higher spatial sampling rates. More sufficient, more uniform, and more symmetrical sampling (Wu et al., 2012) is necessary. As the azimuth width of 3D surveys increases, the negative effects of the array method become more and more prominent. In this paper, through both theoretical analysis and real data tests, the following ideas are proposed. In common 2D, wide-line 2D, and narrow-azimuth 3D seismic surveys, the array technology can efficiently improve the S/N ratio, thus reducing the required fold and increasing operational efficiency. In wide-azimuth, high-density 3D seismic acquisition, point shooting and point receiving should be used as much as possible to avoid the negative impact of arrays on the pre-stack attributes and data fidelity. In the meantime, to improve operational efficiency and reduce acquisition cost, the receiving and shooting parameters should be simplified to enable mechanized and automated operation practices.

Figure 1: A schematic diagram for array receiving. R1 to R3 are survey lines along different directions. The short bars are receiver arrays oriented at 0°, and the array length is comparable to the wavelength of the scattered waves. The star is the source. The orientations of the receiver arrays on R1 are parallel to the propagation direction of the scattered waves, those on R3 are perpendicular to the scattered waves, and those on R2 are oriented at 45°.

The amplification of near-surface scattered waves by a receiver array
Near-surface scattering is the main interference in seismic data and the main factor causing the decrease of the S/N ratio in complex areas. To suppress scattered waves effectively, the array should be oriented along the propagation direction of the scattered waves, and the dimension of the array should be comparable to their dominant wavelength; otherwise problems arise. Shown in Figure 1 is an example for a linear array, where R1 to R3 are three survey lines, the short bars are receiver arrays, the star is the source, and the array length is comparable to the wavelength of the scattered waves. With the given geometry, the scattered waves can be properly suppressed on R1, but will actually be enhanced on R3.
Figure 1 shows that the performance of an array against the scattered waves is strongly related to its orientation relative to the line. The amplification of a linear array with respect to its orientation can be written as:


φ(n, θ, Δx, λ) = sin(nπΔx·cosθ/λ) / [n·sin(πΔx·cosθ/λ)]    (1)

where φ is the response coefficient, n is the number of array elements, θ is the relative angle between the array orientation and the receiver line, Δx is the geophone or shot-hole interval, and λ is the apparent wavelength of the interference waves.
Shown in Figure 2 is the response curve calculated using equation (1) for a linear array (a directional array gives similar results), with the following parameters: wave velocity 625 m/s, dominant frequency 13 Hz, geophone interval 4 m, and 12 array elements. It can be clearly seen that the array cannot suppress scattering along 90° and 270°, but it can suppress the scattering by up to 50 dB in the directions 0° and 180°. It follows that the effect of a linear array on near-surface scattering noise varies greatly with the angle θ: it improves the S/N ratio in favorable directions, while worsening the situation in unfavorable ones. In order to verify this effect in actual seismic acquisition, BGP carried out a real-data array-receiving test in Western China.
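The behavior described for Figure 2 can be reproduced by evaluating equation (1) directly. The sketch below is a minimal Python implementation using the parameters quoted above; the function name and the small-denominator guard are our own additions, not part of the original analysis:

```python
import numpy as np

def array_response(theta_deg, n=12, dx=4.0, v=625.0, f=13.0):
    """Amplitude response of a linear array, equation (1).

    theta_deg -- angle between the array orientation and the
                 propagation direction of the interference wave
    n, dx     -- number of elements and element interval (m)
    v, f      -- velocity (m/s) and dominant frequency (Hz) of the
                 interference wave, giving apparent wavelength lam = v / f
    """
    lam = v / f
    u = np.pi * dx * np.cos(np.radians(theta_deg)) / lam
    num, den = np.sin(n * u), n * np.sin(u)
    # As u -> 0 (wave travelling broadside to the array) the ratio
    # tends to 1, i.e. no suppression at all.
    return np.where(np.abs(den) < 1e-9, 1.0,
                    num / np.where(den == 0.0, 1.0, den))

# Suppression along the line (theta = 0) versus broadside (theta = 90):
db_inline = 20 * np.log10(np.abs(array_response(0.0)))
resp_broadside = float(array_response(90.0))
```

With these parameters the response at 0° is suppressed by more than 50 dB, while at 90° it remains at 1 (no suppression), matching the curve described for Figure 2.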

Figure 2: Array response versus orientation. The horizontal axis is the angle θ, varying from 0° to 360°, and the vertical axis is the response of the array.

The test acquisition geometry is shown in Figure 1. The receiving spreads are laid out in the directions of 0°, 45°, and 90°, respectively, with the arrays oriented at 0°. A 12-geophone array with a 5 m geophone interval is used at each receiving point, the group interval is 30 m, and there are 180 traces in each unilateral receiving line. We drilled a shallow hole, 3 m deep, and loaded 3 kg of explosive as the scattering source. In Figure 3, the first row shows the raw test records, in which the effect of array orientation can hardly be identified due to the quite low S/N ratio. The second row shows the corresponding band-pass filtered (30-60 Hz) records, where we can see that the S/N ratio of (d) is the highest, that of (f) is the lowest, and that of (e) is intermediate. The S/N ratios of the seismic data clearly change with θ, which is consistent with our theoretical analysis.

Source array and its directivity characteristics
According to the principle of reciprocity, array receiving and array shooting should have the same effect. The seismic wave energy is proportional to the radius of the equivalent cavity generated by the explosion (Qian, 2003; Ning et al., 2011). The equivalent cavity generated by array shooting is close to an ellipsoid, as shown in Figure 4. The radius of curvature is r1 in the x direction and r2 in the y direction. For array shooting, r1 is usually larger than r2, and the wave energy radiated along the x direction will be significantly larger than that in the y direction. The seismic wave energy propagating transversely will stimulate more multiple-scattering interference in the x direction than in the y direction. That is to say, the S/N ratio of the seismic data in the x direction is lower than that in the y direction.


Figure 4: Map view of the equivalent cavity generated by a three-hole array shooting. In the figure, x represents the direction perpendicular to the source array, y the direction along the source array, and r1 and r2 are the radii of the equivalent cavity generated by the explosion in the x and y directions. The stars represent the shot-hole positions; the short bars are receiver arrays.
To verify the directivity effect of a source array on the S/N ratio of seismic data, BGP carried out a real-data shooting test in the Jiuquan Basin in Western China. The test used three-hole shooting: the spacing between source holes is 6 m, and each hole is charged with 3 kg of explosive. The interval between receiver arrays is 40 m, and the receiver line has 240 traces. Two three-hole shots were detonated for testing. For the first shot, the array is oriented parallel to the receiver spread, and for the second shot the array is oriented perpendicular to the receiver spread, as in Figure 4. The records in the upper row of Figure 5 are raw data; in Figure 5(a) the array is parallel to the receiver spread and in Figure 5(b) it is perpendicular. Figure 5(b) shows that the noise from scattering is significantly enhanced at times later than 2.5 s. Figures 5(c) and 5(d) are the corresponding band-pass (30-60 Hz) filtered records. It can be seen that the S/N ratio in Figure 5(c) is much higher than in Figure 5(d) between 1.5 s and 2.0 s. This is consistent with the theoretical analysis. The results show that array receiving and array shooting are different: an array source not only has the array effect, but also the equivalent-cavity radius effect. We can change the distribution of seismic wave energy by adjusting the shape of the equivalent cavity.
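For reference, the purely geometric factor of the three-hole pattern used in this test can be evaluated with the same form as equation (1), applied to the source side by reciprocity. The sketch below is illustrative only; the interference wavelength is taken from the 625 m/s, 13 Hz example used earlier. It shows that the geometric array factor alone is weak for three elements at 6 m spacing, so the equivalent-cavity effect must account for much of the directivity observed in the test:

```python
import math

def source_array_factor(theta_deg, n=3, dx=6.0, lam=625.0 / 13.0):
    """Geometric array factor of an n-hole in-line source array
    (same form as equation 1, applied to the source by reciprocity).
    The defaults match the three-hole, 6 m test pattern; lam is the
    example interference wavelength, an assumption for illustration."""
    u = math.pi * dx * math.cos(math.radians(theta_deg)) / lam
    if abs(math.sin(u)) < 1e-12:   # broadside limit: no suppression
        return 1.0
    return math.sin(n * u) / (n * math.sin(u))

inline = source_array_factor(0.0)      # along the array: ~0.8 (mild)
broadside = source_array_factor(90.0)  # perpendicular: 1.0 (none)
```

The mild in-line attenuation (about 0.8 of full amplitude) is far from explaining the strong azimuthal S/N contrast seen in Figure 5, consistent with the text's conclusion that the cavity-shape effect dominates for source arrays.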

Conclusions
Based on the above theoretical analysis and real data tests, the following conclusions can be reached:
1. Both source and receiver arrays have apparent directivity characteristics. An array can enhance scattered waves through constructive interference in the direction perpendicular to the array, causing the S/N ratio of the seismic data to change with azimuth. This is detrimental to the azimuth-dependent pre-stack attributes and to the fidelity of wide-azimuth 3D seismic data.
2. The energy of multiply scattered waves is clearly affected by the directivity of the source array, and the S/N ratio can be improved by changing the shooting direction of the source array. In regions with very low S/N ratios, priority should be given to using a source array to improve the S/N ratio when an explosive source is used.
3. The array parameters should match the designed geometry and wavelength. In 2D or narrow-azimuth 3D seismic surveys, the S/N ratio can be improved using the array method. High-precision wide-azimuth (or full-azimuth) 3D seismic surveys can be supported by a single-point acquisition method without arrays, which not only helps to increase the fidelity of the seismic data, but also improves operational efficiency and lowers acquisition cost.
Acknowledgments
The authors wish to thank Crew 2137 and Crew 249 for their hard work in the test acquisition, and BGP, CNPC for permission to publish the related results.
Figure 3: Array directivity receiving test records.


The first row shows the raw test records; the second row shows the corresponding band-pass filtered (30-60 Hz) records. The receiving-spread direction of the first column is 0°, the middle column 45°, and the third column 90°. The receiver-array orientation is 0°.

Figure 5: Comparison between seismic records obtained with different source-array orientations. The records in the upper row are raw data. The source-array orientation in (a) is parallel to the receiving spread, and in (b) perpendicular to it. Records (c) and (d) are the corresponding band-pass filtered (30-60 Hz) versions of (a) and (b).


EDITED REFERENCES
Note: This reference list is a copyedited version of the reference list submitted by the author. Reference lists for the 2016
SEG Technical Program Expanded Abstracts have been copyedited so that references provided with the online
metadata for each paper will achieve a high degree of linking to cited sources that appear on the Web.
REFERENCES

Li, Q. Z., and Z. J. Chen, 1984, The suppression and improvement of the array effect on the reflection wave: Oil Geophysical Prospecting, 19, 1–20.
Li, Q. Z., and J. D. Wei, 2008, Talk about importance of cross-line array of geophone on spread: Oil Geophysical Prospecting, 43, 375–382.
Ning, H. X., Y. H. Zhang, D. R. Zhang, Y. X. Jiang, and W. L. Jiao, 2011, Seismic source optimization in Qilian mountain area: Oil Geophysical Prospecting, 46, 370–373.
Qian, R. J., 2003, Analysis of explosive effect of dynamite source: Oil Geophysical Prospecting, 38, 583–588.
Sheriff, R. E., 2002, Encyclopedic dictionary of applied geophysics, 4th ed.: SEG.
Tang, D. L., X. W. Cai, Y. Q. He, and H. X. Ning, 2014, Source and receiver arrays for prestack migration: Oil Geophysical Prospecting, 49, 1034–1038.
Wu, Y. G., W. H. Yin, Y. Q. He, G. S. Li, H. X. Ning, D. R. Zhang, and Q. S. Pu, 2012, A uniformity quantitative method for 3D geometry attributes: Oil Geophysical Prospecting, 47, 361–365.


Reducing the operational cost of seismic vibrators


Nicolas Tellier and Jean-Jacques Postel*, Sercel
Summary
The low oil price has led to the downsizing or cancellation of numerous exploration projects and has significantly depressed the geophysical industry. Seismic crew expenses are closely scrutinized to ensure competitive prices, if not positive margins. This directly concerns seismic vibrators: even if their use enables substantial economies compared to dynamite management, their operation, maintenance, and logistics form a non-negligible part of crew expenses. This abstract proposes several solutions to ensure the vibrator's contribution to reducing crew operating expenses. Existing features already enable reducing gasoil consumption and maintenance downtime. Other features, or vibrators adapted to the acquisition (for example, for low-frequency seismic), enable more efficient shooting. Finally, a few prospective solutions are discussed to shape the contours of what future vibrator seismic operations could be.

Introduction
In a context of low oil prices, the whole geophysical industry has entered a potentially long-term, shady period during which every cent will count to ensure competitiveness, and even project feasibility. Marine seismic, penalized by strong operating expenses, has been the first to suffer from the current situation, but land seismic projects also show a downturn and are subject to important cost cuts. Thus, it is more than ever paramount to identify all the project expenses and reduce the less necessary or less effective costs. This concerns vibrators, which enable acquiring low-priced VPs wherever possible, but also require maintenance, consumables, and staff, with the associated costs. This abstract proposes several existing and prospective solutions to ensure the vibrator's contribution to reducing crew operating expenses.

Scaling down fuel consumption
In vibroseis operations, vibrators usually pass through operation/waiting cycles, operation time being defined as the time when vibrators either sweep or move to the next VP. During waiting periods, vibrators are idle due to different factors: for example, line testing, checking, repairing, or third-party interference. The vibrator waiting time usually depends much on the ground conditions (flat deserts can offer almost continuous operations compared to detour-demanding small fields or uneven terrain) and on the methodology used (single-source blended acquisition yields less standby than single-fleet or flip-flop operations). Some Middle East or North Africa crews have thus succeeded in dramatically reducing vibrator waiting time, but it remains significant on numerous other projects.
Unlike road vehicles, vibrator engines are industrial-type and consequently driven at constant RPM (revolutions per minute), whatever the engine load: even when a vibrator is within a waiting cycle, its engine runs at full RPM. Consequently it can greatly exceed the vibrator's needs and lead to useless fuel consumption. Reducing the engine RPM requires a driver operation which is rarely performed in the field.

Figure 1: Principle of fuel consumption reduction with engine power management.

Vibrator management solutions (a simplified overview is displayed in Figure 1) have recently been released to avoid these unnecessary costs. By measuring the gas pedal signal outside vibration phases, the vibrator management computer can accurately adapt the drive pump swashplate displacement and the engine RPM to fulfill the exact vibrator needs. A lower engine RPM yields a lower fuel consumption, whose level depends on the field conditions (topography, ground type, temperatures, etc.) and the vibrator use (sweep drive and length, move-up duration, etc.). Field tests were recently carried out by a seismic crew operating 2D and 3D projects in a hilly terrain of Eastern Europe. Five vibrators worked more than 2,000 hours during the 8-month test period, two of which were equipped with such a feature. This test showed significant fuel savings of around 15% for these two vibrators when compared to the three others.

Noise and exhaust emissions are also reduced during waiting periods, a clear benefit for HSE-sensitive urban operations, which are particularly subject to waiting time due to third parties.

Shortening low-frequency sweep cycle time with appropriate sources
With the advent of broadband seismic, low-frequency sweeps became standard on numerous projects worldwide. Custom low-dwell sweeps follow the physical vibrator limitations up to the full-drive start frequency by reducing the drive level (Bagaini, 2007; Sallas, 2010). The vibration spectrum is kept flat by spending more time on the concerned frequencies (proportionally to the square of the drive reduction factor for each frequency). Acquiring low frequencies can thus have an important impact on sweep length (especially when the sweep start frequency is low), and consequently on crew productivity. Equipment manufacturers reacted to this new paradigm by developing vibrators with improved efficiency at low frequencies (e.g., Wei, 2012). Typically, heavier reaction masses, longer mass strokes, and higher, more stable hydraulic pressures improve the vibrator's low-frequency performance. Note that these features are already standard on super-heavy (80,000 lb) vibrators, whose bulkier design is highly favorable to the efficient generation of low frequencies. The benefits of low-frequency and super-heavy vibrators for low-frequency acquisitions have been demonstrated (Tellier, 2014), with reduced low-frequency sweep durations opening the way to higher productivity levels.

Reducing source downtime
The latest seismic recording systems have been designed to get rid of any downtime related to equipment condition. Even with numerous line cuts, redundancies in data, power, and clock, and recorder line units behaving as autonomous nodes enable carrying on with shooting without interruption. This has to be accompanied by equivalent performance on the source side. This is the purpose of spare vibrators, but even with spares the source efficiency will depend much on the crew organization and logistics. However, several features can support the crew's effort to achieve minimum source downtime. Maintenance standby can be partly addressed by features such as automatic greasing or automatic tire inflation/deflation that eases access to sandy areas. The refueling downtime per vibrator can drop from approximately 20 minutes per day when using standard tanks to 5 minutes per 36-hour period when using enlarged tanks equipped with fast-refueling nozzles (Figure 2). Multiplexed electrical systems enable faster troubleshooting and repair of electrical failures.


Figure 2: Downtime-reducing features: (top left) a fast-refueling nozzle; (top right) a wheel equipped with automatic inflation/deflation; (bottom) an example of the driver's view with vibrator guidance. Topographic maps and obstacles can be superimposed to ease navigation.
A non-optimal shooting strategy, although not considered as downtime, may have detrimental consequences: unnecessary VP-to-VP travel time is either time lost for production or will induce the need for an extra source in the field. Vibrator guidance that indicates the next VP to drivers is now common on numerous crews. Saving the time needed to look for the next VP and optimizing the detours positively impacts productivity. Vibrator guidance is also a prerequisite for stakeless operations, which significantly ease the surveying field preparation.
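As a toy illustration of the travel-time argument, the sketch below orders a handful of VPs with a greedy nearest-neighbor rule, the simplest possible stand-in for what a guidance system's routing might do. The coordinates and both helper functions are invented for the example; real guidance products use far more elaborate routing under terrain and obstacle constraints:

```python
import math

def greedy_vp_order(vps, start=(0.0, 0.0)):
    """Order VP coordinates by repeatedly driving to the nearest
    unvisited VP. A crude routing stand-in, not a production method."""
    remaining, route, pos = list(vps), [], start
    while remaining:
        nxt = min(remaining, key=lambda p: math.dist(pos, p))
        remaining.remove(nxt)
        route.append(nxt)
        pos = nxt
    return route

def route_length(route, start=(0.0, 0.0)):
    """Total VP-to-VP travel distance along a route."""
    pts = [start] + list(route)
    return sum(math.dist(a, b) for a, b in zip(pts, pts[1:]))

# Visiting three in-line VPs in an arbitrary order versus greedily:
vps = [(0.0, 10.0), (0.0, 1.0), (0.0, 5.0)]
naive = route_length(vps)                      # 10 + 9 + 4 = 23
greedy = route_length(greedy_vp_order(vps))    # 1 + 4 + 5 = 10
```

Even this trivial rule more than halves the travel distance in the example, which is the kind of saving the text attributes to VP guidance.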
Towards alternative designs of vibrators?
Though not yet available on the market, the main measure for vibrator operating cost reduction may come from innovative and alternative vibrator designs. As stated previously, the vibrator operating expense is mainly driven by gasoil consumption and by the staff required for the ever-growing number of vibrators on crews. Looking at other industries for inspiration gives a few clues on possible enhancements.
As regards fuel consumption reduction, implementing an electric translation transmission on vibrators may prove a sound alternative to the standard hydrostatic transmission (Figures 3a and 3b). Electric transmissions are already successfully used in other applications, such as public transportation (city and airport buses, etc.) or civil engineering machinery. While vibrating still requires a high-load hydraulic vibration pump, replacing the translation pump with a generator that drives electric motors makes it possible to take advantage of the higher efficiency of electrical components, to use the engine in its optimal RPM range, and to electrically drive auxiliary equipment such as the AC compressor or the cooling fans.
The higher CAPEX related to the higher cost of electrical components can be rapidly absorbed by the lower OPEX due to important fuel savings: a 30% fuel reduction is within the industry average for such a solution. For a 62,000 lb vibrator consuming on average 40 l per hour and operating 6,000 hours per year, this represents an annual fuel saving of 72 t. Fuel prices vary significantly from one country to another, but even with a low hypothesis of 0.2 $/l, the 14.4 K$ saved yearly on fuel should write off the additional cost of the feature in only a few years, without taking into account the savings on refueling logistics.
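These figures can be verified with a short calculation. Note that the 72 t value implicitly approximates 1 l of fuel as 1 kg; with a typical gasoil density of about 0.84 kg/l the mass saved would be closer to 60 t:

```python
# Back-of-the-envelope check of the fuel-saving figures quoted above.
burn_l_per_h = 40.0       # average consumption, 62,000 lb vibrator
hours_per_year = 6000.0
saving_fraction = 0.30    # industry-average gain for an electric transmission
price_usd_per_l = 0.20    # deliberately low fuel-price hypothesis

saved_l = burn_l_per_h * hours_per_year * saving_fraction  # 72,000 l/year
saved_usd = saved_l * price_usd_per_l                      # 14,400 $/year
saved_t_approx = saved_l / 1000.0   # ~72 t if 1 l is taken as ~1 kg
```

Both the 72,000 l (72 t) and 14.4 K$ annual figures fall out directly, confirming the payback reasoning in the text.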
Automation is another powerful way to address seismic project operational expenses. Its development in seismic seems slower than in other industries, most likely due to ever-changing and hard-to-model field conditions. Automation is currently being closely considered for marine and ocean-bottom applications. In land seismic, gaining efficiency through automatic layout of channels (Maas, 2013) and even geophone strings (Sacilotto, 2000) has long been considered. On the source side, piloting autonomous vehicles on predefined progression paths is already performed in agricultural and mining applications. Adapting these technologies to seismic vibrators will be an issue for the land industry in the near future, if industry-specific challenges such as HSE and unique progression paths can be satisfactorily overcome. A crew with fully autonomous vibrators in the field requiring minimum human interaction (Blacquière, 2013) is surely not to be expected within the present decade, but seems realistic in the longer term. Until then, productivity and/or expense benefits from partial vibrator automation are highly probable: for example, for maintenance and positioning, or with a leader vibrator followed by automated ones.

Figure 3a: Model of a standard hydrostatic transmission. Hydraulic and mechanical couplings are indicated in blue.

Figure 3b: Model of the proposed electric transmission. Hydraulic and mechanical couplings are indicated in blue; electrical ones in green.

Conclusions
Cutting costs is a major concern for land seismic crews in difficult market conditions. The vibrator item has long been addressed by contractors (e.g., through vibrator fleet standardization and logistic and shooting rationalization) and by manufacturers, who have developed numerous solutions to ease operation and maintenance and thereby help reduce operating expenses. Other solutions are being developed or are still at the concept stage. Depending on their boldness and on industry support for developing and testing them, the technical landscape we have been used to for vibroseis is likely to evolve significantly in the coming years and allow even cheaper operations and cost per VP, unless too greatly reduced exploration budgets dissuade manufacturers from performing the necessary and cash-consuming research and development.

Acknowledgments
The authors would like to thank Pascal Buttin (Sercel) for constructive exchanges on prospective vibroseis seismic acquisition and the management of Sercel for permission to publish this work.

REFERENCES

Bagaini, C., 2007, Enhancing the low-frequency content of vibroseis data with maximum displacement sweep: 69th Conference and Exhibition, EAGE, Extended Abstracts, http://dx.doi.org/10.1190/1.2370368.
Blacquière, G., and G. Berkhout, 2013, Robotization in seismic acquisition: 83rd Annual International Meeting, SEG, Expanded Abstracts, http://dx.doi.org/10.1190/segam2013-0838.1.
Maas, S., G. Baeten, K. Hornman, and G. Irwin, 2013, Seismic cable handling system and method: Patent WO 2013/134196 A2.
Sacilotto, J., M. Rombaud, and P. Barbe, 2000, Fully automatic mobile storage, handling, deployment and retrieval equipment for belts of seismic sensors: French Patent 2785684A1.
Sallas, J. J., 2010, How do hydraulic vibrators work? A look inside the black box: Geophysical Prospecting, 58, 3–18, http://dx.doi.org/10.1111/j.1365-2478.2009.00837.x.
Tellier, N., G. Caradec, and G. Ollivrin, 2014, Impact of the use of low-frequency vibrators on crew productivity: 84th Annual International Meeting, SEG, Expanded Abstracts, http://dx.doi.org/10.1190/segam2014-0343.1.
Wei, Z., M. Hall, and T. Phillips, 2012, Geophysical benefits from an improved seismic vibrator: Geophysical Prospecting, 60, 466–479, http://dx.doi.org/10.1111/j.1365-2478.2011.01008.x.


Effects of wind noise on high-density seismic data and its field operation methods
Wu Yongguo*, He Yongqing, Liu Fengzhi, Yin Wuhai, BGP, CNPC; Hu Jie, Qinghai Oilfield Company, CNPC
Summary
High-density data acquisition technology improves data quality, but it also faces many challenges, especially in many regions of western China with an arid plateau climate characterized by cold, hypoxia, drought, and wind. Such harsh natural conditions, with windy weather and a short available production window, heavily affect acquisition efficiency. Aiming at this problem, a mathematical model for wind noise is first established and the wind noise is simulated; then, wind noise for different wind scales is added to a wind-noise-free shot record for imaging impact analysis; finally, actual field wind-noise records are recorded and added to wind-noise-free shot records for imaging impact analysis, thereby yielding the required high-density field operation method. Actual test data show that this method gives good results in solving the acquisition efficiency problem of high-density data in areas with strong winds.
Introduction
The purpose of high-density 3D surveying is to increase shot and receiver density by significantly increasing the number of source and receiver points, so as to significantly improve data quality. However, it also faces many challenges, especially in many regions of western China with an arid plateau climate characterized by cold, hypoxia, drought, and wind. Under such harsh natural conditions, windy weather (the average wind duration is 10 hours per day) shortens the available operating time per day and therefore heavily affects acquisition efficiency. For example, we traditionally stood by when the wind was above a moderate breeze (force 4) or when the environmental noise was greater than 7 microvolts (µV); accordingly, a high-density 2D JB project in western China was shut down for 18 days due to gales during its three-month acquisition period. This imposed a huge extra cost on the field crew. The traditional wind-noise field operation method seriously affected field acquisition efficiency, extended the production cycle, and increased field exploration costs. Analyzing the influence of wind noise of different wind scales on high-density seismic data and re-establishing a wind-noise-limited, high-efficiency acquisition method accordingly will reduce the impact on field operations.
Wind noise forward modeling and impact analysis
Forward modeling of wind noises

2016 SEG
SEG International Exposition and 86th Annual Meeting

Assuming the earth is a homogeneous, isotropic, semi-infinite elastic medium, then, owing to the viscosity of air, the wind noise received by geophones is mainly the surface deformation caused by the force of the wind on the ground surface (Sorrells, 1971). Supposing the wind blows from left to right, the wind force around the geophone is distributed as point sources over the rectangular area on the windward side. Considering the working range of the wind, the record of one trace is mainly affected by the layout area of the geophone. The drag force of the wind on the ground surface is expressed as:
F_d = (1/2) C_d ρ v^2 S,   (1)

where F_d is the drag force of the wind, C_d is the drag coefficient, ρ is the air density, v is the wind velocity, and S
is the stressed area. According to the wind-induced
oscillation theory (MOHURD, 2012), the wind velocity is
the sum of the average wind velocity and the fluctuating
wind velocity, as shown in expression (2):
v = v̄ + v_f,   (2)

where v̄ is the average wind velocity, which is a constant, and v_f is the fluctuating wind velocity, a random quantity.
The average wind velocity varies with the height above the ground according to a power law:

v̄(z) = v̄_10 (z/10)^α,   (3)

where v̄(z) is the average wind velocity at height z above the ground, v̄_10 is the reference wind velocity at a height of 10 m above the ground, and α is the surface roughness coefficient.
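For a concrete sense of formulas (1) and (3), a small numeric sketch can be written as below; all parameter values here are illustrative assumptions, not values from the study:

```python
# Illustrative check of formulas (1) and (3); all parameter values are
# assumed for demonstration, not taken from the field study.
RHO_AIR = 1.225       # air density (kg/m^3), standard value
CD = 0.8              # assumed drag coefficient
ALPHA = 0.16          # assumed surface roughness exponent for open terrain

def mean_wind(z, v10, alpha=ALPHA):
    """Formula (3): power-law mean wind profile referenced to 10 m."""
    return v10 * (z / 10.0) ** alpha

def drag_force(v, area, cd=CD, rho=RHO_AIR):
    """Formula (1): F_d = 1/2 * C_d * rho * v^2 * S."""
    return 0.5 * cd * rho * v ** 2 * area

v = mean_wind(z=1.0, v10=8.0)     # wind speed near a geophone at 1 m height
f = drag_force(v, area=0.01)      # force on a 0.01 m^2 windward face
```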
The fluctuating wind velocity spectrum is the Davenport
wind velocity spectrum (Davenport, 1961), which is
expressed as follows:
S(ω) = 4 K v̄_10^2 x^2 / [ω (1 + x^2)^(4/3)],   (4)

where S(ω) is the fluctuating wind velocity spectrum, ω is the angular frequency, K is the surface drag coefficient, v̄_10 is the reference wind velocity, and x is the turbulence integral scale coefficient, which satisfies the following expression:

x = 600 ω / (π v̄_10).   (5)
Based on the Shinozuka theory, the fluctuating wind velocity time-domain signal can be derived from the fluctuating wind velocity spectrum (Shinozuka, 1972), as shown in formula (6):
v_f(t) = Σ_{i=1}^{N} √(2 S(ω_i) Δω) cos(ω_i t + φ_i),   (6)

where v_f is the fluctuating wind velocity; S(ω_i) is the ith component of the fluctuating wind velocity spectrum; N is the number of components; ω_i is the ith component angular frequency; φ_i is the initial phase of the ith component, a random variable uniformly distributed in (−π, π); Δω is the angular frequency increment; and t is the time variable.
Substituting v̄ and v_f into formula (2) gives the wind velocity v, and substituting v into formula (1) gives the force of the wind on the ground surface. Supposing the wind noise travels in a semi-infinite elastic medium, the wind noise signal is obtained by solving the wave equation. For the wind velocity of a gentle breeze (3 level wind), the corresponding wind noise can be calculated, as shown in Figure 1.
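The simulation chain of formulas (2) and (4)-(6) can be sketched as follows; K, v̄_10, the frequency band and the component count are illustrative assumptions rather than the study's values:

```python
import numpy as np

# Sketch of formulas (2), (4)-(6): synthesize a fluctuating wind velocity
# record from the Davenport spectrum by the Shinozuka summation.
# K, V10, the frequency band and N are illustrative assumptions.
K = 0.005            # surface drag coefficient (assumed)
V10 = 8.0            # reference wind velocity at 10 m (m/s, assumed)
N = 2048             # number of spectral components
W_MAX = 20 * np.pi   # upper angular frequency (rad/s), i.e. 10 Hz
DW = W_MAX / N       # angular frequency increment

def davenport(w):
    """Formula (4), with x = 600*w/(pi*V10) from formula (5)."""
    x = 600.0 * w / (np.pi * V10)
    return 4.0 * K * V10**2 * x**2 / (w * (1.0 + x**2) ** (4.0 / 3.0))

rng = np.random.default_rng(0)
t = np.arange(0.0, 10.0, 0.01)                    # 10 s record at 100 Hz
w_i = DW * np.arange(1, N + 1)                    # component frequencies
phi = rng.uniform(-np.pi, np.pi, N)               # random initial phases
amp = np.sqrt(2.0 * davenport(w_i) * DW)          # formula (6) amplitudes
v_f = (amp[:, None] * np.cos(w_i[:, None] * t + phi[:, None])).sum(axis=0)
v = V10 + v_f                                     # formula (2), at 10 m height
```

The resulting velocity record can then be converted to a force record via formula (1) and used as the source term of the wave equation, as described above.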

Effects of wind noise on high-density seismic data and field operation methods

Figure 1: Forward modeling of 3 level wind noises.

Effects of forward modeling wind noise on high-density seismic data

Taking a work area as the study case, we establish a 2D geological model on the basis of 2D drilling, logging, geological and seismic data and perform forward modeling of a high-density 2D survey (source interval: 20 m, trace interval: 20 m, fold: 720). Figure 2 shows that when the wind scale is below moderate gale (7 level wind), the first arrivals and the valid waves in the single-shot records are not much affected by wind noise; above moderate gale, they are severely deformed. Figure 3 shows that when the wind is stronger than 7 level wind, the S/N ratio of the migration sections decreases significantly, especially in the imaging of thin layers, faults and breakpoints, and the imaging accuracy is reduced significantly.

Figure 2: Simulated single-shot records for different wind scales (a-l: 1-12 level wind from calm to hurricane).

Figure 3: Wind noise migration sections for different wind scales (a-l: 1-12 level wind from calm to hurricane).

Effects of wind noise on actual high-density seismic data

To further verify the reliability of the theoretical model of the influence of wind noise on seismic imaging, we recorded the acquisition background records and the seismic records under different wind-scale conditions in the field, during the on-site project tests and in the early acquisition period. The wind noise signal and frequency spectrum of a strong breeze (6 level wind) in Figure 4 indicate that the energy of the wind noise does not change significantly over time; it is present at both high and low frequencies with basically the same energy.

Figure 4: Wind noise signals and spectrum of 6 level wind (the upper panel is the signal in the time domain and the lower panel is the spectrum).

Figure 5 shows shot records with wind noise at different wind scales. As the wind scale increases, the impact of wind noise at far offsets is enhanced significantly; shot records below moderate breeze (4 level wind) are not obviously affected, while the S/N ratio of shot records with 5-7 level wind (fresh breeze to moderate gale) is affected more obviously.

Figure 5: Actual single-shot records with different wind scales (a: 2 level wind (light breeze), b: 3 level wind (gentle breeze), c: 4 level wind (moderate breeze), d: 5 level wind (fresh breeze), e: 6 level wind (strong breeze), f: 7 level wind (moderate gale)).

The environmental noise is monitored against the noise limit at the instrument site to judge whether it is suitable to continue acquisition. According to the previous acquisition standards, acquisition is not allowed when the background recording noise is greater than 7 microvolts (mv) or when the number of bad traces exceeds 1/24 of the total number of traces. Figure 6 shows the analysis of the pre-first-arrival noise values picked from the shot records in Figure 5. It indicates that the wind noise of light to gentle breeze (2-3 level wind) is less than 7 mv, that of moderate to fresh breeze (4-5 level wind) is 10-15 mv, and that of strong breeze to moderate gale (6-7 level wind) is 20-40 mv. Under the previous acquisition standard, acquisition would therefore not be allowed until the wind scale fell below moderate breeze (4 level wind), which is very strict for high-density seismic acquisition in adverse weather conditions and in turn severely affects efficiency.

Figure 6: The average amplitude energy of wind noises in the single-shot records with different wind scales.

Hence, in the field we usually judge whether acquisition can proceed from the first arrivals. For a quantitative analysis, the ratio of the wave energy before the first arrival to the first-arrival energy after it, measured in the same time window of the wind-noise shot record, is applied. Figures 5 and 7 show that when the offset is larger than 4,000 m, the wind-noise to first-arrival energy ratio in shot records with 4-7 level wind exceeds 50% and it is difficult to pick first arrivals; however, only first arrivals at offsets of less than 4,000 m are usually used to calculate static corrections, so first-arrival picking on records with 4-7 level wind has little effect on the statics calculation.

Figure 7: The average energy ratio between the pre- and post-first-arrival values of shot records with different wind scales and wind noise.

High-density 3D acquisition mainly makes up for the low S/N ratio of shot records caused by wind noise. For a quantitative analysis, the minimum allowable S/N ratio of wind-noise shot records for high-density acquisition is computed from the matching relationship among the wind noise, the S/N ratio of the valid signals and the fold; the wind-scale condition under which acquisition operations are allowed is then calculated from the high-density geometry parameters. For random noise, the fold can be calculated by formula (7):

N = [Expect_section(S/N) / Shot_record(S/N)]^2,   (7)

where N is the fold, Expect_section(S/N) is the expected S/N ratio of the section and Shot_record(S/N) is the S/N ratio of the shot records. The expected section S/N ratio is a constant, and the fold of the high-density 3D survey is given in the early operation stage of the project, so the required shot-record S/N ratio can be calculated.

As can be seen from Figure 8, the stacked section for 6-7 level wind has a lower S/N ratio, with poorer event continuity and distorted waveforms in the shallow-middle layers. With increasing fold, the S/N ratio of the section improves significantly and the continuity of the shallow-middle events becomes better; when the fold rises above about 1,440, the improvement of the section S/N ratio is no longer obvious. This indicates that increasing the fold suppresses wind noise effectively and improves the S/N ratio of the section, but beyond a certain value additional fold contributes little further improvement.

Figure 8: The pre-stack sections acquired by high-density geometry with different folds under wind-noise conditions of 6-7 level wind.
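The √N stacking logic behind formula (7) can be sketched as follows; the function names and the numeric example are ours, purely for illustration:

```python
import math

# Sketch of formula (7): for random noise, stacking N traces improves
# S/N by sqrt(N), so the fold needed to reach a target section S/N is
# N = (section_snr / shot_snr)**2.  Names and values are illustrative.
def required_fold(section_snr, shot_snr):
    return math.ceil((section_snr / shot_snr) ** 2)

def min_shot_snr(section_snr, fold):
    # Inverse use: with the project fold fixed, the minimum allowable
    # shot-record S/N under wind noise follows directly.
    return section_snr / math.sqrt(fold)

# With a fold of 720 already fixed in survey design, the tolerable
# shot-record S/N for an assumed target section S/N can be read off:
tolerated = min_shot_snr(19.0, 720)
```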
Applications
Through the above analysis, as the wind scale increases, the impact of wind noise on shot records and imaging increases obviously; different geometries also have different wind-noise suppression abilities. Therefore, the high-density acquisition methods differ with the seismic and geologic conditions of different regions, and different high-density field operation methods are designed for different areas based on their own seismic-geologic and climatic conditions and the high-density acquisition parameters. When the NZ 3D high-density acquisition project was implemented in western China, the wind in the work area was often 4-7 level wind, which seriously affected the acquisition efficiency of the project; the fold of the NZ 3D geometry reached 720. According to the above analysis, we applied a field operation method applicable for wind scales below 7. In Figure 9, the two NZ 3D sections are the data of swaths 51-55, acquired under 6-7 level wind, and of swaths 46-50, acquired under a light breeze (below 2 level wind). The distance between the two sections is only 600 m and the acquisition time span between them is one day. Figure 9 shows that the two sections have basically consistent imaging wave-group characteristics without any obvious influence from the wind. In addition, the acquisition period of this 3D project was 45 days, 15 of which had strong winds of 6-7 level wind. The project would have had a 15-day delay if we had followed the previous standard; by allowing acquisition at wind scales below 7, the acquisition cycle was shortened by 15 days.

Figure 9: NZ 3D weak- and strong-wind influenced migration sections (a: acquired with swaths 46-50 and wind scale below light breeze (2 level wind); b: acquired with swaths 51-55 and strong breeze to moderate gale (6-7 level wind)).

Conclusions

Through the analysis of the wind noise in the study area, it is concluded that:
(1) A wind-noise mathematical model can be established based on the force of the wind on the ground and the wind-induced oscillation theory, and it can simulate wind-noise signals at different wind scales.
(2) Analyses of the energy, frequency and recorded values of wind-noise records demonstrate that wind noise does not change significantly with time; the high- and low-frequency components have basically the same wind-noise energy; and the wind-noise energy increases obviously with wind scale.
(3) The tolerance of an acquisition plan to wind noise at different wind scales can be determined by building a typical geological model of the project area, performing forward modeling, adding wind noise at different wind scales to the noise-free modeled shot records, and analyzing the influence of the different wind noises on seismic imaging.
(4) Establishing high-density field operation methods according to the tolerable wind-noise levels can effectively improve field operation efficiency and shorten the acquisition cycle.

Acknowledgements

We thank Qinghai Oilfield Company for providing the raw data for the research tests and for permission to publish this paper.


EDITED REFERENCES
Note: This reference list is a copyedited version of the reference list submitted by the author. Reference lists for the 2016
SEG Technical Program Expanded Abstracts have been copyedited so that references provided with the online
metadata for each paper will achieve a high degree of linking to cited sources that appear on the Web.
REFERENCES

Davenport, A. G., 1961, The spectrum of horizontal gustiness near the ground in high winds: Quarterly Journal of the Royal Meteorological Society, 87, 194-211, http://dx.doi.org/10.1002/qj.49708737208.
Ministry of Housing and Urban-Rural Development of the People's Republic of China (MOHURD), 2012, Load code for the design of building structures, GB 50009-2012: China Building Industry Press.
Shinozuka, M., and C.-M. Jan, 1972, Digital simulation of random processes and its applications: Journal of Sound and Vibration, 25, 111-128, http://dx.doi.org/10.1016/0022-460X(72)90600-1.
Sorrells, G. G., J. A. McDonald, Z. A. Der, and E. Herrin, 1971, Earth motion caused by local atmospheric pressure changes: Geophysical Journal of the Royal Astronomical Society, 26, 83-98, http://dx.doi.org/10.1111/j.1365-246X.1971.tb03384.x.


The broadband response of a displacement sensor


Elio Poggiagliolmi* (Entec Integrated Technologies Ltd., United Kingdom), Flavio Accaino (OGS, Italian National Institute of Oceanography and Applied Geophysics, Italy) and Aldo Vesnaver (Petroleum Institute, United Arab Emirates)
Summary

Broadband seismic data acquisition requires a wide spectrum down to the very low frequencies (< 1 Hz) for both the source and the receivers. Significant progress has been made on broadband data acquisition and processing, but the low-frequency sensitivity of the land geophone to particle velocity remains an issue. The geophone output, proportional to particle velocity, decays toward the low frequencies relative to displacement. A similar decay occurs in accelerometers due to particle acceleration (Poggiagliolmi et al., 2015). In this paper, field measurements made with two low-frequency geophones and a new particle displacement sensor are compared. The comparison demonstrates that the sensitivity of the geophones decreases toward the low frequencies, while the response of the displacement sensor remains essentially flat down to less than 1 Hz.

Introduction

The moving-coil geophone is the most widely used sensor for land seismic acquisition. It is sensitive to velocity and its response is essentially flat above the natural resonant frequency. Below resonance it behaves like a low-cut filter. Furthermore, the particle velocity sensed by the geophone, being the derivative of particle displacement, decays toward the low frequencies like a low-cut filter at the rate of -6 dB/octave. Both filters are detrimental to broadband applications, where low frequencies are necessary not only to increase the resolution of the signal, but also to help constrain the inversion of reflection data into acoustic impedance (Poggiagliolmi et al., 2015). The recovery of low frequencies below the geophone's natural resonant frequency by inverse filtering has been reported (Bertram and Margrave, 2010, 2011; Zhang et al., 2012; Margrave et al., 2012), but its success is limited by the ever-present noise in the data.

Unlike particle velocity, particle displacement exhibits a broader-band response, especially at the low frequencies. Therefore, it would be desirable to acquire seismic data with displacement sensors rather than geophones or indeed accelerometers. To the best of the authors' knowledge, no progress had been made toward the development of displacement sensors until recently (Poggiagliolmi et al., 2012, 2013).

The purpose of the field experiment described herein is to compare the output of a displacement sensor with those of two commercially available geophones. From the comparisons it is clear that the broadband response of the displacement sensor far exceeds that of the geophones.

Field setup

The measurement setup consisted of a hammer source striking a grassy topsoil and three tightly clustered sensors. No attempt was made to synchronize the time of the hammer blow to trigger the start of the record. Figure 1 shows the layout of the sensors. The source-to-sensor offset was approximately 10 m. Of the three sensors employed, two were conventional geophones, the 1 Hz L4 and the 5 Hz SG-5, while the third was the Uncoupled Acoustic Sensor (UAS), which measures particle displacement. The latter sensor rested on the ground by means of an acoustically insulated cylindrical structure, which also served as an air-turbulence screen. The L4 geophone was placed on the grassy surface, whereas the SG-5 was firmly planted into the ground by means of two spikes attached under its body. The distances separating the three sensors were less than 10 cm.

Figure 1: The three receivers of the field test: an L4 1 Hz (left), an SG-5 5 Hz (front) and the UAS (back, with the orange lining).

Field tests

Figure 2 shows an example of a recording made with the above setup. Shown in this figure are the time- and frequency-domain responses of the first-arrival pulse simultaneously picked up by the three sensors. The UAS senses particle displacement and its response is broadband down to the very low frequencies (Figure 2, red trace). The



geophones' output, being sensitive to velocity, is proportional to the derivative of the particle displacement (Poggiagliolmi et al., 2015). This is equivalent to applying a -6 dB/octave low-cut filter to the UAS signal. Superposed on this filter there is an additional roll-off filtering effect introduced by the geophone response. This can be approximated by a second-order minimum-phase Butterworth low-cut filter for a damping factor of 0.7 (Bertram and Margrave, 2010). The entire spectral responses of the two geophones, which include particle velocity, geophone response and source wavelet, are shown in Figure 2. From this figure it can be seen that the two geophones' spectra track each other closely from 100 Hz down to about 20 Hz. Below this frequency the slope of the 5 Hz geophone, represented by the green trace, becomes progressively steeper relative to that of the 1 Hz geophone (grey trace), causing the two spectral traces to diverge toward the low frequencies. The increase in slope steepness is the result of the -12 dB/octave roll-off introduced mainly by the 5 Hz geophone response.

Worthy of note is the crossover between the geophones' and UAS spectra occurring at their normalized common dominant frequency of approximately 80 Hz. Below this frequency, the magnitudes of both geophone spectra dive down rather steeply compared to the UAS spectrum, which is flatter and broader from approximately 50 Hz down to the very low frequencies (< 0.1 Hz). Within this frequency range the trend of the displacement spectrum (red trace) remains essentially flat to within a few decibels about a mean of -8 dB. Conversely, at frequencies above the crossover the spectral magnitude of the geophones is higher than that of the UAS. This behavior is easily explained under the assumption that the geophone signals are proportional to the derivative of the displacement signal. Under this assumption, the geophone, whose response is flat at frequencies above resonance, senses particle velocity that decays at a low-cut rate of -6 dB/octave relative to particle displacement. As mentioned above, the geophone response below resonance, for a damping factor of 0.7, decreases approximately at a rate of -12 dB/octave. As a result, below resonance the total attenuation rate relative to displacement, down to zero frequency, is -18 dB/octave.

Figure 2: First arrivals in the time domain (left) and the corresponding amplitude spectra (below) for the three different sensors in Figure 1: UAS (1, red line), 1 Hz (2, grey line) and 5 Hz (3, green line).

Turning now to the time-domain first-arrival pulses shown in Figure 2, the UAS waveform is front loaded, while the geophone waveforms are almost symmetrical close to their respective onsets. That is, they are both characterized by two nearly equal negative peaks on either side of a high-amplitude positive peak. The differences among the above waveforms may represent a qualitative indication of the derivative relationship between the UAS and the two geophone signals. Provided the observed relationship is indeed valid, it should be possible, in principle, to convert the UAS signal into geophone signals which exhibit the same properties as those recorded by the 5 Hz and 1 Hz geophones. To achieve this, the following transformations were applied to the UAS signal: first, its
derivative was calculated to transform displacement into velocity. Then, to make the latter compatible with the total geophone signals below resonance, it was convolved with two distinct second-order minimum-phase Butterworth low-cut filters. The chosen Butterworth cut-off frequency was 5 Hz to simulate the SG-5 geophone response, and 1 Hz for the L4 geophone response.
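The transformation just described can be sketched with NumPy/SciPy; the sampling rate and the synthetic input pulse are our assumptions, so this is an illustrative recipe rather than the authors' processing code:

```python
import numpy as np
from scipy.signal import butter, lfilter

# Sketch of the described transformation: differentiate the displacement
# (UAS) signal to get velocity, then apply a 2nd-order Butterworth
# low-cut (high-pass) filter at the geophone's natural frequency.
# fs and the synthetic input are our assumptions.
fs = 1000.0                                  # sampling rate (Hz), assumed

def simulate_geophone(displacement, natural_freq_hz, fs=fs):
    velocity = np.gradient(displacement, 1.0 / fs)   # d(displacement)/dt
    b, a = butter(2, natural_freq_hz, btype="highpass", fs=fs)
    # lfilter is causal, preserving the filter's minimum-phase character
    return lfilter(b, a, velocity)

t = np.arange(0, 1.0, 1.0 / fs)
uas = np.exp(-40 * (t - 0.2) ** 2)           # toy displacement pulse
sg5_like = simulate_geophone(uas, 5.0)       # SG-5 (5 Hz) simulation
l4_like = simulate_geophone(uas, 1.0)        # L4 (1 Hz) simulation
```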
Figure 3 shows the comparison between the calculated and the true SG-5 geophone signals. The time-domain responses are very similar, though the true signal is noisier, probably due in part to a spurious resonance occurring at about 245 Hz. The details of the agreement between the calculated and true SG-5 signals can be seen more clearly in their frequency spectra, also shown in Figure 3. The agreement is excellent for the entire frequency band below 100 Hz. Above 100 Hz there is no similarity between the calculated and true signals. Also
Figure 3: First arrivals for a 5Hz geophone: simulated with the UAS (1, red line) and field measurement (2, blue line).

Figure 4: First arrivals for a 1Hz geophone: simulated with the UAS (1, red line) and field measurement (2, blue line).




visible in the true signal is the spurious resonance at 245 Hz. The lack of similarity at the high frequencies and the spurious resonance may be due to a defective spring in the SG-5 geophone.
Moving to Figure 4, which shows the 1 Hz geophone response, the agreement between calculated and true signals is excellent for the entire frequency band below 100 Hz. Both signals are also very similar between 100 Hz and 150 Hz. Above this frequency, the relative flatness of both spectra and the absence of any correlation between them indicate that there is only noise in that part of the data.
A major benefit of broadband data is the use of the low frequencies to attenuate the side lobes of wavelets associated with seismic reflections. The side-lobe attenuation is best illustrated by zeroing the phase of the first-arrival pulses shown in Figure 2. Inspection of the resulting zero-phase wavelets in Figure 5 reveals that the size of the first side lobes relative to the peak of the center lobe is smaller in the displacement wavelet than in the velocity wavelets. This observation is confirmed by the numbers in the table on the right side of the figure.
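Zero-phasing a picked pulse, as used here, is a standard FFT recipe; the sketch below (with a synthetic pulse) is ours, not the authors' code:

```python
import numpy as np

# Sketch of zero-phasing: keep the amplitude spectrum of a wavelet and
# discard its phase, yielding a symmetric zero-phase wavelet centered
# in the window.  The input pulse here is synthetic.
def zero_phase(wavelet):
    spectrum = np.fft.rfft(wavelet)
    zp = np.fft.irfft(np.abs(spectrum), n=len(wavelet))
    return np.fft.fftshift(zp)        # center the peak for display

t = np.arange(256) / 500.0
pulse = np.exp(-200 * (t - 0.1) ** 2) * np.cos(60 * (t - 0.1))
zp = zero_phase(pulse)
# zp is symmetric about its center sample, up to numerical precision
```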

Conclusions
Since geophones sense particle velocity, they lack sensitivity at the low frequencies relative to their performance at the high frequencies. In contrast, particle displacement sensors have a broadband response, especially at the very low frequencies.
Comparisons using field data acquired with a displacement sensor, a 5 Hz geophone and a 1 Hz geophone demonstrated that the response of the displacement sensor is essentially flat at low frequencies down to 0.1 Hz and lower, while the geophones attenuate these low frequencies.
The validity of the derivative relationship between the displacement sensor and the geophones has been demonstrated by direct field measurements. This is an important result, because it confirms that the UAS sensor output is directly related to particle displacement.
Field-recorded first arrivals transformed to zero-phase wavelets were used to demonstrate that the wavelet side-lobe magnitude is smaller for the displacement sensor than for conventional geophones.

Acknowledgements
The authors would like to thank Entec Integrated Technologies Ltd. for permission to publish this work, which was partially supported by grant no. 14502 from the Petroleum Institute Research Centre (Abu Dhabi, United Arab Emirates) and by OGS, the Italian National Institute of Oceanography and Applied Geophysics (Italy).


EDITED REFERENCES
Note: This reference list is a copyedited version of the reference list submitted by the author. Reference lists for the 2016
SEG Technical Program Expanded Abstracts have been copyedited so that references provided with the online
metadata for each paper will achieve a high degree of linking to cited sources that appear on the Web.
REFERENCES

Bertram, M. B., and G. F. Margrave, 2010, Recovery of low frequency data from 10 Hz geophones: CREWES Research Report, 22.
Bertram, M. B., and G. F. Margrave, 2011, Recovery of low frequency data from 10 Hz geophones: CSPG CSEG CWLS Convention, Calgary, Expanded Abstracts.
Margrave, G. F., M. B. Bertram, K. L. Bertram, K. W. Hall, K. A. H. Innanen, D. C. Lawton, L. E. Mewhort, and T. M. Phillips, 2012, A field experiment recording low seismic frequencies: AAPG Datapages/Search and Discovery, 90174, GeoConvention, Calgary, Canada.
Poggiagliolmi, E., F. Accaino, and A. Vesnaver, 2015, Broadband data acquisition - a case for displacement sensors: 77th Annual International Conference and Exhibition, EAGE, Extended Abstracts, http://dx.doi.org/10.3997/2214-4609.201414421.
Poggiagliolmi, E., A. Vesnaver, D. Nieto, and L. Baradello, 2012, The application of uncoupled sensors to seismic exploration: Presented at the SEG/KOC Workshop on Single Sensor Acquisition, Kuwait.
Poggiagliolmi, E., A. Vesnaver, D. Nieto, and L. Baradello, 2013, An uncoupled acoustic sensor for land seismic acquisition: 75th Annual International Conference and Exhibition, EAGE, Extended Abstracts, http://dx.doi.org/10.3997/2214-4609.20130745.
Zhang, Y., Z. Zou, and H. Zhou, 2012, Estimating and recovering the low-frequency signals in geophone data: 82nd Annual International Meeting, SEG, Expanded Abstracts, 1-5, http://dx.doi.org/10.1190/segam2012-1178.1.


Land broadband seismic exploration based on adaptive vibroseis


Xianzheng Zhao¹, Xishuang Wang¹, Ruifeng Zhang¹, Chuanzhang Tang¹, Zhikai Wang²
¹PetroChina Huabei Oilfield Company; ²China University of Petroleum (Beijing)
Summary
The seismic signal acquired using conventional vibroseis is affected by the absorption and attenuation of the subsurface layers. The bandwidth of seismic data is relatively narrow in complex attenuating media, which limits the resolution of seismic imaging. In this abstract, adaptive vibroseis acquisition is designed and high-resolution processing technologies are adopted to improve the resolution of land seismic exploration. The adaptive vibroseis optimizes the sweep parameters according to the first received signals and designs the sweep time of the different frequency components systematically. Therefore, the excitation energy in the effective frequency band can be strengthened and the signal-to-noise ratio (SNR) can be improved. In addition, according to the characteristics of the adaptive sweep signals, we use specialized high-resolution processing technology to extend the frequency band of the seismic data. A field data example from China indicates that adaptive vibroseis and specialized high-resolution processing can achieve seismic images of high resolution and quality, and improve the prediction ability for thin interbedded lithologic reservoirs.
Introduction
With increasing demands for safety and environmental protection, clean vibroseis exploration has become the main method for land exploration. The invention of low-frequency vibroseis has expanded the sweep frequency down to 3 Hz (Tao et al., 2010), which effectively improves the quality of the information in the low-frequency band. However, vibroseis excitation is affected by the absorption and attenuation of the unconsolidated surface layer, which causes severe attenuation of the high-frequency signal and thus restricts the ability of high-resolution vibroseis exploration.
In north China, the Quaternary sediments are relatively thick, the surface layer is quite unconsolidated, and the extraction of huge amounts of groundwater for industrial and agricultural purposes has caused great lateral variations in the thickness of the low-velocity zone. The acquired seismic data show that the attenuation of the 60 Hz signal reaches 30-60 dB. Due to the severe attenuation of the middle- and high-frequency components of the seismic wave, the acquired data are only valid up to about 50 Hz at the high-frequency end. After vibroseis nonlinear sweep technology emerged (Zhukov, 2013), the weaknesses of linear sweep technology could be mitigated and the S/N ratio of the middle- and high-frequency bands improved to some

2016 SEG
SEG International Exposition and 86th Annual Meeting

extent (Lan, 2008; Cao et al., 2009). However, the


conventional determination of nonlinear sweeping
parameters only depends on the analysis of data in several
experimental spots, and the parameters are not adaptive to
areas where the structure of surface layer has great
variations laterally. Thus, seismic data bandwidth cannot be
effectively extended. In this abstract, we use adaptive
vibroseis acquisition and high-resolution processing
technologies to improve the resolution of land seismic data.
Method
First, adaptive vibroseis technology is adopted, designing
the nonlinear sweep signal point by point. The excitation
energy of the middle and high frequency components is
strengthened so that middle and high frequency reflections
from the target layer can be received, and broadband raw
seismic data can be obtained with the adaptive vibroseis.
After that, we further extend the bandwidth of the seismic
data by protecting the low frequencies and extending the
high frequencies, so that the goal of broadband land
seismic exploration is finally achieved.
The adaptive vibroseis technology first uses linear sweep
data to analyze the amplitude spectrum and the extent of
seismic attenuation. We can then design the nonlinear
sweep signal point by point and achieve broadband
excitation. The adaptive vibroseis sweep signal can be
expressed as:

    S_avis(f) = α / S_response(f),    (1)

where α denotes the adaptive coefficient (the same α can be
used over the whole work area), S_avis(f) is the amplitude
spectrum of the adaptive sweep signal, and S_response(f)
is the amplitude spectrum of the received linear sweep
signal.
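As a numerical illustration, the sketch below applies equation 1 to a toy response spectrum and converts the target spectrum into a dwell-time allocation (time spent per Hz proportional to the squared target amplitude, so that attenuated high frequencies receive more sweep time). The response model, adaptive coefficient, and sweep length are illustrative assumptions, not the parameters used in the survey.

```python
import numpy as np

# Assumed amplitude spectrum of the received linear-sweep signal:
# exponential attenuation of the high frequencies (illustrative only).
freqs = np.linspace(3.0, 84.0, 200)                  # Hz
s_response = np.exp(-0.04 * freqs)
alpha = 1.0                                          # adaptive coefficient

# Equation 1: the target excitation spectrum is inversely proportional
# to the observed response spectrum.
s_avis = alpha / s_response

# Allocate a total sweep time so that dwell time per Hz is proportional
# to the squared target amplitude (energy ~ amplitude**2 at constant drive).
T_total = 20.0                                       # s, assumed sweep length
w = s_avis**2
seg = 0.5 * (w[1:] + w[:-1]) * np.diff(freqs)        # trapezoid areas per step
dt_seg = seg * (T_total / seg.sum())                 # seconds per freq step
t_of_f = np.concatenate(([0.0], np.cumsum(dt_seg)))  # time at which f is swept

time_below_40hz = float(t_of_f[freqs <= 40.0].max())
```

With this toy attenuation, the sweep spends well under a tenth of its length below 40 Hz, illustrating how the adaptive design shifts excitation effort toward the attenuated high frequencies.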
The adaptive vibroseis adjusts the sweep time length of
each frequency band according to the attenuation of that
band. Because the high frequencies are severely attenuated,
the adaptive sweep spends relatively little time at the low
frequencies and more time at the high frequencies, so the
energy of the high frequencies is strengthened. Comparing
the amplitude spectra of linear and adaptive vibroseis at the
same position (Figure 1) shows that the adaptive sweep
excites more high frequency energy and
broadband raw seismic data can be obtained.

Land broadband seismic exploration based on adaptive vibroseis

Under the same sweep parameters, the adaptive sweep
signal is relatively weak at the low frequencies compared
with the conventional linear sweep, so
we must implement low-frequency protection in data
processing. The low-frequency protection technology used
here builds on high-fidelity prestack denoising and mainly
applies data-driven low-frequency compensation. An
appropriate matching filter is derived from the real data
and applied as a shaping filter, so that the energy of the
wavelet at the low frequencies is strengthened while the
phase spectrum of the wavelet remains unchanged. With
this data-driven low-frequency compensation we
strengthen the low-frequency information and also obtain
amplitude-preserving seismic data.
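A minimal frequency-domain sketch of such amplitude-only shaping follows. All signals are synthetic, and the reference spectrum is taken from the known answer purely for demonstration; in practice it would come from the data-driven matching step.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic trace whose low frequencies have been attenuated.
n, dt = 512, 0.002
trace = rng.standard_normal(n)
f = np.fft.rfftfreq(n, dt)
low_loss = np.clip(f / 6.0, 0.05, 1.0)           # strong loss below 6 Hz
weak = np.fft.irfft(np.fft.rfft(trace) * low_loss, n)

# Amplitude-only (zero-phase) matching filter: reshape the amplitude
# spectrum toward a reference spectrum while leaving the phase untouched.
spec = np.fft.rfft(weak)
reference_amp = np.abs(np.fft.rfft(trace))        # assumed reference spectrum
shaping = reference_amp / (np.abs(spec) + 1e-12)  # real, positive => zero phase
restored = np.fft.irfft(spec * shaping, n)

max_err = float(np.max(np.abs(restored - trace)))
```

Because the shaping filter is real and positive, the phase spectrum of the data is preserved exactly; only the amplitudes (here, the attenuated lows) are rebalanced.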

Figure 1: The amplitude spectra of adaptive (red) and
linear (shaded) vibroseis, from the excitation signal and
the received seismic data.

The high-frequency processing technologies used in this
abstract mainly include well-constrained deconvolution
and structure-constrained Q compensation. In conventional
methods, the deconvolution parameters are determined by
parameter sweeping, comparing the effect of the
parameters on single-shot and stacked sections. Here we
use well-constrained deconvolution, which obtains
preferable deconvolution parameters by analyzing the
cross-correlation of the seismic traces near a well with a
synthetic record or a VSP stack, and by generating a series
of matching attributes (matching reliability, predictability,
transfer function, etc.). The result of well-constrained
deconvolution objectively reflects the features of the
reflected waves in the layers around the well. The aim of Q
compensation is to compensate the amplitude attenuation
and phase distortion caused by subsurface attenuation
effects: correct the stretch of the wavelet phase,
compensate the attenuation of the frequency content and
amplitude, strengthen the energy of weak reflections, and
improve the continuity of the events and the resolution of
the seismic data. The essence of this technology is inverse
filtering of the subsurface attenuation.
Figure 2: (a) Shot gather from linear sweeping; (b) shot
gather from adaptive sweeping; (c) spectra in the
0-1500 ms time window; (d) spectra in the 1500-4000 ms
time window.
Structure-constrained Q compensation is used in our
high-resolution processing. It uses the spectral-ratio
method to compute a layer Q model in a sliding time
window along each layer, based on the structure of the
target layer. A structure-constrained 3D Q model is then
obtained by interpolation and smoothing. With the 3D Q
model we can implement Q compensation. As the
attenuated energy of the wavelet is compensated, the
energy at high frequencies is strengthened effectively and
the resolution of the seismic data is improved.
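The spectral-ratio estimate and the subsequent compensation can be sketched numerically as follows. The spectra are synthetic, and the Q value, fitting band, interval traveltime, and the 40 dB gain cap are illustrative assumptions, not the parameters of the TKX processing.

```python
import numpy as np

# --- Spectral-ratio Q estimation between two windows ---------------------
# ln(A2/A1)(f) = const - pi * f * dT / Q  =>  Q follows from the slope of a
# straight-line fit to the log spectral ratio.
n, dt = 512, 0.002
f = np.fft.rfftfreq(n, dt)
Q_true, dT = 60.0, 0.5                        # interval traveltime dT (s)

rng = np.random.default_rng(2)
A1 = np.abs(np.fft.rfft(rng.standard_normal(n))) + 1.0  # shallow window
A2 = A1 * np.exp(-np.pi * f * dT / Q_true)              # deeper, attenuated

band = (f > 5.0) & (f < 80.0)                 # fit inside the signal band
slope = np.polyfit(f[band], np.log(A2[band] / A1[band]), 1)[0]
Q_est = -np.pi * dT / slope

# --- Inverse-Q amplitude compensation (phase term omitted) ---------------
gain = np.exp(np.pi * f * dT / Q_est)
gain = np.minimum(gain, 10.0 ** (40.0 / 20.0))  # cap the gain at 40 dB
A2_comp = A2 * gain

band_ok = f < 100.0                            # frequencies below the cap
resid = float(np.max(np.abs(A2_comp[band_ok] - A1[band_ok])))
```

The gain cap is the practical safeguard: without it, the exponential correction would amplify high-frequency noise without bound.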
Application example in China
We applied the proposed method to seismic exploration in
the TKX area of China, which focuses on lithologic traps.
The burial depth of the target is 1800-2500 m, and the
thickness of the low-velocity zone is 12-25 m. The 3D
acquisition geometry is 36L4S120R, the bin size is
25 × 25 m, and the fold is 360. We used the low-frequency
vibrator KZ28LF with a sweep frequency range of
1.5-84 Hz. We used conventional acquisition and
processing, and adaptive vibroseis acquisition and
processing, to obtain two sets of 3D seismic data volumes,
respectively.
Comparing single shot gathers from the two acquisitions
(Figure 2), we observe effective reflection information on
the main target layers in both gathers, but the resolution of
the dataset obtained with the adaptive sweep is higher.
Analyzing the spectra at different depths (0-1500 ms and
1500-4000 ms), the spectrum of the adaptive sweep is
similar to that of the linear sweep at low frequencies, while
the middle and high frequencies are better. The effective
bandwidth extension is around 20 Hz in the shallow layers
and around 14 Hz at the main target layers (Figure 2d).
Because the data in the TKX area are quite rich at high
frequencies, we first apply the data-driven low-frequency
compensation, which strengthens the 1.5-6 Hz signal by at
least 3 dB (Figure 3). The low-frequency information in
the deep layers is preserved effectively and the S/N ratio
of the deep signal is clearly improved. On this basis, we
implement the high-resolution processing.

Figure 3: (a) Amplitude spectrum and (b) phase spectrum
before (red) and after (blue) low-frequency-preserving
processing.
We use well-constrained deconvolution, which obtains
preferable deconvolution parameters by analyzing the
cross-correlation of the synthetic record at a well with the
nearby seismic traces and generating a series of matching
attributes (matching reliability, predictability, transfer
function). After quantifying, comparing, and analyzing the
outcomes, the deconvolution parameters are determined.
Applying well-constrained deconvolution extends the
bandwidth of the target layer by 8-10 Hz. We interpret the
whole area with the help of the geological stratification
established by drilling. The deep section is divided into
five layers, a time window along each layer is chosen to
compute its Q value, and a Q volume of the whole area is
then established by interpolation. The S/N ratio of the
main target layers at middle and high frequencies is
further improved, and the effective bandwidth at layer T4
is 1.5-60 Hz. The goal of broadband exploration is thus
achieved by adaptive vibroseis acquisition and
high-resolution processing technologies.

Figure 4: Seismic sections and spectra of conventional
exploration (a, c) and broadband exploration (b, d).
In the final seismic sections of the two methods, we
compare the main targets: the special lithology overlying
T4 and the tail sandstone in the lower part of T4 (Figures
4a and 4b). In the broadband section, the special lithology
layer and the lateral variations of the tail sandstone can be
recognized clearly, whereas in the conventional section
both are difficult to recognize. The spectra of target layer
T4 show that the energy of the broadband data in the
middle and high frequency bands is strengthened and the
effective bandwidth is extended by 8-16 Hz. The effective
bandwidth of the target layer in the final broadband
section is 1.5-60 Hz.
The P-wave impedance inversion sections (Figure 5) show
that the sand body is multi-periodic, with rich information
and high resolution, whereas only sand groups can be
recognized in the inversion of the conventional data.
According to well drilling data, there are large groups of
sandstones in the special lithology part and the tail
sandstone part of T4. The sand-mudstone percentage
predicted from the RMS amplitude attribute of the
broadband data matches the drilling data well, while the
prediction error of the conventional data is relatively
large. Using the newly acquired 3D data, we identified a
number of lithologic traps after careful interpretation and
reservoir prediction. Well GB1 was located according to
the broadband exploration data; in the sand-1 interval
(2359-2421 m) it found an oil layer (4 m, 1 layer),
oil/water layers (6 m, 2 layers), and a water layer with oil
(15 m, 1 layer). The drilling outcome matches the
prediction from the broadband data well (Figure 6), which
demonstrates the advantage of broadband exploration for
lithology and reservoir prediction.
Conclusions
Adaptive vibroseis broadband exploration can extend the
bandwidth of seismic data effectively, which provides a
new technological approach for lithologic exploration.
Low-frequency adaptive vibroseis technology is the
foundation of broadband exploration: the excitation
energy in the high frequency band is strengthened through
the self-optimized adaptive sweep, so the S/N ratio of the
raw data at high frequencies is improved and
high-resolution raw data are obtained. Data-driven
low-frequency preservation and the fidelity of the
high-frequency broadband data are the key elements for
further extending the bandwidth of the seismic data.
Because broadband acquisition relies on exciting
high-frequency signal to strengthen its propagation, and
the high-frequency components are severely absorbed by
the subsurface, the effective depth of broadband
exploration may be limited.

Figure 5: P-wave impedance inversion of conventional
exploration (a) and broadband exploration (b).

Figure 6: Well GB1 (left) and the broadband-exploration
seismic section (right).


EDITED REFERENCES
Note: This reference list is a copyedited version of the reference list submitted by the author. Reference lists for the 2016
SEG Technical Program Expanded Abstracts have been copyedited so that references provided with the online
metadata for each paper will achieve a high degree of linking to cited sources that appear on the Web.
REFERENCES

Cao, W., Li, X., and Guo, H., 2009, Shaping design of vibroseis sweeping signal: Geophysical
Prospecting for Petroleum, 48, 611–614.
Lan, J., 2008, Application of vibroseis nonlinear sweeping in high resolution seismic data
acquisition: Geophysical Prospecting for Petroleum, 47, 208–211.
Tao, Z., Su, Z., Zhao, Y., and Ma, L., 2010, The latest development of low frequency vibrator for
seismic: Equipment for Geophysical Prospecting, 20, 1–5.
Zhukov, A., 2013, The adaptive vibroseis technology: hardware, software and outcomes: 83rd Annual
International Meeting, SEG, Expanded Abstracts, 249–253,
http://dx.doi.org/10.1190/segam2013-0542.1.


Initial 3C-2D surface seismic and walkaway VSP results from the 2015 Brooks SuperCable
experiment
Kevin W. Hall*1, J. Helen Isaac1, Joe Wong1, Kevin L. Bertram1, Malcolm B. Bertram1, Donald C. Lawton1,3,
Xuewei Bao2, and David W. Eaton2 (1CREWES and 2Microseismic Industry Consortium, University of Calgary,
and 3Containment and Monitoring Institute)

Summary
A 3C walkaway VSP and surface seismic experiment was
conducted at the Containment and Monitoring Institute
(CaMI) Field Research Station (FRS) in May of 2015. The
FRS is located near the town of Brooks in southern Alberta,
Canada. Multiple objectives for the program included
student training, surface source and receiver comparisons,
multi-component walkaway VSP acquisition, and velocity
tomography for site characterization.
Two parallel NE-SW receiver lines were laid out with one
line centered on a well and the other offset 100 m to the
northwest. Both receiver lines had single-component
geophones at a 10 m receiver spacing. In addition, the line
centered on the well had three-component geophones at a
30 m receiver spacing. A tool with three-component
geophones was deployed within the well at three different
levels, giving receiver positions from 106 to 496 m depth at
a 15 m spacing.
Two source lines centered on the well location, one linear
with a vibe point (VP) every 10 m and the other
semicircular with a VP every 5 degrees, were acquired
three times, once for each tool position in the well. The
source was an IVI EnviroVibe using a variety of filtered
and unfiltered maximal-length-sequence pilots
(m-sequences) as well as a linear 10-200 Hz sweep. This
abstract presents a first look at the data and some early
results.

These data were recorded using ESG Paladin recorders. A
Geode recorder was present in the cab of the EnviroVibe
in order to record auxiliary traces from the Pelton decoder
that was in the Vibe.


Figure 1: Map of survey area showing receiver lines 106
and 108 (blue dots). Buried pipelines are plotted as
yellow/cyan dash-dot lines. The access road and well pad
are shown as solid red lines, and the well location is a red
bulls-eye. North is up. Background photo courtesy of
Newell County, Alberta.


Introduction
A 3C walkaway VSP and surface seismic experiment was
conducted at the Containment and Monitoring Institute
(CaMI) Field Research Station (FRS) in May of 2015.
Two parallel NE-SW receiver lines were laid out, with one
line (Line 108) centered on well CMCRI COUNTESS
10-22-17-16 and the other (Line 106) offset 100 m to the
northwest (Figure 1). Receiver lines 106 and 108 had
single-component SM-24 geophones at a 10 m receiver
spacing connected to an Inova (ARAM) Aries SPML
recorder. In addition, receiver line 108 had
three-component SM-7 geophones in nail-type casings at a
30 m receiver spacing, recorded by Inova Hawk nodal
systems. A three-component ESG SuperCable was
deployed in the well at three different levels, giving
receiver positions in the well from 106 to 496 m depth at a
15 m spacing.


Figure 2: Map of survey area (cf. Figure 1) showing
source lines 204 and 208 (red dots).

Two source lines were acquired three times, once for each
tool position in the well (Figure 2). The source was an IVI
EnviroVibe sweeping from 10-200 Hz linearly over 16 s
with an additional 4 s listening time. Source line 208
(NE-SW) had a Vibe Point (VP) every 10 m for surface
2D seismic and walkaway VSP. A semicircular source
line (Line 204) with a radius of 400 m and a VP every
five degrees was acquired for a velocity tomography
study. Finally, source line 208 was re-acquired using a
variety of filtered and unfiltered maximal-length-sequence
pilots (m-sequences) while the SuperCable was removed
from the well.
Initial Results
P-P and P-S synthetic offset gathers calculated from well
log data show good reflectivity in the zone of interest.
Traveltimes for source line 204 recorded by a receiver at
383.5 m depth (Figure 3) vary by several milliseconds
with azimuth. The fast propagation direction (minimal
traveltime) coincides with the direction of the NE-SW line
and generally follows the orientation of the regional
maximum horizontal compressive stress. This indicates
weak HTI anisotropy, likely due to fractures aligned by
the regional maximum horizontal stress field.
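For weak HTI anisotropy, the traveltime around a circular source line varies roughly as cos 2(phi - phi_fast), so the fast azimuth can be extracted with a small least-squares fit. The sketch below uses synthetic values (a 40-degree fast direction and a 2 ms variation), not the field picks.

```python
import numpy as np

# Synthetic traveltimes on a circular source line (VP every 5 degrees),
# with an assumed fast direction of 40 degrees and 2 ms cos(2*phi) term.
phi = np.deg2rad(np.arange(0.0, 360.0, 5.0))
phi_fast_true = np.deg2rad(40.0)
t = 0.200 - 0.002 * np.cos(2.0 * (phi - phi_fast_true))

# Linear fit t = a + b*cos(2*phi) + c*sin(2*phi); the fast azimuth is the
# direction of minimum traveltime.
G = np.column_stack([np.ones_like(phi), np.cos(2.0 * phi), np.sin(2.0 * phi)])
a, b, c = np.linalg.lstsq(G, t, rcond=None)[0]
phi_fast = 0.5 * np.arctan2(-c, -b)            # radians
phi_fast_deg = np.rad2deg(phi_fast) % 180.0    # azimuth is 180-deg periodic
```

The same three-parameter fit applied to picked traveltimes also returns the mean traveltime (a) and the anisotropy magnitude (the length of the (b, c) vector).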
Figure 4: Vertical and radial component unprocessed P-P and P-S
correlated source gathers for VP 208149.

Figure 3: Line 204 traveltime variations for a receiver at
383.5 m depth.

Twenty-second uncorrelated VSP source gathers were
created from the ESG continuous data, then vertically
stacked and correlated with TREF. A maximum-power
two-component rotation was applied to rotate the
horizontal components to radial and transverse
components (Figure 4). The P-wave velocity measured
over depths of 166-496 m (excluding the first four traces)
is 2740 m/s. We observe strong down- and up-going
events that do not have the same slope as the first breaks.
Picking a slope from this up-going wavefield gives a
velocity of 1370 m/s, which in turn gives a Vp/Vs ratio of
2.0. The average Vp/Vs from the well log is 2.09.
Therefore, we are seeing up-going S-waves for a vibe
point 20 m from the well. Figures 5 and 6 show Figure 4
after attenuation of the down-going P-waves and
flattening of the up-going P- and S-wavefields based on
first-break pick times.
At selected vibe points on source line 208, we recorded
data on receiver lines 106 and 108 for two sets of
m-sequence pilots: a pure m-sequence set and a filtered
m-sequence set, each with four members. The pure
m-sequences are characterized by step-function-like
transitions between two values, -1 and +1. The filtered set
was obtained by applying an Ormsby bandpass filter with
corners at [5-10-200-250] Hz. The m-sequence pilots were
all 16.376 seconds long. For comparison purposes, we
also recorded data using a standard linear sweep pilot (10
to 120 Hz swept over 16 seconds with 500 ms end tapers).
In all cases, the listen time was 4 seconds, and in all cases
we recorded both correlated and uncorrelated data as well
as the signals from accelerometers mounted on the base
plate and reaction mass of the vibrator. It is hoped that
these accelerometer signals will provide clues as to how
the hydraulically powered vibrator reacts to the sharp or
smoother transitions that are characteristic of pure and
filtered m-sequences.

Figure 5: VP 208149 flattened for up-going P-waves after
attenuation of down-going P-waves.

Figure 6: VP 208149 flattened for up-going S-waves after
attenuation of down-going P-waves.
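An m-sequence of the kind used as a pilot can be generated with a linear-feedback shift register; the sketch below is a generic illustration (the taps and register length are assumptions, not the pilots used in the experiment) and verifies the two-valued circular autocorrelation that makes these sequences attractive as sweeps.

```python
import numpy as np

# Generate a maximal-length sequence (m-sequence) with a Fibonacci
# linear-feedback shift register. Taps [7, 6] correspond to a primitive
# polynomial, giving the maximal period 2**7 - 1 = 127.
def m_sequence(taps, nbits):
    state = [1] * nbits            # any nonzero seed works
    out = []
    for _ in range(2**nbits - 1):
        out.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t - 1]     # XOR the tapped register stages
        state = [fb] + state[:-1]
    return np.array(out) * 2 - 1   # map {0, 1} -> {-1, +1}

seq = m_sequence([7, 6], 7)

# Circular autocorrelation of a +/-1 m-sequence is two-valued:
# N at zero lag and -1 at every other lag, so correlation with the pilot
# compresses the sweep to an impulse-like wavelet.
N = len(seq)
acorr = np.array([int(np.dot(seq, np.roll(seq, k))) for k in range(N)])
```

Field pilots would be much longer (e.g., the 16.376 s pilots here correspond to longer registers resampled to the sweep rate), but the correlation property is the same.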
Pure m-sequences are used to estimate the impulse
response of linear systems (among many other uses). The
fact that the EnviroVibe generates multiples (which are
considered artifacts; Figure 7) in seismograms when
driven by pure m-sequences means that the EnviroVibe is
not a perfectly linear system, especially at high
frequencies, since it cannot respond accurately to the
step-function-like transitions characteristic of pure
m-sequences. Bandpass filtering the m-sequence before
using it as a sweep introduces side lobes in the wavelet,
although smaller ones than seen for a linear sweep. It also
reduces the prominence of the source-generated multiples
in the recorded data (Figure 8). Figure 9 shows the same
VP acquired with a linear sweep for comparison. It is
difficult to see at this scale, but the amplitude spectrum of
the m-sequence gather contains more energy above
250 Hz than that of the filtered m-sequence gather.

Examination of these preliminary results indicates that the
pure m-sequences and the particular filtering applied are
not well suited to the EnviroVibe and its Pelton controller.
It appears that the pure and currently filtered TREF pilots
probably should not contain any energy above 125 Hz. In
addition, we have not yet ascertained how the many
settings available in the Pelton controller should be set for
the hydraulics and position controls to best allow the
ground-force signal to closely follow the m-sequence
TREF signal (for example: Should the phase lock be
disabled? Can we prevent the controller from learning,
which causes the ground force to grow with repeated
sweeps?).
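The Ormsby trapezoid used for the filtered pilots can be sketched as a zero-phase frequency-domain filter. The linear ramps between the corners are an assumption about the exact taper; only the 5-10-200-250 Hz corners come from the text.

```python
import numpy as np

# Zero-phase Ormsby bandpass: unit response between f2 and f3, linear
# ramps from f1 to f2 and from f3 to f4.
def ormsby_response(f, f1, f2, f3, f4):
    ramp_up = np.clip((f - f1) / (f2 - f1), 0.0, 1.0)
    ramp_dn = np.clip((f4 - f) / (f4 - f3), 0.0, 1.0)
    return np.minimum(ramp_up, ramp_dn)

n, dt = 4096, 0.001                             # 1 ms sampling, 500 Hz Nyquist
f = np.fft.rfftfreq(n, dt)
H = ormsby_response(f, 5.0, 10.0, 200.0, 250.0)

rng = np.random.default_rng(3)
pilot = rng.choice([-1.0, 1.0], size=n)         # toy two-valued pilot
filtered = np.fft.irfft(np.fft.rfft(pilot) * H, n)
```

Applying the trapezoid as a multiplicative, purely real spectrum keeps the filter zero-phase, which is the usual choice when shaping a pilot rather than delaying it.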
Surface seismic processing included refraction statics,
air-blast attenuation, spike and noise-burst editing,
surface-wave noise attenuation, and Gabor deconvolution.
In order to compare the Aries and Hawk data, we
post-stack migrated receiver stacks using a
finite-difference migration and applied a 10-15-80-90 Hz
bandpass filter.

Figure 7: VP 208149 acquired using an m-sequence
sweep. Bandpass and AGC for display.

Figure 8: VP 208149 acquired using a 10-200 Hz
bandpass-filtered m-sequence sweep. Bandpass and AGC
for display.

Figure 9: VP 208149 acquired using a linear 10-200 Hz
sweep. Bandpass and AGC for display.

The migrated data are shown in Figure 10, which also
shows for comparison an arbitrary line extracted from a
2014 3D volume coinciding with the 2015 2D line. The
strong event at about 0.25 s corresponds to the Basal
Belly River sandstone, which is the primary CO2
injection target at this site.
While we know from a 2014 3D survey that good-quality
converted-wave data can be obtained at this site at the
same time of year and with the same near-surface ground
conditions, the multicomponent surface seismic results
from the 2015 data are disappointing. Differences between
the 2014 and 2015 surveys include both decreased source
and receiver effort and restricted azimuthal coverage.
Discussion
A variety of seismic work was successfully completed at
the Containment and Monitoring Institute (CaMI) Field
Research Station (FRS) in May of 2015. Over the course
of two days, data were acquired for a variety of
experiments, including a walkaway 3C VSP, data for a
velocity tomography study, 1C-2D and 3C-2D surface
seismic, and m-sequence sweep tests. This report has
shown examples of field data as well as some preliminary
processing results.
Future work
A number of projects will result from these data. We plan
to finalize processing of the radial component of the
3C-2D surface data, process the zero-offset VSP to P-P
and P-S corridor stacks, and process the multicomponent
walkaway VSP data. We can simulate multiple vibes
simultaneously running different m-sequence sweeps and
see how successfully the source gathers can be separated
and processed to migrated sections. We can study how
best to attenuate (or use) source-generated m-sequence
multiples. Finally, everything needs to be interpreted and
inverted.
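The simultaneous-vibe separation idea can be previewed numerically: two m-sequences from different primitive polynomials have a high autocorrelation peak but bounded cross-correlation, so correlating a blended record with one pilot still localizes that source's arrival. The taps, register length, and delay below are illustrative assumptions.

```python
import numpy as np

# Two m-sequences from different primitive polynomials (taps [7, 6] and
# [7, 3]), used as pilots for two simulated simultaneous vibes.
def m_sequence(taps, nbits):
    state = [1] * nbits
    out = []
    for _ in range(2**nbits - 1):
        out.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t - 1]
        state = [fb] + state[:-1]
    return np.array(out) * 2 - 1

s1 = m_sequence([7, 6], 7)
s2 = m_sequence([7, 3], 7)
N = len(s1)

# Blended record: vibe 1 fires with a 10-sample delay, vibe 2 with none.
delay = 10
blend = np.roll(s1, delay) + s2

# Circular correlation of the blend with pilot 1 peaks at vibe 1's delay,
# despite the interference from vibe 2.
corr1 = np.array([np.dot(blend, np.roll(s1, k)) for k in range(N)])
recovered_delay = int(np.argmax(corr1))
margin = corr1[recovered_delay] - np.delete(corr1, recovered_delay).max()
```

Real separation must also contend with the earth response convolved with each pilot, but the bounded cross-correlation between distinct m-sequences is what makes the source gathers separable at all.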

Figure 10: Post-stack migrated receiver lines. (a) and (b)
are the Aries and Hawk data, respectively, and (c) is an
arbitrary line extracted from a 2014 3D volume
corresponding to the location of the 2015 2D line. Data
have an AGC applied for display.

Acknowledgements
The authors thank the sponsors of CREWES and the
Microseismic Industry Consortium, and CMC Research
Institutes Inc. for access to the CaMI field research sites.
This work was funded by CREWES and Microseismic
Industry Consortium industrial sponsors, CMC, and
NSERC (Natural Science and Engineering Research
Council of Canada) through Collaborative Research and
Development grants. We would also like to thank ESG for
field support, as well as Halliburton (Landmark Graphics)
and Schlumberger for providing donated software.


On the enhancement of low frequencies in marine seismic data

Guus Berkhout and Gerrit Blacquière*, Delft University of Technology

Summary

Low frequencies are most important in seismic imaging.
They enhance the resolution, penetrate deeper, and they
are indispensable in impedance estimation. In this paper a
four-step approach for enhancing the low frequencies in
marine acquisition and (pre)processing is proposed: 1)
positioning the sources and detectors at the optimum
depth, 2) carrying out proper deghosting, and 3)
(pre)processing the low frequencies separately. In step 4),
the missing ultra-low-frequency information (trend)
comes from the velocity distribution and from modern
gravity gradiometry.

Introduction

Today it is fully recognized that images obtained from
broadband seismic data offer significantly better
resolution and can be interpreted with much more
confidence. In broadband acquisition the bandwidth is
extended at the high-frequency side as well as at the
low-frequency side. The importance of the low
frequencies, particularly for better penetration and
impedance estimation, is discussed by Ten Kroode et al.
(2013). Figure 1 shows the effects of the low frequencies
on the wavelet. More low frequencies result in lower side
lobes. As a consequence, the integrated wavelet,
visualizing the transition between two geological layers,
is much more like a step function. Therefore, the
acquisition and (pre)processing of the low frequencies
should be pushed to their limits. Complementarily, the
ultra-low frequencies (trend information) should be
integrated with the velocity distribution and density data
from gravity gradiometry, see Figure 2.

In this paper we discuss a four-step approach for including
all low frequencies, i.e., down to f = 0 Hz, by applying:
1. the optimum depth of sources and detectors,
2. source and detector deghosting,
3. dedicated processing of the low frequencies,
4. adding velocity and density trend information.
The approach will now be discussed in more detail.
Figure 1: Spectrum (note the logarithmic frequency axis),
wavelets, and integrated wavelets for increasing low
frequencies.

Figure 2: The detailed information (area 1) is related to
the bandwidth of seismic data. The trend information
(area 2) is obtained from velocity information and from
(airborne) gravity gradiometry. We aim at maximum
overlap of the two areas.

Optimum depth

In marine acquisition, the source and detector ghosts
cause the well-known angle-dependent notch areas in the
spatio-temporal spectrum. In particular, the low
frequencies are severely attenuated. This can be easily
understood by looking at the source ghost model shown in
Figure 3. The water surface at z0 is a very strong reflector,
with a reflectivity close to -1. If the depth of the real
source, +zs, is small compared to the wavelength, the
wavefield from the real source is largely canceled by the
wavefield from the ghost source at -zs, see Figure 3b.
Obviously this is the case for the very low frequencies,
which have the largest wavelengths. The effect is also
known as the Lloyd mirror effect. In Figure 4 it is
illustrated for a monochromatic 1 Hz signal with a
wavelength of 1500 m and a surface reflectivity of -0.95.
Figure 4a shows the signal transmitted in the vertical
direction for a source depth increasing from 0 to λ/4. The
signals due to the real and ghost source at +zs = 30 m
(marked by the white dashed line) are shown individually
in Figure 4b. It is obvious that a large source depth is
needed for the generation of low-frequency energy.

Figure 3: The source-related ghost response in marine
data is caused by the strong surface reflectivity (a). It can
be represented by the response of the so-called ghost
source (b).


From removing to using ghost reflections

Figure 4: a) The combined source signal (real source plus
ghost source, 1 Hz) for a source depth increasing from 0
to λ/4. The dashed line indicates a depth of 30 m. b) At
this depth the ghost signal (red dots) almost cancels the
real source signal (blue line).

The optimum source depth is 0.25 λ, which corresponds to
375 m for a frequency of 1 Hz, assuming the water velocity
to be 1500 m/s. At the optimum source depth the signal of
the ghost source even enhances the signal of the real source
such that the strength of the combined source is (almost)
doubled. It means the ghost is exploited.
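As a quick numeric check of the Lloyd-mirror behavior described above (the 1-Hz frequency, −0.95 reflectivity and 30-m/375-m depths are taken from the text; the function name and formula layout are ours):

```python
import numpy as np

def combined_amplitude(depth_m, freq_hz, c=1500.0, r=-0.95):
    """Far-field amplitude, in the vertical direction, of a source at depth zs
    plus its ghost at -zs, relative to a ghost-free source: |1 + r*exp(-2i*k*zs)|."""
    k = 2 * np.pi * freq_hz / c          # wavenumber (rad/m)
    return abs(1 + r * np.exp(-2j * k * depth_m))

wavelength = 1500.0                       # 1 Hz in 1500 m/s water
shallow = combined_amplitude(30.0, 1.0)   # 30 m: ghost almost cancels the source
optimum = combined_amplitude(wavelength / 4, 1.0)  # lambda/4 = 375 m: near doubling
```

At 375 m the ghost path adds exactly half a period of delay, so the −0.95 reflection arrives in phase and the amplitude is 1.95, close to the doubling mentioned in the text; at 30 m it is only about 0.25.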
In our DSA (dispersed source array) concept, today's
complex, local, broadband sources are replaced by
distributed arrays of simple, narrow-band sources that are
blended (Berkhout, 2012). In addition, each narrow-band
source type is positioned at its optimum depth, being 0.25
λc, where λc = c/fc, with fc the central frequency of the
narrow band and c the velocity. An example of a blended
DSA record is shown in Figure 5. Its benefits are clear from
Figure 6.
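The depth rule above can be sketched in a few lines; the central frequencies below are illustrative, not the actual DSA configuration of the paper:

```python
# Optimum tow depth per narrow-band source type: 0.25 * lambda_c = 0.25 * c / f_c.
def optimum_depth(f_c, c=1500.0):
    """Quarter-wavelength depth (m) for a source with central frequency f_c (Hz)."""
    return 0.25 * c / f_c

# Hypothetical low-, mid- and high-frequency source types (Hz):
depths = {f_c: optimum_depth(f_c) for f_c in (2.0, 10.0, 40.0, 80.0)}
```

The lower the band, the deeper the tow: 2 Hz asks for 187.5 m, while 40 Hz needs less than 10 m, which is why only a dispersed, multi-depth array can honor the rule for every band at once.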

Figure 5: Example of a DSA record showing the responses of


dedicated low-, mid- and high-frequency sources, each type
deployed at its optimum depth to optimally exploit the ghost.

Figure 6: a) Response of a wide-band source in the fk-domain, b)


idem for a DSA with ultra-low, low-, mid- and high-frequency
sources. Note the improvement, particularly at the low frequencies.


As a first step towards a full implementation of the DSA
concept, a dedicated low-frequency source could
supplement the traditional source (airgun array) in marine
acquisition (Dellinger et al., 2012). In addition, the
traditional wide-band sources could be towed at various
depths (Cambois et al., 2009). Finally, to fully exploit the
ghost effect, also the detectors should be deployed at
various depths, e.g., via slanted cables (Soubaras, 2010), or
vertical arrays (Brink and Svendsen, 1987).
Deghosting
Even with the measures taken in acquisition, such as the
ones discussed above, the ghost effect is still imprinted on
the data. In this section we therefore focus on deghosting as
a (pre)processing step. Many methods for deghosting have
been proposed. E.g., Soubaras (2010) carries out joint
deconvolution of a migration and a mirror-migration result;
Amundsen et al. (2013) apply a deterministic frequency-domain spatial deconvolution; Beasley et al. (2013) and
Robertsson et al. (2014) exploit that upcoming waves arrive
earlier than the corresponding downgoing 'ghost' waves,
leading to causal deghosting filters; Ferber and Beasley
(2014) shift the ghost events out of the time window. Some
examples focused at the source side are Mayhan and
Weglein (2013), based on Weglein and Zhang (2005), and
Amundsen and Zhou (2013).
From Figure 3b it is clear that the ghost-generation process
actually is a blending process. We therefore consider
source deghosting as a special case of deblending ('echo-deblending'), leading to an algorithm that is characterized
by utilizing the ghost reflections. The output of echo-deblending consists of two ghost-free records: one
generated by the real source at the location below the water
level and one generated by the ghost source at the mirrored
location above the water level.
Echo-deblending is independent of the complexity of the
subsurface; only what happens at and near the surface is
relevant. The actual sea state may cause the sea-surface
reflectivity to become frequency dependent (Orji et al.,
2013). Furthermore, the speed of sound in the water may
vary spatially and temporally, as it is a function of pressure,
temperature and salinity (Leroy et al., 2008). Therefore, we
made model estimation part of the deghosting algorithm
(Berkhout and Blacquière, 2016). Note that estimation of
the ghost model is of particular importance at the source
side, as interaction of the source wavefield with the water-air interface may not be linear. At the detector side the
process is simpler, not only due to the better spatial
sampling but also due to full-wavefield linearity.
Using the matrix notation (Berkhout, 1985), for each
frequency component the forward model can be written as:

S+(+zs) = G(+zs,+zs) S0(+zs),   (1a)

where ghost operator G can be expressed as:

G(+zs,+zs) = I(+zs,+zs) + W+(+zs,z0) R(z0,z0) W-(z0,+zs),   (1b)


see Figure 3a. The source depth may be spatially variant,


i.e., +zs = +zs(x,y). Matrix S+ represents the total downgoing
source wavefields and S0 the wavefield of the real sources
(subscript 0 refers to the no-ghost situation), W is an
extrapolation matrix with superscripts + and - denoting the
down- and upward direction, and matrix R refers to the
angle-dependent sea surface reflectivity.
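For a single plane-wave component, the matrix relation in equation 1b collapses to a scalar: the ghost path travels an extra two-way distance 2·zs·cosθ, so G reduces to 1 + r·exp(−iω·2zs·cosθ/c). A minimal sketch (the function and its default values are ours, not the paper's implementation):

```python
import cmath
import math

def ghost_operator(freq_hz, zs, theta_deg=0.0, r=-0.95, c=1500.0):
    """Scalar plane-wave version of G = I + W+ R W-: one extra two-way
    travel path of 2*zs*cos(theta) to the free surface, scaled by r."""
    w = 2 * math.pi * freq_hz
    delay = 2 * zs * math.cos(math.radians(theta_deg)) / c  # two-way delay (s)
    return 1 + r * cmath.exp(-1j * w * delay)

# For zs = 30 m at vertical incidence, notches (|G| minimal) sit at multiples
# of c / (2*zs) = 25 Hz, with peaks halfway between.
g_notch = abs(ghost_operator(25.0, 30.0))
g_peak = abs(ghost_operator(12.5, 30.0))
```

The notch/peak positions produced by this scalar operator are what the ceiling-limited inversion of the next equations has to contend with.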
From equation 1a it is clear that for deghosting the
computation of the inverse of the ghost operator is key. We
propose to use the following approximation:
Ĝ^-1(+zs,+zs) = |G(+zs,+zs)|^-2 G^H(+zs,+zs)  for 1/|G(+zs,+zs)| ≤ ceiling,   (5a)

i.e., outside the notch areas, and

Ĝ^-1(+zs,+zs) = ceiling |G(+zs,+zs)|^-1 G^H(+zs,+zs)  for 1/|G(+zs,+zs)| > ceiling,   (5b)

i.e., inside the notch areas.

Here ceiling is the maximum amplitude correction,
superscript H denotes the conjugate transpose, and the hat
symbol ^ indicates an estimate. Application of G^H in
equations 5a and 5b we call 'zero-phasing by correlation'.
This step guarantees that the phase of the result is always
perfect, both inside and outside the notch areas. It is a non-causal step. As far as the amplitude term is concerned, the
notch areas are treated differently than the areas away from
the notches. Outside the notch areas the inversion is
perfect. Inside the notch areas the amplitude correction is
limited by the ceiling. As a result the noise is kept under
control. The properties of this scheme are schematically
illustrated in Figure 7.
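Equations 5a and 5b amount to a stabilized inverse; a sketch under the simplifying assumption that G can be treated elementwise (sample values and names are ours):

```python
import numpy as np

def stabilized_inverse(G, ceiling):
    """Elementwise sketch of equations 5a-b: exact inversion where the
    required amplitude boost 1/|G| stays below the ceiling; elsewhere the
    amplitude is clipped at the ceiling while G^H still zero-phases the result."""
    G = np.asarray(G, dtype=complex)
    mag = np.abs(G)
    full = np.conj(G) / mag**2        # (5a): exact inverse outside the notches
    clipped = ceiling * np.conj(G) / mag  # (5b): amplitude pinned at the ceiling
    return np.where(1.0 / mag <= ceiling, full, clipped)

g = np.array([1.95, 0.05 * np.exp(0.3j)])  # one ghost peak and one notch value
g_inv = stabilized_inverse(g, ceiling=10.0)
```

Multiplying each sample of g by its stabilized inverse gives a real, non-negative result in both regimes, which is exactly the 'phase is always perfect' property claimed in the text; only the notch amplitude stays below unity.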

Figure 8: Echo-deblending method formulated as an iterative closed-loop process, i.e., the output and input are connected via a feedback loop.

To recover this potential information, a nonlinear, closed-loop inversion is optionally applied (Berkhout and Blacquière, 2016), see Figure 8.
In Figures 9a to 9f a simple data example is provided. Input
is a common-receiver gather with a source ghost, the source
depth being 30 m, see Figure 9a. In the noise-free case
deghosting is simple: no amplitude limitation is needed,
i.e., in equation 5 the ceiling is infinite, see Figure 9b.
Next, noise was added, the SNR being 30 dB, see Figure 9c.
The result of applying the same deghosting procedure is shown
in Figure 9d. Now the absence of an amplitude limitation in
the notch area leads to a large amplification of the noise:
the deghosted result is much noisier than the input
data. Application of a ceiling that was optimized for the
SNR, followed by nonlinear processing, leads to the result
shown in Figure 9f. Notice the much better signal quality of
this result compared to Figure 9d.
Dedicated low-frequency (pre)processing

Figure 7: Our deghosting treats the phase correctly everywhere. This is also the case for the amplitude outside the notch areas. Inside the notch areas, the amplitude is limited, depending on the SNR.

However, we know that the result of this non-causal
deghosting process is not perfect: in the notch areas the
amplitude corrections were limited by the ceiling. It means
that there still may be useful information present in the
notch areas, yet to be recovered.


From equation 1a it follows that, apart from the inverse of


the source ghost operator, also the inverse of the source
matrix has to be computed. The same is true for the
detector matrix in the case of the detector ghost. This
clearly illustrates the importance of a proper spatial
sampling. Examples of carpet shooting at the source side
(Walker et al., 2014) and interpolation of multi-sensor data
at the detector side (Letki and Spjuth, 2014) allow practical
deghosting in the space-frequency domain.
In addition, it is proposed that the low frequencies be
treated with special care in (pre)processing. As mentioned,
low frequencies suffer from the notch at 0 Hz. However,
the coarse sampling in conventional acquisition, in
particular at the source side, is mostly affecting the high
frequencies! In the following example it is illustrated that
low-frequency information can still be retrieved properly
from data that is acquired with a very coarse spatial source
sampling.
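The 6.25-Hz limit quoted below follows from the spatial Nyquist condition; a one-line check, assuming the minimum apparent velocity equals the 1500-m/s water velocity (the function name is ours):

```python
# Highest unaliased temporal frequency for a wavefield with minimum apparent
# velocity v_min (m/s), sampled at spatial interval dx (m): f_max = v_min / (2*dx).
def alias_free_limit(v_min, dx):
    return v_min / (2.0 * dx)

f_coarse = alias_free_limit(1500.0, 120.0)  # coarse 120-m source sampling
f_fine = alias_free_limit(1500.0, 7.5)      # original 7.5-m sampling
```

Coarsening the sampling by a factor of 16 lowers the alias-free limit by the same factor, from 100 Hz to 6.25 Hz, which is why the low frequencies survive the decimation in the example.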


Figure 9: a) Input without noise, b) deghosted result at +zs, c)


input with noise, d) deghosted result without application of a
ceiling, e) input with noise, f) deghosted result after application of
a ceiling followed by nonlinear processing in the notch areas.

In Figures 10a and b the original, well-sampled record is


shown in the space-time and wavenumber-frequency
domain respectively. After increasing the spatial sampling
interval by a factor of 16 (!), from 7.5 m to 120 m, strong
aliasing is introduced at the higher frequencies. This is
shown in Figure 10c and 10d. However, the aliasing does
not affect the frequencies up to 6.25 Hz. Therefore, they
can be interpolated to the original spacing of 7.5 m, see
Figure 10e and 10f. Such a dense sampling allows accurate
application of processes that require a proper spatial
sampling. Currently, we are investigating the possibilities
for regularization and interpolation of the very low
frequencies, also in the presence of noise. In particular, we
think of lateral stacking to improve the SNR.
Including trend information
In the last step we include the ultra-low frequencies from
trend information. In the example of Figure 1 the principle
is clearly demonstrated. To move from column 2 to column
3, we have included area 2 (see Figure 2). During the
presentation several examples will be shown.


Figure 10: In a) and b) a well-sampled input record is shown. By increasing the sampling interval by a factor of 16, from 7.5 m to 120 m, strong aliasing is introduced, see c) and d). However, the very low frequencies below 6.25 Hz are not affected! They can be perfectly recovered and interpolated, see e) and f).

Concluding remarks
To enhance the low frequencies in the seismic method, the
so-called ghost should be exploited. It means that sources
and detectors should be positioned at the optimum depth,
which is frequency dependent (0.25 λ). In the DSA concept
dedicated narrow-band sources are deployed. This makes
the DSA concept very suited to enhance the low
frequencies at the source side: each source type can be
towed at its optimum depth.
Remaining imprints of the ghost should be mitigated by
deghosting. Deghosting can be carried out perfectly in
areas away from the notches. Inside the notch areas,
nonlinear processing should further improve the result.
Dedicated processing of the low frequencies such as
interpolation, regularization and lateral stacking further
contributes to an increased bandwidth of seismic data.
The trend information related to the very low frequencies is
retrieved from velocity information as well as gravity
gradiometry.
Acknowledgments
We acknowledge the members of the Delphi consortium for
the stimulating discussions and their financial support.

EDITED REFERENCES
Note: This reference list is a copyedited version of the reference list submitted by the author. Reference lists for the 2016
SEG Technical Program Expanded Abstracts have been copyedited so that references provided with the online
metadata for each paper will achieve a high degree of linking to cited sources that appear on the Web.
REFERENCES

ten Kroode, F., S. Bergler, C. Corsten, J. W. de Maag, F. Strijbos, and H. Tijhof, 2013, Broadband seismic data – The importance of low frequencies: Geophysics, 78, no. 2, WA3–WA14, http://dx.doi.org/10.1190/geo2012-0294.1.
Berkhout, A. J., 2012, Blended acquisition with dispersed source arrays: Geophysics, 77, no. 4, A19–A23, http://dx.doi.org/10.1190/geo2011-0480.1.
Dellinger, J. A., J. T. Etgen, and G. Openshaw, 2012, Seismic acquisition using narrowband seismic sources: U.S. Patent 2012/0155217 A1.
Cambois, G., A. Long, G. Parkes, T. Lundsten, A. Mattsson, and E. Fromyr, 2009, Multi-level airgun array: A simple and effective way to enhance the low frequency content of marine seismic data: 79th Annual International Meeting, SEG, Expanded Abstracts, 152–156, http://dx.doi.org/10.1190/1.3255140.
Soubaras, R., 2010, Deghosting by joint deconvolution of a migration and a mirror migration: 80th Annual International Meeting, SEG, Expanded Abstracts, 3406–3410, http://dx.doi.org/10.1190/1.3513556.
Brink, M., and M. Svendsen, 1987, Marine seismic exploration using vertical receiver arrays: A means for reduction of weather downtime: 57th Annual International Meeting, SEG, Expanded Abstracts, 184–187, http://dx.doi.org/10.1190/1.1892129.
Amundsen, L., H. Zhou, A. Reitan, and A. B. Weglein, 2013, On seismic deghosting by spatial deconvolution: Geophysics, 78, no. 6, V267–V271, http://dx.doi.org/10.1190/geo2013-0198.1.
Beasley, C. J., R. Coates, Y. Ji, and J. Perdomo, 2013, Wave equation receiver deghosting: A provocative example: 83rd Annual International Meeting, SEG, Expanded Abstracts, 4226–4230.
Robertsson, J. O. A., L. Amundsen, and Å. Pedersen, 2014, Deghosting of arbitrarily depth-varying marine hydrophone streamer data by time-space domain modelling: 84th Annual International Meeting, SEG, Expanded Abstracts, 4248–4252, http://dx.doi.org/10.1190/segam2014-0221.1.
Ferber, R., and C. J. Beasley, 2014, Simulating ultra-deep-tow marine seismic data for receiver deghosting: 76th Annual International Conference and Exhibition, EAGE, Extended Abstracts, http://dx.doi.org/10.3997/2214-4609.20141452.
Mayhan, J. D., and A. B. Weglein, 2013, First application of Green's theorem-derived source and receiver deghosting on deep-water Gulf of Mexico synthetic (SEAM) and field data: Geophysics, 78, no. 2, WA77–WA89, http://dx.doi.org/10.1190/geo2012-0295.1.
Weglein, A. B., and J. Zhang, 2005, Extinction theorem deghosting method using towed streamer pressure data: Analysis of the receiver array effect on deghosting and subsequent free surface multiple removal: 75th Annual International Meeting, SEG, Expanded Abstracts, 2095–2098, http://dx.doi.org/10.1190/1.2148125.
Amundsen, L., and H. Zhou, 2013, Low-frequency seismic deghosting: Geophysics, 78, no. 2, WA15–WA20, http://dx.doi.org/10.1190/geo2012-0276.1.
Orji, O. C., W. Sollner, and L. J. Gelius, 2013, Sea surface reflection coefficient estimation: 83rd Annual International Meeting, SEG, Expanded Abstracts, 51–55, http://dx.doi.org/10.1190/segam2013-0944.1.
Leroy, C. C., S. P. Robinson, and M. J. Goldsmith, 2008, A new equation for the accurate calculation of sound speed in all oceans: The Journal of the Acoustical Society of America, 124, 2774–2782, http://dx.doi.org/10.1121/1.2988296.


Berkhout, A. J., 1985, Seismic migration: Theoretical aspects, 3rd ed.: Elsevier.
Berkhout, A. J., and G. Blacquière, 2016, Deghosting by echo-deblending: Geophysical Prospecting, 64, 406–420, http://dx.doi.org/10.1111/1365-2478.12293.
Walker, C., D. Monk, and D. Hays, 2014, Blended source – The future of ocean bottom seismic acquisition: 76th Annual International Conference and Exhibition, EAGE, Extended Abstracts, Th ELI2 16, http://dx.doi.org/10.3997/2214-4609.20141470.
Letki, L. P., and C. Spjuth, 2014, Quantification of wavefield reconstruction quality from multisensor streamer data using a witness streamer experiment: 76th Annual International Conference and Exhibition, EAGE, Extended Abstracts, Th ELI1 10, http://dx.doi.org/10.3997/2214-4609.20141448.


Marine seismic acquisition with 3D sensor arrays towed by wave gliders


Nick Moldoveanu, Leendert Combee, Philippe Caprioli, Everhard Muyzert, Sudhir Pai, Schlumberger; ADMA
Summary
Marine seismic acquisition has been established as a very
efficient method that employs big and powerful vessels that
can tow very large streamer spreads, for instance 12 to 18
streamers, 8 km in length, with 100-m crossline separation.
In this paper we introduce a new type of marine acquisition
that uses wave gliders (autonomous marine vehicles
powered solely by waves and solar power), which tow a
small, multimeasurement 3D sensor array. We will discuss
the results of the first field experiment with 3D sensor
arrays towed by wave gliders, and processing and survey
design aspects of this new type of marine seismic
acquisition.

In each green arm there are three hydrophones spaced at 50


cm; the hydrophones are also spaced at 50 cm in the
vertical and horizontal planes. The tube placed in the
vertical plane contains a buoyancy engine and inertial
motion sensors to measure array orientation: three-axis
accelerometers, three-axis gyroscope, three-axis magnetic
sensor and depth sensor. The yaw, roll, and pitch are
measured every second and transmitted to the acquisition
system located in the wave glider float.

Introduction
In 2013 an experiment was conducted with short streamers
attached to wave gliders during an ocean-bottom node
(OBN) survey in the Gulf of Mexico (Moldoveanu et al.,
2014). The experiment proved that wave gliders can be
used as a platform for marine acquisition if an array of
sensors is attached to them. The limited amount of acquired
data was comparable, after processing, with hydrophone
OBN data in terms of signal-to-noise ratio and frequency
content. During this experiment we realized that towing a
31-m streamer is a challenge due to the high drag, reduced
maneuverability, inaccurate receiver positioning along
the streamer, and contamination with typical streamer
noise, such as swell noise, current noise, and the noise
induced by the streamer movement. For these reasons we
decided to design a 3D sensor array to replace the streamer.
The 3D sensor array is attached to the wave glider 'sub' by
a decoupling cable (Figure 1). A detailed view of the 3D
multimeasurement sensor array (3DSA) is presented in
Figure 2. It consists of a rigid frame that has five sensor
arms (green color) placed in vertical and horizontal planes.

Figure 2. Three-dimensional hydrophone sensor array with 15 hydrophones spaced at 50 cm in X, Y and Z directions.

The seismic measurements from the hydrophones are
transmitted, by the decoupling cable and the umbilical, to
the acquisition system and stored on a high-capacity hard
drive. Seismic data are continuously recorded.
3D sensor array positioning is based on the GPS receiver
positioning of the float and on the orientation and depth
measurements.
Real-time quality control (QC) of seismic data recorded by
each hydrophone in the 3D sensor array is performed by
estimating a series of statistical attributes that are sent, by
satellite, to the QC geophysicists every 5 minutes.
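The positioning step can be illustrated by rotating local hydrophone offsets into global coordinates using the measured yaw, pitch and roll; the Z-Y-X angle convention, the single-arm layout and all numbers below are our assumptions, not the acquisition system's actual conventions:

```python
import numpy as np

def rotation_matrix(yaw, pitch, roll):
    """Z-Y-X (yaw-pitch-roll) rotation, angles in radians. This is one common
    convention; the real system may use another."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0.0], [sy, cy, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cr, -sr], [0.0, sr, cr]])
    return Rz @ Ry @ Rx

# One hypothetical 3-hydrophone arm with the 50-cm spacing from the text.
local = np.array([[0.5 * i, 0.0, 0.0] for i in range(3)])
frame_origin = np.array([100.0, 200.0, 9.93])  # from float GPS + depth sensor
global_pos = frame_origin + local @ rotation_matrix(np.pi / 2, 0.0, 0.0).T
```

With a 90-degree yaw the arm simply swings from the local x axis onto the global y axis, which is the kind of per-second correction the orientation sensors make possible.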
The first field experiment with 3D sensor arrays towed by
wave gliders was performed in May 2015, during a 3D
ocean-bottom cable (OBC) survey in the Arabian Gulf.
The main objectives of this field test were to compare
quality of the seismic data acquired with 3D sensor arrays
versus OBC data and to evaluate how well the wave glider
can hold station, maintain the desired depth, and move
from one place to another, as field operations require. In the
next section we will discuss the results of this experiment.
First field experiment with 3D sensor arrays

Figure 1. 3D sensor array attached to the wave glider sub via a


decoupling cable; the sub is connected to the float via an umbilical


The OBC survey was conducted in an area with a hard


water bottom and water depths of 20 to 25 m. It was an
orthogonal acquisition geometry with eight receiver lines,
10,000-m receiver line length, 400-m interval between
receiver lines, 10,800-m source line length and 100-m


source line interval (Figure 3a). Two source vessels were
used, each with a dual source array, and 50-m crossline
separation. The shot interval was 25 m (flip-flop). Three
wave gliders equipped with 3D sensor arrays were
deployed inside the source patch, on top of the OBC
receivers. The 3D sensor arrays were separated by 400 m in
both directions (Figure 3b). The data were acquired while
shooting two source patches, for six days. The wave gliders
were programmed to hold station by moving in a small
circle around the station location. The blue dots in Figure
3b represent the locations of the wave gliders during seven
acquisition days. The average circle radii around the
pre-plot receiver stations for the three wave gliders were 27.1 m,
18.0 m and 17.1 m (shown in red). The desired depth for the
3DSA was 10 m and the average depth achieved was 9.93
m.

Evanescent Scholte waves propagate through the water layer with
exponentially decaying amplitude and are recorded at the
3D sensor array hydrophones with weaker amplitudes.
Direct arrivals, refracted waves and seismic interference
from the far source are visible on both CRG gathers.

Figure 4. Raw (unprocessed) CRG gather for one hydrophone of


the 3DSA and one source line

(a)
(b)
Figure 3. (a) OBC source patch (red) and receiver patch (blue);
(b) Deployment of three wave gliders on top of the OBC receiver
patch

The data quality evaluation was performed by comparing


common receiver gathers from OBC and 3D sensor arrays
recorded at the same locations, and limited offset 3D
stacks. The comparison with OBC is only for the
hydrophone data.
Each shot recorded in a 3D sensor array has 15 traces,
corresponding to 15 hydrophones. Most of the processing
steps are performed in common-receiver gathers (CRG),
similar to OBN processing. A common-receiver gather
(CRG) is generated for each hydrophone and contains the
shots from a single source line. In Figure 4 we show
unprocessed 3D sensor array data from a CRG corresponding
to one hydrophone and one shot line. The typical swell
noise dominates the record, as the 3D sensor array is
deployed at 10-m depth. In Figure 5 we show a comparison
between an OBC CRG, with a 2-Hz low-cut filter applied in
the acquisition system, and a 3D sensor array CRG after
swell noise was removed. A singular-value decomposition
type algorithm was used to estimate the swell noise and
subtract it from the data (Moldoveanu, 2011). Scholte
wave energy is quite strong on the OBC CRG because it
propagates along the water-bottom interface.
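An SVD-type subtraction of trace-coherent noise can be sketched as below; this is only in the spirit of the cited method (Moldoveanu, 2011), with synthetic data and a hand-picked rank rather than the production algorithm:

```python
import numpy as np

def remove_low_rank(gather, rank):
    """Estimate the most trace-coherent components of a gather via SVD and
    subtract them, returning the residual as the denoised signal."""
    U, s, Vt = np.linalg.svd(gather, full_matrices=False)
    noise = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]
    return gather - noise

# Synthetic example: a strong "swell" component identical on all 20 traces,
# plus weak random background (stand-in for reflection signal).
rng = np.random.default_rng(0)
swell = np.outer(np.ones(20), np.sin(np.linspace(0, 3, 50)))
data = 5.0 * swell + 0.1 * rng.standard_normal((20, 50))
clean = remove_low_rank(data, rank=1)
```

Because the swell-like component is identical across traces it lives almost entirely in the first singular vector, so subtracting the rank-1 approximation removes most of its energy while leaving the incoherent part.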


(a)

(b)
Figure 5. Comparison of OBC CRG (a) vs. 3DSA CRG (b), after
swell noise was removed from 3DSA data


The frequency content of both datasets was analyzed and
an example of an amplitude spectrum is presented in Figure 6.
The amplitude spectrum was estimated in the window
marked by the blue rectangle in Figure 5. The
frequency content of the 3DSA data is slightly higher than
that of the OBC data, except at the very low frequencies,
around 5 Hz. This amplitude increase is expected because the
OBC data are recorded on the water bottom and the lower
frequencies are better preserved at a greater depth.

Figure 6. Amplitude spectra: OBC (red) vs. 3DSA (blue)

Limited-offset 3D stacks were generated for both the OBC
and 3DSA data and the results for four inlines are shown in
Figure 7. The only processing applied was a 2-Hz low-cut
filter on the OBC data and swell-noise attenuation on the
3DSA data. The signal-to-noise ratio is improved on the
3DSA due to the summing of the 15 hydrophones. This is
one of the advantages of the 3D multimeasurement sensor
array acquisition.

Maneuvering the wave glider equipped with a 3D sensor array proved to
be much easier than towing a short streamer, particularly in
a congested field.
Processing aspects of 3D sensor array data towed by
wave gliders
Processing of data recorded with the 3D sensor array is
performed in the CRG domain, as is typically done for node
surveys. A typical processing sequence could include:
harvesting the shot data from continuous data, merging the
shot positions into the 3DSA data, positioning QC based on
pressure-gradient estimations, sorting the data into common-receiver gathers, swell-noise attenuation, source-signature
shaping, sorting the data into the shot domain, receiver
deghosting based on pressure gradients, summing of the 15
hydrophones to improve the signal-to-noise ratio, data
sorting to CRG, source deghosting, surface-related multiple
attenuation, velocity model building, and depth or time
imaging.
One distinct difference between node acquisition and
3DSA towed by wave glider acquisition is that the node
location is fixed for the duration of the acquisition, while
for the wave glider, we do not have a fixed location
because the wave glider moves in a small circle around the
station, if so desired. Knowing the location of the 3DSA at
any time, it may be possible during the processing to
relocate the 3DSA data to the desired location using the
pressure gradients derived in X, Y and Z directions. This
processing capability addresses the issue of receiver
repeatability for 4D studies.
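The pressure gradients that enable this redatuming can be illustrated with centered finite differences over the 50-cm hydrophone spacing; the 3×3×3 pressure cube below is hypothetical (the real 3DSA has 15 hydrophones on cross arms, not a full cube):

```python
import numpy as np

DX = 0.5  # hydrophone spacing of the 3DSA (m)

def pressure_gradient(p, dx=DX):
    """Centered-difference (dp/dx, dp/dy, dp/dz) at the middle of a
    hypothetical 3x3x3 cube of pressure samples."""
    dpdx = (p[2, 1, 1] - p[0, 1, 1]) / (2 * dx)
    dpdy = (p[1, 2, 1] - p[1, 0, 1]) / (2 * dx)
    dpdz = (p[1, 1, 2] - p[1, 1, 0]) / (2 * dx)
    return dpdx, dpdy, dpdz

# A linear test field p = 2x + 3y - z is reproduced exactly by the stencil.
x = np.arange(3) * DX
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
grad = pressure_gradient(2 * X + 3 * Y - Z)
```

Centered differences are exact for linear fields, and over a 1-m aperture they remain accurate for the long wavelengths of interest, which is what makes gradient-based relocation of the 3DSA data plausible.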
Survey design aspects for 3DSA towed by wave glider
acquisition

Figure 7. OBC 3D stack (left) and 3DSA stack (right)

The results of this field test with 3D sensor arrays towed by
wave gliders proved that the data quality is comparable
with or better than OBC data in terms of signal-to-noise
ratio and frequency content. From an operational point of view it
was demonstrated that wave gliders can be remotely
controlled to hold station, move along a predefined path
and maintain a predefined depth.


Seismic acquisition with 3D sensor arrays towed by wave
gliders opens new possibilities for marine acquisition due
to the wave glider's capabilities to navigate, at a slow speed, in a
circle around a defined holding station and to move along
a predefined path. Based on these features, different types of
acquisition geometries can be implemented. An ocean-bottom
node-type geometry implemented with 3DSAs towed by
wave gliders has the main benefit that it does not require a
remotely operated vehicle for node deployment and
retrieval. Once all the shots are acquired for a given source
patch, the wave glider (receiver) patch will move to a new
patch location. This has the potential to significantly
reduce the operational cost.
A novel type of geometry can be defined by considering a
patch of wave gliders towing 3D sensor arrays that could be
stationary at one location, recording data for a period of
time, and after that, moving to the next location along a
predefined path (Muyzert, 2013). The sources can be in the


center of the receiver patch, distributed on a circle, and both
the dimension of the receiver patch and the receiver
sampling inside the patch are calculated based on the
required fold and maximum offset. A source patch must be
defined to cover the subsurface target area where we plan
to acquire full-azimuth data with the required maximum
offset. The number of centers of the source circles and the
interval between circles are determined based on the
required fold. Examples of the receiver patch and the
source circle in the center of the patch are shown in Figure
8a, and the centers of the source circles, for the entire
survey area, are displayed in Figure 8b. The receiver patch
will move from center to center to cover the entire source
patch. The main benefits of this type of survey design vs. a
regular node-type survey design are more uniformity in
offset and azimuth distribution, and reduced operational
cost.
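The circular-source geometry described above can be sketched in a few lines; the radius, shot count and center interval below are illustrative, not the actual design values:

```python
import numpy as np

def source_circle(center, radius, n_shots):
    """Shot positions (x, y) on a circle around a receiver-patch center."""
    ang = 2 * np.pi * np.arange(n_shots) / n_shots
    return np.c_[center[0] + radius * np.cos(ang),
                 center[1] + radius * np.sin(ang)]

def circle_centers(nx, ny, interval):
    """Regular grid of source-circle centers covering the survey area."""
    gx, gy = np.meshgrid(np.arange(nx) * interval, np.arange(ny) * interval)
    return np.c_[gx.ravel(), gy.ravel()]

shots = source_circle((0.0, 0.0), radius=4000.0, n_shots=360)
centers = circle_centers(5, 4, interval=2000.0)
```

Every receiver in the patch then sees the full 360 degrees of source azimuths at a fixed maximum offset, which is the uniformity advantage claimed over a regular node-type design.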

(a)

Discussion and conclusions
Acquisition with 3D sensor arrays towed by wave gliders
could have applications in areas where seismic data quality
acquired with OBN or OBC systems is not adequate due to
the sea-floor conditions, or in very deep waters where
ocean-bottom systems cannot be deployed. Another
potential application is to complement towed-streamer
acquisition by efficiently acquiring seismic data around
obstructions, and at long and ultra-long offsets. Joint
processing of towed-streamer data and 3DSA data is
straightforward, as both systems are based on hydrophone
measurements. Another very useful application could be to
acquire 3DSA data during a 3D VSP or a walkaway survey.
The multi-hydrophone measurements open new
possibilities in signal processing and in imaging of 3DSA
data, such as receiver deghosting, detecting the direction of
the seismic arrivals, QC of the positioning based on seismic
data, wavefield interpolation or extrapolation, and vector-acoustic imaging (Vasconcelos, 2013). Details about
specific processing enabled by the multimeasurement 3D
sensor array will be presented in a companion paper
(Caprioli et al., SEG 2016, submitted).
The first field test with a 3DSA towed by wave gliders
during an OBC survey proved to be successful from an
operational point of view and in terms of seismic data quality. A
subsequent test using 23 wave gliders was also completed
successfully. However, for future commercial applications,
where hundreds of wave gliders could potentially be used,
we have to consider how the wave gliders communicate
with each other and with a master vessel, and how the data
can be downloaded in a minimum time without affecting
production.
Acknowledgements
We acknowledge Abu Dhabi Marine Operating Company
for the permission to conduct this test during their 2015
OBC survey and to present these results; the contributions
of our colleagues Simon Baker, Bent Andreas Kjellesvig,
Edmond Yandon, Levent Akal, Mehul Supawala, the Bluefin
OBC crew, and Luis Arechiga Salinas; and the cooperation of
Liquid Robotics Incorporated during this project. We also
thank Ed Kragh, of Schlumberger Gould Research, for very
fruitful discussions on this work.

(b)
Figure 8. (a) Receiver patch (green dots) and source circle (yellow); (b) centers of the source circles (yellow dots). The receiver patch will move from center to center and record the data generated along the source circles.


REFERENCES

Moldoveanu, N., 2011, Attenuation of high energy towed streamer noise: 81st Annual International Meeting, SEG, Expanded Abstracts, 3576–3580, http://dx.doi.org/10.1190/1.3627943.
Vasconcelos, I., 2013, Source-receiver, reverse-time imaging of dual-source, vector-acoustic seismic data: Geophysics, 78, no. 2, WA123–WA145, http://dx.doi.org/10.1190/geo2012-0300.1.
Moldoveanu, N., A. Salama, O. Lien, E. Muyzert, S. Pai, and D. Monk, 2014, Marine acquisition using autonomous marine vehicle: A field experiment: 84th Annual International Meeting, SEG, Expanded Abstracts, http://dx.doi.org/10.1190/segam2014-1498.1.
Muyzert, E., 2013, Design, modeling and imaging of marine seismic swarm surveys: Schlumberger, internal report.



Baxter: a high-resolution penta-source marine 3D seismic acquisition

Ed Hager, Polarcus; Rob Kneale, Laurence Hansen, Quadrant Energy; Troy Thompson, DownUnder GeoSolutions
Summary
The penta-source towed-streamer acquisition configuration is designed to produce high-resolution cross-line data sampling with conventional acquisition technologies that do not require specialist algorithms to process, and with source deblending it can achieve an arbitrary record length. The Baxter 3D survey was acquired with the penta-source design along with a subset conventional survey for reference. The penta-source survey was successfully acquired with no significant issues encountered in the field. A preliminary fast-track processing sequence shows that a significant improvement in resolution has been achieved by the penta-source technique. The full processing sequence continues with the expectation that further improvements will be realized.

Introduction
The five-source design was conceived in November 2014 and was followed by a field test in January 2015 (Hager et al., 2015). The overall concept and test results led to the first full commercial survey, Baxter, which was acquired in November 2015. The very short time from concept to commercial production bears testament to the overall simplicity of the penta-source design.
The five sources provide the means for improving the cross-line sampling, but come at the expense of in-line fold of coverage. This compromise drives the need to reduce the source firing interval to a distance that is as small as can be tolerated technically and physically. To understand the physical limits for the shotpoint interval, a field trial was run, which showed that, as the time between shots decreased, a point was reached where the pressure in the air-gun chambers could not be maintained at a reasonable level. From this test we observed that we could fire alternate sets of air guns every 3 seconds (approximately 7 m at a boat speed of 4.5 knots). A final source interval of 9.37 m was chosen for the penta-source survey.
A smaller subset of the survey was also acquired using the more conventional parameters of two sources firing at an interval of 25 m.
The penta-source and conventional data were processed through a simple fast-track pre-stack time migration sequence so that the basic imaging response of the geometry could be seen ahead of the more sophisticated processing of the full sequence. The initial comparisons show real benefits in image resolution, even given the limitations of this fast-track processing sequence.

Penta-Source Design
The five-source design enables 6.25 m cross-line bins by deploying five sources at 12.5 m separation. Sub-arrays are used in more than one source definition, so that each source is composed of 2 sub-arrays (see Figure 1). To allow time for the air guns to recharge, the source firing order is 1-3-5-2-4. Other than the sub-array spacing and the firing order, the design is flexible in terms of gun depths and volumes.
With the penta-source design the nominal streamer separation is 62.5 m, which already makes it more efficient than an HD3D survey where streamers are 50 m apart, giving a 12.5 m cross-line bin spacing. For shallow-water surveys the limiting factor is the near offset from the source to the outer streamer, so the 62.5 m streamer separation is optimum. As water depths increase, the streamer separations can be relaxed to further improve efficiency by acquiring the data more sparsely, where we don't expect to nominally fill every bin with every offset. The natural movement of the spread with current and vessel track, combined with a fan, ensures that data is recorded in every bin. This style of acquisition provides efficiency equivalent to streamers separated at 100 m, but with vastly better cross-line spacing. This and the source design are covered in more detail by Hager et al. (2015).

Figure 1: Left, a conventional dual-source layout with 3 sub-arrays per source. Right, the five-source design where each sub-array is spaced 12.5 m apart. Sub-arrays are used in more than one source definition to give 5 sources with 2 sub-arrays each, spaced 12.5 m apart. The spacing gives 6.25 m cross-line bins.

Shotpoint Interval
The penta-source design drives us to decrease the shotpoint interval so that sufficient inline fold is recorded, which is important for processing sub-surface lines in either nominal 2D CMP or common-offset domains. The maximum shotpoint interval contemplated is probably 12.5 m, or 62.5 m between firings of the same source, and this compares favorably to the 50 m used for many conventional surveys. The downside of shorter shotpoint intervals is that the record length is reduced, or we need to remove the overlapping shot energy. Even then, the effect may not be too great: overlapping reflection energy is relative to the seabed TWT, so a 12.5 m shotpoint interval would give 5 seconds of clean data below the seabed regardless of how deep the seabed is. As most AVO or other QI pre-stack products are generally only effective within this time window, the impact of short shotpoint intervals may be low. The risk of overlapping-shot removal is relevant only to the deeper post-stack structural interpretation.
The shotpoint interval is not only a geophysical parameter; it is also governed by the physical time it takes to recharge the airgun array to the desired pressure. The five-source design used for the Baxter survey dictated that the shortest shotpoint interval possible was 3 seconds. This leads to a theoretical distance interval of about 7 m at 4.5 knots, but would require shots to be fired on time, as distance firing would be too prone to pressure problems caused by vessel speed fluctuations. Therefore, distance firing at 9.37 m was chosen to provide reasonable margins and allow pressures to be close to the 2000 psi mark. A 9.37 m shotpoint interval gives ~47 m between firings of the same source, which is similar to the conventional 50 m.
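As a quick numerical check of the shot-timing arithmetic above, the sketch below reproduces the quoted figures; the knots-to-m/s conversion factor is the only value not taken from the text, and this is an illustration rather than part of the authors' workflow.

```python
# Illustrative check of the shotpoint-interval arithmetic quoted in the text.
KNOT_MS = 0.514444                 # metres per second in one knot (assumed constant)

vessel_speed = 4.5 * KNOT_MS       # ~2.32 m/s at 4.5 knots
recharge_time = 3.0                # s, minimum time between successive shots

min_shot_interval = vessel_speed * recharge_time   # the "approximately 7 m" figure
chosen_interval = 9.37                             # m, distance-based firing interval
n_sources = 5

# With five sources firing in rotation, the same source fires every fifth shot.
same_source_interval = chosen_interval * n_sources   # ~47 m, close to conventional 50 m
time_between_shots = chosen_interval / vessel_speed  # margin above the 3 s recharge limit

print(round(min_shot_interval, 2), round(same_source_interval, 2),
      round(time_between_shots, 2))
```

The margin between the ~4 s shot time and the 3 s recharge limit is what makes distance firing robust against vessel-speed fluctuations.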

Table 1: Conventional and Penta-Source acquisition geometries

Acquisition: Baxter Survey, North West Shelf Australia


The Baxter survey was acquired for Quadrant Energy in November 2015 and is situated in the Roebuck Basin, North West Shelf, Australia, in water depths of around 2 seconds TWT. Two surveys were acquired: a small conventional survey consisting of 10 sail-lines, yielding 45 km² of full-fold data, and the main penta-source survey of approximately 400 km². Table 1 summarizes the acquisition parameters.

Figure 2: Left, Baxter survey outline in blue (approx. 400 km²); conventional survey outline in green (45 km²), with conventional sail-lines plotted; the FOC area is shown at right in red. Right, unique fold-of-coverage for the conventional survey and, far right, for the penta-source survey, shown for the first 3 offsets (500-800 m). The FOC area covers the race-track boundary where the sail-line direction is reversed (blue arrows). All displays approx. 2:1 aspect ratio.



The conventional geometry was designed to be as close as possible to the penta-source acquisition (same swath width and towing depths). The source volumes were different so that the effects of source array output could be analyzed later. Questions to be addressed are the impact of source volume and shot density on the final image: signal penetration and processing-related matters, especially in the sub-surface line phase with pre-migration signal-to-noise. Note that these are effects related to the deeper image, which are an extension to the primary purpose of the penta-source design: a shallow high-resolution image.
The effect on the image of the penta-source data with overlapping shots, compared with the conventional data, can also be studied; even though the source volumes and designs are different, the intended final result of a clean, interpretable image is the same.
The Baxter survey was acquired with the sparse design and, to see how well the coverage worked, we can look at an area in the centre of the survey, as shown in Figure 2, where we have partial success in randomization of the coverage. Some lines have no coverage for the near offsets, but even so, the gaps are mostly 1 or 2 lines (6.25 or 12.5 m) wide. Most of the coverage sits between this and the ideal, where all lines are covered. The worst gap seen is 5 lines wide, or 30 m, which is not much more than the conventional 25 m nominal cross-line spacing.
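The 6.25 m cross-line sampling underlying these coverage lines follows from simple midpoint geometry between the five sources and the streamer spread. A hypothetical sketch verifying this (the 12-streamer count is an assumed value, not stated in the abstract):

```python
import numpy as np

# Hypothetical penta-source cross-line geometry: midpoints lie halfway
# between source and receiver in the cross-line direction.
n_streamers = 12                          # assumed spread size for illustration
streamer_sep = 62.5                       # m, penta-source nominal separation
streamer_y = np.arange(n_streamers) * streamer_sep

source_sep = 12.5                         # m between the five source centres
source_y = np.arange(5) * source_sep

# Cross-line midpoint positions for every source/streamer pair
midpoints = np.sort(np.unique(
    [(s + r) / 2.0 for s in source_y for r in streamer_y]))

spacing = np.diff(midpoints)
print(spacing.min(), spacing.max())       # uniform 6.25 m cross-line sampling
```

Because five sources at 12.5 m spacing interleave exactly with 62.5 m streamers, every cross-line midpoint falls on a regular 6.25 m grid with no duplication.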

Processing
Both the penta-source and the conventional data were
processed through an identical sequence for the fast track,
except for the source debubble and de-signature to account
for the different sources. Given the similarity in the spatial
sampling of sub-surface lines, this is considered reasonable.
The fast track was processed through to isotropic pre-stack
migration with no regularization so that the effect of the
geometry could be clearly understood and seen. The full
processing sequence will include source interference
removal, full source and receiver deghosting, regularization
and anisotropic depth migration. It is important to note that
no bespoke algorithms or processing sequences are needed
to process the penta-source data.
To image the entire subset of conventional data, surrounding lines from the penta-source survey were used to satisfy the migration aperture, after matching to the conventional data with a simple scalar. For the fast track, the entire 45 km² area was migrated (green outline, Figure 2). All subsequent displays have a single matching scalar of 0.64 that adjusts for the source volume difference.
Penta and Conventional Comparisons
Figure 3 illustrates what we might expect to happen when migrating un-regularized data: migration artifacts are evident in both datasets, although more pronounced in the penta-source data, with sharply defined amplitude stripes running east-west. However, in spite of the limitations of the preliminary processing sequence, the inline and crossline displays in Figure 4 show that the overall fault definition from the penta-source data is much sharper and, in addition, fault-plane reflections are also imaged, in both the inline and crossline directions. Figure 5 shows some near-surface features below the seabed: a slice through a mass transport complex, and a coherency slice through the polygonal faulting (seen in Figure 4). In all cases the image is sharper with the smaller cross-line bin size, and the migration noise decreases with depth.

Figure 3: Water-bottom amplitude for conventional (top) and penta-source (bottom) data. Without regularization we can see migration noise in both datasets (see x2 zoom inset).

Figure 4: Inline/crossline displays (~450 ms x 2 km extent), conventional top and penta-source bottom. Faults are better defined in the crossline direction (right side) but also in the inline direction. A fault-plane reflection is present in the penta-source data (A). Notable migration noise on the penta-source data, as expected.
Conclusions

The Baxter survey was successfully acquired with no significant field issues. It demonstrates that adequate inline-fold data can be acquired with the penta-source technique for shallow high-resolution targets. Surface coverage analysis, however, indicates that the sparse element design chosen for this survey is probably at the practical limit of the technique.
The initial fast-track processing results demonstrate that the penta-source design is effective at generating high-resolution images in all dimensions. This result has been achieved even in the presence of the expected migration artifacts, given that regularization has not yet been applied to the data.
The full processing sequence, including depth imaging, is due July 2016 and will enable further investigation into some of the issues covered in this paper.

Figure 5: Top, near-surface features 116 ms below the seabed, conventional on left, penta-source on right. Bottom left, timeslice at 2700 ms through a mass transport complex; bottom right, coherency slice at 3100 ms through the polygonal faulting seen in Figure 4.



EDITED REFERENCES
REFERENCES

Hager, E., M. Rocke, and P. Fontana, 2015, Efficient multi-source and multi-streamer configuration for dense cross-line sampling: 85th Annual International Meeting, SEG, Expanded Abstracts, 100–104, http://dx.doi.org/10.1190/segam2015-5857262.1.



Full-azimuth towed-streamer acquisition and broadband processing in an obstructed area of the Gulf of Mexico

Carlos Espinoza*, Olga Zdraveva, Barbara Curd, Eugene Gridnev, Nick Moldoveanu, Schlumberger
Summary
South Timbalier is a relatively mature field located in the shelf area of the Gulf of Mexico. The oil production is mostly from shallow, post-salt reservoirs. In order to unveil potential deeper subsalt plays, while enhancing the definition of existing shallow reservoirs, a full-azimuth towed-streamer survey was acquired in 2014. In this abstract we discuss the solutions used in acquisition, data processing, velocity model building, and imaging to deliver a high-definition subsurface image.

Introduction
The South Timbalier survey was planned in an obstructed area of the Gulf of Mexico (Figure 1), with water depth varying from 40 m to 300 m. The size of the survey was 2920 km² (118 OCI blocks). Seismic data had been acquired over this area in previous years with narrow-azimuth towed-streamer (NAZ) and ocean-bottom cable (OBC) acquisition. The discovered reservoirs were mainly at shallow depths, in post-salt formations such as the Pleistocene. Typical traps are of a stratigraphic nature, and many of these are related to faults. Direct hydrocarbon detection, based on amplitude analysis and seismic inversion, is used to map the shallow reservoirs. The deeper section, including the salt and subsalt deposits, is relatively unexplored, with very few wells drilled deeper than 4500 m (15,000 feet).

Figure 1. The South Timbalier survey outline is marked in red. The existing platforms and rigs are marked in green.

The geophysical and geological objectives of the survey were to acquire broadband seismic data that would allow the shallow reservoirs to be accurately mapped, and to improve the seismic illumination and the mapping of the subsalt and salt-associated plays.
Typical marine acquisitions for shallow waters and obstructed areas are Ocean Bottom Cable (OBC) or Ocean Bottom Nodes (OBN). These methods provide wide or full azimuth, and longer offsets, which are required for improved subsurface illumination. A survey design and modeling study was performed to determine whether an alternative acquisition solution to ocean-bottom systems is feasible, using towed-streamer acquisition and the existing seismic data to complement the new data.

Acquisition solution
The proposed acquisition solution, based on the survey design and modeling study, was to use coil-type geometry (Moldoveanu et al., 2008) with a single streamer vessel towing short streamers, and an additional source vessel 5 km behind the streamer vessel (Figure 2), to acquire full-azimuth data with 10,000 m maximum offsets. The two vessels sailed on two different circles, with 5500 m radius.

Figure 2. Circular shooting with a single streamer vessel and a source vessel 5 km behind; circle radius was 5.5 km.

The streamer configuration consisted of 10 streamers of 5000 m length and 120 m separation. Towing short streamers allowed safe navigation around platforms. A single source array with a volume of 8475 in³ was deployed on each vessel; the shot interval was in time mode, equivalent to approximately 31.25 m, flip-flop shooting. The depths of the source arrays and streamers were 10 m and 12 m, respectively.
The 3D ray-tracing illumination study performed with the proposed circular acquisition geometry and the existing seismic data showed that the holes in the subsurface coverage due to the obstructions could be filled. In Figure 3a we show an illumination map for the 0-2000 m offset range using the proposed circular acquisition geometry. The holes in the illumination maps are filled if NAZ data is used together with circular acquisition data (Figure 3b).
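The circle radius and shot interval quoted above fix the number of shots per coil, which can be sketched directly; the circle-centre placement below is arbitrary and the sketch is only an illustration of the geometry, not the survey-design software.

```python
import math

# Shot geometry implied by the coil parameters quoted in the text.
radius = 5500.0          # m, circle radius sailed by each vessel
shot_spacing = 31.25     # m, approximate flip-flop shot interval along the track

circumference = 2.0 * math.pi * radius          # ~34.6 km per circle
n_shots = int(circumference / shot_spacing)     # shots per full circle

# Source (x, y) positions around one circle centred at the origin (metres)
positions = [(radius * math.cos(2.0 * math.pi * k / n_shots),
              radius * math.sin(2.0 * math.pi * k / n_shots))
             for k in range(n_shots)]

print(n_shots)           # number of shots per circle
```

Each new circle is then offset from the last, so that the overlapping coils build the full-azimuth offset distribution.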


Figure 3. Illumination map for the 0-2000 m offset range: (a) using the proposed circular acquisition; (b) using combined circular acquisition and existing NAZ data.

In order to have a short turnaround time for the acquisition, it was decided to use two identical seismic crews. The shape of the survey allowed these two crews to operate with a minimum 25 km separation distance, to minimize seismic interference. The survey was acquired in 72 days.

Processing aspects
One of the challenges in data processing was to increase the seismic bandwidth inherent in the seismic data. The strategy to achieve this was based on the following processing steps:
1) A multi-step approach to noise attenuation to minimize any residual noise, particularly at very low frequencies;
2) Apply receiver deghosting, source deghosting, and inverse-Q (phase only), before imaging;
3) Apply depth-variant inverse-Q (amplitude only) and bandwidth extension, after imaging.
The deghosting method employed was an inversion-type algorithm that solves for the upgoing wavefield, or deghosted pressure data. It requires fine receiver sampling and knowledge of source and receiver depths, water velocity, and the sea-surface reflection coefficient. By design, data acquired with circular shooting has a crossline component. However, ghost modeling with a 2D assumption still produced a good solution with short turnaround. An example is shown in Figure 4.

Figure 4. Stack before (top) and after (bottom) deghosting.

Figure 5. Kirchhoff depth migration before (top) and after (bottom) surface-related multiple attenuation.
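The deghosting step described above relies on a ghost model built from source/receiver depth, water velocity, and the sea-surface reflection coefficient. A minimal 1D (vertical-incidence) sketch of that ghost operator, using the 12 m streamer depth quoted in the text; the 1500 m/s water velocity and the -1 reflection coefficient are assumptions, and this is not the authors' inversion algorithm.

```python
import numpy as np

# 1D receiver-ghost model at vertical incidence (illustrative sketch).
depth = 12.0        # m, streamer depth from the text
v_water = 1500.0    # m/s, assumed water velocity
r = -1.0            # assumed sea-surface reflection coefficient

tau = 2.0 * depth / v_water                 # ghost delay: 16 ms two-way
freqs = np.linspace(0.0, 125.0, 1001)       # Hz

# Ghost operator G(f) = 1 + r * exp(-i 2 pi f tau); amplitude notches where |G| -> 0
G = 1.0 + r * np.exp(-2j * np.pi * freqs * tau)
first_notch = 1.0 / tau                     # 62.5 Hz for a 12 m tow depth

print(round(tau * 1000, 1), round(first_notch, 1))
```

The inversion-type deghosting effectively divides the recorded spectrum by such an operator (stabilized near the notches) to recover the upgoing wavefield.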


Attenuation of sea-surface-related multiples was another challenge. It required reprocessing all the existing NAZ data to improve data population, particularly at near offsets, and using it together with the new data for prediction and adaptive subtraction of the multiples. A 3D surface-related multiple elimination (SRME) algorithm that can handle any type of acquisition geometry was used for the prediction step (Dragoset et al., 2010). This was followed by an L1-norm cascaded adaptive-subtraction workflow. An example of Kirchhoff depth migration, before and after surface multiple attenuation, is presented in Figure 5, illustrating that the demultiple process was effective. A preliminary velocity model was used for this migration.
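The adaptive-subtraction step fits a short matching filter that shapes the SRME prediction to the recorded multiple before subtracting it. The abstract uses an L1-norm cascaded workflow; the sketch below substitutes a simpler single-window least-squares (L2) match on synthetic traces to illustrate the principle.

```python
import numpy as np

def adaptive_subtract(data, pred, flen=11):
    """Fit a short least-squares matching filter f so that pred * f ~ data,
    then subtract the matched prediction (single-window L2 sketch)."""
    pad = flen // 2
    padded = np.pad(pred, (pad, pad))
    # Columns of A are shifted copies of the prediction (convolution matrix)
    A = np.stack([padded[k:k + len(data)] for k in range(flen)], axis=1)
    f, *_ = np.linalg.lstsq(A, data, rcond=None)
    return data - A @ f

rng = np.random.default_rng(0)
pred = rng.standard_normal(400)                          # predicted multiple (SRME output)
true_mult = np.convolve(pred, [0.4, 0.9, -0.2], 'same')  # actual multiple: mismatched wavelet
primary = 0.05 * rng.standard_normal(400)                # weak primaries to preserve
data = primary + true_mult

out = adaptive_subtract(data, pred)
print(np.std(data), np.std(out))   # multiple energy is largely removed
```

An L1-norm fit, as used in the abstract, is less prone than L2 to damaging primaries that overlap the predicted multiples; the cascaded application refines the match in stages.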
Velocity model building challenges
A very accurate anisotropic velocity model is required to
properly image the small faults and the stratigraphic traps
in the shallow sediments, the deeper salt bodies, and subsalt
structures.

A tilted transversely isotropic (TTI) model building and updating process closely followed all the steps outlined in Zdraveva et al. (2012), with an additional FWI step introduced after the first CIP tomography iteration. The role of FWI was to complement tomography in the shallow section, where picking moveout might be cumbersome due to the limited number of traces and prevalent stretch energy. The key challenge for FWI was addressing shot and receiver density differences between the legacy NAZ and circular acquisition data. Limiting the data to diving waves and using a new FWI algorithm designed to mitigate cycle-skipping issues related to inaccuracies in the background model (Jiao et al., 2015) produced robust results.
Five iterations of multiazimuth, multiscale TTI tomography (Woodward et al., 2008) with steering filters (Bakulin et al., 2010, and Zdraveva et al., 2012) produced a significant uplift in quality and resolution of the post-salt section. The final velocity model is very detailed, including fault delineation and stratigraphic layers (Figure 6).

Figure 6. Kirchhoff depth migration before (top) and after five iterations of tomography; the velocity color scale is on the right side.

The subsalt velocity model building workflow involved a combination of rock-physics-constrained initial model building, calibration with existing well information, and subsalt CIP tomography with steering filters.

Depth imaging results
TTI depth imaging was performed on both the existing NAZ data and the new circular acquisition data, with 3D prestack Kirchhoff depth migration (KDM) to 90 Hz maximum frequency, and with reverse time migration (RTM) to 35 Hz maximum frequency. The output bin size for KDM was 12.5 m (inline) x 15 m (crossline) x 3.04 m (depth), and for RTM it was 25 m (inline) x 30 m (crossline) x 9.75 m (depth). For both migrations the aperture was 9000 m. Optimal stacking of the NAZ and FAZ data will be based on Vector Image Partition gathers (Zhao et al., 2014) generated during RTM. An example of simply summing the NAZ and circular acquisition RTM images is shown in Figure 7.

Figure 7. RTM image of combined NAZ and circular acquisition data.


Full azimuth and long offsets effectively illuminate subsalt targets and produce high-quality images for salt geometry interpretation and prospective geologic play identification. High-resolution KDM with fine sampling and a large aperture imaged the heavily faulted shallow section very well. Figure 8 highlights the faults and fractures visible in a shallow depth slice after the edge-detection attribute was calculated.

Figure 8. Depth slice through the KDM volume, after edge detection was applied, illustrating fractures and faults affecting the post-salt sediments.

The combination of high-resolution tomography and broadband processing reveals narrow-scale features such as channels and stratigraphic reservoirs. An example of a known post-salt, shallow reservoir, imaged with KDM, is shown in Figure 9 to illustrate that the relative amplitudes were properly preserved during data processing and imaging. KDM gathers show several instances of AVO anomalies visible at large offsets. A careful spatially variant mute was required to preserve the far-offset signals.

Figure 9. Example of a post-salt reservoir after high-resolution KDM.

Conclusions
The subsurface imaging results from this survey show very good definition of post-salt and subsalt sediments. This allows accurate mapping of the stratigraphic and fault traps, delineation of the salt bodies, subsalt structures, and salt-associated reservoirs, and of amplitude anomalies potentially related to fluid presence.
Towed-streamer circular acquisition using a single receiver vessel towing short streamers, with a source vessel placed behind the streamer vessel, proved to be an efficient method to acquire full-azimuth data with 10,000 m maximum offset in a heavily obstructed area of the Gulf of Mexico shelf. Reprocessing all the underlying NAZ seismic data was essential to complement the new data.
Improving the low-frequency content of the data, as a result of broadband processing, increased seismic resolution, continuity of the subsalt events, and phase fidelity for velocity model building and inversion.
High-resolution multiazimuth CIP tomography provided the greatest impact, enhancing post-salt definition, including existing stratigraphic and fault traps of existing reservoirs, and enabling salt delineation.
From an economic point of view, the acquisition solution used for this project is very attractive for similar obstructed areas, where OBC or OBN acquisitions are typically required.

Acknowledgements
We would like to thank Schlumberger WesternGeco Multiclient for permission to present these results, and we acknowledge the contribution of Alexander Zarkhidze, Saeeda Hydal, Clive Gerrard, Seet Li Yong, Zach Wolfe, Myla Kilchrist, and the South Timbalier data processing team.



EDITED REFERENCES
REFERENCES

Bakulin, A., M. Woodward, Y. Liu, O. Zdraveva, D. Nichols, and K. Osypov, 2010, Application of steering filters to localized anisotropic tomography with well data: 80th Annual International Meeting, SEG, Expanded Abstracts, 4286–4290, http://dx.doi.org/10.1190/1.3513765.
Dragoset, B., E. Verschuur, I. Moore, and R. Bisley, 2010, A perspective on 3D surface-related multiple elimination: Geophysics, 75, no. 5, 75A245–75A261, http://dx.doi.org/10.1190/1.3475413.
Jiao, K., D. Sun, X. Cheng, and D. Vigh, 2015, Adjustive full waveform inversion: 85th Annual International Meeting, SEG, Expanded Abstracts, 1091–1095, http://dx.doi.org/10.1190/segam2015-5901541.1.
Moldoveanu, N., J. Kapoor, and M. Egan, 2008, A single-vessel method for wide-azimuth towed-streamer acquisition: 78th Annual International Meeting, SEG, Expanded Abstracts, 65–69, http://dx.doi.org/10.1190/1.3054856.
Woodward, M., D. Nichols, O. Zdraveva, P. Whitfield, and T. Johns, 2008, A decade of tomography: Geophysics, 73, no. 5, VE5–VE11, http://dx.doi.org/10.1190/1.2969907.
Zdraveva, O., R. Hubbard, M. O'Briain, D. Zhang, and C. Vito, 2012, Anisotropic model building in complex media: Three successful strategies applied to wide-azimuth data from Gulf of Mexico: 74th EAGE Conference and Exhibition, Extended Abstracts.



Removal of Doppler Effects from Marine Vibrator OBN Seismic

Chen Qi*, University of Houston; Fred Hilterman, Geokinetics Inc.
Summary
The marine vibrator has a major advantage over the air gun in that it is more environmentally friendly, but because of source movement, a Doppler effect must be removed from the recorded trace. We propose an efficient method in the f-k domain to remove the Doppler phase distortion. Compared with previous f-k methods, ours removes the complete Doppler phase distortion and has the ability to handle spatial aliasing.
Introduction
Many methods have been developed in the past 35 years to increase the productivity of land vibrator acquisition, including simultaneous shooting, cascaded sweeps, and slip sweeps (Bagaini, 2006). For the marine case, the vibrator is more environmentally friendly because vibrators emit seismic energy within a restricted bandwidth and over a longer time interval, with peak energy significantly lower than impulse sources. With pilot-sweep correlation after recording, the effective source-signature energy is increased. The main obstacle is the Doppler effect due to the movement of the source during acquisition, which was described in the classic paper by Dragoset (1988). When the vibrator and hydrophones are moving together, as in the case of towed-streamer acquisition, reflections from horizontal layers don't experience a phase distortion, but dipping events do. However, when the receivers are stationary, such as with ocean-bottom nodes (OBN), significant Doppler phase distortions exist even for flat reflections at vibrator tow speeds of less than 4 knots. Phase corrections for the Doppler effect can be applied after correlation (Dragoset, 1988) or before correlation (Hampson and Jakubowicz, 1995). With streamer acquisition and pre-correlation phase corrections, both source-motion and receiver-motion corrections are needed. Since OBN receivers are stationary, only source-motion corrections are needed for pre-correlation dephasing. With an additional term in Dragoset's phase correction, we show that the pre- and post-correlation dephasing methods yield the same results. However, the post-correlation method conveniently allows spatially aliased events to be properly phase corrected.

Forward Modeling of Doppler Effect


Similar to definitions given by Aldridge (1992), a swept-frequency signal is defined as

s(t) = a(t) \sin[\theta(t)],   (1)

where a(t) is the amplitude function and \theta(t) is the phase function. The phase function is obtained by integrating the frequency function f(t):

\theta(t) = \theta_0 + 2\pi \int_0^t f(u)\, du,   (2)

where \theta_0 is a constant. As a side note, a(t), f(t), and \theta(t) are not the instantaneous attributes of amplitude, frequency, and phase associated with the complex-valued analytic signal (Aldridge, 1992).
For simplicity, a linear sweep is used to analyze the Doppler effect. The frequency function is defined by the sweep's initial frequency f_0, final frequency f_1, and length T as

f(t) = f_0 + \frac{f_1 - f_0}{T}\, t,   0 \le t \le T.   (3)

The phase function is derived from equations (2) and (3) as

\theta(t) = \theta_0 + 2\pi \left( f_0 t + \frac{(f_1 - f_0)\, t^2}{2T} \right).   (4)
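Equations (1)-(4) can be checked with a short script that builds a linear sweep; the 10-80 Hz band and 1 s length below are illustrative values, not the paper's parameters.

```python
import numpy as np

# Linear sweep from equations (1)-(4); parameters here are illustrative only.
f0, f1 = 10.0, 80.0     # Hz, initial and final sweep frequencies
T = 1.0                 # s, sweep length
dt = 0.001              # s, sample interval
theta0 = 0.0            # constant phase term

t = np.arange(0.0, T, dt)
f = f0 + (f1 - f0) * t / T                                           # equation (3)
theta = theta0 + 2 * np.pi * (f0 * t + (f1 - f0) * t**2 / (2 * T))   # equation (4)
s = np.sin(theta)                                                    # equation (1), a(t) = 1

# Differentiating the phase recovers the frequency function, consistent with (2)
f_check = np.gradient(theta, dt) / (2 * np.pi)
print(np.allclose(f_check[1:-1], f[1:-1], atol=0.1))
```

Since equation (4) is quadratic in t, a central-difference derivative of the phase reproduces the linear frequency ramp essentially exactly away from the endpoints.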
The Doppler Effect is easily synthesized by forward
modeling the acquisition process. Two modeling methods
have been proposed previously. The first method mentioned
by Dragoset calculates the impulse response of diffraction
scatterers and sums the diffraction responses together. The
second method proposed by Hampson and Jakubowicz uses
the extended convolutional model (multi-channel
convolution in time and space) to express the forward
process of the marine vibrator acquisition. Our method is
closer to the one proposed by Dragoset.
Using geometric ray tracing, reflections from flat or dipping
interfaces are computed for each time sample of the sweep
signal as the source moves. The source positions are updated
for each time sample of the source sweep. For ease of
understanding, a stationary OBN receiver is moved from the
sea floor to the surface in Figure 1. As the boat moves from
the sweep's starting position to its position at the end of the
sweep, the receiver trace is recorded with a time delay t1 with
respect to the sweep [Figure 1(b)]. If the boat were stationary,
the receiver trace and the pilot sweep would be identical after
eliminating the first-break time t1. However, as shown in
Figure 1(c), the time-shifted recorded trace differs from
the pilot sweep due to the Doppler effect. The example
shown in Figure 1(c) used unrealistic parameters (boat
speed = 100 m/s) so that differences between the receiver
trace and the pilot sweep are distinguishable for a sweep
length of 1 second. A more realistic model is built (Figure 2) and a common-receiver gather with pilot-sweep correlation (Figure 3) is shown with the linear sweep parameters described in Table 1. The reflection event
generated by the flat sea floor should be zero-phase after
correlation with the pilot trace. However, as shown in
Figures 3(a) and 3(b), wavelets with negative and positive
time shifts appear asymmetric because of opposite phase
dispersion.
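The per-sample forward modeling described above can be sketched as follows; the single straight-ray geometry, nearest-sample deposition and parameter values are simplifying assumptions, not the authors' implementation:

```python
import numpy as np

def moving_source_trace(sweep, dt, x_rcv, x0, v, c=1500.0, nt=4000):
    """Deposit each sweep sample at the receiver with a source position
    updated per time sample (hypothetical 1-D geometry, speed v in m/s)."""
    trace = np.zeros(nt)
    for i, amp in enumerate(sweep):
        t_emit = i * dt
        x_src = x0 + v * t_emit            # source position at emission time
        delay = abs(x_rcv - x_src) / c     # straight-ray travel time
        k = int(round((t_emit + delay) / dt))
        if k < nt:
            trace[k] += amp                # nearest-sample deposition
    return trace
```

Correlating such a trace with the pilot sweep (e.g. via np.correlate) reproduces the asymmetric wavelets seen in Figure 3 when v is nonzero.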

Figure 3. (a) Common-receiver gather with 20 m shot interval. (b) All traces in (a) are statically aligned to the arrival time of the normal-incidence reflection at 800 ms.

Figure 1. (a) Receiver and source are arranged at sea level. (b) The receiver and pilot traces are overlain. (c) The receiver trace is shifted by time t1 to emphasize the Doppler effect against the pilot sweep.

An apparent dip exists for the event in Figure 3(b) because of the gradual change in the phase dispersion curves from the -600 m trace to the +600 m trace. The phase spectra in Figure 4 illustrate this change in the dispersion curves.

Figure 4. Phase spectra of traces shown in Figure 3(b) with time zero set to the alignment time at 800 ms. Doppler phase distortion is observed on traces with non-zero source-receiver offsets.

Table 1. Linear sweep parameters for synthetics in Figure 3. v is the boat speed.

Figure 2. A more realistic model with the sea floor at 600 m depth. The common-receiver gather shown in Figure 3 is acquired from -600 m to 600 m offsets with a constant boat speed. The shot point interval is 20 m. The acoustic velocity in water is 1500 m/s.

Doppler Phase Dispersion

The removal of the Doppler effect can be done either pre- or post-correlation with the pilot sweep. The pre-correlation solution was discussed by Hampson and Jakubowicz, and it was slightly more accurate than the original post-correlation method for a linear sweep. However, both methods are equally accurate with the additional term we included. An advantage of the post-correlation method is its ability to handle spatial aliasing by applying linear moveout (LMO) before dispersion corrections.

Previous literature indicates that the Doppler phase dispersion can be predicted for a linear sweep using

φ(f) = 2πλT f² / (f₁ − f₀),                               (5)

λ = v p.                                                  (6)

The phase φ(f) is predicted with knowledge of the sweep duration (T), the sweep start frequency (f₀), the sweep end frequency (f₁) and the Doppler factor (λ). The Doppler factor is a parameter that Dragoset related to the speed of the boat (v) and the ray parameter (p). Though equation (5) predicts the phase dispersion well, the prediction accuracy is
not enough to completely remove the Doppler effect compared to the correction method applied on uncorrelated data. The modified dispersion equation by Aldridge (1992) is

φ(f) = 2πλT f (f − f₀) / (f₁ − f₀).                       (7)

Figure 5 compares the phase spectra with dispersion equations (5) and (7) for the -600 m offset trace shown in Figure 3(b).
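The difference between the two predictions can be examined numerically; the equation forms below follow the reconstructions of equations (5) and (7), and the Doppler-factor value is illustrative:

```python
import numpy as np

# Phase-dispersion predictions of equations (5) and (7) for a linear sweep
# (illustrative values: f0=10 Hz, f1=100 Hz, T=10 s, lam = v*p).
f0, f1, T, lam = 10.0, 100.0, 10.0, 0.001
f = np.linspace(f0, f1, 256)
phi_dragoset = 2.0 * np.pi * lam * T * f**2 / (f1 - f0)          # equation (5)
phi_aldridge = 2.0 * np.pi * lam * T * f * (f - f0) / (f1 - f0)  # equation (7)
# The two predictions differ by a term linear in f, i.e. a residual that is
# largest when the start frequency f0 is far from zero:
residual = phi_dragoset - phi_aldridge
```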

Figure 5. Equation 7 (red) predicts the phase dispersion (black) better than equation 5 (blue).

Phase-correction in f-k Domain

According to equation (7), phase-correction filtering could be applied to a common-receiver gather in the τ-p domain. After the τ-p transform, each trace represents a plane wave with a single slope p. Referring to equations (6) and (7), once the shooting parameters are defined, the phase dispersion is only related to the slope p and the boat speed v, which is usually known. For each trace in the τ-p domain, phase-correction filtering is implemented by first designing an all-pass phase filter in the frequency domain according to equation (7), Fourier transforming it back to the time domain, and then convolving the designed filter with the specified τ-p trace.

However, there are artifact disadvantages with the τ-p transform (Yilmaz, 2001). To avoid such difficulties, we developed an f-k domain phase-correction method using the slice theorem. According to the slice theorem (Kak and Rosenfeld, 1982), a trace in the τ-p domain is projected to a radial line in the f-k domain with

k = −p f,                                                 (8)

where k represents wavenumber. The workflow for f-k domain phase correction for a linear sweep is:
1. Sort data into a common-receiver gather.
2. f-k transform the data.
3. Calculate filter values for each grid point (k, f):

F(k, f) = e^{iφ(k,f)} = exp[−2πi v T k (f − f₀) / (f₁ − f₀)].   (9)

4. At each grid point, multiply the data value by F(k, f).
5. Inverse transform the filtered data from the f-k domain to the t-x domain.

Discussion

For linear sweeps, dispersion corrections based on equation (7) completely remove the Doppler dispersion, and the forward and inverse f-k transforms are stable and efficient. Synthetic seismic data shown in Figure 3(a) are phase corrected using the f-k method, and the data after correction are shown in Figure 6(a). To validate the effectiveness of the phase-correction process, the same synthetic seismogram is generated but with a stationary source. This ideal seismogram does not suffer the phase dispersion because the speed of the boat is 0 m/s and no Doppler effect exists. The subtraction error between the corrected data and the ideal seismogram [Figure 6(b)] is hardly observable except for the end traces, which are affected by the boundary. We have compared our result against the Hampson and Jakubowicz method and the subtraction errors are essentially the same.
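The workflow steps can be sketched in NumPy; sign conventions and the handling of negative frequencies depend on the FFT definition and shooting direction, so this is an assumption-laden sketch rather than the authors' code:

```python
import numpy as np

def fk_phase_correction(gather, dt, dx, v, f0, f1, T):
    """All-pass Doppler phase correction in the f-k domain.

    gather: (nt, nx) common-receiver gather; v: boat speed (m/s);
    f0/f1/T: linear-sweep start/end frequencies (Hz) and duration (s).
    """
    nt, nx = gather.shape
    G = np.fft.fft2(gather)                      # f-k transform
    f = np.fft.fftfreq(nt, d=dt)[:, None]        # temporal frequency (Hz)
    k = np.fft.fftfreq(nx, d=dx)[None, :]        # wavenumber (1/m)
    # Equation (9) with lambda = -v*k/f substituted; |f| is used so the
    # filter stays Hermitian and the filtered gather remains real-valued.
    F = np.exp(-2j * np.pi * v * T * k * (np.abs(f) - f0) / (f1 - f0))
    return np.real(np.fft.ifft2(G * F))          # filter and inverse transform
```

For aliased events such as first arrivals, LMO would be applied in the x-t domain before calling this routine and removed afterwards, as described in the text.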


Figure 6. (a) Seismic data shown in Figure 3(a) phase-corrected using the f-k domain method. (b) Subtraction error between the phase-corrected data shown in (a) and the synthetic data generated with a stationary source (v = 0 m/s).

Both phase correction methods generate severe artifacts when data are spatially aliased. This problem can be avoided with our method by applying a linear moveout before the f-k transform. Figure 7(a) illustrates the direct-arrival part of the synthetic seismogram shown in Figure 3(a); it can be seen that first-break picking is more difficult as a result of Doppler effects. If no preprocessing is done, the artifacts from spatially aliased events corrupt the Doppler distortion even more. This is shown by the subtraction error with the ideal data (Figure 7b). In order to avoid additional phase error due to spatial aliasing, a linear moveout (LMO) is therefore applied first to the negative offsets and then to the

Page 185

Downloaded 12/16/16 to 79.62.242.250. Redistribution subject to SEG license or copyright; see Terms of Use at http://library.seg.org/

positive offsets. On field data, a series of cones are defined with different LMO. The subtraction error after the addition of LMO shows little error except near the junction of the plus and minus LMO (Figure 7c). Of course, this method depends on dividing the common-receiver gather into segment cones with consistent dip.

Figure 7. (a) Direct arrival for the synthetic data shown in Figure 3a. (b) The synthetic data shown in (a) are phase-correction filtered and subtracted from the ideal data (stationary source). (c) Before phase-correction filtering, LMO is applied to avoid spatial aliasing; the subtraction from the ideal data shows little error.

Field Data

Ocean-bottom receiver field tests with different marine vibrator sweep parameters were conducted by Geokinetics. One experiment had sweep parameters similar to those in Table 1, except that the sweep was highly non-linear. Each sweep was designed to accommodate a specific exploration goal. Analytic expressions of the sweeps were not necessarily known, but of course the pilot sweeps were saved. The data were corrected for phase distortion in the f-k domain with the following procedure:
1. Generate synthetics of the direct arrival (slope = pDA) with and without boat movement.
2. Fourier transform both synthetics and subtract their phase spectra to obtain the phase distortion φdist(f) at pDA.
3. Create the f-k matrix for the phase-correction filter:
   a. Compute, for cell i, pi = -ki / fi.
   b. Cell i phase distortion: (pi / pDA) φdist(fi).
4. f-k transform the common-receiver gather.
5. Apply the phase-distortion filter and inverse f-k transform.

A common-receiver gather with a non-linear sweep (Figure 8a) is phase corrected with the above procedure (Figure 8b). Spatial aliasing was removed with LMO before phase correction. After phase correction, the direct arrival and sea-floor reflection have more concentrated wavelet energy and fewer side lobes.

Figure 8. (a) Input common-receiver gather from a non-linear sweep. (b) Field data shown in (a) after the Doppler effect and spatial aliasing are removed.

Conclusions


We have discussed five methods, which were tested, to remove Doppler distortions from marine vibrator data. They included pre-correlation and post-correlation methodologies. All five methods were equally successful in removing the Doppler effects. Pre-correlation methods (Hampson and Jakubowicz) are commonly preferred because analytic expressions of the sweep are not necessary and the methodology is stable. However, our post-correlation methods are stable, accurate and efficient in that they only require f-k forward and inverse transforms of the common-receiver gather. An advantage of the f-k method is its ability to handle spatial aliasing of events such as first arrivals with pre- and post-LMO applications in the x-t domain. For sweep signals that are difficult to express analytically, a forward modeling approach can be used to numerically estimate the phase dispersion filter, which is then applied in the f-k domain.
Acknowledgement
We thank John Archer, Sam Sampanthan, Lee Bell and Greg Fookes from Geokinetics for their assistance and guidance.

EDITED REFERENCES
Note: This reference list is a copyedited version of the reference list submitted by the author. Reference lists for the 2016
SEG Technical Program Expanded Abstracts have been copyedited so that references provided with the online
metadata for each paper will achieve a high degree of linking to cited sources that appear on the Web.
REFERENCES

Aldridge, D. F., 1992, Mathematics of linear sweeps: Canadian Journal of Exploration Geophysics, 28, no. 1, 62-68.
Bagaini, C., 2006, Overview of simultaneous vibroseis acquisition methods: 76th Annual International Meeting, SEG, Expanded Abstracts, http://dx.doi.org/10.1190/1.2370358.
Dragoset, W. H., 1988, Marine vibrators and the Doppler effect: Geophysics, 53, no. 11, 1388-1398, http://dx.doi.org/10.1190/1.1442418.
Hampson, G., and H. Jakubowicz, 1995, The effects of source and receiver motion on seismic data: Geophysical Prospecting, 43, no. 2, 221-244, http://dx.doi.org/10.1111/j.1365-2478.1995.tb00133.x.
Kak, A. C., and A. Rosenfeld, 1982, Digital picture processing: Academic Press Inc.
Pramik, B., L. M. Bell, A. Grier, and A. Lindsay, 2015, Field testing the AquaVib: an alternate marine seismic source: 85th Annual International Meeting, SEG, Expanded Abstracts, http://dx.doi.org/10.1190/segam2015-5925758.1.
Rietsch, E., 1977, Computerized analysis of vibroseis signal similarity: Geophysical Prospecting, 25, no. 3, 541-552, http://dx.doi.org/10.1111/j.1365-2478.1977.tb01186.x.
Sheriff, R. E., 2002, Encyclopedic dictionary of applied geophysics: Society of Exploration Geophysicists, http://dx.doi.org/10.1190/1.9781560802969.
Yilmaz, O., 2001, Seismic data analysis: Society of Exploration Geophysicists, http://dx.doi.org/10.1190/1.9781560801580.


Optimization of sea surface reflection coefficient and source geometry in conventional dual
source flip/flop marine seismic acquisition
Maksym Kryvohuz*, Shell International Exploration and Production Inc.; Xander Campman, Shell Global
Solutions International
Summary
In the present paper we demonstrate how redundant near-field data from a dual airgun-array marine seismic source can be used for estimation of the sea surface reflection coefficient, optimization of the geometry of the airgun arrays, and inversion for the source signatures of individual airguns. The latter allows prediction of accurate 3D far-field signatures of the gun arrays. The optimized sea surface reflection coefficient is found as a function of frequency and angle of incidence at the sea surface, and agrees with the Rayleigh model of reflection from rough surfaces.

The process of inversion of near-field data for airgun


notionals, however, depends on the accuracy of several
model parameters, such as the relative locations of airguns
with respect to each other, the magnitude of sea surface
reflection coefficient, trajectories of air bubbles released by
airguns, etc. Unfortunately, the geometry of airgun arrays is
subject to constant fluctuations due to sea surface waves,
water currents and rising air bubbles. Additionally, the
effective sea surface reflection coefficient fluctuates
depending on the roughness of the sea surface. Uncertain
values of the latter parameters lead to errors in the
estimated notionals and thus to inaccuracies in predicting
far-field signatures of airgun array (Ni et al., 2012).

Introduction
Estimation of the signatures of marine seismic sources is
important for successful offshore exploration for
hydrocarbons. It serves as an input to many seismic
processing algorithms. Areas where knowledge of an
accurate source signature is of great value include
deconvolution, multiple attenuation, modelling and
inversion, 4D, AVO analysis, reservoir monitoring, and
analysis of marine multicomponent recordings.
As noted by Ziolkowski (1991) in his paper "Why don't we
measure seismic signatures?", it is possible to use
measurements to obtain the signatures of marine seismic
sources rather than deriving their wavelets using statistical
methods or modelling. Several methods have been
proposed in the past for the determination of signatures of
airgun arrays from measurements (Ziolkowski et al., 1982;
Landrø et al., 1994; Laws et al., 1998; Amundsen, 1993),
of which the method based on near-field measurements
(Ziolkowski et al., 1982) seems to be the simplest and most
practical. Deriving signatures from measurements promises
to handle the phase and amplitudes of the far-field
estimates better than modeled signatures do, in particular at
low frequencies. The Ziolkowski method is based on the
assumption that an array of N interacting airguns (air
bubbles) can be treated as N non-interacting point sources.
The total wavefield of the gun array is then a superposition
of the fields from N notional point sources. Near-field
recordings of the airgun array's acoustic field at N different
places (i.e., by N different hydrophones installed on the
airgun array) are sufficient to perform the inversion of such
near-field data to find the N source signatures of the notional
point sources.


Direct measurement of the coordinates of each individual
airgun is difficult at present due to the low accuracy of GPS
positioning devices. Even more complicated is deducing
the magnitude of the sea surface reflection coefficient. Its
dependence on frequency and angle of incidence at the sea
surface was shown in several studies (Orji et al., 2013;
Klüver, 2015). In practical applications, the sea surface
reflection coefficient often has to be guessed by assigning
some value greater than -1.
In this paper we show that although parameters such as
those discussed above are difficult to measure directly,
their values can be estimated indirectly from redundant
near-field data. Indeed, as noted in the beginning of this
section, one needs only N near-field hydrophones (NFHs) to
obtain N airgun notionals. If the number of NFHs is greater
than N, then the NFH data are redundant and we can use
these extra data to extract additional information, such as
information on the array geometry or the sea surface
reflection coefficient. Such redundancy, for instance, is
achieved in a conventional flip/flop dual-source acquisition,
in which two airgun arrays fire in turns, one after another
(Fig. 1). If NFHs are installed on both arrays, then at every
shot the number of recording NFHs is twice the number of
firing airguns. Such data redundancy allows us not only to
invert for airgun notionals but also to estimate other
unknown parameters, as explained in the next section.
Theory
In this section we review the method of extraction of airgun
notionals from near-field data, and show how it can be
extended to optimization of additional parameters.


Figure 1: Dual source in flip/flop marine seismic acquisition. Gray cylinders represent airguns, green cones represent NFHs, blue spheres
represent air bubbles released by airguns of the firing array, and the blue plane represents the sea surface.

Estimation of airgun source signature from near-field data

Ziolkowski et al. (1982) proposed a method for the
prediction of far-field signatures of airgun arrays from
near-field measurements. The main assumption of the
method is that the air bubbles released by airguns can be
treated as point sources with effective wavelets (notionals)
sj(t). The notionals sj(t) are not known a priori, but can be
extracted from the recordings of near-field hydrophones
(NFHs) installed on the airgun array. The pressure field
produced by an airgun array, as recorded by the i-th NFH, is
then

pᵢ(t) = Σ_{j=1}^{N} [1/rᵢⱼ(t)] sⱼ(t − rᵢⱼ(t)/c) + R ∗ Σ_{j=1}^{N} [1/rᵍᵢⱼ(t)] sⱼ(t − rᵍᵢⱼ(t)/c),      (1)

where rᵢⱼ(t) and rᵍᵢⱼ(t) are the source-hydrophone and ghost-hydrophone separations, which depend on time due to

relative horizontal and vertical motion of air bubbles, N is
the number of airguns, and R is the frequency- and
angle-dependent sea surface reflection coefficient. N independent
measurements pᵢ(t) provide N different superpositions of
sⱼ(t), which should be sufficient to solve for the N unknown
functions sⱼ(t). Once the sⱼ(t) are determined, equation (1) can
be used to calculate the 3D far-field signature of the airgun
array. To find sⱼ(t) from Eq. (1), one can introduce the vectors

p = {p₁(t₀), ..., p₁(t_M), p₂(t₀), ..., p_N(t_M)},
s = {s₁(t₀), ..., s₁(t_M), s₂(t₀), ..., s_N(t_M)},        (2)

and rewrite Eq. (1) in the form of a system of linear equations

p = Z s,                                                  (3)

where the operator Z acts on s according to Eq. (1).
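As a sketch of solving the linear system in equation (3), with Z replaced by a random stand-in matrix (the real Z encodes the near-field geometry of equation (1)):

```python
import numpy as np

# Sketch: with the operator Z discretized as a matrix, the notional vector s
# follows from ordinary least squares. Rows exceed columns when the NFH
# count exceeds the gun count, i.e. the system is overdetermined.
rng = np.random.default_rng(1)
n_samples, n_unknowns = 240, 60
Z = rng.normal(size=(n_samples, n_unknowns))   # stand-in, not a real geometry
s_true = rng.normal(size=n_unknowns)
p = Z @ s_true                                  # noise-free synthetic NFH data
s_est, *_ = np.linalg.lstsq(Z, p, rcond=None)   # least-squares notionals
```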


One can see from Eq. (1) that the matrix Z depends on several parameters: coordinates of the airguns, coordinates of the NFHs, the sea surface reflection coefficient, trajectories of air bubbles, etc. Denoting all these parameters with θ, equation (3) takes the following form:

p = Z(θ) s,                                               (4)

in which the matrix Z is now a function of the parameters θ. Uncertain values of θ will influence the accuracy of the inversion for the airgun notionals s. This is illustrated, for instance, in Fig. 2, where an incorrect value of the sea surface reflection coefficient resulted in boosting of the source-signature spectrum around 100 Hz. It is therefore desirable to know the true values of these parameters, and one way to do so is to solve Eq. (4) for both θ and s. The latter can be achieved if the number of NFHs is greater than the number of airguns, i.e., when the system of equations (4) is overdetermined.
Nonlinear optimization

Eq. (4) is to be solved for both θ and s in the least-squares sense:

min_{θ,s} ‖p − Z(θ) s‖.                                   (5)

From Eq. (5) one can notice that the minimization over s is trivial, and for a given θ the solution is

s(θ) = (Z(θ)ᵀ Z(θ))⁻¹ Z(θ)ᵀ p.                            (6)

Substitution of Eq. (6) back into (5) reduces the original minimization problem to the following:

min_θ ‖p − Z(θ) s(θ)‖.                                    (7)

Reduction of the original separable least-squares problem, Eq. (5), to the smaller least-squares problem, Eq. (7), is known in the literature as the Variable Projection (VP) method (Golub et al., 1973). The VP method is known to converge in fewer iterations than the standard Gauss-Newton method, and results in a fast solution of the minimization problem (7). In the next section we provide several examples of such optimization.

Figure 2: Comparison of predicted (blue) and measured (red) source signature at one of the NFHs on the idle array. The blue curve in the top plot was obtained by using the model reflection coefficient R = −1; in the bottom plot an optimized reflection coefficient was used.

Figure 3: Synthetic test of the proposed optimization algorithm. Black spheres represent true positions of airguns, red spheres represent assumed positions of airguns (left pictures show geometries of the port array, right pictures the starboard array). Top pictures (a) show positions of airguns at the beginning of the optimization cycle. Bottom pictures (b) show converged positions of airguns.

Examples

Synthetic test

We first illustrate the performance of the optimization algorithm (5)-(7) on a synthetic example, see Fig. 3. We consider a dual source consisting of two airgun arrays, each having 21 airguns. The arrays are separated by 50 meters from each other. We perturb the geometry of these arrays by adding random cross-line and in-line displacements to the airguns, and also perturb the depths of selected airguns by +/-20 cm. To each pair of airgun and NFH we assign an angle- and frequency-dependent sea surface reflection coefficient that fluctuates around its average value given by the Rayleigh model (Brekhovskikh and Lysanov, 2003). We assign synthetic notionals to each airgun, generate synthetic NFH data, and then use these data in the optimization algorithm described by Eqs. (5)-(7) to search for the airgun notionals, the geometry of the arrays and the sea surface reflection coefficient, starting with a nominal regular arrangement of airguns in the array and a constant reflection coefficient of −1. One can see from Fig. 3 that the optimization algorithm recovers the geometry of the airgun arrays well. We note, however, that the depths of individual airguns were not recovered, since the present dual-source setting is more sensitive to cross-line displacements than to displacements in depth. The modeled notionals and the reflection coefficient have been recovered as well.

Real data examples

We have applied our optimization algorithm to near-field data from two different surveys. Each survey was done with a dual-array source that has 21 airguns and airgun clusters in each array. In one survey, the distance between the airgun arrays was 25 meters; in the other survey, the distance was 50 meters. The airgun arrays were towed at a depth of 8 meters.
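The variable-projection reduction of Eqs. (5)-(7) can be illustrated with a toy one-parameter scan; the model Z(θ) below is a synthetic stand-in, not a physical airgun model:

```python
import numpy as np

# Toy variable projection: for each trial theta, s is eliminated by least
# squares (equation (6)) and the residual of equation (7) is evaluated.
rng = np.random.default_rng(2)
A, B = rng.normal(size=(120, 30)), rng.normal(size=(120, 30))

def Z(theta):                 # hypothetical parameterized operator
    return A + theta * B

theta_true = -0.92            # e.g. an effective reflection coefficient
s_true = rng.normal(size=30)
p = Z(theta_true) @ s_true

def vp_residual(theta):       # equation (7) after eliminating s via (6)
    s_hat, *_ = np.linalg.lstsq(Z(theta), p, rcond=None)
    return np.linalg.norm(p - Z(theta) @ s_hat)

thetas = np.linspace(-1.0, -0.8, 201)
theta_est = thetas[np.argmin([vp_residual(th) for th in thetas])]
```

A gradient-based outer loop (e.g. Gauss-Newton on the projected residual) would replace the grid scan when θ contains many parameters, as in the geometry optimization described here.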



In Fig. 4 we show the optimized sea surface reflection coefficient estimated from the near-field data of each survey (one shot) and compare it with the Rayleigh reflection coefficient. One can see that the optimized reflection coefficients indeed depend on frequency and angle of incidence similarly to the Rayleigh model. This fact suggests that in future optimizations of the sea surface reflection coefficient it is sufficient to parameterize the reflection coefficient in Rayleigh's form, which depends on only one parameter: the RMS of the sea-wave heights.
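A sketch of this one-parameter Rayleigh parameterization follows; the formula is reconstructed from the Figure 4 caption, and the normalization should be checked against Brekhovskikh and Lysanov (2003):

```python
import numpy as np

# Rayleigh-model sea-surface reflection coefficient parameterized by the
# RMS wave height sigma (m), as reconstructed from the text:
#   R(f, theta) = -exp(-2 * (2*pi*f*sigma*cos(theta)/c)**2)
def rayleigh_reflection(f_hz, theta_rad, sigma=0.25, c=1500.0):
    arg = 2.0 * np.pi * f_hz * sigma * np.cos(theta_rad) / c
    return -np.exp(-2.0 * arg**2)

# The ghost weakens (|R| < 1) toward higher frequencies and rougher seas:
R_low, R_high = rayleigh_reflection(5.0, 0.0), rayleigh_reflection(150.0, 0.0)
```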
In Fig. 5 we also show the optimized geometry of the airgun arrays at two consecutive shots of the flip array in the second survey (with 50-meter spacing between arrays). We note that the accuracy of recovery of the cross-line positions of individual airguns depends on the sampling rate of the NFH data. NFH data from the second survey, sampled at 0.1 ms, should be able to recover the positions of individual airguns with an accuracy of up to 20 cm, while NFH data from the first survey were acquired at 2 ms sampling and thus lack such precision.

Figure 5: Optimized geometries of the port (left) and starboard (right) arrays from the second survey for two consecutive shots of the port source.

Figure 4: Frequency- and angle-dependent reflection coefficients. (a) Optimized reflection coefficient from the first survey; (b) optimized reflection coefficient from the second survey. In both cases, the optimized reflection coefficients were searched in the form R(f, θ) = A exp(Bf + Cf² + Dθ), and the optimization was done over the 4 parameters A, B, C and D. (c) Rayleigh reflection coefficient R(f, θ) = −exp(−2(2πfσ cos(θ)/c)²).

Conclusions


Near-field measurements can be used not only for inversion of airgun notionals but also to recover shot-specific parameters of airgun arrays, such as the coordinates of individual airguns and the sea surface reflection coefficient. Near-field data from conventional dual-source surveys provide the possibility to run such extra optimizations. The latter improves predictions of the 3D source signature and should lead to higher-quality data processing, such as designature and deghosting.
Acknowledgements
The authors thank Shell for giving permission to publish this work, and Shell UK and Gabon for allowing us to show the results. We also acknowledge Henk Vocks for support regarding the real data example.

EDITED REFERENCES
REFERENCES

Amundsen, L., 1993, Estimation of source array signatures: Geophysics, 58, 1865-1869, http://dx.doi.org/10.1190/1.1443402.
Brekhovskikh, L. M., and Yu. P. Lysanov, 2003, Fundamentals of ocean acoustics: Springer.
Golub, G. H., and V. Pereyra, 1973, The differentiation of pseudo-inverses and nonlinear least squares problems whose variables separate: SIAM Journal on Numerical Analysis, 10, 413-432, http://dx.doi.org/10.1137/0710036.
Klüver, T., 2015, Derivation of statistical sea-surface information from dual-sensor towed streamer data: 77th Annual International Conference and Exhibition, EAGE, Expanded Abstracts, http://dx.doi.org/10.3997/2214-4609.201413187.
Landrø, M., J. Langhammer, R. Sollie, L. Amundsen, and E. Berg, 1994, Source signature determination from ministreamer data: Geophysics, 59, 1261-1269, http://dx.doi.org/10.1190/1.1443683.
Laws, R., M. Landrø, and L. Amundsen, 1998, An experimental comparison of three direct methods of marine source signature estimation: Geophysical Prospecting, 46, 353-389, http://dx.doi.org/10.1046/j.1365-2478.1998.980334.x.
Ni, Y., C. Niang, and R. Siliqi, 2012, Monitoring the stability of airgun source array signature: 82nd Annual International Meeting, SEG, Expanded Abstracts, http://dx.doi.org/10.1190/segam2012-0875.1.
Orji, O. C., 2013, Sea surface reflection coefficient estimation: 83rd Annual International Meeting, SEG, Expanded Abstracts, http://dx.doi.org/10.1190/segam2013-0944.1.
Ziolkowski, A., 1991, Why don't we measure seismic signatures?: Geophysics, 56, 190-201, http://dx.doi.org/10.1190/1.1443031.
Ziolkowski, A., G. Parkes, L. Hatton, and T. Haugland, 1982, The signature of an air gun array: Computation from near-field measurements including interactions: Geophysics, 47, 1413-1421, http://dx.doi.org/10.1190/1.1441289.


Ocean Bottom Node Acquisition Optimization


Vijay P. Singh*, Anupama Venkataraman, Rachel Ho, Erik Neumann, Bernard Laugier
ExxonMobil Exploration Company (Div. Exxon Mobil Corporation), ExxonMobil Upstream
Research Company, ExxonMobil Development Company
Summary
To understand the limitations of the legacy WAZ data over a deep-water subsalt target in a GOM play, and to determine if new acquisition could provide uplift, we applied an integrated workflow to analyze and optimize the acquisition design. The integrated workflow includes data and acquisition-geometry analysis, ray tracing, finite-difference modeling and migration, data sensitivity analysis, and economic analysis. Technical and operational constraints led us to recommend an ocean-bottom node (OBN) survey for this prospect. Modeled comparisons of the current geometry with new acquisition geometries suggest potential uplift from new acquisition. Preliminary processing of the acquired OBN data shows the improvements that were predicted by the survey-design exercise.

overburden. The salt geometry results in small incidence angles at the target reservoir, which yield sub-optimal imaging and large depth uncertainty. High-effort proprietary processing of a combined WAZ/NAZ survey improved the image. However, the improvement was limited by the constraints of these legacy datasets.
To explore whether new seismic acquisition and advanced processing could overcome the legacy NAZ/WAZ data limitations, we decided to model and analyze full-azimuth and larger-offset acquisition. Moreover, large stand-off zones for streamer vessels due to in-sea and sub-sea installations make towed-streamer acquisition sub-optimal in the area of investigation. Deep-water OBN naturally offers seismic data rich in offset and azimuthal content, with a broadband response.

Introduction
As exploration and new field-development projects move towards geologically complex regions (sub-salt, sub-basalt, Arctic), new challenges in acquiring and processing seismic data emerge. Often, in these complex regions, current acquisition geometries have limited azimuths and/or offsets and hence provide sub-optimal images. Moreover, in-sea and sub-sea installations place restrictions on streamer acquisition, both because of stand-off zones for streamer vessels and because of noise from the drilling activity. In such cases, acquisition design should be based on operational as well as technical constraints.

Limitations of Legacy Data and Acquisition Geometry

Our first goal was to understand the acquisition limitations of the legacy WAZ data by analysing the final seismic images and gathers from these data. We made the following observations: i) poor imaging and resolution in the target region, ii) limited angle range in RTM angle gathers, and iii) relatively poor sensitivity to different salt scenarios in the mini-basin. We performed both ray tracing and finite-difference (FD) modelling to validate and understand the reasons for the current data limitations. Ray-based analysis serves two purposes: first, it is a powerful screening tool that highlights limitations in existing acquisition and aids in delineating geometries which could provide potential illumination uplift. Second, the granularity of ray-based analysis provides several metrics that enable us to place quantitative constraints and narrow down the range of parameters needed to proceed to the next step of design refinement using computation-intensive finite-difference modelling and imaging.

The ray-tracing modelling was performed on the target horizon using the final WAZ velocity model and the legacy WAZ acquisition geometry. Several attributes derived from ray-based modelling were analyzed. Here we discuss only two of these attributes: Simulated Migration Amplitude (SMA) and average incidence angle.

SMA is an informative illumination map that allows us to estimate the reflection amplitudes at the target horizon, similar to those observed in a Kirchhoff prestack depth migration. The SMA map shows small amplitudes in the northeast portion of the map (Figure 1C). In the same region we observe poor illumination in the seismic data. This shows that poor illumination of the subsurface is the reason for the deterioration in image quality in this region.

The case study presented here is related to a deep


and complex sub-salt target in the deep water Gulf of
Mexico, where multi-billion dollar development can be
impacted by uncertainties due to poor sub-surface imaging
over part of the field. The prospect is covered with several
legacy state-of-the-art wide and narrow azimuth surveys
(WAZ and NAZ). Because of the small incidence angles at
the deep target depths, the legacy wide azimuth survey is
akin to a narrow azimuth survey. This limits effectiveness
of velocity analysis and imaging of the steeply dipping
flanks in the area of interest. We developed and
implemented an integrated survey design workflow to
optimize the acquisition parameters for illumination of the
target reservoir. We then uniquely adopted our processing
flow and applied high-end imaging technology to image the
complex subsalt reservoir.
Case study
In this sub-salt prospect, the poorly imaged target
area lies below a deep mini-basin and complicated

2016 SEG
SEG International Exposition and 86th Annual Meeting

Page 193

Downloaded 12/16/16 to 79.62.242.250. Redistribution subject to SEG license or copyright; see Terms of Use at http://library.seg.org/

Average incidence angle is another important attribute that helps in understanding the effect of seismic acquisition geometry on velocity estimation, AVO analysis, and lateral seismic resolution. The modeled average incidence angle map shows that the incidence angles are around 10° to 15° (Figure 1F) in the poorly illuminated area. This is similar to the angle range observed in the RTM angle gather data (Figure 1E). These small angle ranges are insufficient for proper illumination of the sub-salt targets and make the legacy WAZ geometry not truly wide-azimuth at the target depths of interest.

The two attributes above, SMA and average incidence angle, derived from ray-based modelling, show that poor illumination and a limited incidence-angle range can explain the observed deterioration in seismic image quality.
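For intuition on why deep targets see such small angles, a straight-ray, constant-velocity sketch gives an upper bound on the incidence angle at a flat reflector; real raypaths through salt bend and reduce the angle further. The offset and depth values below are illustrative assumptions, not survey parameters:

```python
import math

def incidence_angle_deg(offset_m, depth_m):
    """Straight-ray incidence angle at a flat reflector for a surface
    source-receiver pair (constant-velocity approximation)."""
    return math.degrees(math.atan2(offset_m / 2.0, depth_m))

# Illustrative values: a 9 km offset over a ~10 km deep target gives a
# much smaller angle than the same offset over a 3 km target.
print(round(incidence_angle_deg(9000, 10000), 1))  # ~24.2 degrees
print(round(incidence_angle_deg(9000, 3000), 1))   # ~56.3 degrees
```

In practice, ray tracing through the velocity model, as done in this study, is required because salt refraction shrinks the angles well below this straight-ray bound, toward the 10° to 15° observed here.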
We designed an experiment to test the sensitivity of the WAZ data. In this experiment we modified the original velocity model in the mini-basin by placing several diffractors and generated new synthetic data with this modified model. We then migrated the new synthetic data with the original model instead of the mini-basin diffraction model. We observed that the migrated image is similar to that of the original model and shows no hint that a wrong model is being used for migration. This suggests that the WAZ data have poor sensitivity at that depth and that the migrated image merely captures an imprint of the velocity model. This supports our hypothesis that the legacy WAZ data are insufficient for resolving the complicated salt body shape inside the deep mini-basin.
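The diffractor test can be sketched as a simple model perturbation. The grid size, perturbation strength, and diffractor count below are illustrative assumptions; the actual test used the project's 3D velocity model:

```python
import numpy as np

def add_diffractors(vel, dv=500.0, n=5, seed=0):
    """Return a copy of a 2D velocity model [m/s] with n point diffractors
    (velocity perturbations) scattered through its deeper half, mimicking
    the mini-basin perturbation test."""
    rng = np.random.default_rng(seed)
    out = vel.copy()
    nz, nx = out.shape
    for _ in range(n):
        iz = rng.integers(nz // 2, nz)  # deeper half only
        ix = rng.integers(0, nx)
        out[iz, ix] += dv
    return out

vel = np.full((100, 200), 1500.0)   # constant background model (assumed)
perturbed = add_diffractors(vel)
```

Migrating synthetics generated in `perturbed` with the unperturbed `vel`, and seeing no difference in the image, is the tell-tale of poor velocity sensitivity described above.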
New Acquisition Design
The modeling results strongly suggest that the legacy WAZ acquisition geometry is insufficient for proper sub-salt velocity analysis because it has inadequate angle coverage and poor sensitivity to changes in the velocity model. To explore a new acquisition design that overcomes the WAZ data limitations, we modeled and analyzed full-azimuth and longer-offset acquisition geometries. Large stand-off zones for streamer vessels due to in-sea and sub-sea installations make towed-streamer acquisition sub-optimal in the area of investigation. Ocean-bottom nodes (OBN) can be deployed using remotely operated vehicles (ROVs) with an accuracy of a few meters in areas with significant currents or anticipated obstructions. Moreover, OBN acquisition naturally offers full-azimuth, long-offset, and broadband data. These properties, together with the low ambient-noise environment of receivers placed on the deep sea floor, have made OBN particularly appealing for the high-fidelity imaging required for reservoirs entering their development and production phases.
The acquisition design was optimized to give the best illumination and coverage at the sub-salt target depths. At the same time, the limited number of available nodes and the high acquisition cost had to be considered. With these objectives in mind, we addressed the following node-survey design issues: i) node coverage required for full coverage and minimum-offset coverage for optimal imaging, ii) node spacing, iii) shot coverage for the given offset, iv) shooting direction, and v) recording time.
To determine the optimal offsets required to illuminate the sub-salt structure, we performed ray tracing on the final WAZ velocity model and sub-salt target horizons. Ray-tracing results, along with an analysis of seismic aperture, phase change, incidence angle, hit maps, and shot-receiver azimuth, led us to the conclusion that offsets of at least 12 km are required for proper imaging. Illumination analysis in the azimuth domain and flower-plot analysis show that full-azimuth (FAZ) acquisition is required, although the contribution from the NE is slightly larger (Figure 3). OBN acquisition is one of the few options that best illuminates the sub-surface from multiple azimuths at a range of offsets.
Once we had estimated the offset and azimuth range required for proper illumination, we proceeded to determine the shot coverage and maximum recording time for the given OBN layout. To estimate shot coverage, we added the first Fresnel zone area to the edges of the target image area. We then mapped the ray coverage in the shot domain to determine the surface shot coverage that results in a sufficient number of rays hitting this expanded target area (Figure 3). Most of the rays come from within a 12 km radius of the center, and they gradually diminish with offset. The resulting shot coverage has an oval shape elongated in the NE direction, indicating that most rays come from the NE. Therefore, NE is the preferred shooting direction, and with this configuration the boat makes the fewest turns.
Since we are working with static recording spreads, even with so-called rolling geometries where a portion of the receiver spread is moved between successive shots, the time taken to complete all the shots dictates the overall rate of acquisition. We computed two-way times from all shots to the target and back to the entire receiver spread, and added an additional 2 seconds to the maximum travel time to capture the diffraction events. Although OBNs record data continuously, estimating the required recording time helps in optimizing boat speed and selecting the shooting configuration (such as flip-flop shooting).
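The record-length estimate can be sketched as a straight-ray two-way time plus the 2 s diffraction tail. The rms velocity and geometry below are assumed placeholders, not values from the project model:

```python
import math

def max_record_time(max_offset_m, target_depth_m, v_rms=2500.0, tail_s=2.0):
    """Two-way straight-ray time from the farthest shot down to the target
    and back to the far edge of the spread, plus a fixed tail to capture
    diffractions. v_rms and the geometry are illustrative assumptions."""
    one_way_s = math.hypot(max_offset_m / 2.0, target_depth_m) / v_rms
    return 2.0 * one_way_s + tail_s

print(round(max_record_time(16000, 10000), 1))  # ~12.2 s
```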
OBN layout is one of the most critical aspects of the OBN survey-design process, and limited node availability makes it very challenging. One has to simultaneously optimise node coverage and node spacing. Sufficient node coverage is required so that we record most of the reflected energy and have enough aperture for migration. Node coverage was optimised based on the reflection ray-hit density at the water bottom. We realized that there were not enough nodes available for deployment to capture all of the reflected energy (Figure 3B). Therefore, we experimented with different node spacings.
Larger node spacing allows for greater areal OBN coverage. Node- and shot-spacing tests (Olofsson et al., 2012; Wang and Hatchell, 2013) demonstrate that, even for a relatively shallow target, varying node spacing from 300 m to 1000 m has little impact on structural imaging. To understand the impact of node spacing on illumination, we conducted finite-difference modelling using different node spacings with a wavelet at the estimated peak frequency, and then performed Reverse Time Migration (RTM) on the simulated data. We observed no significant differences in the final RTM images. We observed some swing noise in the image for the largest node spacing, but there were no differences in imaging resolution. The analytically computed first Fresnel zone radius at the target depth is around 1500 m. This implies that at 600 m node spacing the Fresnel zone is sampled by ~25 nodes, and at 1000 m node spacing by ~9 nodes. These 9 nodes are sufficient to sample the Fresnel zone; as long as the Fresnel zone is well sampled by the nodes, there should be no difference in structural imaging. These analyses suggest that for structural imaging we could choose node spacings up to 1000 m; however, we decided to be conservative and chose a 400 m staggered-grid node layout. In general, nodes at the periphery of the OBN layout provide aperture for migration. Therefore, variable node spacing can be used to obtain maximum aperture, a practice that is not widely used in industry but could be considered a viable option for future OBN acquisition, especially when node availability is limited. Variable node spacing would reduce dependency on traditional streamer data for migration and velocity model building. In our case we simulated and recognized the advantage of variable node spacing; however, we had sufficient nodes to cover the desired area.
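The Fresnel-zone arithmetic in this paragraph can be reproduced directly. The velocity and frequency passed to `fresnel_radius` below are illustrative assumptions, while the node counts for a 1500 m radius match the ~25 and ~9 quoted above:

```python
import math

def fresnel_radius(v_avg_mps, depth_m, freq_hz):
    """First Fresnel zone radius r = sqrt(lambda * depth / 2) in the
    zero-offset, monochromatic approximation."""
    wavelength = v_avg_mps / freq_hz
    return math.sqrt(wavelength * depth_m / 2.0)

def nodes_per_fresnel_zone(radius_m, spacing_m):
    """Nodes of a square grid spanning the Fresnel-zone diameter."""
    per_side = round(2.0 * radius_m / spacing_m)
    return per_side * per_side

print(nodes_per_fresnel_zone(1500.0, 600.0))   # 25
print(nodes_per_fresnel_zone(1500.0, 1000.0))  # 9
```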
After estimating the OBN acquisition parameters, the next objective was to understand the sensitivity of the new acquisition design. We simulated data with the mini-basin diffraction model using the OBN acquisition geometry and migrated the simulated data with the initial velocity model. The resulting image shows strong indications that the initial model is incorrect. Thus, the OBN data have greater sensitivity to changes in the salt model than the legacy WAZ data.

Preliminary OBN results
The ray-tracing analysis shows that acquisition using the OBN geometry produces significantly better illumination in the target area. Ray tracing also places constraints on the travel time and aperture to be used in the finite-difference modelling. Comparison of images generated for the two geometries from finite-difference modelling and imaging confirms that the OBN acquisition results in a significant uplift over the legacy WAZ data in the poorly illuminated areas. We also used the finite-difference modelling and imaging to study the sensitivity of the acquisition design to node spacing and to uncertainties in the velocity model, thus gaining insight into the sub-surface features that control image quality. We conclude that the OBN geometry is suitable for improved imaging of the deep target.

Preliminary processing results from the newly acquired OBN data show improvements that are consistent with the survey design. Broader data bandwidth and higher signal-to-noise ratio are observed in both the pre- and post-salt sections (Figures 4A and 4B). Larger incidence angles and wider azimuthal coverage also aid the velocity update in otherwise poorly illuminated areas such as the deep mini-basin and the reservoir area beneath complex salt geometry.

Conclusions
We applied an integrated workflow to analyze the legacy acquisition geometry and data and optimized a new acquisition design for imaging a deep-water sub-salt target. The integrated workflow includes data and acquisition-geometry analysis, ray tracing, finite-difference modeling and migration, data-sensitivity analysis, and economic analysis. Technical and operational constraints led us to recommend an ocean-bottom node survey. A synthetic comparison of the legacy geometry with new acquisition geometries shows uplift from the proposed acquisition. Preliminary processing results from the OBN survey show improvements in data quality that were predicted by the survey-design study.

Acknowledgements
We thank Exxon Mobil Corporation management for permission to present this work.

Courtesy of WesternGeco

Figure 1: Poor image area (A and B) in the legacy data and the corresponding illumination map (C); the very good correlation between model and real data illustrates that the poor image area is due to poor illumination. (D) and (E) show offset and angle image gathers around the well; the maximum angle at the target is 10° to 12°, which matches the modeled incidence angle (F). The limited angles are not sufficient for effective velocity analysis.

Courtesy of FairfieldNodal

Figure 2: SMA maps for OBN using 8, 12, and 16 km offsets. Increasing the offset from 8 km to 12 km significantly improves illumination; the improvement from 12 km to 16 km is very small.

Courtesy of FairfieldNodal

Figure 4: Comparison of the final legacy WAZ image and the minimally processed OBN image above salt (A and B) and below salt (C and D) shows that the OBN data have higher resolution both above and below salt and provide an opportunity to update the velocity model. The same velocity model was used to create both images.

Figure 3: (A) Shot ray-hit density in the image area for a 12 km nominal offset; ray hits gradually diminish away from the center. This optimized shot coverage is used to design the shot layout and shooting direction. (B) Receiver ray-hit density in the image area; red is high and blue is low hit density. Receiver coverage was planned so that most of the high-hit-density area is covered.


EDITED REFERENCES
Note: This reference list is a copyedited version of the reference list submitted by the author. Reference lists for the 2016
SEG Technical Program Expanded Abstracts have been copyedited so that references provided with the online
metadata for each paper will achieve a high degree of linking to cited sources that appear on the Web.
REFERENCES

Olofsson, B., P. Mitchell, and R. Doychev, 2012, Decimation test on an ocean-bottom node survey: Feasibility to acquire sparse but full-azimuth data: The Leading Edge, 31, 457–464, http://dx.doi.org/10.1190/tle31040457.1.
Wang, K., and P. Hatchell, 2013, Impact of receiver decimation and mispositioning on time-lapse OBN
data: 75th Annual International Conference and Exhibition, EAGE, Extended Abstracts,
http://dx.doi.org/10.3997/2214-4609.20130128.


Optimal towing depth for marine seismic data: minimizing the noise from normal modes
Toan Dao* and Martin Landrø, Norwegian University of Science and Technology
Summary
The sea layer is an effective medium for transmitting sound waves. The acoustic signal from noise sources can be detected at hydrophones tens to hundreds of kilometers away, at any depth. The noise level is not monotonically distributed in the water column. Contrary to the common belief that it is quieter in deeper water, which generally applies to low-frequency surface waves, we find that the spatial distribution of noise levels is non-monotonic with respect to depth and is closely related to normal modes. Normal modes are waves that travel within the water layer: they propagate along the seafloor, reflect back from the free surface, and are trapped in the water column. Based on the theory of normal modes, we find that in a 135 m water column the noise level at 8 m depth is less than at 60 m, and greatest in between. We test this hypothesis using a unique dataset acquired offshore Norway that employed three streamer configurations: constant at 8 m, varying between 15 and 35 m, and constant at 60 m depth. The finding in the field data is consistent with the theory. Based on this, we suggest determining, from theory, an optimal towing depth for a given area at which the normal-mode signal is minimal.
Introduction
Noise is an integral part of seismic data that places a major constraint on processing and interpretation workflows. In time-lapse seismic, data quality has reached a plateau where it is extremely difficult to further improve repeatability due to uncontrollable noise sources other than the seismic acquisition itself. Therefore, it is desirable to avoid these noise sources. Recent advances in broadband seismic, which primarily attempt to preserve low frequencies (ten Kroode et al., 2013), favor greater towing depths. The source depth has also been studied with respect to preserving as much low-frequency content as possible (Parkes and Hegna, 2011; Landrø and Amundsen, 2014). The marine environment is quieter at greater depth because hydrostatic noise, the dominant noise source that masks raw seismic records, decays considerably with depth. Therefore, it is reasonable to assume that towing deep is optimal for marine streamer data. We investigate this assumption using a unique dataset acquired offshore Norway in which three different streamer configurations were employed: an 8 m constant towing depth, a varying ("snake") 15-35 m towing depth, and finally a constant towing depth of 60 m. More details on the acquisition and purpose of this dataset can be found in Dhelie et al. (2014). A major part of the hydrostatic noise is removed from the data using a low-cut filter. Contrary to the stated assumption, we find that the relationship between normal-mode-generated noise and depth is not monotonic and, more interestingly, that the noise level is minimal at 8 m, greatest over the snake data's depth range, and intermediate at 60 m depth. This finding is consistent with normal-mode theory (Pekeris, 1948; Ewing et al., 1957; Landrø and Hatchell, 2012), which predicts the ordering of the noise levels in space. Normal modes are the dominant wave modes at offsets much greater than the water depth; they travel along the water layer, reflecting at the seabed and at the free surface. We conclude by suggesting an optimal towing depth for streamer data based on the height of the water column.

Normal modes
Given a simple two-layer model in which the first layer represents the water column and the second layer is a half-space, the density and velocity of each layer are denoted ρ₁, c₁ and ρ₂, c₂ (Figure 1). Let c be the phase velocity, ω the angular frequency, and k the wavenumber; the displacement potential at offset x and depth z for a given frequency and mode number n is given by Ewing et al. (1957):

φ(x, z, ω, n) = S(ω) A(γₙ) sin(γₙz) exp[i(ωt − kₙx)] / √(kₙx),   (1)

where

γₙ = √(ω²/c₁² − kₙ²),   (2)

A(γₙ) = sin(γₙz₀) / { H − [sin(γₙH) cos(γₙH)]/γₙ − (ρ₁²/ρ₂²) [sin²(γₙH) tan(γₙH)]/γₙ },   (3)

kₙ is the solution corresponding to mode n of the period equation

tan(kH √(c²/c₁² − 1)) = −(ρ₂/ρ₁) √(c²/c₁² − 1) / √(1 − c²/c₂²),   (4)

and S(ω) is the source term; z₀ denotes the source depth and H the water depth.

The sum of the potential over frequencies for a fixed mode number gives one mode among the many modes that contribute to the potential field. To illustrate the normal modes, we create a model in which the velocities of the water layer and the seabed are 1500 m/s and 1700 m/s, respectively. The density ratio between the seabed and the water column is assumed to be 1.8. Figure 2 shows the first constituent modes at a fixed offset. The shapes of the constituent modes resemble standing waves, and the number of peaks from the surface to the seabed is exactly the mode number. Each mode only exists above a cut-off frequency: lower modes have lower cut-off frequencies and higher modes have higher ones. The span of each mode in the frequency spectrum also varies; the mode number is proportional to the bandwidth that the mode occupies.
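For this model (c₁ = 1500 m/s, c₂ = 1700 m/s, H = 135 m), the cut-off frequencies follow the standard Pekeris-waveguide result fₙ = (2n − 1) c₁ / [4H √(1 − c₁²/c₂²)], which can be evaluated directly; mode 1 comes out near 5.9 Hz, consistent with the roughly 5 Hz low-cut applied to the field data later:

```python
import math

def cutoff_freq_hz(n, c1=1500.0, c2=1700.0, H=135.0):
    """Cut-off frequency of mode n in a Pekeris waveguide (standard
    result; the defaults are the model values used in this example)."""
    return (2 * n - 1) * c1 / (4.0 * H * math.sqrt(1.0 - (c1 / c2) ** 2))

for n in (1, 2, 3):
    print(n, round(cutoff_freq_hz(n), 1))  # 5.9, 17.7, 29.5 Hz
```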

Figure 1: A two-layer model where the first layer represents the water column lying on top of the half-space that represents the seabed. H = 135 m in our example.

Figure 3: Noise level predicted by normal modes when the source is a 5-80 Hz square wave and is (a) shallow and (b) at the seabed.

Figure 2: The displacement potential of each mode before the source term is applied, at 5 km offset. The constituent modes resemble standing waves; the number of peaks equals the mode number.
The source term S(ω) influences which modes will be dominant in the data. If a 35 Hz Ricker wavelet is the source wavelet, the dominant modes will be modes 2 and 3. Looking ahead to the field data, we computed the frequency response of the noise data and their average spectra (Figures 4 and 5). We notice that most of the noise likely linked to normal modes is below 80 Hz; the higher-frequency noise observed in the 8 m data is vessel noise. We make the realistic assumption that most noise sources are either very shallow (ship noise) or deep at the seafloor. If we weight each frequency between 5 and 80 Hz equally, the amplitude at different depths caused by normal modes from a single distant source is as illustrated in Figure 3.


This relationship suggests that, for the same noise sources located 30 km or farther away, the amplitude at a given depth decreases only very slowly with distance. The noise level caused by a source 30 km away, recorded at 30 m or 60 m depth, is comparable to that of the same source 5 km away recorded at 8 m depth.
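A qualitative version of the depth profile in Figure 3 can be computed by solving the period equation (4) for the trapped modes at each frequency and summing |sin(γₙz)| with equal weight per frequency between 5 and 80 Hz. This sketch ignores the mode excitation and geometric-spreading factors and uses the two-layer model values from this abstract:

```python
import math

C1, C2, RHO, H = 1500.0, 1700.0, 1.8, 135.0  # model values of this abstract

def mode_gammas(freq_hz):
    """Vertical wavenumbers gamma_n of the trapped modes at one frequency,
    found by bisection on the period equation tan(gamma*H) = -RHO*gamma/b."""
    w = 2.0 * math.pi * freq_hz
    gmax2 = w * w / C1 ** 2 - w * w / C2 ** 2
    gmax = math.sqrt(gmax2)

    def residual(g):
        b = math.sqrt(gmax2 - g * g)  # decay constant in the half-space
        return math.tan(g * H) + RHO * g / b

    gammas = []
    n = 1
    while True:
        lo = (2 * n - 1) * math.pi / (2.0 * H) * (1.0 + 1e-9)
        hi = min(n * math.pi / H, gmax) * (1.0 - 1e-9)
        if lo >= hi:
            break  # mode n (and all higher modes) is below cut-off
        for _ in range(80):  # bisection: residual < 0 at lo, > 0 at hi
            mid = 0.5 * (lo + hi)
            if residual(mid) < 0.0:
                lo = mid
            else:
                hi = mid
        gammas.append(0.5 * (lo + hi))
        n += 1
    return gammas

def relative_noise(z, freqs=range(5, 81)):
    """Relative normal-mode noise level at depth z, equal weight per Hz."""
    return sum(abs(math.sin(g * z)) for f in freqs for g in mode_gammas(f))
```

Scanning `relative_noise` over z from 0 to 135 m gives a profile that is small near the free surface and non-monotonic below, qualitatively like Figure 3.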
Field data experiment
The streamer data used in this study were acquired in the North Sea over a relatively flat area where the water depth is 135 m on average. The dataset is divided into three parts that employ different streamer geometries: 8 m constant depth, "snake-like" depth varying from 15 m to 35 m, and 60 m constant depth. The number of shot points is 2000. The offsets used as input for computing rms amplitudes range from 3.5 to 6 km. The data are otherwise kept unaltered in order to preserve the physics; the only processing is the application of a Butterworth 5 Hz low-cut filter, 5 Hz being the cut-off frequency of the first mode. Figure 3 shows a representative shot record from the experiments. We observe from Figure 3 (top) that the 8 m and 60 m streamers are flat for offsets larger than 3 km.
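The rms-amplitude measurement can be sketched as follows; a hard frequency-domain low-cut stands in for the Butterworth filter, and the synthetic input is an assumption (the paper's input is the 3.5-6 km offset window of the real noise gathers):

```python
import numpy as np

def lowcut_rms(traces, dt_s, fcut_hz=5.0):
    """RMS amplitude per trace after a hard frequency-domain low-cut
    (a simple stand-in for the paper's 5 Hz Butterworth low-cut).
    traces: (ntrace, nsamp) array; dt_s: sample interval in seconds."""
    spec = np.fft.rfft(traces, axis=-1)
    freqs = np.fft.rfftfreq(traces.shape[-1], dt_s)
    spec[:, freqs < fcut_hz] = 0.0
    filtered = np.fft.irfft(spec, n=traces.shape[-1], axis=-1)
    return np.sqrt(np.mean(filtered ** 2, axis=-1))

# A 2 Hz tone is removed by the low-cut; a 20 Hz tone keeps its rms.
t = np.arange(2000) * 0.004
r = lowcut_rms(np.vstack([np.sin(2 * np.pi * 2 * t),
                          np.sin(2 * np.pi * 20 * t)]), 0.004)
```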


Figure 3: A representative shot gather and its corresponding streamer geometry. The noise data are extracted from the area in the blue box.

Figure 4 shows the frequency spectra of the noise data at 8 m, 30 m, and 60 m depth. In all three cases, the noise spectra drop significantly in amplitude for frequencies above 80 Hz. For the 8 m data, the noise at 80 Hz and above, visible in the figure between shot points 1200 and 2000, is vessel noise, which is not present in the 30 m and 60 m data. Figure 5 shows the average spectra of the noise at the three depths. Figure 6 shows the rms amplitude along the shot line at the different depths using the 5-80 Hz frequency band. The vessel noise cannot be completely removed by the 5-80 Hz bandpass filter. For most of the shot line, the rms amplitude of the noise at 8 m is the smallest of the three depths considered. If the vessel noise were not present in the 8 m data, we could safely infer that the noise level at 8 m would be smaller than at 60 m. After the data are bandpass filtered, the rms amplitude of the 8 m data decreases where the vessel noise is. Even though noise is present beyond 80 Hz in the 30 m and 60 m data, its amplitude is negligible compared with the lower-frequency amplitudes.

Discussion
The normal-mode amplitude decays with the inverse of the offset. Accordingly, the amplitude ratio of the noise levels at different depths should ideally be constant with respect to offset. However, the seabed is not a homogeneous medium: its velocity and density vary slowly but have an effect on the amplitude. Irregularities at the seabed also contribute to noisy recorded responses, yet the responses remain quite consistent with the normal-mode ordering. The three parts of the data were acquired at different times and in different weather conditions. The 8 m data were acquired in 11-14 knot wind, the 30 m data in 28-30 knot wind, and the 60 m data in 8-15 knot wind. We speculate that the noisier 30 m data could be attributed to this large difference in weather conditions. However, Wenz (1962) reported that below 100 Hz there is little or no dependence of the noise on wind speed and sea state, while some other observers have correlated the weather with frequencies as low as 50 Hz.

Figure 5: Average noise spectrum of 2000 noise gathers for the 8 m, 30 m, and 60 m data.
Marine seismic noise is extremely complex and often cannot be fully understood or quantified. Besides the normal-mode noise that we propose as the main cause of the ordering of noise levels at different depths, we should name a few other significant noise sources: vessel noise, sea-surface and wind- or weather-related noise, traffic and other man-made noise, and turbulent flow noise close to the streamer. Among these known noise sources, those that are far away from the streamers can contribute to the normal-mode noise. Weather-related noise is uncontrollable. Turbulence and flow noise can be mitigated by using super-hydrophobic streamer coatings (Elboth et al., 2012); this latter noise source does not contribute to the normal-mode noise. It is important to stress that the normal-mode signal created by the source in a marine seismic experiment is not a problem for seismic imaging. What we focus on in this abstract is the background normal-mode noise created by previous shots and by other noise sources in the water column.
Conclusion
We use theory to predict the noise level caused by normal modes at different depths in the water column and find that the theory is consistent with seismic field data. From streamer data acquired at 8, 30, and 60 m depth, we find that the 8 m data are less influenced by normal-mode noise than the other two. The optimal streamer depth, when only normal-mode-generated noise is considered, can be determined from modeling. This optimal towing depth depends on the water depth and the dominant frequency range. For practical use, other noise sources have to be considered and weighted against the influence of the normal-mode noise.

Acknowledgement
We thank Lundin for providing and allowing us to use the data in this study. The Norwegian Research Council is acknowledged for financial support of the ROSE consortium at NTNU.

Figure 4: Frequency spectra of the noise along the shot line at, from top to bottom, 8 m, 30 m, and 60 m depth.

Figure 6: RMS amplitude of the noise along the shot line for the 8 m, 30 m, and 60 m data after 5-80 Hz bandpass filtering.


REFERENCES

Dhelie, P. E., J. E. Lie, V. Danielsen, A. K. Evensen, and A. Myklebostad, 2014, Broadband seismic – A novel way to increase notch diversity: 84th Annual International Meeting, SEG, Expanded Abstracts, 148–152, http://dx.doi.org/10.1190/segam2014-1330.1.
Elboth, T., B. A. P. Reif, O. Andreassen, and M. B. Martell, 2012, Flow noise reduction from superhydrophobic surfaces: Geophysics, 77, no. 1, P1–P13, http://dx.doi.org/10.1190/geo2011-0001.1.
Ewing, W. M., W. S. Jardetsky, and F. Press, 1957, Elastic waves in layered media: McGraw-Hill.
ten Kroode, F., S. Bergler, C. Corsten, J. W. de Maag, F. Strijbos, and H. Tijhof, 2013, Broadband seismic data – The importance of low frequencies: Geophysics, 78, no. 2, WA3–WA14, http://dx.doi.org/10.1190/geo2012-0294.1.
Landrø, M., 2007, Attenuation of seismic water-column noise, tested on seismic data from the Grane field: Geophysics, 72, no. 4, V87–V95, http://dx.doi.org/10.1190/1.2740020.
Landrø, M., and L. Amundsen, 2014, Is it optimal to tow air guns shallow to enhance low frequencies?: Geophysics, 79, A13–A18.
Landrø, M., and P. Hatchell, 2012, Normal modes in seismic data – Revisited: Geophysics, 77, no. 4, W27–W40, http://dx.doi.org/10.1190/geo2011-0094.1.
Parkes, G. E., and S. Hegna, 2011, How to influence the low frequency output of marine air-gun arrays: 73rd Annual International Conference and Exhibition, EAGE, Extended Abstracts, H012.
Pekeris, C. L., 1948, Theory of propagation of explosive sound in shallow water: Geological Society of America Memoir, 27, 1–117.
Wenz, G. M., 1962, Acoustic ambient noise in the ocean: Spectra and sources: The Journal of the Acoustical Society of America, 34, 1936–1951, http://dx.doi.org/10.1121/1.1909155.


A new low frequency vibrator enhances blended Vibroseis acquisition
Zhouhong Wei*, INOVA Geophysical
Summary
Recording a wider range of frequencies using Vibroseis technology has become popular in land seismic acquisition in recent years. The ability to reduce the seismic wavelet's side lobes by recording lower frequencies, while at the same time increasing bandwidth, has proven to be the main advantage of recording broadband data. Meanwhile, improving crew productivity and reducing the time and expense involved in surveys have become very necessary. The technique of blended acquisition with dispersed source arrays might be a good solution for achieving these goals. This paper presents a newly designed low-frequency vibrator that could potentially enhance this acquisition method. Experimental tests show that this new low-frequency vibrator has significantly improved ground force in the low-frequency range (< 10 Hz) as well as in the normal frequency range (10-100 Hz).
Introduction
In recent years, recording broadband data using Vibroseis techniques has become
routine practice. To obtain well-sampled seismic wavefields and improved
seismic imaging in a cost-effective manner, techniques such as blended
acquisition with dispersed source arrays (Berkhout, 2012) are needed.
Tsingas et al. (2015) presented a case study using the
method of blended acquisition with dispersed vibrators for
acquiring optimum broadband data. In their experiment,
three conventional 80,000-lbs vibrators (DX-80) were
dedicated to three frequency bands, namely, 1.5 to 8 Hz,
6.5 to 54 Hz and 50 to 87 Hz. By implementing this type of
blended acquisition, they successfully established a
procedure that employed conventional seismic vibrators to
fit in each narrow frequency band with optimum force
concentration. Because of the limited ground force output of these
conventional vibrators at low frequencies, a customized low frequency sweep
was built for the bandwidth between 1.5 Hz and 8 Hz. However, this customized
sweep resulted in a long sweep length (18 seconds), which could potentially
degrade productivity.
To efficiently utilize the method of blended acquisition with dispersed source
arrays (Berkhout, 2012), it appears necessary to have a low frequency Vibroseis
source that can be dedicated to the low frequency band (< 10 Hz). Meier et al.
(2015) introduced a counter-rotating eccentric-mass vibrator and demonstrated
that it could achieve a ground force greater than 60,000 lbs in the frequency
range from 1 Hz to 5 Hz. Reust et al. (2015) also presented a very low
frequency source based on dual counter-rotating eccentric masses and showed
that it could achieve its full force (60,000 lbs) at 3 Hz.
The advantage of these newly introduced low frequency vibrators is that they
achieve significant ground force at low frequencies. However, because they use
counter-rotating eccentric masses, their physical frequency bandwidth is
limited to approximately 20 Hz or less. Therefore, to acquire broadband data,
a crew needs two different types of vibrators: a counter-rotating
eccentric-mass vibrator to cover the frequency band below 10 Hz and a
hydraulic vibrator to cover the normal sweep bandwidth. Because the two
vibrator mechanisms are completely different, the parts and components for the
actuators are not interchangeable, so a crew must manage separate inventories
of parts. The crew also needs skilled field service teams that can maintain
and troubleshoot both mechanisms. The operational cost of a survey can
therefore increase.
Based on these facts, a new-generation low frequency vibrator that still uses
a hydraulic system has been designed (Wei, 2015a, 2015b). Wei (2015a, 2015b)
demonstrated with a downhole measurement at 7500 ft (2288 m) that a frequency
bandwidth of 8 octaves (0.5 to 131 Hz) can be achieved with this newly
designed hydraulic vibrator. This wide bandwidth opens the way for efficient
implementation of the technique of blended acquisition with dispersed source
arrays (Berkhout, 2012), because the vibrator can easily be dedicated to
either the low frequency band or the normal frequency band.
To validate the vibrator performance in these frequency bands, a field
standing test was carried out in a Middle East desert environment. Two sweep
frequency bands were tested using linear sweeps. One was the low frequency
band from 1 Hz to 10 Hz in 15 s at the 70% force level, with a 0.5-s cosine
taper applied to the front and end of the sweep. Because low frequency limit
control is implemented in the vibrator controller (Vib Pro HD), the linear
sweep can be performed with a very short taper (0.5 s) and a high force output
(70%). The other was the normal frequency band from 6 Hz to 96 Hz in 12 s at
the 80% force level, also with a 0.5-s cosine taper applied to the front and
end of the sweep. For comparison, an 80,000-lbs commercial vibrator (AHV-IV
Model 380) was also used in the test.
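The two test sweeps are fully specified by their start and end frequencies,
duration, and taper length, so the pilot signal can be sketched directly. The
following is an illustrative reconstruction of such a pilot, not the actual
generator in the Vib Pro HD controller, and force scaling is omitted:

```python
import math

def tapered_linear_sweep(t, f0=1.0, f1=10.0, sweep_len=15.0, taper=0.5):
    """One sample of a linear Vibroseis pilot sweep with cosine tapers.

    Defaults match the low frequency test sweep: 1-10 Hz in 15 s with a
    0.5-s cosine taper at each end.
    """
    if not 0.0 <= t <= sweep_len:
        return 0.0
    # Linear sweep: instantaneous frequency f(t) = f0 + (f1 - f0) * t / T,
    # so the phase is the time integral of 2*pi*f(t).
    phase = 2.0 * math.pi * (f0 * t + (f1 - f0) * t * t / (2.0 * sweep_len))
    # Cosine taper ramps the envelope from 0 to 1 at both ends.
    if t < taper:
        env = 0.5 * (1.0 - math.cos(math.pi * t / taper))
    elif t > sweep_len - taper:
        env = 0.5 * (1.0 - math.cos(math.pi * (sweep_len - t) / taper))
    else:
        env = 1.0
    return env * math.sin(phase)

# Sweep rate of the low frequency sweep: (10 - 1) / 15 = 0.6 Hz/s,
# versus (96 - 6) / 12 = 7.5 Hz/s for the normal-band sweep.
low_rate = (10.0 - 1.0) / 15.0
normal_rate = (96.0 - 6.0) / 12.0
```

The very low sweep rate of the first sweep is what lets the vibrator dwell
near each low frequency, a point the results below rely on.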

The comparison was made in three aspects: force energy, harmonic distortion,
and accuracy of the source signature. In this paper, the new hydraulic low
frequency vibrator is referred to as the prototype vibrator.
Testing on sandy surface ground
The vibrator standing test was performed on a sandy surface: a mixed layer of
loose sand and gravel that is very common in Middle East areas. Figure 1
shows the sandy surface after the prototype vibrator has shaken. A rectangular
imprint of the vibrator baseplate is clearly marked on the ground, and the
welding beads on the bottom surface of the baseplate leave clear marks as
well. This indicates that the baseplate sinks during sweeping and that
considerable plastic deformation occurs. The unevenness of the ground is also
visible in the figure.

All of these observations confirm that partial decoupling occurs while the
vibrator vibrates. In theory, the sandy surface can cause asymmetry in the
vibrator ground-force wiggle trace: the amplitude in the downward half cycle
is higher than the amplitude in the upward half cycle. This often causes the
vibrator ground force to decline at high frequencies, and the stiffer
baseplate can sense this high frequency force drop.

Figure 1. The vibrator shaking on sandy surface ground.

More ground force-energy

Figure 2 shows an example of the amplitude spectrum comparison of weighted-sum
ground forces produced by the two vibrators in the frequency range from 1 Hz
to 10 Hz. The red trace is produced by the prototype vibrator and the blue
trace by the 80,000-lbs commercial vibrator. Both vibrators are run using a
linear sweep from 1 Hz to 10 Hz in 15 s at the 70% force level (56,000 lbs).
Because of the low sweep rate (0.6 Hz/s), this sweep lets the vibrators dwell
for a longer time at each low frequency, so the steady state of the vibrator
is reached and the maximum force at each low frequency is produced.

Figure 2 clearly demonstrates that below 10 Hz the prototype vibrator
generates much more fundamental force than the 80,000-lbs commercial
vibrator. At frequencies below 3 Hz, the prototype vibrator outputs
approximately 5 dB more fundamental force; from 3 to 5 Hz, approximately 3 dB
more; between 5 and 8 Hz, approximately 2 dB more; and above 8 Hz,
approximately 1 dB more.

Figure 2. Comparison of fundamental forces of the prototype and the
80,000-lbs commercial vibrators at low frequencies from 1 Hz to 10 Hz.

Figure 3 shows another example of the amplitude spectrum comparison of
weighted-sum ground forces, produced by the prototype vibrator (red curve) and
the 80,000-lbs commercial vibrator (blue curve), here with both vibrators run
using a linear sweep from 6 Hz to 96 Hz in 12 s at the 80% force level
(64,000 lbs).

Figure 3 clearly shows that the prototype vibrator generates more fundamental
force than the 80,000-lbs commercial vibrator over the entire sweep
bandwidth. With this sweep, approximately 20 dB more fundamental force is
seen in the band between 6 Hz and 10 Hz. Because of the very fast sweep rate
(7.5 Hz/s), the vibrator spends only 0.5 s sweeping from 6 Hz to 10 Hz; this
0.5-s interval is still within the taper, so the vibrator has not yet reached
its steady state and the ground force there is a transient force rather than
a steady-state force. Owing to its improved hydraulic system, the prototype
vibrator has a much faster transient response than the 80,000-lbs vibrator,
which is why the 20-dB difference is observed. After the taper, the
difference reduces to approximately 3 dB at 14 Hz, and this 3-dB difference
is maintained up to 76 Hz. Starting at 76 Hz, the spectrum of the prototype
vibrator declines toward the end of the sweep while the spectrum of the
80,000-lbs commercial vibrator remains flat. Because the prototype vibrator
is equipped with a very stiff baseplate, it can sense this high frequency
drop.
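The dB figures quoted here are amplitude-spectrum differences, so they
convert to linear force ratios with the usual 20 log10 amplitude convention.
A small helper, added here only for illustration, makes the conversion
explicit:

```python
def db_to_amplitude_ratio(db):
    """Convert an amplitude (ground force) difference in dB to a linear
    ratio, using the 20*log10 amplitude convention."""
    return 10.0 ** (db / 20.0)

# Roughly: 5 dB is about a 1.78x force increase, 3 dB about 1.41x,
# and 20 dB is exactly a factor of 10.
```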

Figure 3. Comparison of fundamental forces of the prototype and the
80,000-lbs commercial vibrators in the frequency range from 6 Hz to 96 Hz.

Less harmonic distortion

Figure 4 displays a frequency-time (F-T) plot of the weighted-sum ground
forces of the two vibrators run with the linear sweep from 1 Hz to 10 Hz in
15 s. The top graph is made from data produced by the prototype vibrator and
the bottom graph from the 80,000-lbs commercial vibrator. The intensity of
the harmonics above the 2nd is dramatically reduced on the prototype
vibrator; overall, a reduction of 20% in total harmonic distortion is
observed.

Figure 4. Comparison of harmonic distortion of two different vibrators from
1 Hz to 10 Hz. (a) The prototype vibrator; (b) a commercial 80,000-lbs
vibrator.

Figure 5 displays a frequency-time (F-T) plot of the weighted-sum ground
forces of the two vibrators run with the linear sweep from 6 Hz to 96 Hz in
12 s. The top graph is produced by the prototype vibrator and the bottom
graph by the 80,000-lbs commercial vibrator. In general, the prototype
vibrator produces less harmonic distortion than the 80,000-lbs commercial
vibrator; overall, a reduction of 5% in total harmonic distortion is seen,
especially below 2 seconds (< 21 Hz).

Figure 5. Comparison of harmonic distortion of two different vibrators from
6 Hz to 96 Hz. (a) The prototype vibrator; (b) a commercial 80,000-lbs
vibrator.

Accurate source signature

To discuss the accuracy of the vibrator source signature, it is best to start
from Figure 6. Figure 6a illustrates the dynamic relationship between the
vibrator ground force and the captured ground mass system. In this model, Fg
represents the true vibrator ground force and Ag is the acceleration of the
captured ground mass system. The parameters Mg, Kg, and Dg represent the
mass, stiffness, and viscosity of the captured ground mass system,
respectively. Based on Figure 6a, a transfer function in the frequency domain
is developed, expressed by equation 1. For clarity, in the frequency domain
the operator s is equal to jw, where w is the angular frequency and j is the
square root of minus one.

Ag = ( s^2 / (Mg s^2 + Dg s + Kg) ) Fg                                  (1)

Equation 1 tells us that the acceleration of the captured ground mass is
proportional to the vibrator ground force filtered by the captured ground
mass system. This filter is the expression inside the outer parentheses of
equation 1. To explain the filter clearly, its theoretical amplitude spectrum
is depicted graphically in Figure 6b. The amplitude

spectrum starts with a slope of 40 dB/decade at low frequencies. At the
ground resonance, the slope of the amplitude spectrum changes to 0 dB/decade
and remains at 0 dB through the rest of the frequencies. The 0-dB slope means
that the output amplitude equals the input amplitude. In other words, the
amplitude spectrum of the captured ground mass acceleration at frequencies
below the ground resonance equals the amplitude spectrum of the vibrator
ground force bent down with a slope of 40 dB/decade toward low frequencies.
At frequencies above the ground resonance, the spectral shape of the
acceleration is identical to that of the vibrator ground force.
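The two asymptotes just described (a 40 dB/decade rise below the ground
resonance and a flat response above it) follow directly from the magnitude of
the filter in equation 1. The sketch below evaluates that magnitude at
s = jw; the values of Mg, Kg, and Dg are placeholders chosen only to put the
resonance in a plausible range, not measured ground parameters:

```python
import math

def ground_filter_gain(f, Mg=500.0, Kg=5.0e7, Dg=1.0e5):
    """|Ag/Fg| for the captured ground mass model of equation 1,
    evaluated at s = j*omega. Mg [kg], Kg [N/m], and Dg [N*s/m] are
    illustrative placeholders only."""
    w = 2.0 * math.pi * f
    # |(jw)^2 / (Mg (jw)^2 + Dg (jw) + Kg)|
    return w * w / math.sqrt((Kg - Mg * w * w) ** 2 + (Dg * w) ** 2)

# Below the resonance (about 50 Hz for these placeholder values) the gain
# rises at roughly 40 dB per decade ...
slope_db = 20.0 * math.log10(ground_filter_gain(20.0) / ground_filter_gain(2.0))
# ... and well above the resonance it flattens toward 1/Mg.
flat_db = 20.0 * math.log10(ground_filter_gain(1000.0) / ground_filter_gain(500.0))
```

Evaluating `slope_db` over the decade from 2 Hz to 20 Hz gives a value near
40 dB, while `flat_db` is near zero, matching the two regimes in Figure 6b.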

Figure 6. (a) The captured ground model; (b) theoretical amplitude spectrum
of the vibrator ground force (input) and the ground acceleration (output).
To demonstrate the findings from Figure 6b, an
accelerometer is buried 1 meter away from the vibrator
baseplate. In Figure 7, the top graph shows the amplitude
spectra of the vibrator ground forces of the prototype
vibrator and the 80,000-lbs commercial vibrator. It is
identical to Figure 3. The bottom graph shows the
amplitude spectra of buried accelerometers. Figure 7b
shows the amplitude spectra of two buried accelerometers
have a 40-dB/decade slope from 10 Hz to the resonant
frequency that lays in between 36 Hz to 46 Hz. At
frequencies below 10 Hz the spectra are bended down more

2016 SEG
SEG International Exposition and 86th Annual Meeting

towards lower frequencies due to the declined ground force


spectra. Above the resonance the spectrum of the
accelerometer buried 1 meter away from the prototype
vibrator drops at 76 Hz towards the end of the sweep. The
spectrum of the accelerometer buried 1 meter away from
the 80,000-lbs commercial vibrator drops at 72 Hz towards
the end of the sweep. However, Figure 7a shows that the
80,000-lbs commercial vibrator keeps a flat ground force
amplitude spectrum at higher frequencies. Figure 7a also
shows that the ground force amplitude spectrum from the
prototype vibrator drops at approximately 76 Hz towards
the end of the sweep. It can be concluded from Figure 7 that at high
frequencies there is a strong similarity between the ground force and the
buried-accelerometer response for the prototype vibrator. The vibrator ground
force spectrum follows exactly what the theory predicts; in other words, the
ground force produced by the prototype vibrator is the true source signature
emitted into the ground.

Figure 7. (a) The spectra of weighted-sum ground forces of the two vibrators
from 6 Hz to 96 Hz; (b) the spectra of accelerometers buried 1 meter away
from the baseplates.
Conclusions
Experimental tests show that the prototype low frequency vibrator
significantly improves the ground force in the low frequency range (< 10 Hz)
as well as in the normal frequency range (10-100 Hz). These improvements have
the potential to enhance the high-productivity technique of blended
acquisition with dispersed source arrays.

EDITED REFERENCES
Note: This reference list is a copyedited version of the reference list submitted by the author. Reference lists for the 2016
SEG Technical Program Expanded Abstracts have been copyedited so that references provided with the online
metadata for each paper will achieve a high degree of linking to cited sources that appear on the Web.
REFERENCES

Berkhout, A. J., 2012, Blended acquisition with dispersed source arrays:
Geophysics, 77, no. 4, A19-A23, http://dx.doi.org/10.1190/geo2011-0480.1.
Meier, M. A., S. E. Heiney, J. Tomic, P. Ibanez, and C. N. Byrne, 2015, Low
frequency seismic acquisition using a counter rotating eccentric mass
vibrator: US Patent application US2015/0041242 A1.
Reust, D. K., O. A. Johnston, J. A. Giles, and S. Ballinger, 2015, Very low
frequency seismic source: 85th Annual International Meeting, SEG, Expanded
Abstracts, 221-225.
Tsingas, C., Y. Kim, and J. Yoo, 2015, Broadband acquisition, deblending and
imaging employing dispersed source arrays: EAGE Workshop on Broadband
Seismic - A Broader View for the Middle East, EAGE, BS27.
Wei, Z., 2015a, A new generation low frequency seismic vibrator: 85th Annual
International Meeting, SEG, Expanded Abstracts, 211-215.
Wei, Z., 2015b, Extending the vibroseis acquisition bandwidth with a newly
designed low frequency seismic vibrator: EAGE Workshop on Broadband Seismic -
A Broader View for the Middle East, EAGE, BS06.


Improving acquisition efficiency with a situational awareness (SA) based SIMOPS management
solution
Gary Pemberton, Stuart Darling*, and Emma McDonald, ION
Summary
Situational awareness (SA) describes the accuracy of a person's knowledge and
understanding of a situation, and it directly affects the quality of
decisions made by personnel. SA can be severely compromised when personnel
are overloaded with complex and dynamic information, as experienced during
simultaneous operations (SIMOPS) management in many offshore fields. When
conducting seismic acquisition, poor SA can increase risk exposure,
inefficiency, and expense.
This paper presents a SIMOPS management system that works to improve the SA
of all users (including, but not limited to, seismic operations personnel),
thereby improving the efficiency and reducing the risk exposure of
acquisition. This is primarily achieved through a linked Gantt chart and map,
and the ability to model forward in
time, showing the users where vessels and other infield
equipment (such as streamers, dive crews, drilling rigs etc.)
plan to be in the future. A knowledge-based system
automatically provides alarms when rules are breached,
reducing the chance of user error and increasing visibility.
Several case histories from commercial field operations
have demonstrated that downtime and risk exposure can be
reduced through effective SA.
The cost of simultaneous operations
Oilfields are complex environments, with multiple
operations occurring simultaneously. The variety and
intricacy of SIMOPS considerations have increased
markedly as oilfields and infrastructure have developed.
Managing SIMOPS has become a major task, and errors
due to poor management can have a massive cost in QHSE
and monetary terms.
Seismic acquisition is a challenge for both the operator and
contractor, as it introduces a substantial moving obstruction
into an environment already crowded with Exploration and
Production activity. This occurs alongside industries such
as fishing and shipping which operate across oilfields.
When a seismic crew is added, the SIMOPS complexity
increases and careful planning is required. Communication is crucial to
minimizing standby: operational plans are exchanged infield daily, often
verbally or via email (as text or spreadsheet). The problems associated with
this timing and distribution are well recognized: by the time an email is
sent, the information may be out of date, and spreadsheets cannot clearly
represent the spatial nature of the operations.

A SIMOPS management system is needed that provides all relevant parties with
timely, accurate, real-time updates during rapidly changing conditions.
The significance of situational awareness (SA)
SA describes the accuracy of a person's current knowledge and understanding
of a task, compared to actual conditions at the time (Lochmann et al., 2015).
Endsley (1995) describes three levels of SA: (1) perceiving/being aware of
critical information; (2) comprehending the meaning of information in an
integrated manner; and (3) projecting relevant elements into the near future,
resulting in appropriate action. The three levels build on each other:
dynamic updates of the present environment are required, which contribute to
an accurate understanding of the situation and to projection into the future.
This in turn informs appropriate decisions and actions for the current
objectives (Figure 1). SA is about more than information processing; it
focuses on human behavior in complex environments.

It is widely accepted that SA is a safety-critical factor, and the
consequences of reduced SA in the marine environment have been assessed: the
USCG accident database indicates that 60% of accidents can be attributed to a
failure of SA (Safahani and Tuttle, 2013). SA and risk perception were
identified as root causes by the Deepwater Horizon investigation (Roberts et
al., 2013). Statistics vary, but the majority of incidents occur at SA level
1 (Table 1 and Figure 1), indicating a strong requirement to improve the
user's perception of elements in the current situation.
                                           Level 1   Level 2   Level 3
Aviation (Jones & Endsley, 1996)             78%       17%        5%
Offshore Drilling (Sneddon et al., 2006)     67%       20%       13%
Shipping (Hetherington et al., 2006)         59%       33%        9%

Table 1: Percentage of incidents occurring at each level of SA

A phenomenon known as inattentional blindness can further decrease SA. It
denotes the failure to notice a fully visible, but unexpected, object or
event when attention is devoted to something else (Simons, 2000). As the
volume of information grows, it becomes impossible for any individual to
understand the relationships between all variables. Consequently, companies
suffer a loss of performance and can face an increased risk of QHSE
incidents due to SIMOPS complexities.


SA and systems design

In light of the potential consequences, it is no longer acceptable to rely on
a system that "requires the right person to be looking at the right data at
the right time, and then to understand its significance in spite of
simultaneous activities and other monitoring responsibilities" (p. 121,
Graham et al., 2011). Endsley (1995) argues that systems should be designed
to support and enhance SA. As processes are controlled by people, human error
and operational risk can never be entirely eliminated, but effective systems
can minimize any loss of awareness.

The literature on systems design, systems engineering, and cognitive
engineering is extensive, but the requirement to collect, filter, analyze,
structure, and transmit data is key (Harrald & Jefferson, 2007). Historically,
systems have failed to provide information in a usable format (Bonaceto and
Burns, 2005), and safety-critical systems became complex to the point where
they could no longer be operated effectively by skilled users (Gersh et al.,
2005). While operator error is often cited as a causal factor, the conditions
and systems with which the operator works can have a significant effect on
the outcome (Leveson, 2011).

As data volume has increased, we require more from systems design (Endsley,
2000). The necessary capabilities and information must be provided in a way
that is usable cognitively as well as physically. Glandrup (2013) discusses
the difficulty of gathering maritime information and combining it in a way
that is useful to the user without overloading them, looking specifically at
the challenge of analyzing significant volumes of data to identify
safety-critical situations. That work concerns maritime security, but similar
principles apply to the offshore energy sector. The following requirements
were identified:
- Warn operators when needed
- Provide a historic overview
- Allow operators to loop back over time to check vessels
This could be achieved with a Common Operating Picture (COP), a direct method
to improve SA.

A Common Operating Picture

A COP is "a single display of relevant information shared by more than one
Command. A common operational picture facilitates collaborative planning and
assists all echelons to achieve situational awareness" (Dictionary of
Military and Associated Terms, 2005). The term initially referred to military
and security situations, but it is increasingly applied to emergency response
and has thus been utilized by the energy sector.

In SIMOPS situations, a COP aids SA among all users across the field. Sneddon
et al. (2006) describe the importance of team SA, where the drill crew works
together effectively with a mutual understanding of the situation. This can
be expanded to encompass workers across the field, with the added complexity
of trying to gain a good awareness of unfamiliar operations. In this
situation, a COP can be very valuable.

Figure 1: Levels of situational awareness and feedback loop (based on
Endsley, 1995)

Automated Processes

It is recognized that manual processes are better than no processes, and that
well-implemented automated processes are more effective than manual ones.
Automated systems provide a speed, consistency, and reliability of search and
analysis algorithms that cannot be matched manually (Galloway et al., 2002).
Digital oilfield automation is credited with significantly improving
production rates while allowing engineers to focus on more skilled work
(Lochmann and Brown, 2014). This has been integrated into the concept of
Intelligent Energy, an initiative that promotes automated solutions to
improve asset performance (Davidson & Lochmann, 2012).



Automated system controls improve the user's SA and reduce the likelihood of
inattentional blindness by drawing attention to relevant information. People
can only store between 5 and 9 items in their working memory (Miller, 1956).
By organizing and managing data, software systems reduce non-essential
information and allow operators to concentrate on critical items.

A rules-based monitoring system

A SIMOPS management system has been developed specifically to aid SA. It aims
to improve the safety and efficiency of all infield operations, including
seismic acquisition, using automated processes to provide a COP. The system
uses a knowledge-based, rules-based monitoring engine to integrate and
display critical information and to help the user predict the effect on
operations in the near future. The knowledge base comprises user-defined
facts, including:
- positions of installations and static or dynamic vessels;
- exclusion zones (based on attributes such as speed limits and permit
  requirements);
- acquisition plans and other operational schedules;
- supplementary data, including meteorological data, marine mammal
  observations, debris, and fishing gear.

General rules address vessel proximity, operational plan progress, and zone
encroachment (Pemberton et al., 2015). Facts are continuously updated by the
system and monitored against the rules. When a rule is breached, an alarm is
generated, reducing the likelihood of inattentional blindness. All warnings
are logged, and data can be played back to aid post-incident analysis and
audit. Additional facts, such as marine mammal observations, can be
documented and aggregated over multiple surveys to build a solid scientific
basis for addressing regulatory concerns.

Figure 2: Interactive Gantt chart showing overlapping task schedules. These
are some of the facts against which rules are monitored.

A key requirement of a SIMOPS system is the ability to visualize SIMOPS
events. The system described here provides a complete temporal and spatial
representation of operations, enabling visualization and sharing of real-time
operational data within the field, onshore, and (via a web map) on
third-party operations. By ensuring that all individuals involved in the
operations have access to the same real-
time information, they may collaborate more effectively to make critical
real-time decisions.

Facts are updated both automatically and manually. Authorized users add and
adjust tasks and exclusion zones, enabling multiple accurate SIMOPS plans to
be shared automatically via an interactive Gantt chart (Figure 2). Live
positional data (such as AIS and radar feeds) are displayed automatically,
enabling real-time operational data to be shared across the field and
onshore. These data feed constantly into the rules system: for example, if a
new AIS target introduces a potential conflict with another operation, an
alarm activates, alerting the operator to the predicted collision and
allowing mitigating action to be taken.

The system was first utilized on a 4D survey over multiple fields. SIMOPS
activities included saturation diving, pipe-laying, rig moves, and tanker
offloading, and over 100 close passes were performed during the acquisition.
In this environment, the ability to share SIMOPS data across all users was
invaluable, as was the link between the map and the Gantt chart, which
allowed the crew to optimize acquisition in priority areas (Figure 3).
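The conflict detection behind a linked Gantt chart can be reduced to a
pairwise interval-overlap test over the scheduled tasks. The sketch below is
illustrative only: the task names and time windows are hypothetical, and a
real system would also carry the spatial extent of each task:

```python
def overlapping_tasks(tasks):
    """Return name pairs of scheduled tasks whose time windows overlap.

    `tasks` maps a task name to a (start, end) window, here expressed in
    hours from now. Windows are treated as half-open intervals.
    """
    names = sorted(tasks)
    conflicts = []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            latest_start = max(tasks[a][0], tasks[b][0])
            earliest_end = min(tasks[a][1], tasks[b][1])
            if latest_start < earliest_end:  # the windows intersect
                conflicts.append((a, b))
    return conflicts

# Hypothetical plan: a dive window that overlaps a rig move is flagged,
# while a later seismic line is not.
plan = {"dive ops": (0, 6), "rig move": (4, 10), "seismic line": (12, 20)}
```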
A case study
Recent energy reform opened access to the southern Gulf
of Mexico (GoM), and by mid-2015 many seismic vessels
were operating in Mexican waters. The potential for



inefficiencies resulting from this influx of crews was significant. Vessels
operating in close proximity produce acoustic interference, reducing data
quality and increasing costs. In a 2D multiclient survey, sail lines extended
the width of the GoM, requiring interaction with numerous distinct
operations, each with its own SIMOPS schedule. Although the majority of
acquisition occurred in open water, the survey proved how congested an area
can become when several contractors prioritize the same blocks, particularly
when adhering to a regulatory requirement to maintain 30-km separation
between seismic operations. Over a 3-day period, 4 incidents occurred in
which the rules and time-sliding capabilities provided by the system warned
of potential conflicts. These were identified sufficiently early that
efficient schedule changes were made and minimal delays occurred.
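The separation rule flagged in this case study is simple to express in code.
The sketch below uses flat-plane distances in kilometres and hypothetical
operation positions; a real implementation would work on live AIS positions
in geographic coordinates:

```python
import math

def separation_alarms(operations, min_sep_km=30.0):
    """Flag pairs of seismic operations closer than the required separation.

    `operations` maps an operation name to an (x, y) position in km on a
    local flat-plane grid; the 30-km default mirrors the regulatory
    requirement described in the case study.
    """
    names = sorted(operations)
    alarms = []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            dx = operations[a][0] - operations[b][0]
            dy = operations[a][1] - operations[b][1]
            if math.hypot(dx, dy) < min_sep_km:
                alarms.append((a, b))
    return alarms

# Hypothetical positions: crews A and B are only 10 km apart and trigger
# an alarm; crew C is well clear of both.
crews = {"crew A": (0.0, 0.0), "crew B": (10.0, 0.0), "crew C": (100.0, 0.0)}
```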

Data acquisition in a single continuous traverse

The benefits of a SA/SIMOPS management system are clear on surveys where
multiple vessels and obstructions increase the day rate and the QHSE
exposure, but the effects can also be notable on conventional surveys. A new
regional 3D survey was acquired in the North Sea, covering approximately 20
producing fields. As with the 2D case study, a principal priority was to
acquire lines in a single continuous pass, preventing the downtime and data
discontinuities associated with multiple attempts. The survey included 4
shipping lanes and 26 offshore facilities, often requiring 4 or 5 close
passes of surface obstructions per day. This is typical of the hidden
operating cost that has been accepted in the past but should be reduced in
the current low oil price environment (Pemberton et al., 2015).

Conclusion

Seismic surveys are frequently conducted over congested fields where other
operations influence survey efficiency. Existing SIMOPS management systems do
not provide all parties with a full temporal and spatial overview, which
means that the SA of all personnel is compromised. A new automated SA/SIMOPS
technology has been introduced, based on 20 years of academic SA research and
extensive field experience. It addresses the operating conditions of today's
low oil price environment, where complex assets, rising production costs, and
low headcounts make efficiency a practical necessity. Producers are no longer
forced to accept the inefficiencies that are commonplace in offshore
operations managed by manual processes.

This management system addresses all three levels of SA. At level 1, the
system shares all critical infield and operations data through real-time data
hosting. At level 2, the combination of spatial and temporal elements
provides a clear visualization of the situation for all users, and continuous
rules monitoring triggers automated alarms that warn users of potential
conflicts. At level 3, the ability to slide forward in time allows the
effects of operations on each other to become apparent, and multiple
scenarios can be evaluated in terms of operational efficiency and risk
exposure. By improving SA at all three levels, the ability to successfully
address SIMOPS is greatly increased for seismic and other E&P operations.

Figure 3: Linked Gantt chart and map showing multiple activities. Note the
seismic streamer vessel in yellow at the bottom of the map.

2016 SEG
SEG International Exposition and 86th Annual Meeting

Page 211

Downloaded 12/16/16 to 79.62.242.250. Redistribution subject to SEG license or copyright; see Terms of Use at http://library.seg.org/

EDITED REFERENCES
Note: This reference list is a copyedited version of the reference list submitted by the author. Reference lists for the 2016
SEG Technical Program Expanded Abstracts have been copyedited so that references provided with the online
metadata for each paper will achieve a high degree of linking to cited sources that appear on the Web.
REFERENCES

Bonaceto, C., and K. Burns, 2006, Using cognitive engineering to improve systems engineering: Manuscript submitted for presentation at the 2006 International Council on Systems Engineering Conference, http://dx.doi.org/10.1002/j.2334-5837.2006.tb02784.x.
Davidson, J., and M. J. Lochmann, 2012, Lessons and insights from unexpected places: SPE Intelligent Energy International, SPE-149998-MS, http://dx.doi.org/10.2118/149998-MS.
Dictionary of Military and Associated Terms, 2005, accessed 23 March 2016, http://www.thefreedictionary.com/common operational picture.
Endsley, M. R., 1995, Toward a theory of situation awareness in dynamic systems: Human Factors, 37, 32–64, http://dx.doi.org/10.1518/001872095779049543.
Endsley, M. R., 2000, Theoretical underpinnings of situation awareness: A critical review, in Situation Awareness Analysis and Measurement, 3–32.
Galloway, A., J. A. McDermid, J. Murdoch, and D. J. Pumfrey, 2002, Automation of system safety analysis: Possibilities and pitfalls: Proceedings of the International Systems Safety Conference, Denver, accessed 16 March 2016, http://www-users.cs.york.ac.uk/~djp/publications/Automation.pdf.
Gersh, J. R., J. A. McKneely, and R. W. Remington, 2005, Cognitive engineering: Understanding human interaction with complex systems: Johns Hopkins APL Technical Digest, 26, 377–382.
Glandrup, M., 2013, Improving situation awareness in the maritime domain, in Situation Awareness with Systems of Systems: Springer, 21–38, http://dx.doi.org/10.1007/978-1-4614-6230-9_2.
Graham, B., W. K. Reilly, F. Beinecke, D. F. Boesch, T. D. Garcia, C. A. Murray, and F. Ulmer, 2011, Deep water: The gulf oil disaster and the future of offshore drilling, report to the President: National Commission on the BP Deepwater Horizon Oil Spill and Offshore Drilling.
Harrald, J., and T. Jefferson, 2007, Shared situational awareness in emergency management mitigation and response: 40th Annual Hawaii International Conference on Systems Science, 23, http://dx.doi.org/10.1109/HICSS.2007.481.
Hetherington, C., R. Flin, and K. Mearns, 2006, Safety in shipping: The human element: Journal of Safety Research, 37, no. 4, 401–411, http://dx.doi.org/10.1016/j.jsr.2006.04.007.
Jones, D. G., and M. R. Endsley, 1996, Sources of situational awareness errors in aviation: Aviation, Space, and Environmental Medicine, 67, 507–512.
Leveson, N. G., 2011, Risk management in the oil and gas industry: Testimony before the United States Senate Committee on Energy and Natural Resources, http://mitei.mit.edu/news/risk-management-oil-and-gas-industry.
Lochmann, M., and I. Brown, 2014, Intelligent energy: A strategic inflection point: SPE Annual Technical Conference and Exhibition, SPE, http://dx.doi.org/10.2118/170630-MS.
Lochmann, M., S. Darling, G. Pemberton, R. Mills, and J. Altaie, 2015, Improving situational awareness to prevent conflicts in simultaneous offshore operations: SPE Middle East Intelligent Oil and Gas Conference and Exhibition, SPE-176796-MS.
Miller, G., 1956, The magical number seven, plus or minus two: Some limits on our capacity for processing information: Psychological Review, 63, 81–97, http://dx.doi.org/10.1037/h0043158.
Pemberton, G., S. Darling, C. Koheler, and E. McDonald, 2015, Managing simultaneous operations during seismic acquisition: First Break, 33, 75–81.


Roberts, R., R. Flin, and J. Cleland, 2013, Situation awareness in offshore drillers: Proceedings of the 11th International Conference on Naturalistic Decision Making (NDM 2013), Arpege Science Publishing.
Safahani, M. S., and L. S. Tuttle, 2013, Situation awareness and its practical application in maritime domain: The International Maritime Human Element Bulletin HE01120, accessed 16 March 2016, http://www.he-alert.org/en/articles.cfm/page/37.
Simons, D. J., 2000, Attentional capture and inattentional blindness: Trends in Cognitive Sciences, 4, 147–155, http://dx.doi.org/10.1016/S1364-6613(00)01455-8.
Sneddon, A., K. Mearns, and R. Flin, 2006, Situation awareness and safety in offshore drill crews: Cognition, Technology and Work, 8, 255–267, http://dx.doi.org/10.1007/s10111-006-0040-1.


Implementation of Vibrator Autonomous Shooting in Urban Area, Case Study from Kuwait

Bader Al-Ajmi*, Khaled SalahEldin, Meshari Al-Awadi, Ding Guandong, Li Wenjing, Nie Mingtao, Liu Renwu, and Liu Hao

BGP Inc., CNPC; KOC
Summary
Acquiring seismic data in urban areas is very challenging in terms of operations and HSE. Wireless recording systems make such surveys more efficient and easier to conduct in a safe and cost-effective manner. The recently developed Vibrator Autonomous Shooting technique has contributed to improving efficiency and production in a highly congested urbanized area in Kuwait. In 2015, KOC contracted BGP to acquire a very complex 3D seismic survey that includes the Bay area, Kuwait City, and the sabkha area to the north of the Bay. The survey involves different source types, including vibroseis, dynamite, and air guns. KOC raised its concerns about operating efficiently in this environment, and KOC and BGP worked together to find a better operating mode to achieve efficient acquisition. To ensure efficient data acquisition, the recently developed Digital-Seis System (DSS) was utilized. This system can directly trigger energy sources for high-productivity operations and provides intelligent guidance, real-time quality control, and real-time operation management. The overall vibrator autonomous shooting workflow can be divided into five (5) stages. The basis of this shooting technology is a series of configurations in both the DSS and the Sercel recording system. However, when both a cable system and a wireless system are deployed for integrated acquisition, it is better to combine autonomous shooting with the code shooting technique. All of this contributed to achieving the objectives of the survey and enhanced productivity.

Introduction
This is an extremely challenging seismic acquisition covering water bay areas, a transition zone, and crowded urban areas over 700 km². BGP allocated suitable and sufficient equipment, including two types of recording systems: a cable system for the marine portion and a cable-less system for the land portion. The cable-less nodal recording system offered a golden opportunity for the application and popularization of autonomous shooting technology, since it enables continuous recording. The bottleneck for high production efficiency is not node recording but vibrator shooting, especially in urban areas. The Digital-Seis System (DSS) was deployed for the purpose of autonomous shooting, making it possible to enhance operational productivity. DSS mainly contains two parts: the DSG (Digital Seismic Guider), a tablet-like device mounted in each vibrator and controlled by TC (Time Control), and the DSC (Digital Seismic Commander), a computer with monitor mounted in the Sercel recording room.

Autonomous Shooting Techniques
Autonomous shooting is a type of energy source shooting method in general, applied here specifically to continuous vibrator shooting without any outside interference after the sweep parameters are determined. It is essentially a script-controlled shooting technology. Sweep parameters depend on the Sercel recording system, and the SPS comes from the QC section. Only after the configurations in both the DSS and the Sercel recording system are completed and the sweep parameters are transferred to the vibrators, which can be defined as shooting readiness, may the vibrator fleets start sweeping following their own time sequence.


Figure 1: DSS devices


Figure 2: Device connection within vibrator (DSG to others)

Methodology
Normally, vibrator shooting is controlled by radio from the vibrator controller system mounted in the recording room, which is constrained by hardware, electromagnetic interference, bad weather, and barriers such as mountains or urban architecture. In some cases these communication limitations cannot be avoided when using radios, so the best approach is to minimize radio usage or shorten the contact range. Autonomous shooting reduces the period of radio use, whether analog or digital, and most of the remaining radio traffic is communication within one group or fleet. Most significantly, autonomous shooting can perform sweeps automatically based on a preset sequential time window, without any other control or triggering from outside the DSS, which not only improves productivity but also lowers HSE risks. In urban seismic vibration there are many tall buildings and blind areas that block radio communication, especially long-range connections. Furthermore, crowded vehicles and numerous human activities create high potential HSE hazards for seismic operations, especially the movement of a big, unwieldy recording truck. The autonomous shooting method can release seismic operations from such inconvenience.

The methodology of autonomous shooting based on DSS contains four (4) aspects. First, real-time information is exchanged by digital radios together with a satellite data link or 4G network. Second, the 4G network link is the main channel for QC and status information exchange, with the digital radio data link as an alternative. Third, the satellite data link carries the GPS and differential signals. Fourth, the DSS software performs integrated information processing.

Figure 3: Key sections to handle autonomous shooting

The directly related stakeholder is the seismic crew, which benefits from this technology through safer operations, higher efficiency, and a timelier manner; contractor and customer companies benefit equally. The key sections that run this shooting technology are the Quality Control (QC) section, which is in charge of SPS files and vibrator/recorder return files via the internet; the Recorder section, which is responsible for sweep parameter definition and distribution and for vibrator performance monitoring; and the Vibration section, which plays the main role in autonomous shooting by controlling the DSS and being READY.

There are other stakeholders beyond the three sections mentioned above, such as field sections and third parties related to vibrator operations, e.g., the Sercel support center, DSS hardware suppliers, and DSS software programmers. All of them contributed to this technology.

Application
There are five (5) stages defined to achieve autonomous shooting. The first is work planning. It mainly contains scouting, permitting, improving GIS (Geographic Information System) data, simulating and outputting SPS files, and making the work schedule. This is a common step in seismic acquisition; however, KOC and BGP innovated a neighborhood source planning method dedicated to urban seismic. The urban areas are divided into cells (bin size of 25 m × 25 m) as potential supplementary source locations if the initially planned points are unavailable, and a source type is assigned to each cell according to safe distance zones.

The second stage is embedding GIS into both the Sercel recording system and the DSS. The background can be a raster image with geographical reference, shape files, vectorial data, or simple raster files. The Sercel recording system has a Layer manager function that can load the background in the job positioning window, and the DSS can be loaded through flash disks, hard disks, or the internet.
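The neighborhood source planning idea from the first stage can be sketched in a few lines: divide the area into 25 m × 25 m cells and assign each cell a source type based on its distance to the nearest surface hazard. The distance thresholds and source-type names below are illustrative assumptions, not KOC/BGP's actual safety rules.

```python
# Hedged sketch of cell-based source-type assignment by safe distance zones.
from math import hypot

CELL = 25.0  # cell size in meters, as described in the text

def nearest_hazard_m(cx, cy, hazards):
    """Distance from a cell center to the closest hazard (pipeline, well, ...)."""
    return min(hypot(cx - hx, cy - hy) for hx, hy in hazards)

def source_type(distance_m):
    # Assumed safe-distance zones (illustrative thresholds only).
    if distance_m >= 300.0:
        return "vibroseis_full_drive"
    if distance_m >= 100.0:
        return "vibroseis_low_drive"
    return "no_source"

def classify_cells(nx, ny, hazards):
    """Return {(i, j): source_type} keyed by cell index, using cell centers."""
    grid = {}
    for i in range(nx):
        for j in range(ny):
            cx, cy = (i + 0.5) * CELL, (j + 0.5) * CELL
            grid[(i, j)] = source_type(nearest_hazard_m(cx, cy, hazards))
    return grid

# A 500 m x 500 m patch with one hazard at the corner.
grid = classify_cells(20, 20, hazards=[(0.0, 0.0)])
```

In practice the classified grid would supply the supplementary source locations when an initially planned point turns out to be unavailable.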


Figure 4: Job positioning window used to input geographic information

This step is critical for safe operations. All safety concerns, for instance pipelines, water wells, pumps, sewage systems, and refinery infrastructure, are investigated in detail and mapped in the GIS, and it is also very important to determine the access routes for vibrator fleet moves. The purpose of this step is to cross-check with the vibrator pushers to guarantee that the vibrators work in safe conditions.

The third step is sending the SPS file from the QC office in real time. Once the sweep parameters change or the source code switches, the QC office needs to update the shot points in a block or zone and distribute them to the vibrators and the recording truck. Both the DSG and the DSC can monitor shot point status: green points mean already acquired, yellow points indicate missing vibrator performance files, and red points represent shots not yet acquired. Regarding the shot points chosen to be acquired, the Vibration section can use a point-by-point, line-by-line, or block-by-block method under Program mode. The common way is to choose the default method, shooting within the appointed time window when READY, through matching coordinates.

Figure 5: Shot points in the DSC (left) and DSG (right)

The fourth step is Sercel recording system configuration. This step handles vibrator grouping, fully matching the fleet setup in the DSS, and sweep parameter configuration, including drive level, sweep length, etc. In order to avoid moving a big truck in crowded streets or narrow roads within communities, a portable recorder is necessary; it is used only for sweep parameter transmission and is carried by one light vehicle. In the Sercel recording system, we can configure taper, sweep frequency, sweep length, and drive level, and set up the acquisition type in the 428XL VE464 window. After that, the sweep parameters are sent to the vibrators for autonomous shooting.

Figure 6: Sweep configuration in the Sercel 428XL VE464 window

The fifth step is vibrator shooting. It is the last step of autonomous shooting. Each day is divided into sequential time segments appointed to dedicated vibrator fleets. Every fleet can only be triggered in the specified time window belonging to it. If a fleet is not READY at its time slot, it must wait until the other fleets finish vibrating in their time frames, which can lead to idle time for that fleet.
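The time-sequencing rule of this final stage can be sketched as a round-robin of constant-period windows: each window belongs to one fleet, and a fleet that is not READY in its window simply loses that slot. The window length and fleet behavior below are illustrative assumptions.

```python
# Hedged sketch of sequential time-window triggering with idle-time fallout.

def simulate_windows(fleets, n_windows, period_s=30):
    """fleets: {name: set of window indices in which that fleet is READY}.

    Returns a log of (window start time, owning fleet, action)."""
    log = []
    names = sorted(fleets)
    for w in range(n_windows):
        owner = names[w % len(names)]          # each window belongs to one fleet
        start = w * period_s
        if w in fleets[owner]:
            log.append((start, owner, "sweep"))
        else:
            log.append((start, owner, "idle"))  # missed slot becomes idle time
    return log

fleets = {
    "fleet_1": {0, 2, 4},   # READY in all of its windows
    "fleet_2": {1, 5},      # misses window 3, so that slot is lost
}
log = simulate_windows(fleets, n_windows=6)
```

The sketch makes the stated drawback visible: a fleet that misses its slot cannot borrow another fleet's window, so precise time utilization depends on every fleet being READY on schedule.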


Figure 7: Time sequences with constant period

When the situation becomes more complicated, two distinct recording systems are allocated: it is necessary to use the Sercel cable system in the transition zone and shallow water areas and the wireless nodal recording system on land for integrated data acquisition. In this case it is preferable to use the code shooting technique within the Sercel 428XL system as an alternative to autonomous shooting. Autonomous shooting is convenient when a wireless nodal recording system is the single type of acquisition equipment. However, its disadvantage is noticeable when a cable system is involved, because of the very large data volumes caused by continuous recording. Even though the Sercel recording system has a continuous recording function, there have been no successful cases of seismic acquisition through such a method so far. If the code shooting technique is applied, we return to the fourth step of autonomous shooting and continue by setting up the Process Type in the 428XL operation window. Then the observer can shoot in code shooting mode.

For example, the sweep parameters are confirmed with four (4) vibrators, drive level 50%, sweep length 12 seconds, and the point code set as V1. The observers then load the SPS file into the 428XL system, and the Process Type is identified automatically based on the point code definition in the SPS file. When the vibrators are ready and the pads are down, the system jumps to the corresponding shot in the 428XL shooting table by GPS coordinates, and the Process Type links to the sweep length and drive level automatically to perform the vibration.

Figure 8: Process Type identified in the shooting window

This shooting technique maximizes the advantage of the cable system and is a complementary method to vibrator autonomous shooting.

Conclusions
This paper presents vibrator autonomous shooting technology for seismic acquisition in urban areas. This is the first trial and practice of applying this new shooting method to acquire seismic data within crowded urban areas, drawing extensively on the experience of both KOC and BGP personnel. By describing the five detailed stages, with code shooting as a complementary method, autonomous shooting technology was demonstrated clearly and effectively. It will benefit both oil companies and geophysical service companies in terms of production efficiency, cost effectiveness, and timeliness, even though the technology still has disadvantages in precise time utilization.

Acknowledgements
The authors would like to thank KOC (Kuwait Oil Company) and BGP Inc., CNPC, for permission to publish this paper, and appreciate the cooperation of the KOC Exploration Group, BGP Crew 8652, the Sercel Support Centre, and those who provided help in material collection and inquiries.




Numerical modeling of seismic airguns and low-pressure sources

Leighton M. Watson, Eric M. Dunham, and Shuki Ronen
SUMMARY
There is significant interest in understanding the dynamics of seismic airguns and the coupling between the bubble produced when the airgun discharges and the pressure waves excited in the water. It is desirable to increase the low frequency content of the signal, which is beneficial for imaging, especially for sub-salt and sub-basalt exploration, and to reduce the high frequency content, which is not useful as seismic signal, yet is thought to adversely impact marine life. It has been argued that a new style of airgun, with drastically lower pressure and larger volume than conventional airguns, will achieve these improvements. We develop a numerical model of a seismic airgun and compare the simulation results to experimental data for validation. We perform numerical simulations for a range of airgun firing parameters and demonstrate that the proposed low-pressure source (4000 in³, 600 psi) is able to reduce the high frequency noise by 6 dB at 150 Hz compared to a 1000 in³ airgun at 2000 psi, while maintaining the low frequency content. Therefore, the low-pressure source is more environmentally friendly without compromising survey quality.

INTRODUCTION
Seismic airguns are the predominant source used in marine seismic surveys. They function by discharging highly pressurized air, forming a bubble that expands and contracts in the water, exciting pressure waves over a wide range of frequencies. The low frequency waves are used to image targets of interest. Several studies have emphasized the need for improved low frequency content (below 30 Hz) for sub-salt and sub-basalt imaging (Ziolkowski et al., 2003). The high frequency energy (above 150 Hz) is generally useless for seismic imaging as it is attenuated before it reaches the target or scattered by the heterogeneous overburden. In addition, current seismic acquisition and processing techniques sample at 2 ms and only utilize frequencies up to 220 Hz. Thus, reducing the proportion of high frequency energy generated would improve the efficiency of the airgun. Furthermore, ocean noise from marine seismic surveys is thought to have a significant impact on marine life (Weilgart, 2007; Nowacek et al., 2015). The specific impact of marine seismic surveys on the plethora of different marine species is complicated, and understanding is hampered by limited data (Weilgart, 2013). However, it is likely that reducing the high frequency noise that is not used for seismic imaging will have environmental benefits without compromising survey quality.

Chelminski et al. (2016) proposed a low-pressure source (LPS) with radically reduced pressure and increased volume. They argue that the LPS will be more efficient and have lower high frequency content, alleviating environmental concerns. To investigate this idea, Chelminski Technology and Dolphin Geophysical conducted field tests of an LPS prototype in June 2015.


Due to experimental limitations, the field measurements were restricted to a limited range of airgun parameters. Furthermore, the prototype tested had a much smaller volume than that of the proposed LPS.
In this work we develop a numerical model for seismic airguns, based on the work by Ziolkowski (1970). We validate the model against data from the field tests of the LPS prototype. Previous authors (e.g., Landrø and Sollie, 1992; Li et al., 2014; de Graaf et al., 2014) have developed more complicated models and performed sophisticated inversions to find the best fitting model parameters. Here, we focus on the predictive capability of forward modeling. We perform numerical simulations to investigate airgun configurations that were not tested in the lake and to predict whether the full scale LPS will be more efficient and produce less high frequency noise than a conventional airgun.

DATA
Data was collected over two days at Lake Seneca, a 200 m deep lake in upstate New York. The LPS prototype was suspended at variable depth from a crane over the side of the boat. Two airgun volumes, 598 in³ and 50 in³, were tested at a range of depths (5, 7.5, 10, 15, and 25 m measured depth) and pressures (135 psi to 1320 psi for the 598 in³ airgun and 510 psi to 1850 psi for the 50 in³ airgun). Observations were made with a 24-channel downhole array in the far-field, 75 m below the airgun, with a spacing of 2 m between the channels. The observations are recorded at 32 kHz, a much higher temporal resolution than in industry seismic surveys, where 0.5 kHz is the standard sampling rate.

Figure 1: The Rayleigh-Willis equation (dashed) accurately predicts the dominant frequency of the far-field data (solid) across a range of different firing parameters.
The Rayleigh-Willis equation is a well-known formula used in the exploration industry to estimate the dominant frequency of a seismic airgun (Rayleigh, 1917; Willis, 1941; Cole, 1948):

f = k (1 + D/10)^{5/6} / (p_a V_a)^{1/3},    (1)


where D is the depth of the airgun in meters, p_a and V_a are the pressure and volume of the airgun, respectively, and k is a constant. We are interested in how the high and low frequency components of the signal change when the airgun parameters are varied. Therefore, we need to develop a numerical model of the system that can capture all of the frequency information, rather than just the dominant frequency.
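Equation 1 can be transcribed directly. The constant k depends on the unit system; it is left here as a free parameter (defaulting to an arbitrary 1.0) since the paper does not quote its value, but the scaling trends are independent of k.

```python
# Rayleigh-Willis dominant frequency estimate (equation 1). The constant k
# is an assumed free parameter, not a value taken from the paper.

def rayleigh_willis(depth_m, pressure, volume, k=1.0):
    """f = k (1 + D/10)^(5/6) / (pa * Va)^(1/3)."""
    return k * (1.0 + depth_m / 10.0) ** (5.0 / 6.0) / (pressure * volume) ** (1.0 / 3.0)

# Trends matching Figure 1: deeper guns ring at higher frequency, and a
# larger pressure-volume product lowers the dominant frequency.
f_shallow = rayleigh_willis(5.0, 410.0, 598.0)
f_deep = rayleigh_willis(15.0, 410.0, 598.0)
f_big = rayleigh_willis(5.0, 410.0, 4000.0)
```

These trends explain why the proposed LPS, with its large volume, can maintain low frequency output despite its reduced pressure.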

MODEL

Since the seminal paper by Ziolkowski (1970) there has been extensive work on numerical modeling of seismic airguns (e.g., Schulze-Gattermann, 1972; Safar, 1976; Ziolkowski, 1982; Li et al., 2010; de Graaf et al., 2014). We follow a similar treatment, assuming that the internal properties of the airgun and bubble are spatially uniform and that the bubble is approximately spherical. The first assumption poses a restriction on the temporal resolution of our model, limiting the model resolution to time scales long compared to the time it takes for a sound wave to propagate across the airgun and bubble. The resolution will vary depending upon the size and physical properties of the bubble. For the bubble at equilibrium the upper bound on the resolution is approximately 1 ms, corresponding to a frequency limit of 1 kHz. The second assumption is well satisfied as the bubble radius (~1 m) is far smaller than the wavelengths that we are interested in (>10 m). Therefore, it is appropriate to treat the bubble as a point source.

We solve the Euler equations governing the motion of a compressible fluid and evaluate the solution on the bubble wall to give a nonlinear ordinary differential equation for the bubble dynamics. Our work differs from previous studies (e.g., Ziolkowski, 1970; de Graaf et al., 2014) as we use the modified Herring equation (Herring, 1941; Cole, 1948; Vokurka, 1986) rather than the Gilmore (1952) equation to describe the bubble motions. The modified Herring equation is

R R̈ + (3/2) Ṙ² = (p_b - p_∞)/ρ_∞ + (R/(ρ_∞ c_∞)) ṗ_b,    (2)

where R and Ṙ = dR/dt are the radius and velocity of the bubble wall, respectively, p_b is the pressure inside the bubble, and p_∞, ρ_∞, and c_∞ are the pressure, density, and speed of sound, respectively, in the water infinitely far from the bubble. Without the ṗ_b term, equation 2 is the Rayleigh equation (Rayleigh, 1917), which is a statement of conservation of momentum for an incompressible fluid. The ṗ_b term is a correction for compressibility that allows for energy loss through acoustic radiation. The Herring equation assumes a constant, rather than pressure dependent, speed of sound, which is well justified as Ṙ/c_∞ ≪ 1. The modified version of the Herring equation neglects the (1 - Ṙ/c_∞) type correction factors (Vokurka, 1986).

The bubble is coupled to the airgun by mass conservation. We solve for the exit velocity of the flow out of the airgun at each time step rather than assuming choked flow. The airgun is assumed to discharge adiabatically. The temperature of the bubble is governed by the first law of thermodynamics for an open system. This allows for heat conduction across the bubble wall and accounts for the energy associated with the advection of mass from the airgun into the bubble. The air inside the airgun and the bubble is treated as an ideal gas with a heat capacity ratio of γ = 1.4. Combined with the modified Herring equation, this gives a system of nonlinear ordinary differential equations for the coupled bubble and airgun system. We solve this using an explicit Runge-Kutta solver with adaptive time-stepping.

The pressure perturbation in the water is related to the bubble dynamics by (Keller and Kolodner, 1956)

Δp(r,t) = ρ_∞ V̈(t - r/c_∞)/(4π r) - ρ_∞ V̇(t - r/c_∞)²/(32 π² r⁴),    (3)

where Δp is the pressure perturbation in the water, r is the distance from the center of the bubble, and V = (4/3)π R³ is the volume of the bubble. The second term on the right side is a near-field term that decays rapidly with distance and is negligible in the far-field. For the parameter space relevant to seismic airguns, equations (2) and (3) give identical results to the equivalent Gilmore (1952) formulations.
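A stripped-down version of the bubble dynamics can be integrated in a few lines. The sketch below solves equation 2 for a free adiabatic gas bubble only: it omits the airgun coupling, heat conduction, and mass advection described above, uses a fixed-step RK4 integrator rather than the adaptive solver of the paper, and all parameter values are illustrative assumptions.

```python
# Hedged sketch: free oscillation of an adiabatic air bubble under the
# modified Herring equation (equation 2). Not the paper's coupled model.

rho = 1000.0    # water density (kg/m^3)
c = 1500.0      # speed of sound in water (m/s)
p_inf = 2.0e5   # ambient pressure at depth (Pa), assumed
gamma = 1.4     # heat capacity ratio of air
R0, pb0 = 0.3, 1.0e6   # initial radius (m) and bubble pressure (Pa), assumed

def deriv(state):
    R, Rdot = state
    pb = pb0 * (R0 / R) ** (3.0 * gamma)      # adiabatic gas law
    pb_dot = -3.0 * gamma * pb * Rdot / R     # d(pb)/dt for the adiabatic gas
    # equation 2 rearranged for the wall acceleration R''
    Rddot = ((pb - p_inf) / rho + R * pb_dot / (rho * c) - 1.5 * Rdot**2) / R
    return [Rdot, Rddot]

def rk4(state, dt):
    k1 = deriv(state)
    k2 = deriv([s + 0.5 * dt * k for s, k in zip(state, k1)])
    k3 = deriv([s + 0.5 * dt * k for s, k in zip(state, k2)])
    k4 = deriv([s + dt * k for s, k in zip(state, k3)])
    return [s + dt / 6.0 * (a + 2 * b + 2 * c_ + d)
            for s, a, b, c_, d in zip(state, k1, k2, k3, k4)]

dt, state, radii = 2.0e-5, [R0, 0.0], []
for _ in range(1500):                 # 30 ms of simulated time
    state = rk4(state, dt)
    radii.append(state[0])
```

Because the initial bubble pressure exceeds ambient, the wall accelerates outward and overshoots its equilibrium radius, reproducing the expansion phase of the oscillation that radiates the low frequency bubble pulse.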


Figure 2: Bubble radius (top) and near-field pressure perturbation in the water, Δp = p_b - p_∞, (bottom) as computed by the Gilmore (1952) equations and with the analogous equations from Herring (1941) and Keller and Kolodner (1956), which are used in this work. The bubble radius from the modified Herring equation is used as an input to the Keller and Kolodner (1956) pressure equation. The bubble radius and pressure perturbation are normalized by the maximum of the Gilmore (1952) solutions. The initial conditions of Ziolkowski (1970) are used, where the initial volume of the bubble is equal to the volume of the airgun. The discontinuity in the derivative of the radius and pressure is due to the airgun port opening instantaneously.
The observed pressure perturbation in the water is a superposition of the direct arrival and the ghost, which is a wave that
is reflected from the surface of the water and arrives at the
receiver at a later time. In the near-field, the amplitude of the
ghost is much smaller than that of the direct arrival as the ghost
travels along a much longer path, reducing the amplitude by
geometrical spreading. In the far-field, the path length for the

Page 220

The observed pressure perturbation, pobs , is a superposition


of the direct arrival and the ghost. For a vertically down-going
direct wave, as is the case for our acquisition geometry, the
observed pressure perturbation in the water is computed by
pobs (r,t) = pd (r,t) pg (r + 2D,t),

(4)

where pd and pg are the pressure perturbations from the


direct arrival and the ghost, respectively. Equation 4 assumes
linearity and is only valid when the pressure perturbation is
dominated by the first term in equation 3, as is the situation for
the work shown here.

MODEL VALIDATION
In order to validate our model, we compare our simulation results to the lake data. The model has several tunable parameters. We tune these parameters so that the model fits the farfield data for one airgun firing configuration (Figure 3). We
can then match the measurements from the other firing configurations by varying the airgun properties (Figure 4). This is
done without any further tuning of the model parameters.
The magnitude of the pressure perturbation depends upon the
location of the receiver relative to the airgun. To remove this
dependency, we normalize all observations and simulations by
multiplying the pressure perturbation by r, the distance from
the airgun to the receiver, and state the result in bar m. The
port area of the airgun used in the lake was measured as 11 in².
In our simulations, we use a reduced area of 4 in² to best
fit the data. de Graaf et al. (2014) used a similar approach to
avoid overpredicting the amplitude of the initial peak when
modeling conventional airguns.
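For readers unfamiliar with it, the Rayleigh-Willis equation referenced throughout relates the bubble oscillation period to the source energy, which is proportional to the pressure-volume product PV, and to the absolute hydrostatic pressure at the firing depth. A common form (our paraphrase of the classical Rayleigh (1917) and Willis (1941) results, not a formula reproduced from this abstract) is

```latex
T \propto \frac{(PV)^{1/3}}{P_h^{5/6}}, \qquad P_h = P_{\mathrm{atm}} + \rho g D,
```

so that at a fixed depth two sources with the same PV share the same dominant bubble frequency f = 1/T. This is why PV is matched in the Figure 6 comparison, and increased, by enlarging the volume at low pressure, to lower the dominant frequency in Figure 7.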


Figure 3: Comparison between the far-field observations and
simulations in the time and frequency domain. Airgun properties
are depth of 5 m, pressure of 410 psi, and volume of 598 in³. The
model parameters, relating to heat transfer and fraction of mass
discharged from the airgun, are tuned to provide the best fit.
The ghost must be accounted for in order to accurately simulate
the observed pressure perturbations, especially in the far-field.
The pressure perturbation due to the ghost signal is calculated by
replacing the path length of the direct arrival, r, with the path
length of the ghost, r + 2D, in equation 3. The sea surface is
assumed to have a reflectivity of -1 (Ziolkowski, 1982). The
reflectivity can be frequency dependent, especially in rough seas.
The lake surface was relatively flat during data acquisition and we
found that -1 was an appropriate choice for this work.

Data
Model

1
0
-1
-2

0.1

0.2

0.3

0.4

0.5

time (s)
200

power (dB)

Downloaded 12/16/16 to 79.62.242.250. Redistribution subject to SEG license or copyright; see Terms of Use at http://library.seg.org/

Numerical modeling of seismic airguns

180
160
140
120
100

101

102

frequency (Hz)

Figure 4: Comparison between far-field observations and
simulations for an airgun fired at a depth of 15 m, pressure of
1030 psi, and volume of 598 in³. The tunable model parameters are
the same as for Figure 3.

The simulation results are in agreement with the Rayleigh-Willis
equation (Figure 5) and display similar trends to the data (see
Figure 1). The fit to the data and agreement with the
Rayleigh-Willis equation validate our model and enable us to use it
to investigate airgun firing configurations not tested in the lake,
such as the proposed LPS.

Figure 5: Simulation results are in agreement with the
Rayleigh-Willis equation for three firing configurations (810 psi,
598 in³, 5 m; 610 psi, 598 in³, 10 m; 410 psi, 598 in³, 15 m). The
corresponding spectra for the data are shown in Figure 1.

LOW PRESSURE SOURCE

Conventional airguns typically have volumes of less than 1000 in³
and are pressurized to 2000 psi. Chelminski et al. (2016) proposed
a low pressure source (LPS) with a volume of up to 6000 in³ and
pressure of 600 psi to 1000 psi. The LPS will have a much larger
port area than conventional airguns, 62 in² compared to 16 in².

Figure 6 shows a comparison between the simulated pressure signal
for a typical conventional airgun and for the proposed LPS with the
same PV value. This ensures that, according to the Rayleigh-Willis
equation, they will have the same dominant frequency. The LPS
reduces the high frequency noise by 5 dB at 150 Hz. However, with
the same PV as the conventional airgun, the LPS is unsuccessful at
improving the low frequency content, with a reduction of 1.5 dB at
3 Hz.

Figure 6: Comparison between simulations of the near-field
(r = 1 m) pressure perturbation generated by a conventional airgun
(1000 in³ at 2000 psi) and a LPS (3333 in³ at 600 psi) fired at a
depth of 7.5 m. This LPS reduces the high frequency noise but also
decreases the low frequency content compared to a conventional
airgun.

Larger volume conventional airguns (2000 in³) have been proposed as
a solution to improve the low frequency content (Ziolkowski et al.,
2003). However, the larger volume airguns are heavy and have
maintenance issues because of the high pressures that they must be
engineered to withstand. Therefore, they have not been widely
adopted by the industry. An advantage of the LPS is that much
larger volumes can be used without engineering or operational
difficulties, improving the low frequency content. Figure 7 shows a
comparison between a conventional airgun and a larger volume LPS
(4000 in³). The PV value for the LPS is greater than for the
conventional airgun. The larger LPS reduces the high frequency
noise by 6 dB at 150 Hz compared to the conventional airgun and has
a lower dominant frequency. The low frequency content at 3 Hz is
the same for the two designs. This demonstrates that increasing the
volume of the LPS results in improved low frequency content, as
suggested by the Rayleigh-Willis equation. Even larger volume LPS
(up to 6000 in³) can be built, and safely operated, that will
generate more low frequency energy while maintaining the
environmental benefits of reduced high frequency noise.

Figure 7: Comparison between simulations for a conventional airgun
(1000 in³ at 2000 psi) and a larger LPS (4000 in³ at 600 psi) fired
at a depth of 7.5 m. The low frequency content is the same for the
two designs but the LPS produces less high frequency noise.

The peak-to-bubble ratio (the amplitude of the initial pressure
pulse compared to the amplitude of the second pulse, which is due
to the oscillation of the bubble) is reduced from 1.92 for the
conventional airgun to 1.79 for the 3333 in³ LPS and 1.76 for the
4000 in³ LPS. This will not degrade the quality of the data as
processing can extract useful signal from the bubble as well as
from the initial pulse (Ronen et al., 2015).

CONCLUSION

There is significant interest in reducing the high frequency noise
that is produced by seismic airguns as this is thought to adversely
impact marine life. In addition, it is desirable to improve their
imaging capabilities and efficiency. The low-pressure source has
been proposed as an improvement to conventional seismic airguns
that will achieve these goals.

We present a numerical model for seismic airguns and low-pressure
sources that we validate against high resolution far-field data
from a lake. Numerical simulations show that the proposed low
pressure source can reduce the high frequency noise without
compromising the usable low frequency content compared to a
conventional airgun and is thus more efficient and environmentally
friendly. Furthermore, the low-pressure source can be manufactured
and operated at far larger volumes than conventional airguns,
enabling the low frequency content to be improved, resulting in
better sub-salt and sub-basalt imaging capabilities.

ACKNOWLEDGMENTS

We thank Chelminski Technology and Dolphin Geophysical for
providing us access to the data from Lake Seneca. We acknowledge
the Stanford Exploration Project and their sponsors for financial
support.


EDITED REFERENCES
Note: This reference list is a copyedited version of the reference list submitted by the author. Reference lists for the 2016
SEG Technical Program Expanded Abstracts have been copyedited so that references provided with the online
metadata for each paper will achieve a high degree of linking to cited sources that appear on the Web.
REFERENCES

Chelminski, S., J. Chelminski, S. Denny, and S. Ronen, 2016, The low-pressure source: Hart's E&P
Magazine, 14.
Cole, R. H., 1948, Underwater explosions: Princeton University Press,
http://dx.doi.org/10.5962/bhl.title.48411.
de Graaf, K. L., I. Penesis, and P. A. Brandner, 2014, Modelling of seismic airgun bubble dynamics and
pressure field using the Gilmore equation with additional damping factors: Ocean Engineering,
76, 32–39, http://dx.doi.org/10.1016/j.oceaneng.2013.12.001.
Gilmore, F. R., 1952, The growth or collapse of a spherical bubble in a viscous compressible liquid:
Hydrodynamics Laboratory, California Institute of Technology, Technical report.
Herring, C., 1941, Theory of the pulsations of the gas bubble produced by an underwater explosion:
Technical report, OSRD Report 236.
Keller, J. B., and I. I. Kolodner, 1956, Damping of underwater explosion bubble oscillations: Journal of
Applied Physics, 27, 1152–1161, http://dx.doi.org/10.1063/1.1722221.
Landrø, M., and R. Sollie, 1992, Source signature determination by inversion: Geophysics, 57,
1633–1640, http://dx.doi.org/10.1190/1.1443230.
Li, G., Z. Liu, J. Wang, and M. Cao, 2014, Air-gun signature modelling considering the influence of
mechanical structure factors: Journal of Geophysics and Engineering, 11, 025005,
http://dx.doi.org/10.1088/1742-2132/11/2/025005.
Li, G. F., M. Q. Cao, H. L. Chen, and C. Z. Ni, 2010, Modeling air gun signatures in marine seismic
exploration considering multiple physical factors: Applied Geophysics, 7, no. 2, 158–165,
http://dx.doi.org/10.1007/s11770-010-0240-y.
Nowacek, D. P., C. W. Clark, D. Mann, P. J. O. Miller, H. C. Rosenbaum, J. S. Golden, M. Jasny, J.
Kraska, and B. L. Southall, 2015, Marine seismic surveys and ocean noise: Time for coordinated
and prudent planning: Frontiers in Ecology and the Environment, 13, 378–386,
http://dx.doi.org/10.1890/130286.
Rayleigh, L., 1917, On the pressure developed in a liquid during the collapse of a spherical cavity:
Philosophical Magazine Series 6, 34, 94–98, http://dx.doi.org/10.1080/14786440808635681.
Ronen, S., S. Denny, R. Telling, S. Chelminski, J. Young, D. Darling, and S. Murphy, 2015, Reducing
ocean noise in offshore seismic surveys using low-pressure sources and swarms of motorized
unmanned surface vessels: 85th Annual International Meeting, SEG, Expanded Abstracts,
4956–4960, http://dx.doi.org/10.1190/segam2015-5928795.1.
Safar, M. H., 1976, The radiation of acoustic waves from an airgun: Geophysical Prospecting, 24,
756–772, http://dx.doi.org/10.1111/j.1365-2478.1976.tb01571.x.
Schulze-Gattermann, R., 1972, Physical aspects of the air-pulser as a seismic energy source:
Geophysical Prospecting, 20, 155–192, http://dx.doi.org/10.1111/j.1365-2478.1972.tb00628.x.
Vokurka, K., 1986, Comparison of Rayleigh's, Herring's, and Gilmore's models of gas bubbles: Acta
Acustica united with Acustica, 59, 214–219.
Weilgart, L., 2007, The impacts of anthropogenic ocean noise on cetaceans and implications for
management: Canadian Journal of Zoology, 85, 1091–1116, http://dx.doi.org/10.1139/Z07-101.
Weilgart, L., 2013, A review of the impacts of seismic airgun surveys on marine life: CBD Expert
Workshop on Underwater Noise and its Impacts on Marine and Coastal Biodiversity, 1–9.


Willis, H. F., 1941, Underwater explosions: the time interval between successive explosions: Technical
report, British Report WA-47-21.
Ziolkowski, A., 1970, A method for calculating the output pressure waveform from an air gun:
Geophysical Journal of the Royal Astronomical Society, 21, 137–161,
http://dx.doi.org/10.1111/j.1365-246X.1970.tb01773.x.
Ziolkowski, A., 1982, An airgun model which includes heat transfer and bubble interactions: 52nd
Annual International Meeting, SEG, Expanded Abstracts, 187–189.
Ziolkowski, A., P. Hanssen, R. Gatliff, H. Jakubowicz, A. Dobson, G. Hampson, X. Y. Li, and E. Liu,
2003, Use of low frequencies for sub-basalt imaging: Geophysical Prospecting, 51, 169–182,
http://dx.doi.org/10.1046/j.1365-2478.2003.00363.x.


An unmanned aerial vehicle with vibration sensing ability (seismic drone)


Robert R. Stewart*, Li Chang, Srikanth K.V. Sudarshan, Aaron T. Becker, and Li Huang, University of Houston
Summary
We describe the design, testing, and potential of an
unmanned aerial vehicle (UAV or drone) with seismic
sensing capabilities. The seismic or vibration sensing
platform (four 100 Hz geophones plus recording electronics)
is attached to a 3DR Solo Quadcopter drone. The geophone
spikes become the drone's landing legs. The drone and its
geophone payload have been successfully flown a number of
times with take-off, programmed or remotely controlled
navigation, landing, and recording. We have conducted tests
(using hammer and weight drop sources) to compare the
response of the landed seismic-drone system to planted
geophones and a conventional cabled seismic system. The
seismic traces from the drone are quite similar to those of the
planted geophones. To test the spike penetration on landing,
we created three different scenarios (dropping the drone on
sand, grass, and dry clay) and measured the depth of
penetration (up to 20 mm). We conducted a walk-away
survey with the drone versus a planted geophone line. Again
the drone and planted geophone responses are very similar.
We conclude that the drone-mounted geophone platform can
fly to a site, land, and record seismic vibrations with similar
quality as planted geophones. Detachable and roving seismic
platforms may further increase the drone's seismic reach.
Drones show considerable promise for various kinds of
seismic measurements and surveys.

Introduction

There is an exciting new technology that has become popular
with recreational flyers and a growing cadre of geoscience
professionals (Cicoria, 2015). It is the unmanned aerial
vehicle (UAV) or airborne drone (Chamayou, 2015; Whittle,
2015) - a flying platform with propulsion, positioning, and
remote or self control. Most drones also have some kind of
sensors which capture and possibly transmit information.
There has been considerable coverage in the news and
technical press about the burgeoning promise, along with
concerns, of drones (Horgan, 2013; Adams and Bushwick,
2014). The promise of UAVs is legion - from remote rescue
to deliveries, farming, and forensics. Drones are being used
in humanitarian response efforts after disasters. For
example, after the April 2015 magnitude 7.8 earthquake in
Nepal, drones were deployed and able to survey sites,
inspect buildings and roads, and create 3D maps of cultural
heritage sites such as temples (Team RotorDrone, 2015).
However, there are issues which include safety, reliability,
and privacy. Preliminary regulations governing the
recreational and commercial operation of drones have been
outlined (and implemented in January 2016, requiring US
flier registration with the Federal Aviation Administration,
FAA), but a more extensive regulatory environment is under
development by the FAA and other national aviation bodies.

Now, moving toward the geophysics world, measuring
vibrations and a material's properties are key components of
many fields, including geotechnical engineering, earthquake
monitoring, and seismic surveying. Sometimes, associated
sites may be difficult or hazardous to access. In addition,
there might be many places to measure, which could require
a great deal of hand labor (3D seismic surveys). Earthquake
monitoring, especially after a disastrous earthquake, requires
placing sensors close to the event's epicenter. This
hazardous need could be made much safer by having a
robotic sensor emplaced in the calamity zone. The UAV
itself could mediate geophone deployment in three ways: 1)
dropping the sensor from the air to the ground (or placing it)
to be left or collected in another way [possibly via a Flirtey
(Sonner, 2016) or Amazon Prime Air system], 2) deploying
the sensor on the ground and returning aerially to pick it up,
or 3) landing and using a vibration detector that is integrated
into the UAV platform. We describe an integrated system
here where we have added a seismic recording platform to a
commercial drone (3DR Solo Quadcopter in Figure 1).

Figure 1. 3DR Solo Quadcopter (drone) with a four-legged seismic
recording platform attached. The drone flew to the grassy location,
landed, and recorded seismic vibrations. The vibrations recorded by
the drone are compared to those from adjacent, planted geophones.


The seismic drone could be used for terrestrial operations or
to land on water and make marine (pressure) measurements
(Lorge, 2015). Another type of aerial drone uses fixed wings
more like a plane than a helicopter. These plane-like
unmanned aircraft are often used for longer flights (Oleson,
2013) as opposed to helicopter-type or rotor-wing craft. A
fixed-wing drone might drop seismic sensors as could our
rotor-wing craft.
The concept of using robots to place seismic sensors has
captured the imagination of at least a few (e.g., Gifford,
2005). Postel et al. (2014) describe using mobile robots for
geophone placement. This paper presents the design of a
seismic drone and its performance including landing issues,
ground coupling, and penetration. We conducted tests to
compare the drones seismic response with conventional
seismic recording. The overall goal of this research is to
explore the quality, logistics, and potential of employing
drones for seismic measurements and surveys.
Platform design
The sensor platform of our seismic drone contains four
geophones, wired in series, on a cross-member X support
structure. Recording electronics and battery are also
mounted on the X structure. This work primarily tests the
seismic drone concept and examines the data quality
attainable. Thus, to make an equitable comparison with
planted geophones, we take the raw output from the drone's
geophones and plug it into the planted geophone recorder
system. Thus, the only variable for our measurements is the
drone mount and landing. The drone and planted geophones
are all Geospace 100 Hz vertical geophones. The geophones
are set 25 cm apart on the drone's X structure. The planar
X platform makes the sensors largely perpendicular to the
ground surface upon landing.

Figure 2: Design of the four-geophone platform attached to the
3DR Solo Quadcopter.

Experiments

We conducted three experiments to test the drone's seismic
capabilities:

Coupling

The first experiment compares the output from the geophones, as
deployed with variable coupling on different surfaces (Figure 3),
with the seismic drone. This test was undertaken at the University
of Houston campus.

Figure 3: Different geophone configurations and setups for
comparing with the seismic drone. We used a 10 lb hammer as a
source recorded into: a) round platform, b) wooden platform,
c) well planted geophone, and d) marginally planted geophone.

Recording a shot gather

The second experiment compared the drone and the cabled system at
our La Marque Geophysical Observatory, 15 miles north of Galveston,
Texas. The recording system and source are a Geometrics StrataVisor
NZ and a 40 kg propelled energy generator (PEG). We deployed a
24-channel 2D line. The drone flew closely to each receiver,
landed, and recorded each shot.

Soil penetration

The third experiment compares soil penetration and the angle of
incidence in three different soil types. This is important to
ensure quality data for coupling in various soils. We also need to
test whether the drone can take off, even when the geophones are
well planted in soil.

Figure 4 shows the data recorded from some of the configurations
as outlined in Figure 3. The drone records a seismogram, on the
hard surface, about the same as the plate-based geophone. In the
long grass, the drone has a slightly ringier response compared to
the well planted geophone in this case.
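Statements like "about the same" and "slightly ringier" can be quantified with a standard similarity measure the abstract does not detail: the peak normalized cross-correlation between a drone trace and its co-located planted-geophone trace, plus the lag at which the peak occurs. A hedged sketch (the function name, array inputs, and sample interval dt are our assumptions):

```python
import numpy as np

def trace_similarity(a, b, dt):
    """Peak normalized cross-correlation of two equal-length traces.

    Returns (coefficient, lag): a coefficient near 1 means very similar
    waveforms; a positive lag (seconds) means trace a arrives later than b.
    """
    a = (a - a.mean()) / (a.std() * len(a))
    b = (b - b.mean()) / b.std()
    xc = np.correlate(a, b, mode="full")  # lags -(n-1) .. n-1
    k = int(np.argmax(np.abs(xc)))
    return xc[k], (k - (len(b) - 1)) * dt
```

Applied to the co-located records in Figure 4b, this would also recover the small time shift attributed to the half-meter separation between the drone and the planted geophones.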


Figure 6. Schematic diagram of the drone versus planted geophone
test.

Survey Results

Figure 4. Displays of the seismograms generated by different
geophone plants and the seismic drone. a) The drone versus
different platforms (round and wooden). Oscillations in the
platforms are not damped quickly since they are not fixed to the
ground. The maximum amplitude values are similar and appear almost
simultaneously. The drone landed quite close to the geophone setups
so no time shift is observed. b) The drone versus planted geophones
(well planted and marginal to satisfactorily planted). We observe
mild reverberations in the drone data compared to the planted
geophones. The maximum amplitude values are similar but do not
appear aligned as the locations were approximately half a meter
apart, and hence a time shift occurred.

In Figure 7, we observe the seismograms as generated by an
accelerated 40 kg weight drop with the drone and planted cabled
system. The drone data is a little noisier than the planted
geophone data. However, most of the events are quite similar. We
note that the drone's geophones could be more rigidly attached than
in this prototype model, which should decrease noise and
reverberations.

We undertook a longer offset survey at our La Marque Geophysical
Observatory, about 15 miles north of Galveston, Texas, as shown in
Figures 5 and 6.

Figure 7. Shot gather comparison using a 40 kg accelerated weight
drop: a planted geophone (left) and drone (right) co-located shot
gather. The drone and planted geophones are connected to the
Geometrics 60-channel StrataVisor. Each receiver location is 1 m
apart.

Figure 5. Field test photograph at the La Marque Geophysical
Observatory, Texas, with a planted geophone and cabled system
versus the drone.


This next experiment tests the soil penetration, upon landing of
the seismic drone, in different soil types. Good contact with soil
is important for obtaining quality data; hence the experiment
explores the penetration capability of the setup in common soils.
We performed the experiment in grass, sand, and dry clay. The
penetration was maximum in sand, followed by grass, but the drone
did not have spike penetration into dry clay, as shown in Figure 8.
The drone was able to take off after all landings.


Figure 8. Box and whisker plots comparing the variations in depth
of geophones attached to the seismic drone after landing.

Further capabilities

We noted previously that the vibration or seismic sensing platform
could be detached from the aircraft and left to monitor. In
addition, the platform could also be an autonomous rover in its own
right (Figure 9). It could thus move to various positions to
achieve, for example, more recording locations, better coupling,
sunlight, shade, etc. It could then be recovered by the drone or by
other means.

Figure 9. Hexapod robot (from EZRobot) with six 100 Hz geophones
attached as legs and sensors.

Part of the promise of the seismically equipped drone is to fly
autonomously on pre-programmed paths to either take measurements
and depart or leave the sensor package behind at a number of
locations. We can also imagine many drones with a sensor or sensors
undertaking larger surveys. There are also numerous ways that the
drone can transfer or communicate its information via radio link,
WiFi, or connected download upon return to home base. The data
transfer architecture could be bucket-brigade style among drones,
link to a master drone or tower, or directly connected to a base
data repository.

Conclusions

In this paper, we have presented the design, testing, and potential
of unmanned aerial vehicles with integrated seismic sensing
capability. We built a seismic platform with four 100 Hz geophones
and attached it to a 3DR Solo Quadcopter. The Quadcopter was able
to fly preprogrammed or piloted flight paths, land, record seismic
data, and take off again. We have thus demonstrated the
proof-of-concept of mounting geophones on a drone and acquiring
reasonable seismic data. The data acquired after flying, landing,
and recording under various scenarios indicates that the drone can
indeed record data with similar quality as that of a planted
geophone. This type of sensing can be automated. There are many
opportunities for future work in platform design, data processing
and transmission, and logistics. Unmanned aerial systems are
advancing very rapidly. There are undoubtedly many activities and
requirements in the geosciences that will be assisted by drones.

Acknowledgments

We express our appreciation to the supporters of the Allied
Geophysical Lab and the Robotic Swarm Control Lab at the University
of Houston.


EDITED REFERENCES
Note: This reference list is a copyedited version of the reference list submitted by the author. Reference lists for the 2016
SEG Technical Program Expanded Abstracts have been copyedited so that references provided with the online
metadata for each paper will achieve a high degree of linking to cited sources that appear on the Web.
REFERENCES

Adams, E., and S. Bushwick, 2014, Successful drone delivery: Popular Science, 285, 43–50.
Chamayou, G., 2015, A theory of the drone: The New Press.
Cicoria, N., 2015, Unmanned aerial vehicles (UAVs) in seismic: How to evaluate drone technology and
operators effectively: The Source: Canadian Association of Geophysical Contractors, 12, 16–18.
Gifford, C. M., 2005, Robotic seismic sensors for polar environments: Master's thesis, University of
Kansas.
Horgan, J., 2013, The drones come home: National Geographic, 223, 122–135.
Lorge, G., 2015, Come on in, the water's fine! Water-tight multirotors from QuadH2O, in J. Babler, ed.,
Make: Magazine: Maker Media Inc., 44, 42–43.
Masunaga, S., 2016, Dynamic growth for drones ahead: Houston Chronicle, B3.
Oleson, T., 2013, Droning on for science: Unmanned aerial vehicles take off in geosciences research:
Earth, 58, 28–37.
Postel, J. J., T. Bianchi, and J. Grimsdale, 2014, Drone seismic sensing method and apparatus: U.S.
Patent US 2014/0307525 A1, https://www.google.com/patents/US20140307525.
Sonner, S., 2016, Drone journey called a first for autonomous deliveries: Houston Chronicle, B3.
Team RotorDrone, 2015, Disaster in Nepal: Drones aid first responders in the aftermath of a devastating
earthquake: RotorDrone Magazine, 42–47.
Whittle, R., 2015, Hellfire meets Predator: How the drone was cleared to shoot: Air and Space
Magazine, 29, 22–27.


A novel marine exploration method offering true 4C 3D full-offset
full-azimuth full-bandwidth seismic acquisition (FreeCable™)

Luc Haumont*, Laurent Velay, and Michel Manin, Kietta

Summary

Traditional marine seismic acquisition methods (streamer, OBC) have
reached some inherent limitations that are difficult to overcome.
Some new methods such as nodes have been recently introduced but
lead to a significant cost increase in opex and/or capex. The new
patented method presented here combines the advantages of all
techniques to offer excellent quality data in a very efficient
manner. The paper describes the method's background, its
positioning compared to competing techniques, and presents the key
advantages of this new concept.

Introduction

Offshore seismic exploration methods need to adapt to the current
price environment and develop new technologies that address the
complexities of offshore operations and increasingly challenging
geological settings. The exponential growth in the amount of
acquired data over time confirms the need for more quality
requirements (Figure 1).

Figure 1: channel number increase per crew (actual and
interpolated)

Today, the challenge is to deliver data better suited to advanced
quantitative interpretation methods while controlling costs. The
latest innovations in seismic acquisition are full azimuth
geometries and multi-sensor technologies. They attempt to sample
seismic wave fields in the most comprehensive manner: increased
coverage, finer grids, broader bandwidth, lower noise, multiple
components.

The number of towed streamers has increased to reach up to 16
streamers behind a seismic vessel. Because of the level of
hydrodynamic forces (the mechanical tension reaches 70 tons at the
paravane), this increase has approached its natural limit
(Figure 2), estimated to be approximately 800 m of swath width in
CDP.

Figure 2: evolution of the maximum number of streamers towed by a
single vessel over the years, showing the stalling trend

To overcome this limitation, multi-azimuth measurement


can be obtained by surveying the same area several times
using a standard array of towed streamers but different
shooting azimuths. The cost is approximately multiplied by
the number of desired azimuths.
Alternatively, Wide Azimuth Towed Streamer (WATS) acquisition
requires complex logistics involving several seismic vessels and
source vessels. The azimuth sampling is improved at the expense of
shot sampling, which in turn yields a moderate coverage.
Another area of improvement is signal spectrum enlargement. The
industry has developed different solutions to overcome the ghost
problem, such as the over-under technique, the slant streamer
technique, or the multi-sensor streamer. However, these advantages
remain limited since measurements are corrupted by noise, linked to
the fact that streamers are still towed at about 5 knots.
Noise reduction has been an improvement area for a long time, for
instance with the introduction of solid streamers. Streamer noise
generally comprises the following: towing noise, swell noise, and
the strumming noise related to the mechanical tension in the
streamer. The post-processed noise level is also a function of the
processing gain, which can be improved due to a higher coverage
(number of traces in a particular bin).
Ocean bottom technologies (OBC, OBN) are currently the
only solutions that address all the geophysical issues.
However, due to limited productivity and high costs, OBC
and OBN have managed to capture only a small percentage
of the seismic market. Additionally, seabed techniques are
not well adapted to certain types of environments such as
hilly seabed or corals where deployment or sensor coupling


might be very complicated or impossible, and are impacted by seabed
interface related noise (e.g., Scholte waves, mudroll, shear wave
noise).

The new method introduces a technology that retains the ocean
bottom advantages while overcoming the inherent efficiency problem.

New method description

The new method uses classical principles of reflection seismology:
New method description
The new method uses the classical principles of reflection seismology:
• One or more shooting vessels generate impulses with an air gun array (source).
• A large number of seismic sensors mounted on cables measure the reflected waves (receivers).
The main novelty of the system lies in the constitution of the receiver array and the way it is operated. The receiver is composed of a set of cables meant to be pseudo-stationary with respect to the seabed and floating in mid-water (Stationary Midwater Autonomous Cable - SMAC). Each SMAC is physically independent of the others and attached at its two ends to unmanned vessels (Autonomous Recording Vessels - ARV) for tensioning and controlling its position. Each SMAC is equipped with hydrophones and geophones to measure both the wave pressure and the particle velocity. Hydrophone and geophone signals are combined to cancel out the receiver ghost, thus leading to a flat spectrum regardless of depth. The SMAC embeds specifically developed equipment to control the depth at low speed. The operations are supervised from a master vessel that communicates with the ARV fleet through wireless radios. Real-time QC is done onboard.
The typical configuration of the system is as follows:
• Each cable is 8 km long and includes a 4C station every 25 m (320 stations per cable);
• The system spread consists of 20 parallel cables spaced 400 m apart, leading to a receiving area of 8 km by 8 km;
• The immersion depth is between 10 m and 100 m.


The main specificities compared to existing systems are:
• The source and the receivers are totally independent;
• The receiver cables are not physically linked to each other;
• The cables are immersed at a depth considerably greater than that of towed streamers, in a much quieter environment, and the system does not lie on the seabed (as in OBC/OBN).

2016 SEG
SEG International Exposition and 86th Annual Meeting

Several shooting methods are possible since the source and the receivers are independent. For the sake of clarity, two methods are introduced:
• a more qualitative method (the patch method);
• a more productive method (the progressive method).

In the patch method, the 20 cables are stationary with respect to the seabed. The shooting vessel sails perpendicular to the lines and shoots every 25 m, with an inter-line spacing of 400 m. An overshoot of 4 km is used on each side (Figure 3). This method is of the stop-and-go type: once the 8 km × 8 km zone is covered, the array moves to the next contiguous zone.

Figure 3: Patch method with typical spread configuration

In the progressive method, the 20 cables move slowly (e.g., at 0.1 knot) along a pre-determined route. The shooting vessel sails perpendicular to the cables and shoots every 25 m, with an overshoot of 4 km on each side. The cable speed is such that the shooting vessel has time to shoot a complete line while the cables move by 400 m.
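As a rough consistency check, the relation between vessel speed, line length and cable speed can be sketched as follows. The 4.5-knot vessel speed is an assumed figure for illustration, not a value stated in the abstract:

```python
def required_cable_speed(line_km=16.0, vessel_knots=4.5, crossline_m=400.0):
    """Cable speed (knots) such that the array advances one crossline
    spacing while the vessel shoots one full line (turn time ignored).
    The 16-km line includes the 4-km overshoot on each side of the
    8-km spread; the vessel speed is an assumption."""
    knot_kmh = 1.852                       # 1 knot = 1.852 km/h
    line_time_h = line_km / (vessel_knots * knot_kmh)
    return (crossline_m / 1000.0) / line_time_h / knot_kmh

print(round(required_cable_speed(), 2))    # → 0.11, close to the 0.1 knot quoted above
```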
Method advantages
Low noise 4C data and flat spectrum
The noise level is significantly reduced compared to towed-streamer technology because the SMAC is subject to much lower tension, moves more slowly through the water and can operate at greater depths.
Turbulence and other hydrodynamic noise sources are suppressed. Flow noise, which is proportional to the square of the speed [Elboth et al. 2010], is significantly reduced (by at least 20 to 30 dB). The SMAC is not sensitive to sea-surface perturbations such as waves and wind, and the background noise at 100 m depth is extremely weak [Elboth et al. 2009]. The low tension and the low level of vibrations along the cables remove the tug noise and strumming noise typically seen in towed-streamer techniques [Schoenberger and Mifsud 1974].
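The speed-squared scaling cited above can be turned into a back-of-the-envelope estimate. The ~1-knot residual water speed assumed for a quasi-stationary cable is an illustrative figure, not a value from the paper:

```python
import math

def flow_noise_reduction_db(v_fast_knots, v_slow_knots):
    """Flow-noise amplitude reduction in dB if the noise amplitude scales
    with the square of the flow speed [Elboth et al. 2010]:
    20*log10((v_fast/v_slow)**2) = 40*log10(v_fast/v_slow)."""
    return 40.0 * math.log10(v_fast_knots / v_slow_knots)

# Towed streamer at ~5 knots vs. an assumed ~1 knot residual water speed:
print(round(flow_noise_reduction_db(5.0, 1.0), 1))  # → 28.0, consistent with 20-30 dB
```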
The 4C sensors embedded in the SMAC are perfectly coupled to the water, which is a homogeneous medium. There are no coupling or interface problems such as those that can be encountered in seabed technologies.
The good quality obtained on geophone signals at low
frequencies (thanks to reduced tension and vibrations)
allows receiver ghost cancellation to be done at any depth,
leading to a flat spectrum over the full seismic frequency
band (frequency notch removed). The received signal has
useful frequency content down to a few Hz.
It is noteworthy that even the inline geophone signal records useful seismic energy and can be exploited to obtain a 3D vector measurement of the particle velocity. The method provides true 4C acquisition and opens the possibility for advanced processing in the domains of noise suppression, signal separation, angle-of-arrival estimation and wavefield reconstruction.
Data richness: 3D, high fold, full offset, full azimuth
The method produces full 3D, high-fold seismic data with full offset and full azimuth. Full-azimuth and full-offset (including zero-offset) data are obtained by exploiting the fact that the sources and receivers are not linked and the shooting vessel(s) freely navigate(s) in the survey area.

Figure 4: receiver (•) and shot point (+) spatial sampling

Dense shooting, as classically used with nodes, is also possible with this method. Figure 7 displays the coverage map obtained with a 50 m × 50 m shooting pattern, yielding a 6000-fold coverage in the natural bin.
The full 3D acquisition brings, among others, two qualitative advantages:
• in complex subsalt geological structures, it may allow the illumination of zones appearing as shadow zones for other technologies;
• it provides richer data for AVOA (amplitude versus offset and azimuth) reservoir characterization studies.

The method uses an ideal geometry with regularly spaced receivers inline and regularly spaced shot points crossline (Figure 4). It leads to a natural square bin of 12.5 m × 12.5 m in the typical configuration.
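The regular geometry makes the fold map easy to verify numerically. The sketch below bins source-receiver midpoints for a scaled-down (2 km) version of the spread; the dimensions are reduced only to keep the example fast, so the absolute fold values are not those of the full survey:

```python
import numpy as np

# Scaled-down spread: receivers every 25 m along lines 400 m apart;
# shot points every 25 m along perpendicular lines 400 m apart
# (with crossline overshoot), mirroring the geometry described above.
rec_x, rec_y = np.meshgrid(np.arange(0, 2000, 25.0), np.arange(0, 2000, 400.0))
sht_x, sht_y = np.meshgrid(np.arange(0, 2000, 400.0), np.arange(-1000, 3000, 25.0))
rec = np.c_[rec_x.ravel(), rec_y.ravel()]
sht = np.c_[sht_x.ravel(), sht_y.ravel()]

# Midpoint of every source-receiver pair, binned into 12.5 m x 12.5 m
# bins over the central square (edges offset by half a bin so midpoints
# fall strictly inside bins rather than on edges).
mid_x = (rec[:, None, 0] + sht[None, :, 0]) / 2.0
mid_y = (rec[:, None, 1] + sht[None, :, 1]) / 2.0
edges = np.arange(-6.25, 1000.0, 12.5)
fold, _, _ = np.histogram2d(mid_x.ravel(), mid_y.ravel(), bins=[edges, edges])
print(fold.min(), fold.max())  # every bin is hit; the fold map is regular
```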
The typical configuration in patch mode leads to a seismic fold of 400 in the natural bin (Figure 5). The full-fold area is 8 km × 8 km. This high coverage per bin greatly improves the post-stack signal-to-noise ratio (+26 dB). The geometry produces a completely isotropic response since full offset and full azimuth are obtained in every bin (Figure 6).
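The +26 dB figure follows directly from the fold, under the usual assumption that the signal stacks coherently while the noise stacks incoherently:

```python
import math

def poststack_snr_gain_db(fold):
    """Post-stack SNR improvement of a stack of `fold` traces, assuming
    identical signal and uncorrelated noise per trace: 10*log10(fold)."""
    return 10.0 * math.log10(fold)

print(round(poststack_snr_gain_db(400)))   # → 26, the figure quoted above
print(round(poststack_snr_gain_db(6000)))  # → 38, for the dense-shooting case
```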

Figure 5: Fold map in the natural bin (12.5 m × 12.5 m)

Figure 6: Rose diagram in the full-fold area

Figure 7: Fold map obtained with 50 m × 50 m dense shooting in the ideal case (left) and with sea-current effects modelled (right)

In the presence of sea currents, the receiver positions will not be perfectly stationary. A change in current intensity with no direction change has no effect on the spread. Stable changes of current direction rotate the spread as a whole and shift it mainly in the crossline direction. Current variations and their effects on the spread have been accurately and extensively characterized with available current data from the different geographical zones in the main areas of interest. Patented navigation techniques for the ARVs and the source(s) have been developed to ensure that the sea-current effects on the seismic fold and the other seismic attributes remain limited. It is noteworthy that in some cases the current effect can even be seen as positive, especially for some attributes such as offset distributions.

Figure 7 compares the ideal fold map to the fold map obtained with an array drift of 1000 m and azimuth variations of 20°, which corresponds to a worst-case scenario (90th-percentile conditions in the Gulf of Mexico). The difference is small, and it can be seen that the current has a smearing effect.
Improved productivity

The method brings important advantages in terms of productivity:
• The receiving area of 64 km² allows the acquisition of a significant amount of data at each shot.
• The useful acquisition time is improved thanks to the suppression of U-turn time.
• The patch method is ideal for cost-effective 4D.
• The method is able to perform continuous acquisition without any pause in the progressive method.
• The downtime is expected to be small since the method is much less sensitive to bad weather and high sea states.
When comparing the productivity of this method for the same data quality over a 900-km² area, it is twice as productive as multi-azimuth acquisition and six times as productive as OBC.


HSE benefits

The method has a positive impact on the environment:
• Lower carbon footprint: fuel consumption is significantly reduced;
• Reduced marine-life disturbance (low-power machinery, reduced marine spread);
• No impact on the seabed (unlike OBC);
• Possibility to use a weaker source for the same final signal-to-noise ratio.
Flexibility and adaptive geometry

The method is flexible and the geometry can be adapted:
• The method can be used in different environments (deep water, shallow water, proximity to platforms, landlocked seas) to fit specific survey needs.
• The receiver geometry and the shooting lines can be optimized for a particular geological target.
• There are no limitations in offset or azimuth.
• It is possible to trade off fast exploration (sparse) against higher-quality exploration (dense): bin size, seismic fold, and offset/azimuth distributions can be tuned.

Conclusions
Generally speaking, the system can be seen as an evolution
combining the advantages of both streamer and OBC/OBN
techniques.
It may be considered an evolution of the multi-sensor towed-streamer technique in which the number of cables is theoretically unlimited and the noise level is reduced thanks to limited tension, greater depth and the stationary cable, thus providing true 4C data with low-frequency content.

It can be seen as a floating OBC technique in which the cables are easily moved and reconfigured, removing the need for seabed coupling, the seabed-interface-related noise and the time-consuming operations (deployment, recovery, roll-over), thus providing an efficient high-end 3D method, the only downside being the inability to record shear waves.

It may also be understood as performing the equivalent of land acquisition at sea with a perfectly sampled geometry, without the complex statics of the near-surface weathered zone and without terrestrial obstacles such as mountains or forests, thus offering full offset and full azimuth in a regular square bin with high fold.

EDITED REFERENCES
Note: This reference list is a copyedited version of the reference list submitted by the author. Reference lists for the 2016
SEG Technical Program Expanded Abstracts have been copyedited so that references provided with the online
metadata for each paper will achieve a high degree of linking to cited sources that appear on the Web.
REFERENCES

Elboth, T., B. A. Reif, and Ø. Andreassen, 2009, Flow and swell noise in marine seismic data: Geophysics, 74, no. 2, Q17–Q25, http://dx.doi.org/10.1190/1.3078403.
Elboth, T., D. Lilja, B. A. Reif, and Ø. Andreassen, 2010, Investigation of flow and flow noise around a seismic streamer cable: Geophysics, 75, no. 1, Q1–Q9, http://dx.doi.org/10.1190/1.3294639.
Schoenberger, M., and J.-F. Mifsud, 1974, Hydrophone streamer noise: Geophysics, 39, 781–793, http://dx.doi.org/10.1190/1.1440466.


Processing considerations for seismic data acquired with wave gliders towing a 3D Sensor Array
Philippe Caprioli*, Bent Andreas Kjellesvig, Nick Moldoveanu and Ed Kragh, Schlumberger
Summary

In 2015, a field test was conducted in the United Arab Emirates (UAE) to acquire seismic data during an Ocean Bottom Cable survey using Autonomous Marine Vehicles, or wave gliders, equipped with a 3D Sensor Array (3DSA) of hydrophones. The wave gliders use wave energy to move in the water and can be programmed to follow a certain path or to hold station. The distribution of sensors in a 3DSA leads to an estimation of the 3D spatial gradients of the recorded pressure wavefield, which can be used in processing, e.g., for attenuation of receiver-side ghost effects. The paper discusses the acquisition QC and some of the processing opportunities enabled by this novel acquisition system. Seismic acquisition with wave gliders could advantageously complement streamer surveys in sensitive areas and/or when large offsets are required.

Introduction

The wave glider is an Autonomous Marine Vehicle (AMV) with the unique feature of using wave motion for propulsion, which enables it to stay offshore for long periods of time without servicing. The AMV consists of a surface float and a submerged glider (sub) connected by an electromechanical umbilical (Figure 1). The wave glider can be equipped with a range of meteorological and oceanographic sensors and communication systems. It is well suited for a variety of applications (Pai, 2013), including the acquisition of seismic data (Moldoveanu et al., 2014).

In the application described here, the wave glider is equipped with a 3D Sensor Array (3DSA). The 3DSA consists of 15 hydrophones mounted on a cubic frame of about 1 m in size, as illustrated in Figure 1. A buoyancy engine just below the top arm ensures that this arm remains on the high side, so that the entire array stays oriented and does not rotate on its axis while moving through the water. The 3D distribution of the hydrophones enables deriving the spatial gradients (via finite differences) of the recorded pressure field. This paper discusses acquisition geometry QC, and the challenges and benefits of using (either explicitly or implicitly) such pressure gradients in processing one line of 3DSA field data.

Figure 1: Wave glider with 3DSA

Field test

In 2015, during an Ocean Bottom Cable (OBC) survey in the UAE, seismic data were acquired with three wave gliders, each towing a 3DSA. The gliders were deployed in a calm sea (NW 0.5 m swell and 11 knots wind) with their sub at a depth of ~8 m and were programmed to hold station during the acquisition (Figure 2, top). The tow depth of the 3DSA is ~10 m. An example of closely located OBC and glider common receiver gathers (CRG) demonstrates that physical decoupling from the sub and standard signal-processing noise attenuation solutions allow adequate measurements to be acquired with a glider towing a 3DSA (Figure 2, bottom; Moldoveanu et al., 2016, submitted). The geology is flat with a shallow (~20 m) and hard water bottom (carbonates), and the target is at ~1.5 s. In this test only the coordinates of the float and the orientation of the 3DSA were available; the 3DSA is a maximum of 11 m behind the float/glider.


Figure 2: Close-up of the entire survey acquisition geometry near the three wave gliders (top). OBC and 3DSA (sum of 15 traces/shot) CRGs (bottom).

We now focus on the data acquired with the float/source geometry mapped in red in Figure 2 (top). In Figure 3a, the float trajectory is color-coded with shot number (or acquisition time): start of the line (west) in blue with offset ~6.4 km, end of line (east) in orange with offset ~4.7 km.

The glider position marked in purple corresponds to the minimum offset (~244 m, shot 5740), where the source is directly to the south. During the acquisition of the line (~11.2 km, ~1.25 hour duration), the glider remained within a 100 × 100 m² square. The multi-coil trajectory of the glider and its significant speed variations (0.25 ± 0.20 m/s) reflect the local sea conditions and currents.
Acquisition geometry QC
Given the unusual acquisition setup, akin to a floating node,
acquisition geometry QC is essential. The sensor distribution
in the 3DSA can be used to estimate the propagation
direction of the wavefield. This is illustrated in Figure 3 on
a data window around the first breaks nominally flattened
for display purposes.
A first QC approach exploits the intra-array propagation time derived from maximum cross-correlation time lags. This requires up-sampling the data because the typical propagation time is only around 0.6 ms (1 m dimension), which is less than the data sampling interval. The cross-correlations for the hydrophone pairs located along the X3DSA and Y3DSA directions of the 3DSA (Figure 1) for all the shots in the line are shown in Figures 3c and 3d. Clearly, there is information in the time picks: they differ in amplitude and sign between the two axes, and correlate well with the measured orientation angles (the yaw or azimuth of X3DSA in particular, Figure 3e). For example, at shot point 5600 (black arrows), X3DSA points south (yaw is 180°) while the source-to-float direction points east, hence the expected intra-array propagation times of about 0 and 0.6 ms along the X3DSA and Y3DSA axes, respectively.
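A minimal sketch of this lag picking (up-sample, then pick the maximum of the cross-correlation) is given below; the pulse shape, sampling interval and up-sampling factor are illustrative assumptions, not the survey parameters:

```python
import numpy as np

def xcorr_lag(a, b, dt, up=16):
    """Time lag (s) of the maximum cross-correlation between traces a
    and b, after linear up-sampling by `up` to resolve lags below the
    sampling interval. Positive lag: a is delayed with respect to b."""
    n = len(a)
    t = np.arange(n) * dt
    tf = np.arange((n - 1) * up + 1) * dt / up
    au, bu = np.interp(tf, t, a), np.interp(tf, t, b)
    xc = np.correlate(au, bu, mode="full")
    return (np.argmax(xc) - (len(bu) - 1)) * dt / up

# Two Gaussian pulses 0.625 ms apart, sampled at 2 ms: the sub-sample
# lag is recovered to within the up-sampled interval.
dt, tau = 2e-3, 0.625e-3
t = np.arange(200) * dt
a = np.exp(-((t - 0.2) / 5e-3) ** 2)
b = np.exp(-((t - 0.2 - tau) / 5e-3) ** 2)
print(xcorr_lag(b, a, dt))  # close to +0.000625 s
```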
This idea of using small intra-array travel times can be further developed by adapting the Wavefield Parametric Inversion (WPI) approach (e.g., Esmersoy, 1990; Leaney, 1990) to data acquired with a 3DSA. WPI is routinely used in VSP processing for separating up- and down-going travelling P- and S-wavefields. Here we tailor the method to the decomposition in the water layer, i.e., deghosting, and assume that the data can be modelled with 2 (possibly more) plane waves: an up-going and a down-going (i.e., the ghost) wave. In practice, the assumption that the data can be represented with a small number of plane waves is validated by processing the data in short overlapping time windows. The basic equation relates the up- and down-going waves (U, D) at the center of the array to the recorded data d(x_i) at position x_i in the array through phase-shift operators that depend on the unknown slowness vectors p− = (px, py, −pz) and p+ = (px, py, pz):

d(x_i) = exp(−jω p−·x_i) U + exp(−jω p+·x_i) D

Figure 3: Acquisition QC using a data window around the first breaks. Float trajectory color-coded with shot point (SP) number (a), with an overlay of source-to-float directions and estimated propagation directions from the first breaks (b). Picked time of maximum cross-correlation between traces located on perpendicular axes of the 3DSA (c, d). Recorded 3DSA orientation data with source-to-float azimuth (e). Computed gradient data: in the 3DSA frame (f, g) and rotated to the (East, North) frame (h, i) (T-gained data was time shifted).

All data quantities are frequency dependent. The system involving all 15 hydrophones can be solved with a two-step approach: 1) use the dispersion relation and find the slowness vector that best explains the observed data (a simple scan was used here), and 2) invert the corresponding system in a least-squares sense to output the deghosted wavefield.
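A toy, single-frequency version of this two-step scheme (angle scan, then least-squares inversion for U and D) can be written as below. The geometry, frequency and scan granularity are illustrative assumptions, and the signs follow the plane-wave convention used above; this is a sketch, not the authors' implementation:

```python
import numpy as np

def wpi_deghost(d, xs, freq, c=1500.0, n_az=72, n_inc=45):
    """Two-plane-wave decomposition of single-frequency data d recorded
    at positions xs (N x 3, metres, relative to the array centre,
    z positive down). Step 1: scan arrival angles (the dispersion
    relation fixes |p| = 1/c). Step 2: least-squares inversion for the
    up-going (deghosted) wave U and the down-going ghost D."""
    w = 2.0 * np.pi * freq
    best = None
    for az in np.linspace(0.0, 2.0 * np.pi, n_az, endpoint=False):
        for inc in np.linspace(0.0, np.pi / 2, n_inc):
            px = np.sin(inc) * np.cos(az) / c
            py = np.sin(inc) * np.sin(az) / c
            pz = np.cos(inc) / c
            P = np.c_[[px, py, -pz], [px, py, pz]]   # columns: p-, p+
            A = np.exp(-1j * w * (xs @ P))           # N x 2 steering matrix
            ud, *_ = np.linalg.lstsq(A, d, rcond=None)
            misfit = np.linalg.norm(d - A @ ud)
            if best is None or misfit < best[0]:
                best = (misfit, ud, (az, inc))
    (_, (U, D), angles) = best
    return U, D, angles
```

Synthesizing a single up-going plane wave plus its ghost on a 1-m cubic array and feeding it through this function recovers U and D.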
Applying the method to the data window around the first breaks yields the propagation direction (azimuth) of the wavefield for each source/glider pair in the 3DSA frame. After rotating these directions to the (North, East) frame, the comparison with the source-to-float directions (output from the navigation) is reasonable for most of the shots (Figure 3b). Over the entire line, the two angles differ by ~0° ± 8°; larger differences can be observed at the start of the line, where the offset is >6 km and there is slightly more noise (the standard deviation of the angle difference drops to ~4° if the first 40 shot points are excluded; see also Figures 3f and 3i). This supports a radial propagation of the first breaks.
A second QC approach consists of analyzing the polarization properties of the computed pressure-gradient vector. To take advantage of the available redundancy when computing the finite-difference (FD) pressure-gradient vector in the 3DSA, all the hydrophone pairs surrounding the center hydrophone are considered with their relative distances to the center and inverted in a least-squares sense. The gradient data expressed in the 3DSA frame on the flattened first-break window are shown in Figures 3f and 3g. The complementary extrema of the gradients along the X3DSA and Y3DSA axes (Figure 1) are clearly linked to the moving/rotating acquisition geometry and to the source-to-float direction; e.g., at shot 5600 (as discussed above) we observe the expected min/max amplitudes on the gradients along X3DSA and Y3DSA, respectively. After reorientation to the (East, North) frame, the gradient towards east is strong, as expected for a west-to-east source line, and shows the predicted decrease/polarity reversal near shot 5740, which is located directly to the south of the glider. The gradient data towards north show complementary variations (Figures 3h and 3i). Alternatively, when the gradient data are oriented to a radial/transverse frame defined by the source-to-float direction, we observe a strong radial component (on average 10 times stronger than the transverse; not shown here). The undulations in the arrival time of the first breaks are also linked with the moving receiver geometry.
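The least-squares gradient estimate described above can be sketched in a few lines; here the differences are taken with respect to the centre hydrophone, as in the text, and the array coordinates are illustrative:

```python
import numpy as np

def ls_pressure_gradient(p, xs, i_center=0):
    """Least-squares FD estimate of the 3D pressure gradient at the
    centre sensor: solve (x_j - x_c) . g = p_j - p_c over all sensors j
    (p: sensor values, xs: N x 3 positions)."""
    dX = xs - xs[i_center]      # separation vectors to the centre
    dp = p - p[i_center]        # value differences to the centre
    g, *_ = np.linalg.lstsq(dX, dp, rcond=None)
    return g
```

For a locally linear pressure field the estimate is exact, which also makes the function easy to unit-test.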
This experiment supports the assertion that the analysis of
the small intra-array travel times and the polarization of the
computed pressure gradient vector may be used for
acquisition QC (seismic data, orientation and navigation
data). This also demonstrates that it is possible to estimate
the wavefield propagation direction using 3DSA data.
Receiver deghosting

One potential application of 3DSA data is the attenuation of receiver ghost effects. We adapted a wavefield parametric inversion approach that yields directional deghosting. This approach makes implicit use of the gradient data. Another possibility would be to explicitly use the vertical gradient and run a P+Z summation type algorithm, where the vertical particle velocity, expressed in equivalent pressure units, is derived from the vertical gradient of the pressure data after time integration and scaling by the water velocity. A simple P+Z summation does not fully account for the velocity data (the 3D obliquity angle of the arrival is not accounted for) and will leave a residual receiver ghost. This residual ghost is, however, relatively small: it vanishes at normal incidence, remains small at medium incidence, and even at large incidence angles of 60° the error is only half of the amplitude of the velocity data at that incidence angle (cosine effect).
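The cosine effect can be quantified with a simplified plane-wave model: with P = U + D and the normal-incidence-scaled Z contributing cos(theta)·(U − D) in pressure units, the simple sum (P + Z)/2 retains a ghost fraction of (1 − cos(theta))/2. This is a sketch with idealized sign conventions, not the processing flow used on the field data:

```python
import numpy as np

def pz_residual_ghost_fraction(theta_deg):
    """Fraction of the down-going ghost D left by the 1D sum
    (P + Z)/2 = U*(1 + cos)/2 + D*(1 - cos)/2 for a plane wave at
    incidence theta (free-surface and calibration effects ignored)."""
    return (1.0 - np.cos(np.radians(theta_deg))) / 2.0

print(pz_residual_ghost_fraction(0.0))   # 0.0: exact at normal incidence
print(pz_residual_ghost_fraction(60.0))  # ≈ 0.25: half of cos(60°), i.e. half the Z amplitude
```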
Figure 4 displays the input P data (after additional Scholte-wave attenuation), the computed vertical velocity Z data, the P+Z summation, the wavefield parametric inversion deghosting and also, for reference purposes, a P-only deghosting solution (direct P-ghost deconvolution assuming normal incidence and a depth of 10 m). Data are displayed with an 8-150 Hz nominal bandpass filter in the top row. All three processed results are fairly comparable in terms of ghost-event attenuation and event-continuity improvement. But expected differences can also be observed: the P+Z summation and the wavefield parametric inversion differ mainly on the shallow events, where angular effects impact the deghosting, and the P-only solution exhibits ringing on these shallow events. The strong velocity contrast at the seabed tends to bend the rays towards the vertical in the water layer and hence contributes to the fact that the simple P+Z summation (a 1D approach) yields reasonable results over a large part of the record. Average amplitude spectra computed in a near-offset window of the input and processed data are shown on the right of Figure 4.
The challenge in using FD pressure-gradient estimates is low-frequency noise. The sensor spacing of 1 m, while much smaller than the seismic wavelengths, is adequate to estimate the gradient signal. But, particularly at low frequencies (long wavelengths), the gradient signal will be small, and the noise (mostly ambient noise in practice) will become a limiting factor. The same issue applies to the WPI approach, as at low frequencies the system tends to become singular. In practice, this depends on the acquisition parameters (e.g., source array, tow depth) and on the amount of noise attenuation performed. Further insight can be gained from a low-frequency filter panel (Figure 4, middle row). We see that, for this particular data set, reasonable gradient data can be estimated in the 10-15 Hz range.
Further evidence that the estimated gradients provide useful information can be seen around 70-80 Hz where, assuming normal incidence, the pressure data have a notch (1500/[2 × 10] = 75 Hz). This filter panel (Figure 4, bottom row) shows that the pressure data indeed have weak amplitude, but
Figure 4: Deghosting results. From left to right: input P, computed Z, P+Z summation, wavefield parametric inversion deghosting and pressure-only deghosting: 8-150 Hz (top), 10-15 Hz (middle) and 70-80 Hz (bottom). Same display gain and T-gain. Average spectra computed in the green data window: input (top) and deghosted (bottom) data.

as expected, the computed vertical velocity data complement the pressure data and do show some coherent energy. This leads to useful deghosted results compared to a P-only deghosting solution. This is also observed on the amplitude spectra at these frequencies (Figure 4).
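The notch position follows directly from the receiver-ghost model; a minimal numerical check (free-surface reflection coefficient of −1 assumed):

```python
import numpy as np

def receiver_ghost_gain(f_hz, depth_m=10.0, theta_deg=0.0, c=1500.0):
    """Amplitude of the hydrophone ghost response |1 - exp(-i*2*pi*f*tau)|
    with two-way vertical delay tau = 2*d*cos(theta)/c; notches occur at
    f = n*c / (2*d*cos(theta))."""
    tau = 2.0 * depth_m * np.cos(np.radians(theta_deg)) / c
    return np.abs(1.0 - np.exp(-2j * np.pi * f_hz * tau))

print(receiver_ghost_gain(75.0))   # ≈ 0: the 1500/(2*10) = 75 Hz notch
print(receiver_ghost_gain(37.5))   # ≈ 2: constructive at half the notch frequency
```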
We have discussed ghost-model-independent deghosting approaches, and the depth of the 3DSA could be QC-ed from the data. Alternatively, ghost-model-dependent deghosting approaches could also be envisaged.

Conclusion

This study shows that useful seismic data can be acquired with a wave glider towing a 3D sensor array of hydrophones. The distribution of sensors in a 3D sensor array enables estimating the spatial gradients of the recorded pressure wavefield. Using a field data example, we found genuine information in the gradient data above the 10-15 Hz frequency band. The gradient data were mainly used for acquisition QC, for estimating the propagation directions, and for successful receiver-side deghosting. Other potential applications to be investigated are the repositioning of the data to a nominal location and migration.

Acquiring seismic data with wave gliders can advantageously complement streamer data in obstructed or sensitive areas where larger vessels must be used sparingly. Wave gliders can also be used to acquire ultra-long-offset data.

Acknowledgments

We thank Abu Dhabi Marine Operating Company for allowing the 3DSA and OBC data comparison, Liquid Robotics Oil & Gas for running the field operations, and Schlumberger for permitting the publication. We also thank our colleagues Leendert Combee and Simon Baker for valuable input.

EDITED REFERENCES
REFERENCES

Esmersoy, C., 1990, Inversion of P- and SV-waves from multicomponent offset vertical seismic profiles: Geophysics, 55, 39–50, http://dx.doi.org/10.1190/1.1442770.
Leaney, W. S., 1990, Parametric wavefield decomposition and applications: 60th Annual International Meeting, SEG, Expanded Abstracts, 1097–1100, http://dx.doi.org/10.1190/1.1889919.
Moldoveanu, N., L. Combee, P. Caprioli, E. Muyzert, and S. Pai, 2016, Marine seismic acquisition with 3D sensor arrays towed by wave gliders: SEG.
Moldoveanu, N., A. Salama, O. Lien, E. Muyzert, and S. Pai, 2014, Marine acquisition using wave gliders: A field feasibility test: 76th Annual International Conference and Exhibition, EAGE, Extended Abstracts, http://dx.doi.org/10.3997/2214-4609.20141469.
Pai, S., 2013, Wave glider: Introduction to an innovative autonomous remotely piloted ocean data collection platform: Presented at the SPE Offshore Europe Oil and Gas Conference and Exhibition, http://dx.doi.org/10.2118/166626-MS.


3D seismic acquisition with decentralized Dispersed Source Arrays

In the last few years, the interest towards the utilization of Dispersed Source Arrays (DSAs) in seismic acquisition has considerably grown. The proposed approach offers a wide range
of practical advantages, while no physical constraint restrains
us from utilizing diverse sources with different spectral properties during seismic surveys. As a consequence, the use of
simple autonomous source boats with airgun arrays of different sizes or marine vibrators producing sweeps with different
frequency ranges (in marine) and simple autonomous source
trucks (on land) becomes a practical proposition in DSA acquisitions.
This concept could give to the system additional operational
flexibility and facilitate the automation of seismic data collection. Therefore, with this study we intend to investigate the advantages that DSAs and system decentralization would bring
to seismic data acquisition. Although the main focus of this
research is on the marine environment, a generalization to the
3D seismic acquisition with DSA
Matteo Caporal, Gerrit Blacquière and Mikhail Davydenko, Delft University of Technology
2016 SEG, SEG International Exposition and 86th Annual Meeting

SUMMARY

Applications of the method on land are possible. Preliminary examples show that it is possible to produce valid migration outputs from 3D decentralized DSA data.

INTRODUCTION

Following the guidelines drawn by Berkhout (2012), we suggest replacing (or reinforcing) traditional broadband units with narrow(er)band devices, together representing a Dispersed Source Array (DSA). The whole inhomogeneous ensemble of sources incorporated in the arrays is required to cover the entire temporal and spatial bandwidth of interest, according to the user's needs. Figure 1 illustrates the principle with a comparison between two blended shot records: the first with three identical broadband sources, the second with three different narrowband source units.

These measures may have extremely fruitful practical implications. Narrowband source units can be technically simpler to produce and more effective from an acoustic energy transmission point of view than today's broadband alternatives. Modern multiple-driver loudspeaker systems are based on the same key concept, and their improved performance is well demonstrated (Davis and Patronis, 2006). Additionally, the method allows us to adjust the source interval in a frequency-dependent manner and to revolutionize the way we address the ghost problem in marine seismic acquisition, permitting us to tow different devices at different depths below the water surface, according to their specific bandwidth. In fact, to reduce the destructive effect of the ghost, each source type can be placed at the optimum depth below the water surface, i.e. at zs = λc/4, one quarter of the wavelength λc of its central frequency (ghost matching). As a result, the ghost wave field enhances the signal instead of compromising it. The reader is referred to Caporal and Blacquière (2015) for an illustrative explanation of the concept. Moreover, the improved operational flexibility allowed by the DSA concept can be an important step towards the robotization and the decentralization of the seismic acquisition process.

An interesting first attempt at DSA land data acquisition and inversion (FWI-based) has been successfully carried out and presented by Kim and Tsingas (2014). In the following, the theoretical framework and the inversion scheme of reference are presented, the benefits of seismic acquisition system decentralization are introduced, and the first preliminary migration results of 3D decentralized DSA data are provided.

Figure 1: Blended shot records generated respectively by (a) three identical broadband sources and (b) three different narrowband sources (DSA) with low (central), mid (right) and high (left) frequencies.
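The ghost-matching depth is simple to evaluate numerically. The sketch below (our illustration, not part of the original work) computes zs = λc/4 for the four DSA bands used in the example later in this paper; the 1500 m/s water velocity and the use of the geometric mean as the band's central frequency are assumptions made here for illustration.

```python
import math

# DSA source bands (Hz) from the example later in this paper; the water
# velocity and the geometric-mean central frequency are our assumptions.
bands = {"ultralow": (2, 6), "low": (5, 15), "mid": (10, 30), "high": (20, 60)}
c_water = 1500.0  # m/s

for name, (f_lo, f_hi) in bands.items():
    f_c = math.sqrt(f_lo * f_hi)   # assumed central frequency of the band
    lam_c = c_water / f_c          # wavelength at the central frequency
    z_s = lam_c / 4.0              # ghost-matching tow depth z_s = lambda_c / 4
    print(f"{name:8s} f_c = {f_c:5.1f} Hz -> tow depth z_s = {z_s:5.1f} m")
```

The lower the band, the deeper the optimum tow depth, which is exactly why a frequency-dependent towing scheme becomes attractive.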

THEORETICAL FRAMEWORK

In this section, wave field extrapolation based modeling will be briefly discussed by means of the W−RW+ model proposed by Berkhout (1982) and extended to the description of
a blended DSA acquisition system. Matrices W−, W+ and R refer to the up- and downward propagation matrices and the reflectivity matrix, respectively. This method makes it possible to describe seismic field measurements and numerical simulations in a compact and advantageous notation. The domain of reference for the following theoretical considerations is the space-frequency domain.

The seismic data are stored in the so-called data matrix P(zd, zs), where zd and zs are the receiver and source depths, respectively. Each column of P(zd, zs) represents a different shot record, while each row of P(zd, zs) describes a receiver gather. Therefore, each monochromatic component of the primary wave field can be expressed mathematically, for a stationary geometry, as

P(zd, zs) = D(zd) [ Σ(m=1..M) W−(zd, zm) R(zm, zm) W+(zm, zs) ] S(zs).    (1)

S(zs) is the source matrix. It contains amplitude and phase information about the source wave field at the frequency under consideration; key information about the spectral properties of the different DSA sources is therefore encoded here. Each column represents one source (array), and each row corresponds to a different spatial coordinate. W+(zm, zs) and W−(zd, zm) are the forward downgoing and upgoing wave field propagation matrices, respectively. Each column contains a discretized Green's function describing the wave propagation from one lateral location at depth level zs (zm) to all grid points at depth level zm (zd). R(zm, zm) is the reflectivity matrix, describing the (angle-dependent) scattering that occurs at depth level zm; in other words, it specifies how the incident wave field is converted into the reflected wave field. D(zd) is the detector matrix. It contains amplitude and phase information about the detectors at the frequency under consideration. Each row represents one detector (array), and each column corresponds to a different spatial coordinate.

Within this framework, it is possible to introduce the concept of blending by defining the so-called blending matrix Γ(zs). All information about the linear combination of the different sources of the array employed during the blended experiments is encoded here. The blended source matrix S′(zs) can be formulated as

S′(zs) = S(zs) Γ(zs).    (2)

Each row of Γ(zs) corresponds to a different source, and each column refers to a different blended shot record. In the case of simple time delays, the elements of Γ(zs) are given by Γik(zs) = exp(−jωτik), where τik is the time delay applied to the i-th source in the k-th experiment.

Using the W−RW+ model, it is also possible to formulate a convenient expression for the seismic data including multiple scattering. The effect of surface and internal multiples can be added iteratively. A more general formulation of Equation 1 would then be

P(zd, zs) = D(zd) X(zd, zs) S(zs),    (3)

where X(zd, zs) is the so-called earth transfer function, which may also include all multiples in the extrapolation scheme. For more detailed practical information on how to incorporate the multiples in the modeling scheme, the reader is referred to Berkhout (2014a).

FULL WAVEFIELD MIGRATION

Full Wavefield Migration (FWM) is a convenient and robust inversion-based imaging method that aims at constructing the image iteratively from the full reflection response, involving the exploitation of the multiples. A schematic representation of this closed-loop imaging approach is shown in Figure 2.

Figure 2: Closed-loop for FWM (the residual between the observed data and the simulated data, which includes coda and transmission at every grid point, is imaged to form the gradient that updates the reflectivity used by the non-linear forward modeling engine FWMod, given the velocity model and the source-side wave field).

The forward modeling tool utilized by FWM, the so-called Full Wavefield Modeling (FWMod), is based on the following input parameters:
- a reflectivity estimate (zero for the first iteration), in order to generate the scattering;
- a migration velocity model, in order to propagate the scattered events through the subsurface in the upward and downward directions;
- the initial source wave field (the source wavelet, the total wave field, or a combination of the two).

The inversion is driven by the residual between the modeled data (zero for the first iteration) and the measured reflection data. The image of this residual constitutes the gradient guiding the update of the reflectivities, within the model space, for the following iteration of the algorithm. Note that, in the modeled wave field, all coda and transmission effects are taken into account. In this way the observed data are explained in the correct (non-linear) way, and multiple scattering contributes to the imaging simultaneously. The reader is referred to Davydenko and Verschuur (2014) and Berkhout (2014b) for a more exhaustive explanation of the iterative inversion scheme.
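Equations 1 and 2 can be exercised numerically on a toy scale. The sketch below builds one monochromatic data matrix with identity detector and source matrices, diagonal phase-shift propagators and random diagonal reflectivities, then blends it with a time-delay blending matrix. All sizes, operators and values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8                      # lateral grid points (toy size)
M = 2                      # number of depth levels
omega = 2 * np.pi * 10.0   # one monochromatic component (10 Hz, assumed)

D = np.eye(n)              # detector matrix: ideal point receivers (assumption)
S = np.eye(n)              # source matrix: ideal point sources (assumption)

def W(dt):
    """Toy propagation matrix: a pure time shift, i.e. a phase factor (assumption)."""
    return np.exp(-1j * omega * dt) * np.eye(n)

# Equation (1): P = D [ sum_m W-(zd,zm) R(zm,zm) W+(zm,zs) ] S
P = np.zeros((n, n), dtype=complex)
for m in range(M):
    R_m = np.diag(rng.normal(scale=0.1, size=n))  # toy (angle-independent) reflectivity
    dt = 0.2 * (m + 1)                            # one-way traveltime to level m (toy)
    P += W(dt) @ R_m @ W(dt)
P = D @ P @ S

# Equation (2): blending with simple time delays, Gamma_ik = exp(-j omega tau_ik)
tau = rng.uniform(0.0, 0.1, size=(n, 2))          # 2 blended experiments from n sources
Gamma = np.exp(-1j * omega * tau)
P_blended = P @ Gamma
print(P_blended.shape)     # (8, 2): one column per blended shot record
```

Note that blending acts purely on the source side: each blended record is a phase-encoded linear combination of the columns of P, which is what the deblending or closed-loop inversion later has to undo.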


SYSTEM DECENTRALIZATION

System decentralization represents a fundamental change in the architecture of a very wide range of scientific studies. It constitutes both an opportunity and a challenge in information technology and communications networks as well as in the business arena (Bacon, 1990). With the current research we intend to emphasize and investigate the fundamental improvements that decentralization would bring to seismic data acquisition.

With DSAs, the use of relatively simple autonomous devices becomes a practical proposition in seismic surveys. In a marine environment we can consider simultaneously utilizing several source boats with airgun arrays of different sizes, or marine vibrators producing sweeps with different frequency ranges. On land, a combination of autonomous Vibroseis trucks of varied dimensions is suggested.

Additionally, a higher level of decentralization could affect the way receivers are deployed. Specifically, it is proposed to move from the considerably complex traditional acquisition systems, such as a single broadband seismic source boat dragging a set of numerous streamer cables, to a swarm of many simple source-detector subsystems sailing together (see Figure 3). Each subsystem consists of a DSA robot dragging a single streamer cable and towing a single narrowband seismic source at its optimum depth.

Figure 3: Comparison between a traditional centralized acquisition system (left) and a decentralized acquisition system (right).

Although some sort of central coordination can still set restrictions for the system components (such as the target area coordinates and the source density requirements), each subsystem is expected to sail autonomously and to fulfill specific individual objectives. In particular, every unit must be able to take crucial decisions on the spot and modify its own behavior (e.g. the sailing speed and direction) to adapt promptly to environmental changes and accomplish a given task (e.g. obstacle avoidance, pattern formation, compensation for the failure of a subsystem unit).

In the last few decades, promising advances in swarm robotics have started making it feasible to deploy large numbers of inexpensive robots for tasks such as surveillance and search (Iocchi et al., 2001; Bahceci et al., 2003; Bayindir and Sahin, 2007). Nevertheless, to our knowledge, no extensive and conclusive research has been conducted and published concerning the applications of this concept to exploration geophysics.

EXAMPLE

In this section, we demonstrate the feasibility of the 3D decentralized DSA acquisition method with numerical examples of forward modeling and migration. To illustrate the concept, four different source unit types are used: ultralow- (2-6 Hz), low- (5-15 Hz), mid- (10-30 Hz) and high-frequency sources (20-60 Hz). Note that each source spans a frequency bandwidth corresponding to the same number of octaves. In this situation, the given bandwidths are partially overlapping.

The velocity and density models used as reference are shown in Figures 4 and 5, respectively. The marine subsurface profile features two reflectors: the first (the ocean bottom) at a depth of 500 meters, while the second is defined by the function f(x, y) = zl − a cos(2πx/X) sin(2πy/Y), where zl is the central depth level (750 meters), a is the variation amplitude (25 meters), x and y are the lateral variables, and X = Y = 2000 meters are the lateral sizes of the model. Two different datasets, corresponding respectively to a conventional broadband and a decentralized DSA acquisition scheme, have been simulated. In both cases each source involved has been placed at the optimum depth below the water surface, i.e. at zs = λc/4. No deghosting has been performed on the source side.

Figure 4: True velocities.

Figure 5: True densities.
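The second reflector of the example model can be reproduced directly from its defining function. Note that the sign in front of the sinusoidal term below is our reading of the typographically damaged original formula; either sign yields the same family of surfaces.

```python
import numpy as np

X = Y = 2000.0        # lateral model size (m)
z_l, a = 750.0, 25.0  # central depth level and variation amplitude (m)

x = np.linspace(0.0, X, 101)
y = np.linspace(0.0, Y, 101)
xx, yy = np.meshgrid(x, y)

# f(x, y) = z_l - a cos(2 pi x / X) sin(2 pi y / Y); the minus sign is our
# reading of the damaged original and does not change the reflector's geometry.
z = z_l - a * np.cos(2 * np.pi * xx / X) * np.sin(2 * np.pi * yy / Y)

print(z.min(), z.max())   # the reflector stays within z_l +/- a: 725.0 775.0
```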

For the first simulation, the data have been modeled with a regular source spacing of 10 meters in the inline direction and 50 meters in the crossline direction (Figure 6). The sources are broadband units with a bandwidth ranging from 5 to 60 Hz. For the second simulation, 10 DSA source boats were left free to sail, with the only constraints of not colliding with each other and of staying within the target area. No offline path computation is involved. The interval between consecutive shots was kept the same as the inline spacing of the first simulation for the high-frequency sources and was decreased linearly with the maximum emitted frequency for the other sources. As can be seen in Figure 7, their paths are random-like and clearly non-symmetric. On the receiver side, a single ocean-bottom node (OBN) located in the centre of the area is chosen.

Figure 6: Regular grid broadband acquisition shot locations.

Figure 7: Decentralized DSA acquisition shot locations.

Migration has been performed on both datasets using the FWM inversion scheme introduced above; the results are shown in Figures 8a and 8b. With these acquisition settings the signal is clearly and heavily undersampled, producing non-ideal migration results, as expected. However, our goal here is not to provide an optimal subsurface image but to prove the feasibility of the method and to further motivate our interest in the topic. Comparing the two simulations, except at the lateral edges of the target area, the DSA result yields an even clearer image with considerably fewer artifacts.

Figure 8: Estimated reflectivities from broadband data (a) and from decentralized DSA data (b).
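For readers who want to reproduce the flavor of the second simulation, the sketch below generates random-like boat trajectories confined to the target area and the frequency-dependent shot intervals. The sailing model, the speed, and the inverse-frequency scaling of the shot interval are our assumptions (collision avoidance is omitted for brevity).

```python
import numpy as np

rng = np.random.default_rng(1)
X = Y = 2000.0                       # target area (m), from the example model
n_boats, n_steps = 10, 400
v, dt = 2.0, 1.0                     # sailing speed (m/s) and time step (s): assumptions

# Frequency-dependent shot interval: 10 m for the 20-60 Hz sources, scaled
# with the inverse of the maximum emitted frequency (our reading of the text).
f_max = np.array([6.0, 15.0, 30.0, 60.0])
shot_interval = 10.0 * 60.0 / f_max  # 100, 40, 20 and 10 m

# Toy autonomous sailing: random heading changes, clipped to the target area.
pos = rng.uniform(0.0, X, size=(n_boats, 2))
heading = rng.uniform(0.0, 2 * np.pi, size=n_boats)
paths = [pos.copy()]
for _ in range(n_steps):
    heading += rng.normal(scale=0.2, size=n_boats)  # gentle course corrections
    pos = pos + v * dt * np.column_stack([np.cos(heading), np.sin(heading)])
    pos = np.clip(pos, 0.0, X)                      # stay within the target area
    paths.append(pos.copy())
paths = np.asarray(paths)            # (n_steps + 1, n_boats, 2): random-like tracks
print(paths.shape)                   # (401, 10, 2)
```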

CONCLUSIONS

The DSA concept offers a wide range of practical advantages, while no physical constraint prevents us from employing diverse sources with different spectral properties during seismic surveys. Narrowband sources can be technically simpler to produce and more effective from an energy transmission point of view. Their utilization allows shot densities to be chosen in a frequency-dependent manner and offers the possibility to tow the devices (in marine settings) at depths that are optimum for their specific central frequency, giving extra benefits with regard to the source ghost issue. Furthermore, surveys may be carried out by acquisition systems that are less complex and considerably more flexible than the ones used today. System decentralization becomes a practical proposition for DSA surveys: more information can be gathered with less complexity by moving from a single, complex centralized system to a network of several, simple decentralized subsystems. Promising preliminary migration examples of 3D decentralized DSA data encourage us to carry this research forward in the near future and to explore the significant potential improvements for exploration geophysics.

ACKNOWLEDGEMENTS
The authors would like to thank the sponsors of the Delphi
consortium for the stimulating discussions during the Delphi
meetings and the continuing financial support.


EDITED REFERENCES
Note: This reference list is a copyedited version of the reference list submitted by the author. Reference lists for the 2016
SEG Technical Program Expanded Abstracts have been copyedited so that references provided with the online
metadata for each paper will achieve a high degree of linking to cited sources that appear on the Web.
REFERENCES

Bacon, C. J., 1990, Organizational principles of system decentralization: Journal of Information Technology, 5, 84–93, http://dx.doi.org/10.1057/jit.1990.17.
Bahceci, E., O. Soysal, and E. Sahin, 2003, A review: Pattern formation and adaptation in multi-robot systems: Technical Report CMU-RI-TR-03-43, Carnegie Mellon University.
Bayindir, L., and E. Sahin, 2007, A review of studies in swarm robotics: Turkish Journal of Electrical Engineering and Computer Sciences, 15, 115–147.
Berkhout, A. J., 1982, Seismic migration, imaging of acoustic energy by wave field extrapolation, A: Theoretical aspects: Elsevier.
Berkhout, A. J., 2012, Blended acquisition with dispersed source arrays: Geophysics, 77, no. 4, A19–A23, http://dx.doi.org/10.1190/geo2011-0480.1.
Berkhout, A. J., 2014a, Review paper: An outlook on the future of seismic imaging, Part I: Forward and reverse modeling: Geophysical Prospecting, 62, 911–930, http://dx.doi.org/10.1111/1365-2478.12161.
Berkhout, A. J., 2014b, Review paper: An outlook on the future of seismic imaging, Part II: Full-wavefield migration: Geophysical Prospecting, 62, 931–949, http://dx.doi.org/10.1111/1365-2478.12154.
Caporal, M., and G. Blacquière, 2015, Benefits of blended acquisition with dispersed source arrays (DSA): 77th Annual International Conference and Exhibition, EAGE, Extended Abstracts.
Davis, D., and E. Patronis, 2006, Sound system engineering: Elsevier.
Davydenko, M., and D. J. Verschuur, 2014, Full wavefield migration in three dimensions: 84th Annual International Meeting, SEG, Expanded Abstracts, 3935–3940, http://dx.doi.org/10.1190/segam2014-1079.1.
Iocchi, L., D. Nardi, and M. Salerno, 2001, Reactivity and deliberation: A survey on multi-robot systems: Balancing Reactivity and Social Deliberation in Multi-Agent Systems, 9–32.
Kim, Y., and C. Tsingas, 2014, Land full waveform inversion for dispersed source arrays: 76th Conference and Exhibition, EAGE, Expanded Abstracts, http://dx.doi.org/10.3997/2214-4609.20141122.


Influences of blending noise on weak signals

Wang Zhengjun*, Xia Jianjun, BGP, CNPC
Summary
High-efficiency vibroseis acquisition technology is making vibroseis acquisition efficiency grow exponentially; meanwhile, the large amount of blending noise it brings results in a drop in the S/N ratio of the raw data. The application of dedicated noise attenuation technologies and ultra-high fold numbers makes people ignore the impact of these powerful noises. When the dynamic range of a geometry is less than that of the signal, the signals actually received are incomplete signals subjected to amplitude limiting. The extent of the amplitude limiting is determined by three factors: the dynamic range of the geometry, the energy of the noise and the energy of the signal. The larger the dynamic range of the geometry, the weaker the noise energy and the stronger the signal energy, the wider the amplitude variations of the received signals and the less signal will be lost, and vice versa. These lost signals cannot be recovered through de-noising. Actual data show that if the acquisition is performed without taking any noise control measures, effective signals in the raw data at larger offsets and from weak deep layers will be a thousand times weaker than the blending noise energy; the limited dynamic range of the geometry (taking an analog receiver with a 60 dB dynamic range as an example) will then block the weak signals in the noise area, leading to a decline in the signal content of the data. Taking noise control measures to reduce the energy level of blending noise in the field is one of the necessary means to solve this problem.
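The amplitude-limiting argument above can be illustrated with a toy calculation: with a 60 dB recording dynamic range, the strongest arrival (here the blending noise) fixes the full scale, and any signal more than 60 dB below it falls under the resolvable floor. All numbers in this sketch are illustrative.

```python
dyn_db = 60.0                             # assumed effective receiver dynamic range
noise_amp = 1.0                           # strong blending noise sets the full scale
floor = noise_amp / 10 ** (dyn_db / 20)   # smallest resolvable amplitude: 1e-3

def recordable(amplitude):
    """A signal survives amplitude limiting only if it exceeds the floor."""
    return amplitude >= floor

print(recordable(1e-2))   # True: 40 dB below the noise, still resolvable
print(recordable(1e-4))   # False: 80 dB below the noise, lost to limiting
```

This is exactly the loss that no amount of post-acquisition denoising can recover: the sample never carried the weak contribution in the first place.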
Introduction
High-efficiency vibroseis acquisition technology increases acquisition efficiency by reducing the shooting time, but it also increases the mutual-influence noise between adjacent shots. We collectively call the shooting noise, harmonics and mechanical noise generated among multiple sets of vibrators in the high-efficiency vibroseis acquisition process blending noise (Ras et al., 1999). Many oil service companies have launched their own suppression methods and techniques for these noises (Ni et al., 2011). These technologies are based on a common assumption: the valid signals always exist losslessly; they are only drowned and covered by noise, and as long as the noise is suppressed they will be restored. However, in actual acquisition operations this assumption is usually invalid: owing to the limited dynamic range of the geometry, the seismic signals actually acquired always suffer some loss in varying degrees, and the weaker the signal, the greater the loss (Cooper, 2000). A strong noise environment will undoubtedly exacerbate such losses. If the noise influence is so high that the required valid signals cannot be received and the traces under the noise have become invalid, noise suppression is meaningless, and we can only sacrifice fold. For this reason, it is necessary to analyze the effects of different noise intensities on weak signals in order to give specific, quantifiable acquisition proposals for reducing field acquisition noise and improving the signal content of raw data.

Before the research can begin, we need to solve a problem: how to detect and pick up the weak signals in the gather? Owing to the existence of noise and the complexity of the information contained in seismic signals, the detection and pickup of weak signals is still a difficult problem in the industry. Current weak-signal detection methods are mostly denoising methods (Simon et al., 1999; Bekara et al., 2009; Liu et al., 2009; Ma et al., 2009; Sandip et al., 2009; Zeng, 2009; Lao, 2010; Han, 2011), which improve the identification conditions for weak signals drowned by noise through denoising and improvement of the S/N ratio; they are not, however, detection and pickup methods. Only the nonlinear-dynamical-system-based chaotic theory is a method to directly detect weak signals (Li et al., 2005). Although it has been successfully applied in other areas, it has not had any conclusive effect in the seismic survey field, owing to the complexity of the information contained in seismic signals.

The energy-ratio trace-gather weak-signal identification approach proposed in this paper increases the reliability of weak-layer signal identification by improving the S/N ratio through multi-fold stacking, and meanwhile uses the energy relation between the multi-fold sections and the weak and strong layers of the corresponding gathers to calculate the energy of the weak signals in the gather. On this basis, the dynamic range of the weak signals is calculated and analytic studies of the impact of different noises on weak signals are carried out.
Method
The method proposed in this paper, a theoretical analysis of the influence of noise on weak signals based on the dynamic range of signals, is composed of two parts: (a) identification and extraction of weak signals and noise; (b) calculation and evaluation of the dynamic range of weak signals.

Identification and extraction of weak signals


The weak signal referred to herein is a relative term: it does not only mean a signal with small amplitude; it mainly refers to a signal overwhelmed by intense noise (Bose et al., 2009). The low S/N ratio of weak signals is the main reason why they are difficult to detect and identify. Fortunately, weak signals that are difficult to identify in the gather can be identified with much higher accuracy in the multi-fold stacked section, thanks to the improvement in energy and S/N ratio.
Figure 1: Weak signal dynamic range analysis flow (extract the weak-layer signals in the section; weak/strong-layer energy ratio in the section; fold; weak/strong-layer energy ratio in the gather; strong-layer amplitude in the gather; signal amplitude of the weak layer in the gather; noise amplitude; dynamic range of weak signals; weak-signal frequency-division dynamic range).

Figure 1 shows the idea of the weak-signal dynamic range analysis: replace the gather with the section when extracting the weak signals, so as to improve the identification accuracy; pick the target weak layer and the rms energy at the position of a corresponding strong layer; then calculate the energy ratio between the weak and strong layers. The relationship between the gather and section energies can be considered as involving only the fold factor. This relationship can be expressed as

N_S^W / N_S^P = (F_W N_G^W) / (F_P N_G^P),    (1)

where N_S^W is the weak-layer energy in the section; N_S^P is the strong-layer energy in the section; N_G^W is the weak-layer energy in the gather; N_G^P is the strong-layer energy in the gather; F_W is the fold of the weak layer; and F_P is the fold of the strong layer.

Of the variables in Equation (1), four can be considered known: once the geometry parameters are chosen, the fold numbers for different depths are available, and the strong- and weak-layer energies in the section can be obtained through statistics and calculation. In general, the energy of a strong layer with a high S/N ratio can be picked in the denoised gathers. The only unknown variable, the energy of the weak-layer signals in the gather, is then obtained from Equation (1).

Calculation and evaluation of weak signal dynamic range

The dynamic range of weak signals may be computed by reference to the classic instantaneous dynamic range equation (Liu, 1996). When the research subjects are weak signals under strong-noise conditions, the noise energy is often higher than the other signals by an order of magnitude. For simplicity, the maximum amplitude of the noise can be considered approximately equal to the maximum amplitude of the total information, and the dynamic range of weak signals can be expressed as

I(dB) = 20 log(A_total^max / A_signal^min) ≈ 20 log(A_noise^max / A_signal^min),    (2)

where A_signal^min is the minimum amplitude value of the signal at a given moment; A_total^max is the maximum amplitude value of the gross signal at that moment (the sum of all noise and signal); and A_noise^max is the maximum amplitude value of the noise at that moment.

After obtaining the energies of the noises at all levels through statistics, the dynamic range of weak signals under different noise conditions can be obtained with Equation (2), taking the dynamic range of the receiving system in the instrument (generally the valid dynamic range of the geophone) as a constraint threshold. When the dynamic range of a signal is greater than the threshold, the signal will not be recorded; in this way, a quantitative evaluation of the recordability of weak reflections under different noise intensities can be achieved.
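A minimal numerical sketch of the two-step method, with Equation (1) solved for the weak-layer gather energy and Equation (2) evaluated against a noise level. Every numerical value below is an illustrative assumption, not the paper's field data.

```python
import math

# All numerical values are illustrative assumptions, not the paper's data.
N_S_W, N_S_P = 4.0e-4, 1.0e-2   # weak/strong-layer energies in the section
F_W, F_P = 120, 60              # folds of the weak and strong layers
N_G_P = 2.0e-2                  # strong-layer energy picked in the denoised gather

# Equation (1), N_S_W / N_S_P = (F_W * N_G_W) / (F_P * N_G_P),
# solved for the only unknown, the weak-layer energy in the gather:
N_G_W = (N_S_W / N_S_P) * (F_P / F_W) * N_G_P
print(round(N_G_W, 6))          # 0.0004

# Equation (2): I(dB) = 20 log10(A_noise_max / A_signal_min)
A_signal_min = math.sqrt(N_G_W)     # toy amplitude-from-energy convention
A_noise_max = 0.5                   # assumed noise amplitude at one level
I_dB = 20 * math.log10(A_noise_max / A_signal_min)
print(round(I_dB, 1))           # 28.0, i.e. recordable within a 60 dB system
```

Repeating the last step for each noise level and each offset reproduces the dynamic range curves that the Applications section compares against the 60 dB threshold.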
Applications
The actual data are from a high-efficiency vibroseis acquisition project in an area of western China. The instrument recording system is a 24-bit digital seismograph with high-precision analog geophones with an effective dynamic range of 60 dB.

Figure 2 shows the main process for extracting the weak signals: (a) define the analysis windows for the target weak and strong layers in the section; (b) compute the rms amplitude energies for the strong and weak layers within the windows; (c) compute the ratio between the weak-layer and strong-layer energies at corresponding positions; as shown in Figure 2c, the energy ratio fluctuates somewhat spatially but tends to a constant value in general, and in order to analyze the influence on signals with even weaker energy, we chose a smaller value, 0.04, as the reference for subsequent calculation and analysis; (d) find the strong layer in the corresponding section in the denoised gather and pick the absolute amplitude values at different offsets along the layer (red curve in Figure 2d). Given the energy ratio and the weak- and strong-layer folds, the absolute amplitude values of the weak-layer signals in the gather at different offsets (green curve in Figure 2d) can be obtained from Equation (1).
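Steps (a)-(c) above amount to windowed rms measurements and their ratio. The sketch below reproduces them on a synthetic section tuned so that the ratio comes out near the paper's reference value of 0.04; the synthetic layer amplitudes and noise level are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n_traces, n_samples = 50, 500
section = rng.normal(scale=0.01, size=(n_traces, n_samples))  # background noise
section[:, 200:210] += 1.00   # synthetic strong layer (samples 200-209)
section[:, 400:410] += 0.04   # synthetic weak layer (samples 400-409)

def window_rms(data, t0, t1):
    """RMS amplitude per trace inside the analysis window [t0, t1)."""
    return np.sqrt(np.mean(data[:, t0:t1] ** 2, axis=1))

strong = window_rms(section, 200, 210)    # step (b), strong-layer window
weak = window_rms(section, 400, 410)      # step (b), weak-layer window
ratio = weak / strong                     # step (c): fluctuates spatially
print(round(float(np.median(ratio)), 2))  # hovers near the 0.04 used in the paper
```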

Noise energy extraction can be achieved through statistics. Figure 3 shows the statistical distribution of the absolute noise amplitude for 2,000 raw shot gathers. Apart from the source-generated noise, the blending noise consists mainly of the harmonic noise and mechanical noise of the vibrators. There are large differences in energy between the noises, and different noises show different regional distribution features: the vibrator harmonic noise appears concentrated in the offset range of 4,300-5,800 m (Figure 3a), while the mechanical noise is distributed almost evenly in space and its different energy levels show a banded distribution (Figure 3b). According to the characteristics of these distributions, the blended-source noises are divided into five levels by energy: the harmonic noise of the vibrators; Level I, II and III mechanical noise; and the background noise. The average amplitude of the noise at each level is computed and taken as the noise factor in the weak-signal dynamic range computation and analysis.

Figure 3: a) The statistics on the absolute amplitude of the noises in the raw single-shot records; b) zoom-in of Figure 3a with its noise ratings.

Figure 4: a) The dynamic ranges of weak signals under different noise conditions; b) the frequency-division dynamic ranges of weak signals with Level I mechanical noise.

The dynamic range of the weak-layer signals under a specific noise condition can be computed by substituting the obtained weak-signal amplitudes and the related noise amplitude values into Equation (2). As can be seen from the dynamic range curves of a weak-layer signal under different noise conditions in Figure 4a, the dynamic range of the weak-layer signals increases with offset. If 60 dB is selected as the dynamic range threshold, then the offset corresponding to 60 dB is the critical point determining whether the weak signal can be recorded under the given noise condition; for instance, the critical point for Level I mechanical noise (red curve in Figure 4a) is an offset of 3,900 m. When the noise occurs in the part of the receiver spread beyond the critical point, the receiving traces within the noise-covered region fail to record the weak-layer signals. The lower the noise level, the greater the range of recordable weak signals. As is well known, the energy of the seismic response differs between frequencies, so weak-signal frequency-division dynamic range analysis can provide more refined information about different frequencies than full-band analysis can. The analysis of Figure 4a suggests that, under Level I mechanical noise appearing within the critical point of 3,900 m, weak signals can be recorded effectively; but the frequency-division dynamic ranges for Level I mechanical noise in Figure 4b show that the actually recordable weak signals are only the low-frequency components below 20 Hz, while components above 20 Hz require an even smaller critical threshold. It is obvious that noise has more impact on the recordability of high-frequency signals, and more detailed conclusions can be reached through the frequency-division dynamic range analyses. In addition, it can be seen from Equation (2) that when the dynamic range is fixed, the smaller the amplitude of the weak signals, the smaller the corresponding noise amplitude must be. Clearly, reducing the energy level of the noise is beneficial for obtaining weak signals.
Conclusions
Blending noise significantly affects the recordability of weak seismic reflection signals, and the influence increases with the noise energy, so it is necessary to take noise control measures in the field acquisition process. The high-frequency components are much more easily lost than the low-frequency components under noisy conditions, so for a weak-layer target survey with resolution requirements, the energy intensity of the background noise must be controlled. Without affecting acquisition efficiency, it is recommended to adopt an acquisition organizational pattern in which vibrators shoot and move within the near-offset surface wave zone or outside the spread.

The prestack-gather weak-signal energy estimation method proposed in this paper, based on gathers and the strong/weak-layer energy ratio of the section, is an algorithm with few restrictive conditions and strong adaptability, and can be used as a prestack analysis tool for refined study of weak-signal acquisition methods. The analysis method based on the dynamic range of the signal can quantitatively analyze the recordability of valid signals under noisy conditions and provide specific, quantifiable operational indicators for noise control during field acquisition. The application of the method to practical data acquisition has further proved its effectiveness.

Figure 2: a) The weak- and strong-layer analysis windows of the section; b) the RMS amplitude curves for the weak and strong layers in the section; c) the energy ratio curves for the weak and strong layers; d) the energy variation curves for the strong- and weak-layer signals in the gather.


Distributed Principal Component Analysis for Data Compression of Sequential Seismic Sensor Arrays

Bo Liu1, Hilal Nuha1, Mohamed Mohandes1, Mohamed Deriche1, and Faramarz Fekri2
1: King Fahd University of Petroleum and Minerals, KSA; 2: Georgia Institute of Technology, USA
SUMMARY
This work considers data compression for sequential seismic sensor arrays. First, the statistics of the seismic traces collected by all the sensors are modeled using a mixture model. A distributed Principal Component Analysis (PCA) compression scheme for sequential sensor arrays is then designed. The proposed scheme does not require transmitting the traces, leading to more efficient computation and compression compared with conventional local PCA compression. Furthermore, an efficient communication scheme is developed for the sequential sensor array to deliver the local statistics to the fusion center: the sensors update and pass along a data package consisting of cumulative variables, whose size does not increase throughout the process, making it more efficient than the direct communication scheme. Finally, the performance of the proposed scheme is evaluated using both real and synthetic seismic data.

INTRODUCTION
The petroleum industry uses seismic reflections as one of the principal tools to explore and evaluate oil fields. The rapid increase in the number of sensors (geophones) and the number of shots in modern seismic data acquisition over the last several decades has caused a swift increase in the volume of seismic data stored and transferred. With the emergence of 3D technology, the data volume is particularly large, usually exceeding 10^12 bytes in 3D surveys (Wood, 1974; Chen and Larner, 1995). It is therefore desirable to compress the data in order to reduce the costs of storage and transmission (Donoho et al., 1999; Zarantonello and Bevc, 2005).
Data compression techniques are usually classified into two categories: lossless and lossy (Kass and Li, 2012). No information is lost during lossless compression, and the original data can be perfectly reconstructed from the compressed data. Lossy compression, on the other hand, uses approximations to represent the original data; unlike lossless compression, the original data cannot be perfectly reconstructed. However, the amount of data reduction possible using lossy compression is often much higher than through lossless techniques. Therefore, we focus here on lossy compression, and specifically on PCA compression.
PCA is a standard tool in modern data analysis that provides a simple way to reduce a complex data set to a lower dimension. The goal of PCA is to identify a set of the most important coordinates, i.e., Principal Components (PCs), in terms of the second-order moment of the original data, and then to re-express the original data on this set of PCs (Huang, 1996; Liu et al., 2012; Shlens, 2014). The hope is that by keeping only the data projected on the most important PCs, the dimension is decreased and the volume of the data shrinks.
In this work, we consider PCA-based seismic data compression for sequential seismic sensor arrays. The assumptions and constraints of this work are as follows. First, the sensors (geophones) are able to store a certain amount of data, to perform computation, and to communicate with other sensors. Secondly, the sensors are arranged in a sequential order and only the last sensor has access to the fusion center. Our target is to obtain a set of global PCs at the fusion center, to be used by all sensors for compression instead of local PCs, subject to the requirement that, during the procedure of obtaining the global PCs, only statistics of the traces are transmitted among the sensors instead of all traces.
MIXTURE MODELING
In this section, the concept of the mixture model is introduced,
and the seismic traces collected by multiple sensors are modeled as the realizations of a mixture model. Then a distributed
PCA scheme is proposed.
First, for an n-dimensional stochastic variable

x ≜ [x_1, x_2, ..., x_n]^T,    (1)

if the distribution of the overall population is a convex sum of a number of sub-population distributions, the probability density of x can be described as (McLachlan, 1988; Bishop, 2006)

f(x) = Σ_{i=1}^{K} α_i f_i(x),    (2)

where K denotes the number of mixture components, and f(·) and f_i(·), for i = 1, 2, ..., K, are the Probability Density Functions (PDFs) of x. The weights α_i, for i = 1, 2, ..., K, are non-negative and their sum is one. Define the parameter set of each component as θ_i, i = 1, 2, ..., K; then we have P(θ_i | x = X) = α_i for any realization X. Since our work only considers the first two moments of the distribution, the component parameter set is limited to the mean μ_i and covariance Σ_i, i.e.,

θ_i ≜ {μ_i, Σ_i}.    (3)

One can easily obtain the first two moments of the mixture model (2) as

μ = Σ_{i=1}^{K} α_i μ_i,    (4)

Σ = Σ_{i=1}^{K} α_i Σ_i + Σ_{i=1}^{K} α_i (μ_i − μ)(μ_i − μ)^T.    (5)

One trace is a time series of some length, say n, recorded by a sensor from one shot. Let it be defined as

X_j^{(i)} ≜ [x_j^{(i)}(1), x_j^{(i)}(2), ..., x_j^{(i)}(n)]^T,    (6)

where i and j denote the indices of sensors and shots, respectively, and the scalar x_j^{(i)}(k) denotes the k-th sample of the trace from the i-th sensor and the j-th shot. Trace X_j^{(i)} is regarded as a realization of an n-dimensional stochastic variable x, which is assumed to follow an unknown local PDF f_i(x). Once the i-th sensor has collected a number of traces, say N_i traces {X_j^{(i)}}_{j=1}^{N_i}, the first two moments of f_i(x) can be estimated as

μ_i = (1/N_i) Σ_{j=1}^{N_i} X_j^{(i)},    (7)

Σ_i = (1/(N_i − 1)) Σ_{j=1}^{N_i} (X_j^{(i)} − μ_i)(X_j^{(i)} − μ_i)^T.    (8)

Suppose that there are K sensors in the sensor array. Applying the same modeling formulation to the other sensors finally yields K local PDFs f_i(x), for i = 1, 2, ..., K. Since the sensors generally occupy different geographical locations, the local PDFs differ slightly from one another. To capture the global statistics of the seismic traces collected by the sensor array, we construct the global PDF of the traces x as a convex sum of the local PDFs f_i(x), as in (2), with weights

α_i = N_i / N,    (9)

where N = Σ_{i=1}^{K} N_i; its first two moments are obtained as in (4)-(5). Decompose the global covariance as

Σ = P Λ P^T,    (10)

where Λ = diag[λ_1, λ_2, ..., λ_n] ∈ R^{n×n} and P = [p_1, p_2, ..., p_n] ∈ R^{n×n}. The scalars λ_1 ≥ λ_2 ≥ ... ≥ λ_n and the vectors p_1, p_2, ..., p_n are the eigenvalues of Σ and their corresponding eigenvectors, respectively. Let

P_k ≜ [p_1, p_2, ..., p_k], k ≤ n,    (11)

denote the selected PCs, i.e., the first k columns of P. Once the sensors have the global PCs P_k, they project their original traces on the PCs to obtain the compressed traces, or coefficients, {Y_j^{(i)}}_{j=1}^{N_i}, as follows:

Y_j^{(i)} = P_k^T X_j^{(i)}, j = 1, 2, ..., N_i.    (12)

This scheme indicates that calculating a set of global PCs from the traces collected by multiple sensors only requires the first two moments of the local PDFs together with their weights, instead of access to all traces. It can be implemented in two stages: first, each sensor calculates its own first two moments and sends them, along with its number of traces, to a fusion center; secondly, the fusion center calculates the global PCs and broadcasts them to all sensors for compression. During this procedure, the scheme does not require transmitting the original data. Finally, the compressed traces {Y_j^{(i)}}_{j=1}^{N_i}, for i = 1, 2, ..., K, are transferred to the fusion center.

The volume of all traces is nN. The distributed PCA compression requires storing k global PCs, k coefficients per trace, and K means, giving the compression ratio

R_d = (kn + kN + Kn) / (nN).    (13)

The local PCA compression requires storing k local PCs for each of the sensors, k coefficients per trace, and K means, giving the compression ratio

R_l = (Kkn + kN + Kn) / (nN),    (14)

which is larger than R_d for the same number of PCs. As for the computation cost, the distributed PCA needs only one matrix decomposition, at the fusion center, while the local PCA needs K of them.

SEQUENTIAL SENSOR ARRAYS

In this section, we apply the proposed mixture-model-based distributed PCA scheme to the sequential array of sensors. This topology is widely used in practice and is shown in Figure 1. A number of sensors S_1, S_2, ..., S_K, denoted by circles in the schematic figure, are connected to a fusion center G, denoted by a square. The sensors are connected sequentially: each sensor can communicate only with its nearest neighbors, and the sensor at the end of the chain is connected to the fusion center.

Figure 1: The sequential array of seismic sensors.
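The compression-ratio formulas (13)-(14) are easy to sanity-check numerically. The sketch below (Python) plugs in the values from the paper's real-data experiment reported later (n = 1501 samples, N = 10 × 18 = 180 traces, K = 10 sensors, with roughly 99% energy preserved by k = 34 global PCs or k = 12 local PCs) and reproduces the reported R_l = 0.73 and, to within rounding, the reported R_d = 0.26:

```python
def rd(n, N, K, k):
    # Distributed PCA, Eq. (13): k global PCs (k*n values), k coefficients
    # per trace (k*N), and K local means (K*n), over the raw volume n*N.
    return (k * n + k * N + K * n) / (n * N)

def rl(n, N, K, k):
    # Local PCA, Eq. (14): each of the K sensors stores its own k PCs.
    return (K * k * n + k * N + K * n) / (n * N)

print(round(rd(1501, 180, 10, 34), 2))   # 0.27 (the paper reports 0.26)
print(round(rl(1501, 180, 10, 12), 2))   # 0.73
```

Note that for the same k, R_l exceeds R_d by (K − 1)kn/(nN), the cost of duplicating the PCs at every sensor.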


The most direct way is to require each sensor to send its own statistics, together with whatever it receives from its left-hand neighbor, to the sensor on its right-hand side. For example, sensor S_i receives {μ_j, N_j}_{j=1}^{i−1} from sensor S_{i−1} and sends them along with μ_i and N_i to sensor S_{i+1}. However, since there is no direct connection between the sensors and the fusion center, except for the last sensor, all information needed at the fusion center must be transferred through multiple hops. The closer a sensor is to the fusion center, the more preceding sensors it has, which implies a heavier communication burden: the data package communicated among the sensors grows as the sensor index increases. To alleviate this drawback, we develop a dedicated scheme for this topology. First, substituting (9) into (4)-(5) and rewriting them yields

μ = (1/N) Σ_{i=1}^{K} N_i μ_i,    (15)

Σ = (1/N) Σ_{i=1}^{K} N_i Σ_i + (1/N) Σ_{i=1}^{K} N_i (μ_i − μ)(μ_i − μ)^T
  = (1/N) Σ_{i=1}^{K} N_i (Σ_i + μ_i μ_i^T − μ_i μ^T − μ μ_i^T + μ μ^T)
  = (1/N) Σ_{i=1}^{K} N_i Σ_i + (1/N) Σ_{i=1}^{K} N_i μ_i μ_i^T − ((1/N) Σ_{i=1}^{K} N_i μ_i) μ^T − μ ((1/N) Σ_{i=1}^{K} N_i μ_i)^T + μ μ^T
  = (1/N) Σ_{i=1}^{K} N_i Σ_i + (1/N) Σ_{i=1}^{K} N_i μ_i μ_i^T − μ μ^T.    (16)

Then, in order to obtain μ at the fusion center, one can easily see from (15) that it is not necessary to deliver all the μ_i and N_i separately from the sensors to the fusion center. All that sensor S_i, say, needs to deliver to the next sensor are the cumulative sums Σ_{j=1}^{i} N_j μ_j and Σ_{j=1}^{i} N_j. The next sensor, S_{i+1}, updates the received sums by adding N_{i+1} μ_{i+1} and N_{i+1} to them, respectively, so that they become Σ_{j=1}^{i+1} N_j μ_j and Σ_{j=1}^{i+1} N_j, and sends them on to sensor S_{i+2}. Finally, the fusion center receives the two sums Σ_{j=1}^{K} N_j μ_j and Σ_{j=1}^{K} N_j from the last sensor, from which the global mean μ follows directly as in (15). Similarly, to obtain Σ at the fusion center (see (16)), sensor S_i receives the cumulative sums Σ_{j=1}^{i−1} N_j Σ_j and Σ_{j=1}^{i−1} N_j μ_j μ_j^T from the previous sensor, updates them, and sends the updated sums to the next sensor. During this procedure, each sensor between the first and the last receives four cumulative sums and sends four cumulative sums, so the communication burden does not increase along the chain. The scheme is summarized in the following algorithm.

1. Sensor S_1 sends {N_1, N_1 μ_1, N_1 Σ_1, N_1 μ_1 μ_1^T} to S_2.

2. For 2 ≤ i ≤ K:

(a) Sensor S_i receives {Σ_{j=1}^{i−1} N_j, Σ_{j=1}^{i−1} N_j μ_j, Σ_{j=1}^{i−1} N_j Σ_j, Σ_{j=1}^{i−1} N_j μ_j μ_j^T} from sensor S_{i−1}.

(b) It updates the cumulative sums as:
Σ_{j=1}^{i} N_j = Σ_{j=1}^{i−1} N_j + N_i
Σ_{j=1}^{i} N_j μ_j = Σ_{j=1}^{i−1} N_j μ_j + N_i μ_i
Σ_{j=1}^{i} N_j Σ_j = Σ_{j=1}^{i−1} N_j Σ_j + N_i Σ_i
Σ_{j=1}^{i} N_j μ_j μ_j^T = Σ_{j=1}^{i−1} N_j μ_j μ_j^T + N_i μ_i μ_i^T

(c) It sends the updated cumulative sums to the next sensor (or to the fusion center, if i = K).

3. The fusion center G receives the package from sensor S_K and determines the global covariance and the PCs.

EXPERIMENTAL RESULTS

In this section, the proposed scheme is tested using both real and synthetic seismic traces.
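To make the two-stage scheme and the cumulative-sum pass concrete, here is a minimal numerical sketch (Python/NumPy). The random "traces" and the array sizes are illustrative assumptions, not the paper's data; the chain of sensors is simulated as a simple loop:

```python
import numpy as np

# Synthetic stand-in for the sensor chain: K sensors, each holding Ni
# traces of n samples.
rng = np.random.default_rng(0)
n, K = 64, 5
traces = [rng.normal(loc=s, size=(10 + 2 * s, n)) for s in range(K)]

# Sequential pass: each sensor updates the four cumulative sums of
# step 2(b) and forwards them down the chain.
N_tot = 0
sum_Nmu = np.zeros(n)            # sum_j N_j * mu_j
sum_Ncov = np.zeros((n, n))      # sum_j N_j * Sigma_j
sum_Nouter = np.zeros((n, n))    # sum_j N_j * mu_j mu_j^T
for X in traces:                 # S_1 -> S_2 -> ... -> S_K
    Ni = len(X)
    mu_i = X.mean(axis=0)
    Sigma_i = np.cov(X, rowvar=False)   # unbiased local covariance, Eq. (8)
    N_tot += Ni
    sum_Nmu += Ni * mu_i
    sum_Ncov += Ni * Sigma_i
    sum_Nouter += Ni * np.outer(mu_i, mu_i)

# Fusion center: global moments from Eqs. (15)-(16).
mu = sum_Nmu / N_tot
Sigma = (sum_Ncov + sum_Nouter) / N_tot - np.outer(mu, mu)

# Global PCs: eigenvectors of Sigma, sorted by decreasing eigenvalue.
w, P = np.linalg.eigh(Sigma)
order = np.argsort(w)[::-1]
w, P = w[order], P[:, order]

k = 8
Pk = P[:, :k]                    # selected PCs, Eq. (11)
Y = [X @ Pk for X in traces]     # compressed coefficients, Eq. (12)
nce = w[:k].sum() / w.sum()      # preserved energy, Eq. (17)
```

Note that the global mean recovered from the cumulative sums matches the mean of the pooled traces exactly; the covariance matches up to the small bias introduced by using the unbiased local estimates of Eq. (8) inside the mixture formula.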

Real Data
We extract 18 traces for each sensor from the East Texas, USA database (Mousa and Al-Shuhail, 2011), in which the shots were generated at 80-100 ft depth at 18 different locations. We consider 10 sensors arranged in a sequential array with an interval of 220 ft. The length of each trace is 1501 time samples, approximately 3 seconds. Since the magnitude of the shot wave is attenuated as it propagates to the sensors over distance, and the sensors are away from each other, different sensors receive traces with different magnitudes. See Figure 2 as an example: for a sensor that is far from the shots, the traces have magnitudes too small to be visually observed. To mitigate this nuisance, we normalize the traces with small wave magnitudes; see Figure 3.
Figure 2: 18 traces gathered by a sensor.

Figure 3: Normalized 18 traces gathered by a sensor.


We take the Normalized Cumulative Energy (NCE) as a metric to evaluate the energy preserved after PCA compression with k PCs,

NCE(k) = (Σ_{i=1}^{k} λ_i) / (Σ_{i=1}^{n} λ_i),    (17)

where λ_i denotes the i-th largest eigenvalue of the covariance matrix used for determining the PCs. Figure 4 shows the NCEs of the 10 sets of local PCs, each calculated from a local covariance matrix (in red), and the NCE of the global PCs (in blue). One can easily notice several phenomena. First, the NCE increases with the number of PCs and finally levels off when the number of PCs is large enough: the more PCs are used in the compression, the more energy is preserved, but since the PCs are arranged in decreasing order of importance (eigenvalue), using too many PCs does not improve the NCE significantly. Secondly, the NCEs of the local PCAs (the red curves) increase dramatically at the beginning, which means that the traces collected by the same sensor share high similarity and can be compressed substantially with a small number of local PCs. Last, the NCEs of the local PCAs are clearly higher than that of the global PCA (the blue curve) for the same number of PCs; in other words, to preserve the same amount of energy during compression, more global PCs than local PCs are needed. This is because the sensors are located away from each other, so the traces collected by all the sensors share lower similarity than those from a single sensor.

Although, to reach a given NCE, the distributed PCA needs more PCs than the local PCA, the distributed PCA compression still has some overall benefits. First, it requires one matrix decomposition at the fusion center, instead of 10 matrix decompositions as in the local PCA compression. Secondly, it has a lower compression ratio than the local PCA: for example, to preserve approximately 99% of the energy, we need 12 PCs for the local PCA or 34 PCs for the global PCA, and according to (13)-(14) the compression ratio of the distributed PCA is then R_d = 0.26 while that of the local PCA is R_l = 0.73.

Figure 4: The NCEs of the local PCA and the global PCA using 10 sensors (real data).

Synthetic Data
In the rest of this section, we test the proposed scheme using synthetic seismic traces. A Ricker wavelet is employed as the shot source to simulate 18 shots, recorded by 10 sensors; each trace has 1501 samples, and synthetic noise at 40 dB SNR is added. The NCEs are plotted in Figure 5. All the phenomena observed in the real-data experiment (Figure 4) are also observed here. The only difference between the two experiments is that the NCE of the global PCA increases more slowly with the synthetic data (Figure 5) than with the real data; this is case dependent, due to the different settings of the sensors and traces.

Figure 5: The NCEs of the local PCA and the global PCA using 10 sensors (synthetic data).

CONCLUSIONS
In this work, we present a distributed PCA compression scheme for sequential sensor arrays. The seismic traces collected by multiple sensors are modeled as realizations of a mixture PDF, whose mean and covariance can be estimated at a fusion center in a distributed fashion without accessing the traces. A set of global PCs is determined by decomposing the global covariance and is then used by all the sensors to project the traces. The advantage of the proposed scheme over the local PCA scheme is twofold: it has a lower computation cost in terms of matrix decompositions, and it achieves a lower compression ratio. Furthermore, an efficient communication method is proposed to alleviate the communication burden of the sequential sensor array, in which the data package transmitted among the sensors does not grow in size. Finally, a number of experiments demonstrate the efficiency of the proposed scheme.

ACKNOWLEDGMENTS
The work presented in this paper is supported by the Center for Energy and Geo-Processing (CeGP) at King Fahd University of Petroleum and Minerals (KFUPM) and the Georgia Institute of Technology.


EDITED REFERENCES
Note: This reference list is a copyedited version of the reference list submitted by the author. Reference lists for the 2016
SEG Technical Program Expanded Abstracts have been copyedited so that references provided with the online
metadata for each paper will achieve a high degree of linking to cited sources that appear on the Web.
REFERENCES

Bishop, C., 2006, Pattern recognition and machine learning: Springer.
Chen, T., and K. L. Larner, 1995, Seismic data compression.
Donoho, P. L., R. A. Ergas, and R. S. Polzer, 1999, Development of seismic data compression methods for reliable, low-noise, performance: SEG Technical Program Expanded Abstracts, 1903–1906.
Huang, K.-Y., 1996, Seismic principal components analysis using neural networks: 66th Annual International Meeting, SEG, Expanded Abstracts, 1259–1262, http://dx.doi.org/10.1190/sbgf2005-213.
Kass, M. A., and Y. Li, 2012, Inversion of electromagnetic data processed by principal component analysis: Presented at the 22nd Geophysical Conference, ASEG, Extended Abstracts, 1–4.
Liu, C., M. Han, and L. Han, 2012, Application of principal component analysis for frequency-domain full waveform inversion: 82nd Annual International Meeting, SEG, Expanded Abstracts, 1–5, http://dx.doi.org/10.1190/segam2012-0909.1.
McLachlan, G. J., 1988, Mixture models: Inference and applications to clustering: Dekker.
Mousa, W. A., and A. A. Al-Shuhail, 2011, Processing of seismic reflection data using MATLAB: Synthesis Lectures on Signal Processing, 5, 1–97, http://dx.doi.org/10.2200/S00384ED1V01Y201109SPR010.
Shlens, J., 2014, A tutorial on principal component analysis: arXiv:1404.1100, 1–12.
Wood, L. C., 1974, Seismic data compression methods: Geophysics, 39, 499–525, http://dx.doi.org/10.1190/1.1440443.
Zarantonello, S. E., and D. Bevc, 2005, Compression of seismic data using ridgelets: 75th Annual International Meeting, SEG, Expanded Abstracts, 2185–2188, http://dx.doi.org/10.1190/1.2148148.

Offset design for deep targets: Kuwait example


Adel El-Emam* and Jarrah Al-Jenaie, Kuwait Oil Company; Amr El-Sabaa, Schlumberger
Summary
Forward modeling forms the basis of survey planning and design nowadays. Modeling based on numerical solutions of the wave equation, or on ray tracing, depending on the subsurface complexity, typically leads to a better understanding of the quality of the seismic image with different acquisition geometries, and hence facilitates important decisions relating quality to cost for seismic operations. Surface geometry, source bandwidth, and subsurface complexity are the major factors impacting image quality at the target. The purpose of our study is to assess the impact of surface geometry, and in particular the maximum offset, on imaging.
Introduction
Developments in seismic acquisition (Rached et al., 2010) and processing tools (El-Emam, 2012) have enabled KOC to achieve its exploration and development objectives. With increased interest in deep targets, it is necessary to revisit the acquisition parameters to ensure an optimum and cost-effective survey. Although the contribution of long offsets to earth-model building is significant, as previously demonstrated (Vigh et al., 2013), in this study we focus on understanding the impact of the maximum offset on imaging, to help develop a fit-for-purpose design for future 3D surveys targeting the deep Permian reservoirs.
Inversion for acoustic and elastic rock properties relies on seismic amplitudes. As a result, suboptimal subsurface illumination caused by limited surface geometry may hinder the inversion results unless it is addressed during acquisition, or in processing through depth-domain inversion (Fletcher et al., 2012).

Firstly, to understand which offsets illuminate a particular target, we constructed a representative velocity model and traced offset rays upward from three different reservoir levels. The rays recorded at the surface marked an offset limitation based on the critical angle: rays beyond this angle were not recorded at the surface. The maximum usable offsets for stacking and imaging at each of the three levels were further assessed using modelled and field shots with NMO applied.

Secondly, we simulated the imaging responses of scattering objects using wave-equation-based Point Spread Functions (PSFs) for selected maximum offsets. The PSFs not only illustrate the bandwidth effects on the vertical and horizontal resolution of the image at depth, but also demonstrate the amplitude distortions at reservoir levels caused by different offsets and source frequency bandwidths.

The validation was performed by cross-checking the ray tracing, wave-equation modelling, and PSF simulation results against the analysis performed on field data shots and images.

Field data description
Kuwait Oil Company (KOC) acquired a 2D seismic test line with split-spread geometry. The line extends for 60 km and connects two exploration wells. The full 60 km spread was active for each shot; as a result, the maximum recorded offset varied with each shot and reached 60 km on one side at the two ends of the line. The line has relatively dense spacing of 12.5 m for sources and 6.25 m for receivers. The data were processed through ambient and coherent noise attenuation, amplitude recovery, surface-consistent amplitude corrections, and deconvolution. The processed shots were then migrated using Reverse Time Migration (RTM).

The near-surface velocity model (Figure 1a) was derived using simultaneous joint inversion of the first-break picks and surface-wave dispersion curves. The deep model (Figure 1b) was derived using horizon-guided interpolation of calibrated sonic-log velocities after smoothing. The shallow and deep models were merged while preserving the measured traveltimes at the wells. The merged model was used for ray tracing, forward modelling, and imaging of the field data.

Figure 1: (a) Near-surface velocity model; (b) deeper velocity model prior to merging. Vertical tubes represent the calibrated and smoothed sonic velocities at the well locations.

The three reservoir levels used throughout the study are the Lower Cretaceous reservoir at a depth of 3 km (1.7 s TWT), the Jurassic reservoir at a depth of 5 km (2.5 s TWT), and the Permian reservoir at a depth of around 6 km (about 2.9 s TWT).
Analysis of results
Rays traced from the deeper Permian reservoir reached the surface at a maximum offset of 22 km (Figure 2b); this was set as the maximum offset for the later wave-equation modelling. Rays from the shallower Cretaceous target reached the surface at a maximum offset of 11 km (Figure 2a).

Figure 2: Ray tracing from subsurface points placed at different targets: (a) Cretaceous, (b) Permian. The velocity model overlying the seismic image is shown in the background.
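The critical-angle offset limit found here by ray tracing can be illustrated, far more crudely, with a toy two-layer model. The flat interface and the velocities below are hypothetical assumptions for illustration only; the study's actual 11 km and 22 km limits come from ray tracing through the merged velocity model:

```python
import math

def max_offset(depth_m, v1, v2):
    """Toy two-layer model: maximum source-receiver offset at which a
    reflection from a flat interface at `depth_m` is recorded, i.e. the
    offset of the critically incident ray (straight rays in a constant
    overburden velocity v1 above a faster half-space velocity v2)."""
    theta_c = math.asin(v1 / v2)          # critical angle, Snell's law
    return 2.0 * depth_m * math.tan(theta_c)

# Hypothetical velocities: 2500 m/s over 3500 m/s, reflector at 3 km.
print(round(max_offset(3000.0, 2500.0, 3500.0)))  # 6124 m
```

The offset limit grows linearly with reflector depth in this toy model, which is consistent with the deeper Permian level admitting a much larger maximum offset than the Cretaceous level.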

The PSFs, on the other hand, demonstrate the dependency of the seismic image resolution on target depth and source bandwidth (Figure 3). The original point scatterers are Gaussian-tapered spikes with a vertical/horizontal aspect ratio of one. When modelled and imaged using all offsets and an infinite source bandwidth, they appear as circles with a diameter of approximately 88 m. Figure 3 compares the shape and size of the imaged point scatterers at the Cretaceous, Jurassic, and Permian reservoir levels using all offsets up to 22 km and two different finite source bandwidths; it shows an approximately 50% increase of the point-scatterer diameter with the 2-30 Hz source bandwidth relative to the 2-60 Hz bandwidth.
It is noticeable that while the maximum offset of 22 km provides the best focusing and the least blurring at all reservoir levels, a limited offset of 10 km provides similar focusing at all reservoir levels (Figure 4); the similarity varies, however, from almost identical at the Cretaceous level to slightly deformed and less focused than the 22 km offset at the Permian level. Accordingly, offsets larger than 10 km have an insignificant impact on imaging even at the deepest reservoir level.

Figure 3: PSFs using 22 km maximum offset and two different bandwidths (2-30 Hz, 2-60 Hz) at the (a) Cretaceous, (b) Jurassic, and (c) Permian levels, respectively.

After defining the maximum offsets for capturing and imaging reflection energy at different reservoir depths, we examine the maximum usable offset for conventional time velocity analysis and stacking on an NMO-corrected synthetic shot generated using the actual shot and receiver geometry and the unsmoothed velocity model. The mute pattern is selected to visually track the NMO-stretch borderline (Figure 5a). The same mute pattern was then overlaid on the field shot after the application of coherent and ambient noise attenuation (Figure 5b). Although reflection events at longer offsets on the field shot exhibit residual moveout that can be attributed to anisotropy effects, the synthetic and field shots show a good kinematic match. The maximum usable offset for imaging the Cretaceous target is approximately 7.5 km, whereas for the Jurassic and Permian reservoir levels it is 10 km. Figures 5a and 5b illustrate how primary reflections can be obscured by residual noise and different types of multiples in the field records at longer offsets.
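The stretch-mute borderline tracked above can also be estimated analytically. A sketch, assuming hyperbolic moveout, a 30% stretch threshold, and illustrative time/velocity pairs (not the survey's actual values):

```python
import numpy as np

# Offset at which the NMO stretch (t_x - t0)/t0 reaches a given
# threshold; beyond it a stretch mute would remove the samples.

def max_unstretched_offset(t0_s, v_rms_m_s, max_stretch=0.3):
    """Largest offset whose NMO stretch stays below max_stretch."""
    # t_x = sqrt(t0^2 + x^2/v^2); solve (t_x - t0)/t0 = max_stretch for x
    t_max = t0_s * (1.0 + max_stretch)
    return v_rms_m_s * np.sqrt(t_max**2 - t0_s**2)

# purely illustrative two-way times and RMS velocities
for name, t0, v in [("shallow", 1.5, 3000.0), ("deep", 3.5, 4500.0)]:
    x = max_unstretched_offset(t0, v)
    print(f"{name} (t0={t0} s): mute beyond ~{x / 1000:.1f} km")
```

The deeper the reflector (larger t0 and velocity), the farther out the mute can reach, which is why the usable offset grows from the Cretaceous to the Permian levels.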

These artifacts may deteriorate the final image, especially when associated with an inaccurate velocity model.
The full-offset field dataset, in addition to 3, 6 and 10 km limited-offset versions, was processed and migrated using the same parameters and velocity model. Difference plots between the image produced with the full offset and the images from the three limited-offset versions support the earlier observations from the PSF modeling (Figure 6). The differences, starting as shallow as 2 km, indicate that a 3 km maximum offset is insufficient to fully image the Cretaceous reservoirs. The 6 km offset, although adequate for the Cretaceous, is not for the Jurassic and Permian. The differences greatly diminish with the 10 km offset, confirming the modeling-based results.
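Difference plots of this kind reduce to a simple per-depth residual measure. A toy sketch, with random arrays standing in for the actual migrated sections:

```python
import numpy as np

# Per-depth RMS of (full-offset image - limited-offset image), the
# quantity the difference plots visualize. Placeholder arrays are used
# instead of real migrated images.

rng = np.random.default_rng(0)
full = rng.standard_normal((400, 200))   # stand-in full-offset image
limited = full.copy()
limited[160:, :] *= 0.6                  # pretend deep energy is missing

rms = np.sqrt(np.mean((full - limited) ** 2, axis=1))
first_depth = int(np.argmax(rms > 0.1))  # shallowest significant difference
print("differences start at depth sample", first_depth)
```

The depth at which the RMS residual departs from zero marks where the limited-offset image starts to lose energy, analogous to the 2 km onset seen for the 3 km offset version.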
Conclusions
Long-offset acquisition is essential for improving fold and consequently the signal-to-noise ratio at the target. Similarly, model building, imaging and prestack inversion all benefit significantly from long offsets.


Although recent years have witnessed a substantial increase in the number of live channels deployed in the field, achieving a healthy compromise between cost and data quality remains a fundamental aspect of any seismic operation. Longer offsets imply a longer spread with more channels and consequently higher operational cost. Therefore, we thoroughly examined the 2D synthetic and field datasets before deciding on the maximum usable offset for the next 3D seismic survey planned for deep exploration objectives. Based on the combination of forward modeling and field data analysis, it has been shown that the maximum offset required to optimally image the deep Permian target is approximately 10 km. The higher the accuracy of the velocity model, the higher the confidence in the modeled and field data results. The impact of any future operational compromise should be analyzed and considered.
Acknowledgements
The authors would like to thank Kuwait Oil Company and the Kuwait Ministry of Oil for their kind permission to publish this paper. Thanks are also extended to Clement Kostov and Chris Koeninger for their comments and fruitful discussions.




An Application of Crooked Wide Line Acquisition Technique in Complex Mountainous Region


Lu Liangxin, Xia Yong, Ding Tongzhe, Liu Xinggang, BGP, CNPC
Summary
Seismic exploration in complex mountains remains a tough
challenge. In this paper, an example of seismic exploration
in complex terrains in Central Asia is discussed. The
seismic lines cross over mountains, rivers, loess and valleys
in the work area. High elevation difference, changeable
surface lithologies and great variations of transverse
velocity and thickness in the low-velocity layer contribute
to the difficulty of seismic exploration. The broad
distribution of inhomogeneous bodies in the near-surface
induces serious secondary disturbances. Besides, the target
layers in the area are as deep as 5-6 seconds with developed
faults, broken formation and complex seismic wave fields.
Last but not least, the wide distribution of high-risk red
zones brings a tough problem to field acquisition. In order
to solve the above problems effectively and efficiently, a
crooked wide-line geometry was employed in the work area.
The results show that the approach has increased the fold of
target layers, suppressed the side waves and near surface
secondary disturbances. The design of crooked lines and its
flexible lateral offset have enhanced the efficiency
significantly while reducing the operational risks. In this paper, a quantitative analysis is made of the side-wave suppression, the effects of shot-point offsets, and the stack response at the inflections of the crooked lines, to discuss the rationale of the acquisition scheme.

Introduction
The conventional 2D single-line data from the area show that the target layers are clear in the plain region, but subsurface structures can hardly be seen once the lines enter the mountains and loess. The quite low S/N ratio results from the extreme terrain of Central Asia, including mountains, loess, rivers and hills. The rugged surfaces and great elevation variations of the mountainous area, together with the complex underground structures, make the data from the mountainous area difficult to image. The distribution of inhomogeneous near-surface bodies generates large numbers of secondary disturbances, and the loess, hundreds of meters thick, strongly attenuates the shooting energy, leading to quite low energy at the middle-to-deep target layers. To address these problems, the crooked wide-line acquisition scheme was widely employed in the early-stage survey. The wide-line survey has also been broadly used in recent years, for example in the complex mountains of Western China (Zhang et al., 2012) and in Pakistan (Shi et al., 2008). Lv (2013) studied the spacing of receiver lines and concluded that the reasonable range is 40-80 m. In the crooked wide-line seismic acquisition in Central Asia discussed in this paper, the receiver line interval of up to 120 m is a highlight of the research. Compared with a 3D survey, side interference is a tough challenge for a 2D survey and one of the main causes of low S/N ratio; this paper therefore focuses on the role of the recording geometry in suppressing side interferences. In addition, the great elevation differences along a single seismic line (up to 1,000 m or more) and the large numbers of cliffs also create difficulties for field acquisition. The crooked-line design avoids the adverse terrain to some degree, reduces the difficulty of field operation, and preserves the effective signals from the deep target layers to a great extent. In the following, a quantitative analysis is made of the crooked wide-line geometry adopted in an area of Central Asia.

Suppression of Side Interferences by Wide-Line Geometry
The recording geometry adopted in the area is displayed in Figure 1, where the green dots are receiver points and the red dots are source points. The receiver line interval for the three receiver lines is 120 m; the distance from the shot line to the nearest receiver line is 60 m; the group interval is 25 m; the source-point interval in each shot line is 100 m; and the maximum channel count of a single receiver line is 720 channels, so the full fold is 540.


To study how the geometry suppresses side interferences, the diffraction time-distance curve of a side interference source in the CMP domain is considered first. Assume that the side interference source lies 1,800 m laterally from the given bin and 3,000 m deep, and that the average velocity of the diffraction waves it generates is 4,000 m/s; its diffraction time-distance curve is then represented by the red line in Figure 2, and the roughness of the curve reveals that it would not be flat after NMO correction. After the NMO correction, horizontal stacking is performed on the diffracted-wave time-distance curve to obtain the curve in Figure 3. Two assumptions are made here: amplitude compensation is complete, and the wavelets are Ricker wavelets with an amplitude of 1 and a dominant frequency of 20 Hz. The waveform shown in Figure 3 corresponds to a lateral source from a single direction only; side interference sources can arrive from any direction within the 180° range around the inline direction. If the semicircle is uniformly divided into 20 portions, 21 evenly spaced sampling points are obtained. Assuming the diffraction sources coincide with these sampling points and that the average velocity of the seismic waves from the diffraction sources


Figure 2: Time-distance curve of the side interference source (wide-line geometry vs. fitted curve).

to the ground surface is 5,000 m/s. The waveform shown in Figure 3 is then computed for each diffraction point, and the peak amplitude of each waveform is recorded to obtain the maximum poststack amplitudes from the 21 directions, which are marked in polar coordinates (the blue circle in Figure 5). The stack response of the diffraction waves from the various orientations can be clearly seen in Figure 5. If the time-distance curve of the diffraction waves stacked completely in phase, its poststack amplitude would equal 540, the full fold. Figure 5 shows that the signals from directly below the seismic line are hardly suppressed, whereas the side interferences from the other directions are suppressed to a great extent.

Figure 3: Poststack waveform of the side interference source.

Offsets of shot points could not be avoided during field acquisition because of the broad distribution of cliffs, steep hills, deep trenches, wide rivers and villages in the work area. Lateral offsetting was preferred, and the maximum offset was set to 200 m in this area. To analyze the effect of the offsets on data quality, shot points were offset randomly within 200 m laterally from their theoretical coordinates to simulate the offsets in the real operation (Figure 4). The suppression of side interferences by the offset geometry was then calculated. As shown by the red dots in Figure 5, the near-surface interference source is suppressed more strongly than before the offsets, because the offsets further lower the coherence of the lateral interferences; meanwhile, the effective signals from below the bin are almost identical to those without offsets, indicating that offsetting the shot points hardly degrades the data quality while it improves the field operation efficiency.

Figure 1: Wide-line geometry (green: receiver points; red: shot points).

Figure 4: Geometry with shot-point offsets.
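The stack-response computation described above can be sketched numerically. The scatterer position (1,800 m lateral, 3,000 m deep), its 4,000 m/s average velocity, the 5,000 m/s velocity toward the surface, the 20 Hz Ricker wavelet and the full fold of 540 come from the text; the symmetric split-spread offset range and the 1 ms sampling are assumptions for illustration:

```python
import numpy as np

# Stack response of one side scatterer: its diffraction curve is
# NMO-corrected with the (faster) primary velocity, so a residual
# moveout remains and the stack attenuates it.

def ricker(t, f0=20.0):
    a = (np.pi * f0 * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

nfold = 540                                    # full fold quoted in the text
offsets = np.linspace(-5000.0, 5000.0, nfold)  # assumed offset range (m)
v_diff, v_nmo, dy, dz = 4000.0, 5000.0, 1800.0, 3000.0

# two-way diffraction time, shot/receiver symmetric about the CMP
t_diff = 2.0 * np.hypot(np.hypot(offsets / 2.0, dy), dz) / v_diff
t0 = 2.0 * np.hypot(dy, dz) / v_diff

# NMO with the primary velocity maps t_diff to t_out; the leftover
# moveout (t_out - t0) destroys the diffraction's coherence
t_out = np.sqrt(t_diff**2 - (offsets / v_nmo) ** 2)
residual = t_out - t0

# stack unit-amplitude Ricker wavelets shifted by the residual moveout
t_axis = np.arange(-0.3, 0.3, 0.001)
stack = sum(ricker(t_axis - r) for r in residual)
peak = np.max(np.abs(stack))
print(f"stacked peak {peak:.0f} of {nfold} possible (perfect in-phase)")
```

The peak falls well short of the full fold of 540, which is the mechanism by which the wide-line stack suppresses lateral interference while leaving in-phase signal from below the line untouched.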


Figure 5: Poststack waveform amplitudes of the side interference from different directions, in polar coordinates (blue circles: no offset; red dots: with shot-point offsets).

Figure 6: CMP gather at the inflection.

Analysis of Crooked Line Geometry
When designing the crooked-line geometry, it must be considered whether the crooking will degrade data quality while improving acquisition efficiency. Since all the deflection angles at the inflections in the work area are within 10°, the case is analyzed at 10°, because the negative effect of an angle smaller than 10° is less pronounced than that of an angle of exactly 10°. A bin at an inflection is chosen for the analysis (Figure 6), where the deflection angle is precisely 10°; the crooking of the lines widens the lateral scatter of the CMPs shown in Figure 6. Using a signal source 4,000 m beneath the bin, an average velocity of 5,000 m/s, and a Ricker wavelet with a dominant frequency of 20 Hz and an amplitude of 1, the stacked waveform in Figure 7 is obtained, showing little change in the waveform. Compared with the maximum amplitude in Figure 5, the amplitude is somewhat reduced, but it is still acceptable as a whole, and the effective signals beneath the bin can still be recorded. In brief, such a crooked-line scheme brings convenience for field operation on the one hand, and its effect on data quality at the inflection is within an acceptable range on the other.
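The inflection check above can be sketched as a small simulation. The 4,000 m depth, 5,000 m/s average velocity and 20 Hz Ricker wavelet come from the text; the ±200 m crossline midpoint scatter and the 540 traces are assumptions for illustration:

```python
import numpy as np

# Effective-signal stack at an inflection: midpoints scattered
# crossline add only a tiny extra traveltime, so the stack barely loses
# amplitude, matching the "little change" observed in Figure 7.

def ricker(t, f0=20.0):
    a = (np.pi * f0 * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

z, v = 4000.0, 5000.0
t0 = 2.0 * z / v
scatter = np.linspace(-200.0, 200.0, 540)  # assumed crossline scatter (m)

# extra two-way time from a midpoint sitting off the nominal bin centre
dt = 2.0 * np.hypot(z, scatter) / v - t0

t_axis = np.arange(-0.1, 0.1, 0.0005)
stack = sum(ricker(t_axis - d) for d in dt)
peak = np.max(np.abs(stack))
print(f"peak {peak:.1f} of 540; max time shift {dt.max() * 1e3:.3f} ms")
```

A 200 m crossline shift over a 4 km deep reflector perturbs the two-way time by only a couple of milliseconds, far less than the 50 ms period of a 20 Hz wavelet, so the in-phase stack is nearly preserved.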


Figure 7: Poststack waveform of the effective signal at the
inflection

Results
Figure 8 shows a single-shot record (with AGC) from one line and its prestack time migration profile. The convex portion in the middle is a high mountain with an elevation difference of over 1,000 m; loess lies on the right side of the mountain. The green arrow points to an inflection of the wide line, where the bending angle is approximately 7°. The target layers, as deep as 5-6 s, and the faults and geological structures just below the inflection point are clearly visible in the figure.


Conclusions
Based on the above analysis, the wide-line geometry used in this area of Central Asia has a significant effect in suppressing side interferences, and the S/N ratio is thereby clearly improved. The narrow azimuth of the wide-line geometry causes the side interferences to cancel each other after NMO correction, while the effective signals from the target layer directly below the seismic line stack in phase with hardly any loss; thus the side interferences are suppressed and the S/N ratio is greatly enhanced. The wide-line geometry makes it possible to acquire data of higher S/N ratio without adding much cost. The crooked-line scheme avoids the extreme terrain during the operation, improves the efficiency of the field work, and preserves the effective signals to a great extent. Consequently, the crooked wide-line geometry is a relatively cost-effective acquisition scheme in this complex area of Central Asia.

Figure 8: The single-shot record from the complex terrain in Central Asia and its PSTM profile (annotations mark the inflection, the mountain and the loess).


EDITED REFERENCES
Note: This reference list is a copyedited version of the reference list submitted by the author. Reference lists for the 2016
SEG Technical Program Expanded Abstracts have been copyedited so that references provided with the online
metadata for each paper will achieve a high degree of linking to cited sources that appear on the Web.
REFERENCES

Lv, G., 2013, The relationship between wide line geometry parameters and S/N ratio: Geophysical Prospecting for Petroleum, 52, 495–501.
Shi, J., G. He, and Z. Zhang, 2008, Complex surface seismic acquisition technique in Pakistan: Oil Geophysical Prospecting, 43, 11–14.
Zhang, C., D. Qiao, S. Li, M. Zhang, G. Gan, Y. Xu, and Y. Tang, 2012, The wide line seismic exploration technique of complex mountain in west Qaidam Basin: Oil Geophysical Prospecting, 7, 189–193.


The imaging resolution analysis for complex models applied in seismic survey design
Baoqing He1,2*, Donglei Tang1, Yongqing He1, Xiaobi Xie2, and Bo Long1 (1BGP, CNPC; 2Modeling and Imaging Laboratory, Earth & Planetary Science, University of California, Santa Cruz, CA)
Summary
In this paper, based on a complex seismic-geological model, time-space domain acoustic-equation finite-difference forward modeling with given acquisition geometries and wavelets is used to obtain the point spread function (PSF) at different analysis positions; from the PSFs we further calculate the illumination spectrum and the envelope of a point scatterer. To provide a useful tool for seismic survey design and optimization, in the numerical experiments we keep the maximum offset unchanged and vary the source or receiver interval to analyze the influence of different geometry parameters on the PSDM resolution.
Introduction
Nowadays, more and more seismic survey design technologies are used before acquisition to balance the cost and the quality of the seismic data: we all hope to obtain good enough seismic data at minimum cost. Survey designs based on wave-equation modeling and on wave-equation illumination analysis have been used in nearly every large survey design project. Seismic imaging resolution has been a principal goal of the seismologist, but the analysis methods are mostly based on ray-tracing theory (Gelius et al., 2002; Lecomte, 2008). Such high-frequency asymptotic ray-tracing theory cannot handle models with complex structures, but it has a unique strength: it traces the ray path from the source through the subsurface interface to the receiver, so it can easily obtain the local opening angle between the incident and reflected waves at the reflection position. This opening angle is necessary for both the imaging resolution and the PSDM amplitude of the target; in other words, the angle-domain illumination is closely related to the imaging resolution. Xie et al. (2005, 2006) and Wu et al. (2006) extended the above methods to the one-way wave equation, which requires computing a large number of frequencies because the wave-equation operators are applied in the F-K domain; it is therefore very difficult to meet the requirements of a broadband seismic wavelet. In addition, the angle limitation of the one-way propagator is a flaw. Although the local incident angles at different scattering points can be calculated, the amount of computation is enormous, especially when a broadband wavelet is used as the input source.
There are several ways to obtain seismic illumination based on the full-wave equation (Xie and Yang, 2008; Yang et al., 2008; Cao and Wu, 2009; Yan et al., 2014), but the computational cost is prohibitive. Xie et al. (2005, 2006), Wu et al. (2006),


and Mao and Wu (2011) related the PSFs, prestack depth migration and PSDM resolution based on the one-way propagator, thereby connecting seismic wave-equation illumination, seismic migration imaging and imaging-resolution analysis. Cao (2013) and Chen and Xie (2015) proposed a new method that directly calculates angle-based seismic illumination through PSDM imaging.
In this paper, we develop this method further and apply it to seismic survey design. We use different geometries, seismic wavelets of different frequencies and different maximum offsets to compute the PSFs at different positions in a complex model, compute the envelopes of the PSFs, which indicate the PSDM resolution, and compare the results to provide a reference for the choice and optimization of geometry parameters.
Methodology
For PSDM, many factors affect the seismic imaging resolution; even an accurate velocity model cannot guarantee a good PSDM image, because effects such as the limited acquisition aperture, insufficient acquisition sampling, band-limited seismic wavelets and the wave propagation itself in complex models also affect the resolution.
Generally speaking, seismic imaging can be written as

I = R * m,   (1)

where I is the seismic PSDM image, * denotes convolution, R is the resolution (point-spread) operator and m is the velocity model. When R is a pulse (delta) function, the best seismic resolution is achieved. However, the limited geometry aperture, the varying source-receiver distances and the different source wavelets do not allow R to be a pulse: it spreads out at the imaging point, giving what we call the point spread function (PSF). The PSF carries the full information, including information about the model, such as the complex model structure; the wave-propagation information, such as multiple reflections, wave-type conversions and the noise generated by random media; and the geometry parameters, such as the receiver aperture of each shot, the offset variations and the bandwidth of the source wavelets.
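Equation (1) can be illustrated with a toy convolution: pasting a band-limited PSF at each spike of a sparse model is exactly I = R * m for a spike model. The Gaussian-windowed cosine PSF below is an assumed stand-in for a modeled PSF:

```python
import numpy as np

# Toy illustration of I = R * m: convolving a sparse reflectivity model
# with a PSF spreads each point scatterer over the PSF's footprint.

nz, nx = 128, 128
scatterers = [(40, 40), (90, 80)]   # (z, x) point scatterers

# assumed PSF: vertical oscillation under a 2D Gaussian envelope
zz, xx = np.mgrid[-16:17, -16:17]
psf = np.cos(0.8 * zz) * np.exp(-(zz**2 + xx**2) / 40.0)

# convolution with a sparse spike model = pasting a PSF at each spike
image = np.zeros((nz, nx))
for z0, x0 in scatterers:
    image[z0 - 16 : z0 + 17, x0 - 16 : x0 + 17] += psf

# a perfect pulse-like R would give 2 nonzero samples; the PSF spreads them
spread = np.count_nonzero(np.abs(image) > 0.1 * np.abs(image).max())
print("samples above 10% of peak:", spread)
```

The two single-sample spikes blur into PSF-sized patches, which is why the size and sharpness of the PSF directly bound the achievable PSDM resolution.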
After a 2D Fourier transform, the 2D wavenumber spectrum is generated from the PSF at the position of interest in the complex model. The physical meaning of the 2D wavenumber spectrum is that the illumination vectors of all the source-receiver pairs sum up at the position of interest; based on ray-tracing theory, for a source point S and a receiver point R this can be expressed within a local range (see Figure 1) as

I_SR = p_S - p_R,   (2)

where p_S and p_R are the slowness vectors of the incident and scattered wavefields, respectively, within the local subsurface range, which can be obtained by solving the eikonal equation, and I_SR is the illumination vector. The corresponding wavenumber vector is k_SR = ωI_SR = ω(p_S - p_R) = k_S - k_R, which shows that k_SR is closely tied to frequency.
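For a homogeneous medium, equation (2) and the associated wavenumber vector can be evaluated directly. The geometry and the 3,000 m/s velocity below are illustrative assumptions, with the slowness directions taken along the straight rays:

```python
import numpy as np

# Equation (2) in a homogeneous medium: slowness vectors of the
# incident and scattered fields at an image point, and the resulting
# wavenumber vector k_SR = omega * (p_S - p_R).

v = 3000.0                       # assumed homogeneous velocity (m/s)
point = np.array([0.0, 2000.0])  # image point (x, z)
src = np.array([-1000.0, 0.0])
rec = np.array([1500.0, 0.0])

def unit(a):
    return a / np.linalg.norm(a)

p_s = unit(point - src) / v      # slowness of the incident wave at the point
p_r = unit(rec - point) / v      # slowness of the scattered wave leaving it
i_sr = p_s - p_r                 # illumination vector, equation (2)

f = 20.0                         # frequency (Hz)
k_sr = 2.0 * np.pi * f * i_sr    # wavenumber vector at this frequency
print("|k_SR| =", round(float(np.linalg.norm(k_sr)), 4), "rad/m")
```

In the zero-offset (backscatter) limit p_R = -p_S, so |I_SR| reaches its maximum of 2/v, the familiar migration wavenumber bound; wider source-receiver opening angles shrink |k_SR| and coarsen the resolution.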

The wavenumber vectors at the same subsurface point are superposed over all source-receiver pairs to obtain a wavenumber spectrum (see Figure 2); the wavenumber spectrum obtained from the PSF through the Fourier transform is exactly the superposition of the k_SR vectors of all the source-receiver pairs of the entire geometry, and the extent of the spectrum indicates the resolution:

R_x = 2π/Δk_x,   R_z = 2π/Δk_z,   (3)

where R_x and R_z are the horizontal and vertical sizes of the imaging resolution at the point of interest, and Δk_x and Δk_z are the horizontal and vertical extents of the wavenumber spectrum. In other words, the extent of the PSF's spectrum sets the size of the resolution cell, and its amplitude reflects the intensity of the illumination, which is closely related to the migration imaging resolution.

The size and the sharpness of the PSF directly indicate the imaging resolution of the target for given geometry parameters. To quantify them further, we simplify the analysis to an envelope computation: for a local point, the envelope is obtained through the Hilbert transform. The size of the envelope indicates the size of the resolution cell, and the amplitude of the envelope indicates the sharpness of the imaging point; the larger the envelope amplitude, the sharper the image. With this quantification, we can compare the effects of different geometry parameters on the imaging resolution.

Figure 1: The local illumination vector.

Figure 2: Wavenumber spectrum of different source-receiver pairs after superposition.

Numerical example
Here we use a 2D model with a very complex structure (Figure 3). The model is 10 km long and 5 km deep; the lateral and vertical grid sizes are both 5 m; and the velocity ranges from 2,000 m/s to 5,000 m/s.
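Equation (3) can be applied to a synthetic PSF: Fourier-transform it, measure the spectral extent above a threshold, and convert the extent to a resolution estimate. The Gaussian-windowed PSF and the half-maximum extent threshold are assumptions for illustration:

```python
import numpy as np

# Resolution from the extent of a PSF's 2D wavenumber spectrum,
# following equation (3). A vertically oscillating, Gaussian-windowed
# patch stands in for a modeled PSF.

dx = 5.0                                # grid spacing (m), as in the model
n = 256
zz, xx = np.mgrid[-n // 2:n // 2, -n // 2:n // 2] * dx
psf = np.cos(2 * np.pi * zz / 100.0) * np.exp(-(zz**2 + xx**2) / (2 * 60.0**2))

spec = np.abs(np.fft.fftshift(np.fft.fft2(psf)))
k = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(n, d=dx))  # rad/m axis

# spectral extent: span of wavenumbers above an assumed half-max threshold
mask = spec > 0.5 * spec.max()
dk_z = np.ptp(k[np.any(mask, axis=1)])  # vertical wavenumber span
dk_x = np.ptp(k[np.any(mask, axis=0)])  # horizontal wavenumber span

print(f"R_z ~ {2 * np.pi / dk_z:.0f} m, R_x ~ {2 * np.pi / dk_x:.0f} m")
```

Because this PSF oscillates vertically, its spectrum is much wider along k_z than along k_x, giving a finer vertical than horizontal resolution, the typical situation for narrow-aperture surface acquisition.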


Figure 3: The velocity model (length × depth = 10 km × 5 km).

To avoid interaction between the target points, the distance between the perturbation points is enlarged to 500 m; i.e., we embed a 10% velocity perturbation every 500 m in both the lateral and vertical directions of the model, compute the PSFs and their wavenumber spectra, and compare the effects of different geometries and wavelets on the imaging resolution.
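The perturbation grid described above can be sketched in a few lines; the uniform 3,000 m/s background below is a placeholder for the actual complex model:

```python
import numpy as np

# Embed a +10% velocity perturbation every 500 m (= 100 grid points at
# 5 m spacing) in both directions, as described in the text.

dx = 5.0
nz, nx = 1000, 2000             # 5 km x 10 km at 5 m spacing
v = np.full((nz, nx), 3000.0)   # placeholder background model
step = int(500.0 / dx)
v[step::step, step::step] *= 1.10   # +10% spikes on the 500 m lattice

print("perturbation points:", np.count_nonzero(v > 3000.0))
```

Each lattice point then produces one isolated PSF in the migrated image, far enough (500 m) from its neighbors that the spread functions do not overlap.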
First we lay out 300 shot points: the first source is located at 2 km and the last at 8 km from the start of the model; the source interval is 20 m; each shot contains 801 receivers with a receiver interval of 5 m; and the maximum offset for each shot is 2 km. The dominant frequencies of the Ricker wavelets used are 16 Hz, 24 Hz and 32 Hz, respectively. Figure 4 shows the PSFs computed with these three wavelets; the data windowed by the red squares are chosen for resolution comparison (Figure 5). We can see


that the size of the PSF for the same geometry changes with the wavelet frequency: the higher the wavelet frequency, the smaller the PSF.


Figures 6 and 7 show the wavenumber spectra obtained from the PSFs in Figures 4 and 5, respectively, by Fourier transform. As the dominant frequency of the wavelet increases, the range of the wavenumber spectrum grows: the extent of the spectrum in Figure 7 (right) is twice that in Figure 7 (left), mirroring the ratio of the dominant frequencies used, while the size of the imaging point is inversely proportional to the extent of the wavenumber spectrum. This indicates that the higher the dominant frequency of the wavelet, the better the PSDM resolution, which matches common sense: a wavelet of high dominant frequency is more beneficial to PSDM imaging resolution.
The imaging quality is not controlled by the dominant frequency of the wavelet alone; it is also affected by many geometry parameters, such as the maximum offset and the source and receiver intervals. We use the above method to compare the influence of different source or receiver intervals on the imaging resolution.
Similarly, we again lay out 300 shot points: the first source at 2 km and the last at 8 km from the start of the model, a source interval of 20 m, and a maximum offset of 2 km per shot; the receiver interval is 5 m, 10 m, 20 m, 40 m or 50 m, and the dominant frequency of the Ricker wavelet is 32 Hz. Figure 8 shows the zoomed-in PSFs at the same position as in Figure 5. When receiver intervals of 5 m, 10 m or 20 m are adopted, the extent of the PSF is almost unchanged; in other words, the variation of the receiver interval does not affect the size of the imaging point. As the receiver interval keeps increasing, the noise in the PSDM image grows gradually, but careful inspection shows that the size of the PSF is still unchanged; the noise merely blurs the image of the PSFs, lowering the imaging S/N ratio. Hence, we reach an initial conclusion: when the other factors are unchanged, the change of receiver interval does not affect the size of the imaging resolution, only its sharpness at the imaging point.
Then we fix the receiver layout of each shot, keep the maximum offset at 2 km, and vary the source interval over 20 m, 40 m, 50 m, 100 m and 200 m; the dominant frequency of the Ricker wavelet is again 32 Hz. The analyzed position in Figure 9 is identical to that in Figure 8, and the behavior mirrors that of the receiver-interval test: the change of source interval hardly affects the size of the image, only the sharpness of the PSDM image. After that, we compute the envelopes of the PSFs in Figures 8 and 9 using the Hilbert transform.


Figure 10: The envelopes of the PSFs computed with the geometry parameters of Figures 8 and 9.

Figure 10 shows the envelopes of the PSFs for the different geometry parameters. As the source or receiver interval increases, the envelope amplitude decreases gradually, indicating that the sharpness of the image also decreases gradually.
Conclusions
In this paper, a time-space domain acoustic finite-difference forward modeling method is used on an arbitrarily complex seismic-geological model to compute the PSFs of different geometry parameters at different positions of interest, together with their wavenumber spectra and envelopes, and thereby to compare the PSDM imaging resolutions of different parameters. The method is implemented here on a 2D complex model and can easily be extended to 3D.
It is observed that the seismic wavelet is the most important element affecting the seismic PSDM resolution, while the S/N ratio of the seismic data and the geometry parameters are other very important elements that affect the sharpness of the resolution. Using this method, we can change the input wavelet, the source or receiver interval and the maximum offset, and thereby compare the influences of different geometry parameters on the imaging resolution, providing a tool for seismic survey design and optimization. In survey design, the dominant frequency of the wavelet must first be chosen to meet the required resolution size; then reasonable geometry parameters must be used to ensure the S/N ratio of the data, according to the sharpness of the imaging resolution.


Figure 4: PSF of different Ricker wavelets. Left: 16 Hz; middle: 24 Hz; and right: 32 Hz.

Figure 5: Zoomed-in views of the PSFs in Figure 4. Left: 16 Hz; middle: 24 Hz; and right: 32 Hz.

Figure 6: The wavenumber spectra obtained from the PSFs in Figure 4 by Fourier transform. Left: 16 Hz; middle: 24 Hz; and right: 32 Hz.

Figure 7: Zoomed-in views of the wavenumber spectra in Figure 6. Left: 16 Hz; middle: 24 Hz; and right: 32 Hz.

Figure 8: The zoomed in PSFs of different receiver intervals: (a) 5m; (b) 10m; (c) 20m; (d) 40m; and (e) 50m.

Figure 9: The zoomed in PSFs of different source intervals: (a) 20m; (b) 40m; (c) 50m; (d) 100m; and (e) 200m.


Development and application of single-point land piezoelectric sensor


Shang Xinmin, Liu Liping* and Guan Jian, Geophysical Research Institute of Shengli Oilfield, Sinopec; Zhu Jun,
CATY photoelectric technique Co.LTD; Zhang Guangde, Shengli branch of geophysical corporation, Sinopec
Summary
For a long time, 20DX geophone arrays have been used for seismic acquisition on land; the decrease in peak frequency caused by the array response, usually 5~10 Hz, can no longer meet the demands of high-precision prospecting. The single-point land piezoelectric sensor (LPS for short) that we developed has the advantages of a wide frequency band, large dynamic range, and high sensitivity. A high-density geometry based on these single-point detectors was designed for the DFG survey, with a shot-receiver density of more than one million traces, and the subsequent field acquisition achieved good results. This is the first single-point-receiver 3D seismic acquisition in China using a domestically developed receiver.
Introduction
In land seismic exploration, geophone arrays (usually 36 20DX moving-coil geophones or 18 updated 20DX geophones) are used as one channel in acquisition, while single-point piezoelectric sensors are used for marine acquisition. Compared with the conventional moving-coil geophone, the updated 20DX geophone improves nothing but the manufacturing precision. The geophone array is designed to attenuate noise, but the array response to some extent lowers the frequency content and blurs the subsequent seismic reflections [1]. Model and field data have shown that the array length, surface elevation, and near-surface velocity variations can lower the recorded frequencies by 5~10 Hz; the geophone array is therefore not suitable for current high-resolution exploration. Massive single-point receiving is the trend for high-resolution seismic prospecting.
Principle of the land piezoelectric sensor
The MEMS geophone is a digital three-component single-point detector with advantages in sensitivity and dynamic range, but its high price and poor compatibility have prevented wide deployment outside multi-wave and pilot acquisition. Since 10,000 MEMS geophones were introduced in the Shengli oilfield, only two surveys have been completed with them, far from large-scale production.
The moving-coil geophone is a velocity seismometer, while the MEMS geophone and the piezoelectric sensor are acceleration seismometers. As frequency increases, an acceleration seismometer has a clear amplitude advantage over a velocity seismometer, which benefits high-resolution acquisition. This is the fundamental reason for the high-frequency character of piezoelectric sensor data.
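This amplitude advantage follows directly from differentiation: for harmonic ground motion, each time derivative multiplies the amplitude by 2*pi*f. The sketch below is purely illustrative (the displacement amplitude is an arbitrary value, not a sensor specification).

```python
import numpy as np

A = 1e-9  # ground-displacement amplitude in metres (arbitrary illustrative value)
for f in (10.0, 50.0, 100.0, 200.0):
    # For u(t) = A*sin(2*pi*f*t): velocity amplitude 2*pi*f*A,
    # acceleration amplitude (2*pi*f)**2 * A.
    vel_amp = 2.0 * np.pi * f * A
    acc_amp = (2.0 * np.pi * f) ** 2 * A
    # The ratio grows linearly with frequency, which is why acceleration
    # sensors (MEMS, piezoelectric) favour the high end of the band.
    print(f"{f:5.0f} Hz: acceleration/velocity amplitude ratio = {acc_amp / vel_amp:7.1f} 1/s")
```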
A piezoelectric sensor is a device that uses the piezoelectric effect to measure changes in pressure, acceleration, strain, or force by converting them into an electrical charge. A dielectric material (dielectric for short) is an electrical insulator that can be polarized by an applied electric field. Because of dielectric polarization, positive charges are displaced toward the field and negative charges shift in the opposite direction, generating an electromotive force. This is how a piezoelectric sensor works [2].
According to this principle, we developed the LDKJ-1A type LPS, which contains two piezoelectric movements and a high-precision matching circuit. A weight block is fixed on each piezoelectric ceramic so that Newton's second law applies when motion occurs: when there is a vibration, the block exerts an inertial pressure, and a voltage appears across the piezoelectric crystal. The sensor contains no moving parts, so aliasing and harmonic distortion are avoided [3-4]. The dynamic range of this LPS reaches 100 dB, a breakthrough for seismic receiving that guarantees high resolution in the seismic data.
Compared with the conventional 20DX geophone, the LPS has the advantages of a wide frequency band, large dynamic range, high sensitivity, and low distortion. The main technical indicators are shown in Table 1.
Table 1: Technical parameter comparison among detector types

Type of detector                   | MEMS  | LDKJ-1A | 20DX
Operating frequency bandwidth (Hz) | 0~800 | 5~400   |
Dynamic range (dB)                 | 120   | 100     | 60
Sensitivity                        |       | 7 V/g   | 0.28 V/(m/s)
Alias (Hz)                         |       | 400     | 180
Distortion (%)                     | 0.002 | 0.18    | 0.2


Field example of the land piezoelectric sensor
Since 2013, 2D and 3D comparative experiments between the LPS and conventional geophones have been carried out in the Shengli Oilfield, with satisfactory test results.
In 2013, a 2D acquisition comparison among the LPS, MEMS, and 20DX geophones was conducted in the PH survey, Shengli Oilfield. Three parallel cables were laid out with a 12.5 m trace interval and a 50 m shot interval, and a total of 180 shots were acquired for comparison.
From the record comparison, the single-point LPS and the MEMS geophone share similar features; both show higher-frequency performance than the shots from the 20DX geophone arrays (Figure 1, Table 2). In the stack-section comparison in Figure 2, the section from the single-point LPS shows clear reflections, good event consistency, and higher vertical resolution.

Figure 1: The record comparison among different geophones (MEMS, LPS, and the 36 20DX array, from left to right).

Figure 2: The stack-section comparison among different geophones (MEMS, LPS, and the 36 20DX array, from left to right).


Table 2: Frequency comparison of data among different geophones

Type of detector                      | MEMS     | LDKJ-1A  | 20DX
Time window                           | 1100~1900 ms (all)
Dominant frequency bandwidth (-18 dB) | 7~93 Hz  | 7~84 Hz  | 6~56 Hz
Effective frequency bandwidth (-24 dB)| 5~121 Hz | 5~103 Hz | 4~63 Hz

Compared with the previous migration of data acquired in 1996 (Figure 5), the migration of the LPS data shows better event consistency, clearer reflections, and improved imaging of the deep section. In addition, the roughly 15 Hz increase in frequency content gives higher vertical resolution and clearer fault blocks; the overall imaging accuracy is greatly improved.
In 2016, a single-point LPS production survey was carried out in the DFG survey, Shengli Oilfield; 22,610 shots were acquired over 124 km2. This is the first seismic acquisition in China with a domestically developed single-point receiver. A corresponding geometry was designed for this sensor (parameters in Table 3); the shot-trace density reaches one million traces, triple that of conventional high-precision acquisition.
Table 3: Comparison of geometry parameters between the previous and new acquisition

3D survey                 | CG53-1996     | DFG-2016
Geometry                  | 8L5S240T      | 32L7S280T
Channels                  | 240           | 32 × 280 = 8960
Bin                       | 25 m × 50 m   | 12.5 m × 12.5 m
Fold                      | 10 × 2 = 20   | 20 × 8 = 160
Trace interval            | 50 m          | 25 m
Cable interval            | 100 m         | 175 m
Shot interval             | 150 m         | 50 m
Perpendicular offset      | 400 m         | 175 m
Vertical/horizontal ratio | 0.15          | 0.78
Shot-receiver density     | 16,000        | 1,024,000
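The last row of Table 3 follows from the fold and the bin size: the trace density per square kilometre is the fold divided by the bin area. A quick check (the formula is the standard one, not taken from the abstract):

```python
def trace_density_per_km2(fold, bin_x_m, bin_y_m):
    """Traces per square kilometre = fold / bin area, bin dimensions in metres."""
    return fold * 1.0e6 / (bin_x_m * bin_y_m)

print(trace_density_per_km2(20, 25.0, 50.0))    # CG53-1996 -> 16000.0
print(trace_density_per_km2(160, 12.5, 12.5))   # DFG-2016  -> 1024000.0
```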
In this survey, a comparison between the single-point LPS and the updated 20DX geophone array was carried out before the field production. Though lower in S/N, the single-point LPS record shows stronger reflected amplitudes and a wider frequency band than the conventional geophone array, as shown in Figures 3 and 4.


Figure 3: The record comparison between different geophones in the DFG survey (left: single-point LPS shot; right: 18 updated 20DX geophone array).

Figure 4: The frequency-spectrum comparison of the records from different geophones.


Figure 5: The migration comparison between the previous conventional 20DX geophone data and the single-point LPS data (left: section of 20DX geophone data reprocessed in 2010; right: section of single-point LPS data acquired and processed in 2016).

Conclusions
Compared with the conventional geophone array, the single-point LPS achieves better event consistency, higher resolution, and substantially more accurate imaging, while its S/N deficiency can be compensated by a high-density geometry. The field production proves the single-point LPS technique to be an effective approach to high-resolution prospecting on land.


EDITED REFERENCES
Note: This reference list is a copyedited version of the reference list submitted by the author. Reference lists for the 2016
SEG Technical Program Expanded Abstracts have been copyedited so that references provided with the online
metadata for each paper will achieve a high degree of linking to cited sources that appear on the Web.
REFERENCES

Lv, G. H., 2009, Analysis on principles and performance of seismic geophone and relevant issues: Geophysical Prospecting for Petroleum, 48, 531–543.
Yuan, X., 1986, Sensor technology handbook: National Defense Industry Press.
Harris, C. M., and A. G. Piersol, 2008, Shock and vibration handbook: China Petrochemical Press.
Zhao, D., W. Han, and D. Wei, 2012, Practice and exploration of high precision prospecting: Geological Publishing House.


The direct arrival in blended data


Zhenbo Zhang, Shenzhen Branch, CNOOC Ltd.; Qiang Liu*, Jilin University; Yihua Xuan, Shenzhen Branch, CNOOC Ltd.; Hongyu Sun, Yong Hu, and Liguo Han, Jilin University
Summary
During the deblending of simulated and real simultaneous-source datasets, we found that the direct arrival strongly affects the signal-to-blending-noise ratio (S/N). In this paper, the influence of the direct arrival is analyzed, and a new method to attenuate the direct arrival in the deblending process is proposed. Examples on synthetic datasets show that the S/N of the deblended shot gathers can be doubled.
Introduction
To remove the constraint that adjacent sources must not interfere during seismic exploration, simultaneous-source technology (also known as blended acquisition) has become a popular topic in both industry and academia in recent years, owing to its outstanding ability to improve acquisition efficiency and imaging quality. On the theoretical side, a large body of references is now available (Berkhout, 2008, 2012; Beasley, 2008; Berkhout et al., 2009, 2012; Mahdad, 2011; Ibrahim and Sacchi, 2014). In the field, numerous experiments have been performed by several companies both offshore and onshore. Onshore in particular, BP has built commercial acquisition systems such as ISS and ISSN (Abma et al., 2015). Simultaneous shooting has thus attracted increasing attention from researchers and oil companies.
In the last few years, blended acquisition at sea has also developed vigorously. Hampson et al. (2008) presented applications in the Green Canyon area and a 3D survey on the Petronius field, both located in the Gulf of Mexico. Moore et al. (2012) showed a case history from Australia of the first field-development-scale use of simultaneous-source technology in the world, concluding preliminarily that the SimSource acquisition technique was successful. Long et al. (2013) presented an innovative 3D towed-streamer project offshore Gabon that used a dual-vessel continuous-long-offset streamer configuration and yielded encouraging sub-salt and pre-salt imaging. Poole et al. (2014) introduced blended dual-source acquisition combined with broadband variable-depth streamer acquisition offshore Indonesia; their results simultaneously validate the effectiveness of the deblending methods and their compatibility with the broadband processing sequence.

The S/N of the deblending result is degraded by the direct arrival. In this paper we propose a new method to attenuate the direct arrival in the deblending process, which greatly improves the S/N of the deblended shot gathers.
The influence of the direct arrival in the deblending process
As is well known, the blended shot gather is seriously contaminated by the direct arrival (see Figure 1). Figures 1a and 1b show the influence of the direct arrival (indicated by the green arrow) on a synthetic dataset and a real dataset, respectively. Moreover, because the amplitude and energy of the direct arrival are stronger than those of any other wave, its interference is aggravated further.

Figure 1: The direct arrival (indicated by the green arrow) in blended data. (a) Synthetic dataset; (b) real dataset.

The direct arrival strongly affects the signal-to-blending-noise ratio (S/N) and the convergence rate (see Figure 2). Figure 2 contrasts the residual for data containing the direct arrival with that for data without it, both processed through the same number of iterations.
At the same number of iterations, the residual energy of the data containing the direct arrival (Figure 2a) is larger than that of the data without it (Figure 2b). The convergence rate of the data containing the direct arrival is therefore lower.

Indeed, during the deblending of both simulated and real simultaneous-source datasets, we found that the signal-to-blending-noise ratio is consistently degraded by the direct arrival.


Figure 2: The residual for the data containing the direct arrival (a) and for the data without it (b), after the same number of iterations.

Method
Starting from the conventional separation methods for simultaneous-source data, in which separation is treated as a de-noising problem, we propose a new separation method. The processing flow of the conventional method is illustrated in Figure 3, and that of our method in Figure 4.
In our method, a model of the direct arrival is built from the near-surface velocity (onshore exploration) or the sea-water velocity (offshore exploration) and then adaptively subtracted from the simultaneous-source dataset. In Figure 4, the two blue boxes mark the additional processing steps used to attenuate the direct arrival.
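The iterative core shared by Figures 3 and 4 (pseudo-deblending, coherent filtering, subtraction, re-blending) can be sketched on a toy common-channel gather. Everything below is a minimal illustration, not the authors' code: two dithered sources, a plain median over traces standing in for the coherent filter, and the direct-arrival modeling step omitted.

```python
import numpy as np

rng = np.random.default_rng(1)
nt, ntr = 200, 64
t1, t2 = 60, 120                       # arrival times (samples) of the two sources
delays = rng.integers(0, 80, ntr)      # random firing-time dithers per record

# Common-channel gather of blended records: source-1 energy is aligned
# across traces, source-2 energy arrives with a random dither on each trace.
s1 = np.zeros(nt); s1[t1] = 1.0
s2 = np.zeros(nt); s2[t2] = 0.8
blended = np.array([s1 + np.roll(s2, d) for d in delays])

est1 = np.zeros_like(blended)
est2 = np.zeros_like(blended)
for _ in range(5):
    # Pseudo-deblended update for source 1, then coherent filtering: the
    # median over traces keeps aligned energy and rejects dithered energy.
    upd1 = est1 + (blended - est1 - est2)
    est1 = np.tile(np.median(upd1, axis=0), (ntr, 1))
    # Align the residual to source 2's firing times, filter, shift back.
    upd2 = est2 + (blended - est1 - est2)
    aligned = np.array([np.roll(upd2[i], -d) for i, d in enumerate(delays)])
    est2 = np.array([np.roll(np.median(aligned, axis=0), d) for d in delays])
```

Here est1 recovers the aligned source-1 event and est2 the dithered source-2 events; in practice the coherent filter would be an f-k, Radon, or thresholding operation rather than a plain median.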
Figure 4: The processing flow of our method. The model of the direct arrival is built and subtracted from the simultaneous-source data; each iteration then performs pseudo-deblending, coherent filtering, subtraction of the first pseudo-deblending result, and addition of the last separation result, followed by the next blending, until the separation results are obtained.

Example
To evaluate the effectiveness of our method, two tests on synthetically blended datasets were performed. The SDR (shot density ratio) was 2 in both datasets, and the models are presented in Figure 5. There are 100 receivers with a 10 m receiver separation.


Figure 5: The velocity models used for the simulation of simultaneous-source acquisition. (a) The simple model; (b) the Hess model.


After the simulated acquisition, we obtained 50 blended shot gathers. The blended records were then separated by the conventional method and by our method, respectively.

Figure 3: The processing flow of the conventional method. Starting from the simultaneous-source data, each iteration performs pseudo-deblending, coherent filtering, subtraction of the first pseudo-deblending result, and addition of the last separation result, followed by the next blending, until the separation results are obtained.


The signal-to-blending-noise ratios (S/N) are shown in Table 1. The S/N was calculated with the following equation (Mahdad et al., 2011):

    S/N = 20 log10 [ (Q)_rms / (Q_N - Q)_rms ],    (1)

where the subscript rms stands for root mean square, Q is the unblended data, and Q_N is the separation result, so that Q_N - Q is the blending noise.
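Equation 1 translates directly into code. The sketch below uses our own notation, not the authors' implementation, and checks it on random data: residual noise at one tenth of the signal's rms level should give about 20 dB.

```python
import numpy as np

def blending_snr_db(q, qn):
    """Equation 1: 20*log10(rms(Q) / rms(Q_N - Q)),
    where Q is the unblended data and Q_N the separation result."""
    rms = lambda x: np.sqrt(np.mean(np.square(x)))
    return 20.0 * np.log10(rms(q) / rms(qn - q))

rng = np.random.default_rng(0)
q = rng.standard_normal(10_000)             # stand-in for the unblended data
qn = q + 0.1 * rng.standard_normal(10_000)  # separation result with 10% residual noise
print(f"{blending_snr_db(q, qn):.1f} dB")   # close to 20 dB
```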

Table 1: The signal-to-blending-noise ratio (dB)

Method                        | Simple model | Hess model
Conventional filtering method | 16.0         | 5.3
Our method                    | 39.5         | 10.5

From Table 1 we can conclude that the signal-to-blending-noise ratio is almost doubled by our method; the synthetic examples prove that the S/N of the deblended shot gathers can be doubled.

Figure 6 compares the deblending results of the conventional iterative de-noising method and of our method; only the results for the Hess model are presented. Panel A shows the blended data. Panel B shows the data deblended by the conventional iterative filtering method. Panel C shows the results of our method. Panel D shows the unblended shots. Panels E show the residuals between the data in B and D, and panels F the residuals between the data in C and D. The comparison of the deblending results and the residual maps (note especially the green ovals) makes it clear that our method gives much cleaner separation results, with less leakage of desired energy, than the conventional method.

Conclusions
In this paper, we have discussed the influence of the direct arrival on the separation of simultaneous-source data. The direct arrival was found to strongly affect the signal-to-blending-noise ratio (S/N) and the convergence rate: the S/N is decreased by the direct arrival, and data containing the direct arrival converge more slowly than data without it.
A new separation method was proposed. In this method, a model of the direct arrival is built from the near-surface velocity (onshore exploration) or the sea-water velocity (offshore exploration) and then adaptively subtracted from the simultaneous-source dataset.

Acknowledgments
The authors would like to thank CNOOC Ltd. for permission to publish this work. The paper is sponsored by the CNOOC project YXKY-2013-SZ-02.


Figure 6: The comparison of deblending results and residual maps (note especially the green ovals) between the conventional iterative de-noising method and our method. Panel A shows the blended data; panel B the data deblended by the conventional iterative filtering method; panel C the deblended results of our method; panel D the unblended shots; panels E the residuals between the data in B and D; and panels F the residuals between the data in C and D.


EDITED REFERENCES
Note: This reference list is a copyedited version of the reference list submitted by the author. Reference lists for the 2016
SEG Technical Program Expanded Abstracts have been copyedited so that references provided with the online
metadata for each paper will achieve a high degree of linking to cited sources that appear on the Web.
REFERENCES

Abma, R., D. Howe, M. Foster, I. Ahmed, M. Tanis, Q. Zhang, A. Arogunmati, and G. Alexander, 2015, Independent simultaneous source acquisition and processing: Geophysics, 80, no. 6, WD37–WD44, http://dx.doi.org/10.1190/geo2015-0078.1.
Beasley, C. J., 2008, A new look at marine simultaneous sources: The Leading Edge, 27, 914–917, http://dx.doi.org/10.1190/1.2954033.
Berkhout, A. J., 2008, Changing the mindset in seismic data acquisition: The Leading Edge, 27, 924–938, http://dx.doi.org/10.1190/1.2954035.
Berkhout, A. J., 2012, Blended acquisition with dispersed source arrays: Geophysics, 77, no. 4, A19–A23, http://dx.doi.org/10.1190/geo2011-0480.1.
Berkhout, A. J., G. Blacquière, and D. J. Verschuur, 2009, The concept of double blending: Combining incoherent shooting with incoherent sensing: Geophysics, 74, no. 4, A59–A62, http://dx.doi.org/10.1190/1.3141895.
Berkhout, A. J., G. Blacquière, and D. J. Verschuur, 2012, Multiscattering illumination in blended acquisition: Geophysics, 77, no. 2, P23–P31, http://dx.doi.org/10.1190/geo2011-0121.1.
Hampson, G., J. Stefani, and F. Herkenhoff, 2008, Acquisition using simultaneous sources: 78th Annual International Meeting, SEG, Expanded Abstracts, 2816–2820.
Ibrahim, A., and M. D. Sacchi, 2014, Simultaneous source separation using a robust Radon transform: Geophysics, 79, no. 1, V1–V11, http://dx.doi.org/10.1190/geo2013-0168.1.
Long, A., E. von Abendorff, M. Purves, and J. Norris, 2013, Simultaneous long offset (SLO) towed streamer seismic acquisition: 75th Annual International Conference and Exhibition, EAGE, Extended Abstracts, Tu 04 02.
Mahdad, A., P. Doulgeris, and G. Blacquière, 2011, Separation of blended data by iterative estimation and subtraction of blending interference noise: Geophysics, 76, no. 3, Q9–Q17, http://dx.doi.org/10.1190/1.3556597.
Moore, I., D. Monk, L. Hansen, and C. J. Beasley, 2012, Simultaneous sources: The inaugural full-field, marine seismic case history from Australia: 74th Annual International Conference and Exhibition, EAGE, Extended Abstracts, paper no. 160.
Poole, G., K. Stevens, M. Maraschini, T. Mensch, and R. Siliqi, 2014, Blended dual-source acquisition and processing of broadband data: 76th Annual International Conference and Exhibition, EAGE, Extended Abstracts, Th ELI2 05.


Integrated workflow for modern 2D programs in frontier areas

Milos Cvetkovic*, Laurie Geiger, Tim Seher, Richard Clarke, Cesar Arias, Oscar Ramirez and Steve Ward,
Spectrum Geo Inc.
Summary
Modern 2D seismic programs in frontier areas need to be effective and flexible in order to provide high-quality data for block definition, petroleum-system identification, and prospect interpretation. This can be achieved with an integrated workflow that starts from basin evaluation and proceeds through modeling and finally data acquisition. The process must be aligned with environmental and economic constraints while still meeting the requirements of data processing and effective seismic imaging. We present a case study from an unexplored basin in the southern Gulf of Mexico, offshore the Yucatan Peninsula.

We use an acoustic finite-difference (FD) algorithm to forward model and generate synthetic shot gathers; for imaging we use RTM with a minimal amount of post-migration processing applied (Cvetkovic et al., 2013).

Introduction
While the Bay of Campeche has been a location for
significant exploration activity since the discovery of the
Cantarell Field in 1976, the basin offshore the Yucatan
Peninsula has yet to see any hydrocarbon exploration. The
presence of a working petroleum system adjacent to the
Yucatan Peninsula would indicate there should be many
opportunities for potential hydrocarbon discoveries north
and northwest of the Yucatan. A practical and effective
seismic survey design for the deepwater Yucatan Basin
must take into account imaging objectives ranging from
basement structures to shallow sediments. These include a
significant Tertiary and Cretaceous sedimentary section as
well as a Jurassic salt layer overlying rift basins which
contain potential source rocks.
Modeling and acquisition
Only a few seismic datasets are available in this area; however, there are enough detailed cross-sections in the public domain to enable us to construct a reasonably detailed subsurface model. Figure 1a shows a Pre-Stack Time Migration (PSTM) stack with a regional interpretation extending from offshore the Yucatan Peninsula into the abyssal plain.
The goal of this modeling exercise is to optimize the proposed 2D seismic survey in order to achieve good imaging of the multiple identified target sections. These include 1) the shallow Tertiary post-salt section, 2) the pre-salt section, 3) the steeply dipping escarpment, and 4) the deep basement and Moho structures. Our objective is to capture all possible target types with a combination of structural and stratigraphic synthetic models.


Figure 1: Map of the proposed acquisition. The modeling line represents any of the lines perpendicular to the shoreline (dip lines).

We start with the very simple conceptual model shown in Figure 2, which illustrates a regional structural model representative of the geologic section extending from the Yucatan carbonate platform out into the abyssal plain of the Gulf of Mexico. We constructed this model from an interpretation (Rodriguez, 2011) of an existing legacy 2D time dataset by interpreting the main regional sequences, converting the section to the depth domain, and assigning appropriate velocities and other physical properties. This conceptual model is used for initial testing of acquisition parameters, which are subsequently modified based on further interpretation of the modeling results.
The construction of accurate geologic models for generating synthetic seismic data with numerical FD algorithms is both interdisciplinary and iterative. For this particular model, we increase and decrease the thickness of the main stratigraphic sequences, change onlap boundaries and the shapes of the autochthonous and allochthonous salt bodies, and adjust the carbonate thickness. Using this model, we test different acquisition parameters, including shot spacing, receiver spacing, minimum and maximum offsets, and record length. To image the key targets represented in the model, we demonstrate that there is a need to acquire at least 15 s


of data with a maximum offset of 12 km. The synthetic seismic data clearly show that record lengths of less than 15 s are insufficient to image the upper crust and mantle, as illustrated in the bottom-left display of Figure 4.

Figure 2: Building simple structural models for numerical modeling. Top: the legacy pre-stack time migration section with the interpretation used to build the synthetic depth interval-velocity model (bottom) of the regional Yucatan Basin setting.

Using this model, we refine the shot and receiver intervals, adding different types of coherent and random noise as we process the synthetic data through a conventional Pre-Stack Depth Migration (PSDM) workflow. After analyzing results from both models, we propose a survey design that is operationally efficient without compromising seismic data quality or imaging objectives: a 25 m shot spacing with a maximum offset of 12 km to achieve maximum fold, and continuous recording to achieve a record length of 15 s with overlapping 5 s intervals (Figure 4, bottom display). We generate synthetic shot records including multiples, random noise, and overlapping-shot noise, which we later successfully separate (de-blend) in the pre-processing.
The 2D survey that is the subject of this paper was acquired in late 2015 and comprised 31,000 km of high-fold, long-offset 2D data on a nominal 10 km × 10 km grid. The data were recorded with 15 second records and a 25 m shotpoint interval, with source de-blending to remove the N+1 shot energy from the longer shot records. The source and receiver depths are 8 m and 15 m, respectively, with a deliberately deep cable tow preferred for a better low-frequency response and overall quieter recording (below the swell).
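The stated tow depths can be checked against the standard vertical-incidence ghost model (a textbook formula, not from this abstract): a source or receiver at depth d has ghost notches at n·c/(2d) and a response peak near c/(4d), so the 15 m cable boosts frequencies around 25 Hz while the 8 m source keeps its first notch above 90 Hz.

```python
C_WATER = 1500.0  # m/s, nominal sound speed in water (assumed)

def first_ghost_notch_hz(depth_m, c=C_WATER):
    """First non-zero ghost notch for a vertically travelling wave."""
    return c / (2.0 * depth_m)

def ghost_peak_hz(depth_m, c=C_WATER):
    """Frequency of maximum constructive ghost interference."""
    return c / (4.0 * depth_m)

for name, depth in (("source (8 m)", 8.0), ("cable (15 m)", 15.0)):
    print(f"{name}: first notch {first_ghost_notch_hz(depth):.1f} Hz, "
          f"peak {ghost_peak_hz(depth):.1f} Hz")
```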
PSTM processing workflow
The PSTM workflow we apply follows a conventional 2D marine workflow, with additional steps taken to deliver an uncompromised broadband dataset.


Figure 3: Acquisition parameter testing with one of the conceptual structural models. Top: velocity model used for forward modeling and migration; bottom: RTM stack with 12s record length and 12km of offset, with a single synthetic shot gather. The yellow line indicates the location of the shot gather. The red arrows indicate deficiencies in the image.

Figure 4: Acquisition parameter refinement via modeling. Top: RTM stack with 15s record length and 12km of offset, with a single synthetic shot gather; bottom: RTM stack with continuously recorded 15s record length, 5s overlap and 12km of offset, with a single shot gather. The yellow arrows indicate areas with blending noise. The green arrows indicate areas that are improved versus the image using a 12s record length.

To attenuate the blended seismic energy, we apply a new processing flow described in more detail by Seher and Clarke (2016, submitted to SEG). The processing flow combines two complementary de-blending approaches: 1) blended wavefield estimation and subtraction, and 2) random noise attenuation. In a first step, the predictable component of the blended wavefield is estimated: the blended direct wave using a median stack in the common channel domain, and the blended reflected arrivals using a parabolic Radon transform in the common midpoint domain. This model of the blended wavefield is subsequently subtracted from the seismic record. This first step attenuates in particular the low-frequency part of the blended wavefield, so that the remaining blended energy appears more erratic. In a second step, the remaining blended signal is attenuated using random noise attenuation techniques, including median filtering in the common channel domain and time-variant low-pass filtering.
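The idea behind the common channel median stack can be sketched as follows (toy NumPy example, not the production algorithm): in a common channel gather the direct wave arrives at the same time on every trace, while energy blended in from other shots shifts from trace to trace, so a median across traces isolates the consistent direct wave for subtraction:

```python
import numpy as np

def deblend_common_channel(gather):
    """Estimate and subtract the trace-consistent component of a common
    channel gather (a stand-in for the blended direct-wave estimation)."""
    model = np.median(gather, axis=0)   # consistent across traces -> direct wave
    return gather - model, model

# Five traces sharing one direct arrival; one trace carries a blended spike.
gather = np.tile(np.array([0.0, 1.0, 0.0, 0.0]), (5, 1))
gather[2, 3] += 3.0                     # interfering shot on trace 2 only
residual, model = deblend_common_channel(gather)
# model recovers the direct wave; residual keeps only the blended spike
```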

Figure 5: Near-offset common channel gather before (top) and after (bottom) de-blending, achieving very good suppression of noise from continuous recording.

We stress that the first processing step does not require shot time randomization. The moveout differences between the primary arrivals and the blended secondary arrivals are sufficiently large for source separation in simple geological settings. The second processing step profits from shot delay variations caused by changes in vessel speed due to ocean currents; if these variations are small, random noise attenuation only provides limited uplift. Last, time-variant low-pass filtering is rarely used in de-blending, but allows further attenuation of blended energy at late times.

For deghosting we use the method proposed by Yilmaz and Baysal (2015), in which the problem of obtaining the unknown ghost parameters (ghost delay times and reflection coefficients) is solved as a nonlinear inversion. Since the process is recursive, it is very sensitive to non-reflective noise in the seismic data. Direct arrivals, swell noise and other coherent noise would be amplified by this process and are therefore addressed before the recursive deghosting is applied; the quality of the ghost removal depends on effective data preconditioning. With the proposed deghosting solution we recover energy in both the source and receiver notches at both ends of the spectrum. The recovered low frequencies are very beneficial for imaging the pre-salt section and aid interpretation of the deep section, although we did not see a direct uplift in model building.

The velocity model for PSTM is an RMS velocity model picked at coarse locations throughout the survey and later refined on a denser grid. After imaging with this model, additional noise suppression is considered to attenuate interbed and peg-leg multiples generated from the top-of-salt boundary.

PSDM workflow and interpretation

For depth model building and imaging we follow the workflow described by Benson et al. (2015), in which we construct models from time RMS velocities and all available geological constraints. As we only had access to one well at the time of initial model building, we constrain our starting model mainly with the interpretation of key regional horizons from time image stacks. We then precondition the PSDM image gathers and run several passes of vertical transverse isotropy (VTI) tomography from the water bottom to the first regional horizon. This distinctive high-amplitude regional event can be interpreted as interleaved carbonates and other thin sediment layers on top of salt. We use non-parametric moveout picking and a conjugate gradient solver, followed by structural smoothing.

Salt interpretation is done using a conventional approach: picking a top-salt horizon, flooding below that horizon with an appropriate constant velocity, migrating, and then interpreting the base. For this particular area and dataset we find that inverting the velocities within the salt (dirty-salt velocity inversion) to derive variable salt velocities did not significantly improve the base-salt structure or the pre-salt image (Figure 6).
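The flooding step amounts to a velocity replacement below the picked horizon, sketched below on a toy model (the 4480 m/s flood velocity is an illustrative assumption, not the value used in this project):

```python
import numpy as np

def flood_below_horizon(vel, top_salt, v_flood=4480.0):
    """Return a copy of an (nx, nz) velocity model with everything below the
    picked top-salt horizon (given in depth samples per trace) set to a
    constant flood velocity, ready for re-migration and base-salt picking."""
    flooded = vel.copy()
    for ix, iz in enumerate(top_salt):
        flooded[ix, iz:] = v_flood
    return flooded

sediments = np.full((3, 6), 2500.0)                    # toy background model
flooded = flood_below_horizon(sediments, np.array([2, 3, 3]))
```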
The section is interpreted as a thick Tertiary section overlying a relatively thin Cretaceous section, which in turn sits on top of a late Jurassic syn-rift section. For purposes of illustration we have shaded the interpretation to show early and late Tertiary sections, a Cretaceous section including a salt layer, and a syn-rift section sitting on a faulted basement structure (Figure 7).


Conclusions
We have presented a robust, integrated workflow for developing and delivering a frontier-area acquisition program that offers an uncompromised dataset for modern-day exploration needs.

Figure 6: PSDM image stack overlaid with the final velocity model. Different salt scenarios were tested, but without significant improvement in the image of the sub-salt and pre-salt sections.

Using all available geological and geophysical information, we acquired continuously recorded data offshore the Yucatan Peninsula, which we processed through a conventional marine PSTM workflow with additional de-blending and broadband steps. With this dataset we achieved good imaging throughout the section, maintaining high fold and a good signal-to-noise ratio. We successfully imaged both the pre-salt section and the deep structures needed for an understanding of the basin.

We expect further improvement with this dataset to come mainly on the model building and imaging side, where more detailed and advanced velocity inversion workflows, such as interpretation-driven tomography and waveform inversion, together with additional preconditioning coupled with RTM derivative imaging (Cogan and Boochoon, 2013), can be applied.
Figure 7: Interpreted section showing late and early Tertiary (blue and green), Cretaceous (beige), Jurassic (red) and basement (purple).

Acknowledgements

We thank Spectrum Geo Inc. for permission to publish this work. Special thanks go to Mike Saunders as well as our colleagues Jackie Wynn, Matt Yates, Leandro Gabioli, Geoffroy Paixach and Mike Ball for their valuable support and help.


EDITED REFERENCES
Note: This reference list is a copyedited version of the reference list submitted by the author. Reference lists for the 2016
SEG Technical Program Expanded Abstracts have been copyedited so that references provided with the online
metadata for each paper will achieve a high degree of linking to cited sources that appear on the Web.
REFERENCES

Benson, C. E., P. Esestime, and M. Cvetkovic, 2015, Using geological constraints to improve velocity model building for depth imaging in frontier areas – case studies: 77th Annual International Conference and Exhibition, EAGE, Extended Abstracts.
Cogan, M., and S. Boochoon, 2013, Sub-salt seismic interpretation using RTM vector image partitions: 83rd Annual International Meeting, SEG, Expanded Abstracts, 1338–1342.
Cvetkovic, M., C. Calderón-Macías, P. Farmer, and G. Watts, 2014, Efficient numerical modelling and imaging practices for aiding marine acquisition design and interpretation: First Break, 32, 99–105.
Rodriguez, A. B., 2011, Regional structure, stratigraphy, and hydrocarbon potential of the Mexican sector of the Gulf of Mexico: M.S. thesis, The University of Texas.
Yilmaz, O., and E. Baysal, 2015, An effective ghost removal method for marine broadband seismic data processing: 85th Annual International Meeting, SEG, Expanded Abstracts.
