Dispersion curve interpretation

From GeopsyWiki

!! DRAFT (from Donat's guidelines document) !!

The various array processing techniques lead to estimates of the dispersion curves for Rayleigh and possibly Love waves. The interpretation of such estimates should be done carefully, keeping in mind a number of limitations related to:

  • the array geometry
  • the uneven energy distribution as a function of frequency and surface wave modes
  • the uncertainties in the velocity values
  • the underlying 1D assumption (i.e., horizontally layered media)

Array geometry and resolution capabilities

The resolution capabilities of an array can be summarized by the shape of its beampattern or, in other words, its array response function. The resolving power of an array (its capability to distinguish between two similar signals) strongly depends on the number of receivers used for the survey and on the geometry of the array. The number of receivers controls the width of the central lobe of the response function, whose convolution with the input signal produces the final array power output. The sharper this peak, the better two signals that are close in the wavenumber domain can be separated. This is often the case where two modes approach an apparent intersection (an osculation point). In such a case, if the array has not been correctly designed, distinguishing between the modes is impossible and the f-k analysis yields a single, erroneous signal with averaged properties. This phenomenon is called mode superposition.

There is no general agreement on the minimum number of sensors that assures a useful result. It also depends on the analysis method (e.g. SPAC, BF, HRBF) and, to some extent, on the site under investigation (level of structural complexity, distribution of noise sources). Even if some results have been obtained in the literature with a relatively low number of sensors (e.g., 6), our recommendation is to use the highest possible number of sensors (or at least more than XXX), since this primarily controls the quality of the final result.

There is no single optimal configuration for array measurements; each configuration has some small advantages and disadvantages over the others. As a general rule, overly regular configurations should be avoided if possible. Such geometries, like squares or hexagons, produce redundant station inter-distances. Consequently, this can lead to high-amplitude side lobes in the beampattern, which are responsible for aliasing in the final output. The current practice is random sampling, i.e. positioning the seismic sensors in a random fashion rather than on a regular grid. While this is an effective way to prevent aliasing, it might introduce additional noise, which degrades the processing results.
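As an illustration, the theoretical array response can be computed directly from the sensor coordinates. The sketch below (plain numpy, not a Geopsy function; the two layouts and the wavenumber grid are hypothetical examples) compares a regular square grid with a randomized layout of the same number of sensors:

```python
import numpy as np

def beampattern(coords, kx, ky):
    """Theoretical array response |R(k)|^2 for plane waves.

    coords : (N, 2) array of sensor positions [m] (hypothetical layout).
    kx, ky : trial wavenumber grids [rad/m].
    """
    n = len(coords)
    # Phase term of each sensor for each trial wavenumber, summed coherently.
    phase = np.exp(1j * (np.outer(kx.ravel(), coords[:, 0])
                         + np.outer(ky.ravel(), coords[:, 1])))
    r = np.abs(phase.sum(axis=1)) ** 2 / n ** 2
    return r.reshape(kx.shape)

# Example: a regular 4 x 4 grid (20 m spacing) vs. a randomized layout
# of 16 sensors over the same 60 m x 60 m footprint.
rng = np.random.default_rng(0)
square = np.array([(x, y) for x in range(4) for y in range(4)], float) * 20.0
random_layout = rng.uniform(0.0, 60.0, size=(16, 2))

k = np.linspace(-0.5, 0.5, 101)
kx, ky = np.meshgrid(k, k)
bp_square = beampattern(square, kx, ky)
bp_random = beampattern(random_layout, kx, ky)
# Both patterns peak at k = 0 with unit amplitude; the regular grid shows
# grating (alias) lobes at multiples of 2*pi/20 rad/m, while the random
# layout spreads that energy into lower side lobes.
```

Plotting the two patterns side by side makes the redundancy argument visible: the regular grid repeats its central lobe at the alias wavenumbers, whereas the random layout does not.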

Spatial and temporal resolution limits

The output of array processing can be reliably interpreted only within certain limits, and interpretation of results outside these limits is in general not recommended. The two most important parameters are directly related to the array geometry, namely the minimum wavenumber k_min and the maximum wavenumber k_max. They are commonly defined as follows:

  k_min = 2π / D_max
  k_max = π / D_min

where D_max is the maximum inter-station distance, or array aperture, and D_min is the minimum inter-station distance. The dispersion curve obtained from array processing is considered reliable within these two boundaries. The k_max limit is valid for every processing technique and defines the limit beyond which spatial aliasing is very likely to occur. The k_min limit defines the smallest wavenumber that can be interpreted reliably with f-k analysis. For HRFK this limit is lower and cannot be written explicitly; for SPAC techniques there is, in theory, no lower limit.

Temporal aliasing is typically not a problem, since an anti-aliasing filter is applied before analog-to-digital conversion. However, above a certain temporal frequency the effect of spatial aliasing becomes larger: the spatial bandwidth of the wavefield is larger at higher temporal frequencies, and therefore aliasing is more likely to occur. Even within the [k_min, k_max] interval, at high frequencies the processing results may thus be degraded by spatial aliasing. It is not possible to know a priori at which temporal frequencies this will occur, since it depends on the velocity of the signal.

Some considerations are useful to better understand the occurrence of aliasing and, to some extent, to mitigate its impact. Different array geometries lead in general to different aliasing patterns. Therefore, if a measurement campaign is repeated at the same site using a different array geometry, the dispersion curve will appear in the same location of the velocity/frequency plane, while the aliasing pattern will be different.
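These geometric limits are straightforward to compute from the planned sensor coordinates, which is useful when designing an array for a target depth range. The sketch below assumes the common rules of thumb k_min = 2π/D_max and k_max = π/D_min (the spatial Nyquist limit); the exact limits used by a given processing scheme may differ, and the coordinates and frequency are hypothetical examples:

```python
import itertools
import numpy as np

def wavenumber_limits(coords):
    """Geometric resolution limits of an array (sketch).

    Uses the rules of thumb k_min = 2*pi/D_max and k_max = pi/D_min,
    where D_max is the array aperture (largest inter-station distance)
    and D_min the smallest inter-station distance.
    """
    dists = [np.hypot(*(a - b)) for a, b in itertools.combinations(coords, 2)]
    d_min, d_max = min(dists), max(dists)
    return 2.0 * np.pi / d_max, np.pi / d_min

def reliable_velocity_band(k_min, k_max, freq):
    """Phase-velocity band considered interpretable at one frequency."""
    omega = 2.0 * np.pi * freq
    return omega / k_max, omega / k_min  # (v_low, v_high) in m/s

# Hypothetical 6-sensor layout (coordinates in metres).
coords = np.array([[0, 0], [10, 0], [0, 10], [35, 20], [20, 40], [50, 50]], float)
k_min, k_max = wavenumber_limits(coords)
v_low, v_high = reliable_velocity_band(k_min, k_max, freq=5.0)
# At 5 Hz this array can only resolve phase velocities between v_low and v_high.
```

Note how the same k_min and k_max map to a different velocity band at each frequency, which is why the reliable region in the velocity/frequency plane is wedge-shaped rather than rectangular.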

Mode identification and energy distribution

Influence of the fundamental frequency


Filtering effect of the soil layer: surface waves are excited by local sources only above the fundamental frequency.


Close to the fundamental frequency, Rayleigh waves tend to be polarized on the horizontal components and are hardly seen on the vertical one.

Mode identification

One of the most problematic issues in the array processing of surface waves is the mode addressing of dispersion curves. In general, there are no strict and unambiguous criteria to distinguish between the fundamental and higher modes. The fundamental mode is most of the time the dominant one, due to its energy content, but exceptions exist. For this reason, mode addressing should be done carefully, since mistakes can lead to large deviations in the final result. Any prior geological or geophysical knowledge about the soil structure may help to constrain the choice of the mode number. Sometimes, several inversion attempts are necessary before obtaining a reasonable model compatible with the local structure.

The development of higher surface-wave modes strongly depends on the velocity structure and on the source depth. When large seismic impedance contrasts are present, waves are forced to resonate within the structure and their energy is trapped. For analogous reasons, buried sources have been demonstrated to promote the development of higher surface-wave modes. Higher modes are also very sensitive to heterogeneities in the soil structure, due to their low energy content.

Consider that an array deployment covers a wide area, whose extension is directly related to the depth to be investigated. In this context, the assumption of one-dimensionality is not always satisfied over the whole area, and lateral variations of the elastic properties might be present. Consequently, the wavefield might not correlate between all station locations, and the array output might suffer some energy degradation accordingly. Low-energy portions of the dispersion curve then tend to be minimized, or even to disappear.

In some cases and in some frequency ranges, the energy can vary strongly and shift between different modes. Moreover, if these modes are close to each other, there can be situations where the dispersion curve "jumps" from one mode to the other. This misleading result, if not correctly identified, leads to an erroneous velocity estimate and, in some cases, to the impossibility of inverting for any realistic model.
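A simple screening aid for the mode-jump problem is to flag abrupt velocity steps in a picked dispersion curve, since a single mode normally varies smoothly with frequency. The sketch below is a hypothetical helper, not part of Geopsy, and the relative-step threshold is an arbitrary tuning parameter; flagged segments still require manual inspection:

```python
import numpy as np

def flag_possible_mode_jumps(freq, vel, rel_threshold=0.15):
    """Flag abrupt phase-velocity steps that may indicate a mode jump (sketch).

    A smooth dispersion curve changes slowly between neighbouring frequency
    samples; a relative velocity step larger than `rel_threshold` between
    adjacent samples is flagged for manual inspection.
    """
    v = np.asarray(vel, float)
    rel_step = np.abs(np.diff(v)) / v[:-1]
    jumps = np.where(rel_step > rel_threshold)[0]
    # Return the frequency pairs bracketing each suspicious step.
    return [(freq[i], freq[i + 1]) for i in jumps]

# Synthetic curve with an artificial jump between 4 and 5 Hz.
freq = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
vel = np.array([800.0, 700.0, 620.0, 580.0, 820.0, 790.0])
suspect = flag_possible_mode_jumps(freq, vel)
# → [(4.0, 5.0)]
```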

LVZ cases


Definition of the uncertainties

Uncertainties in surface-wave dispersion analysis are assessed by means of a windowing procedure, as previously introduced. In this way, a phase-velocity likelihood function can be estimated at each frequency separately by means of a histogram representation. The type of statistic to use is still under discussion. Dispersion uncertainties are most likely Gaussian in the wavenumber domain but, since velocity (or slowness) is used for the inversion, it is sometimes more convenient to adopt the approximation of log-normal statistics in these domains. This leads, however, to an asymmetry of the standard deviation, with an apparent increase at low frequencies, where phase velocities are normally higher. In situations with apparent mode intersections and jumps, where the processing algorithm can fail, the statistics might be strongly biased and should be used carefully. Aliasing can also bias the results, since spurious points are included in the statistics, erroneously increasing the standard deviation.
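The log-normal approximation and its asymmetric error band can be sketched as follows. The function below is a minimal illustration, not the Geopsy implementation; the input array stands in for the per-window phase-velocity picks at one frequency, and the synthetic numbers are arbitrary:

```python
import numpy as np

def velocity_statistics(velocities):
    """Per-frequency dispersion statistics from windowed picks (sketch).

    Assuming log-normal statistics, mean and standard deviation are
    computed on log-velocities; mapping mu +/- sigma back to the
    velocity axis yields an asymmetric error band.
    """
    logv = np.log(velocities)
    mu, sigma = logv.mean(), logv.std(ddof=1)
    centre = np.exp(mu)
    lower, upper = np.exp(mu - sigma), np.exp(mu + sigma)
    return centre, lower, upper

# Example: synthetic windowed picks at one frequency, scattered
# log-normally around 400 m/s.
rng = np.random.default_rng(1)
picks = 400.0 * np.exp(rng.normal(0.0, 0.1, size=200))
centre, lower, upper = velocity_statistics(picks)
# The band is asymmetric: (upper - centre) > (centre - lower).
```

This asymmetry is exactly the effect described above: at low frequencies, where phase velocities are higher, the upper bound stretches further from the central value than the lower bound does.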

2D/3D issues