    Journal of the Southern African Institute of Mining and Metallurgy

On-line version ISSN 2411-9717; Print version ISSN 2225-6253

    J. S. Afr. Inst. Min. Metall. vol.125 n.8 Johannesburg Aug. 2025

    https://doi.org/10.17159/2411-9717/3640/2025 

    SLOPE STABILITY THEMED EDITION

     

    Guide to using gravity in the detection of underground voids

     

     

L.M. Linzer I, II; P. Linzer I

I SRK Consulting, South Africa. ORCiD: L.M. Linzer: http://orcid.org/0000-0003-1581-0420; P. Linzer: http://orcid.org/0009-0005-6204-3553
II University of the Witwatersrand, South Africa


     

     


    ABSTRACT

    The gravity method is often applied in mining-related scenarios such as geological and mineral exploration, detecting subsurface cavities and voids ("receptacles") that may become sinkholes, siting of surface structures, fault-mapping, and more. It is also widely applied in geotechnical engineering to map subsurface karst profiles and to detect subsurface cavities, such as can occur in open pit mines. While it is widely acknowledged that, compared with other available geophysical methods, this method is the most applicable to the karst profiling problem, survey geometries applied in the field are often far from optimal. The gravity surveying specifications prescribed in SANS 1936 omit some key requirements for optimal survey outcomes, and this paper is intended to fill those gaps. The authors have seen scopes of work that require only a few measurements over the footprint of a proposed structure, indicating a lack of understanding of the nature of the gravitational field. A single gravity measurement is subject to the camouflaging effects of several influences, many of which can be larger than the anomaly caused by the subsurface density variation itself. Papers in the body of literature show an awareness of the need to map the regional field so that it can be subtracted to unmask the residual gravity anomaly, but rarely do they examine how instrument sensitivity governs the size of detectable anomalies or discuss how a detailed terrain correction can be instrumental for isolating faint targets.
    This paper aims to explain how to apply the gravity method correctly to mining and geotechnical problems, specifically in the detection of subsurface cavities. It discusses various factors to assist with optimising the acquisition of gravity data and its processing so as to maximise the practical value of gravity surveys.

    Keywords: geophysics, gravity survey, gravity data corrections, subsurface voids, field discipline


     

     

    Introduction

The gravity surveying specifications given in SANS 1936, Part 2 (2012) require that a competent geophysicist either perform or oversee a survey. Regarding the execution of the survey, it states: "a) A gravity map shall be produced and used to determine borehole positions. A residual gravity map shall be produced after completion of the borehole drilling. On large projects, a preliminary residual gravity map may be produced after the initial phase of drilling and then refined after all drilling is completed. The final residual gravity map shall be incorporated into a report to be prepared by the competent geophysicist. b) The accuracy of reduced observations on a relative basis shall be at least 0,01 mGal or better. c) Contour intervals on gravity maps shall not exceed 0,1 mGal."

While these specifications are in themselves correct, they are not adequate and omit some key requirements. This paper intends to explain the subtleties that affect gravity survey data to the engineering and geotechnical community, beginning with the background theory and definitions needed to motivate the practical recommendations in the second part of the paper.

    Gravity anomaly

Gravity anomalies caused by subsurface density variations are superimposed onto the Earth's gravitational field. The Earth's gravitational field varies from g = 978.0318 cm/s² (978.0318 Gal) at the equator to g = 983.152 cm/s² (983.152 Gal) at the poles (Dentith, Mudge, 2014). The reason for this ~5 Gal variation is that the Earth is an oblate spheroid, flattened at the poles and bulging at the equator, so a measurement of the surface gravity at the poles is higher because the measurement point is closer to the Earth's centre of mass. Gravity anomalies in mineral exploration are typically ~1 mGal in magnitude, about 10⁻⁶ of the Earth's surface gravity. For geotechnical applications, the gravity anomalies are often even smaller at ~0.02 mGal, about 2 × 10⁻⁸ of the Earth's surface gravity.

    Measurement of gravity anomalies

Very sensitive instruments called gravimeters are used to measure gravity anomalies caused by subsurface density variations. Commercial gravimeters currently in use realistically have sensitivities of ~0.01 mGal (about 10⁻⁸ of Earth's surface gravity). Although manufacturing specifications may quote 0.001 mGal sensitivity, this accuracy is mostly unrealistic for a field survey where there are various sources of noise (micro-vibrations caused by distant earthquakes, waves, wind, etc.). In addition, more than one gravimeter may be used in a large gravity survey, adding to the measurement error by way of subtly differing instrument responses. The gravimeters in common use contain metal or quartz zero-length springs to support the test mass, and they are relative instruments that record the difference in gravity between each survey station and a survey base station.

Superconducting gravimeters can achieve sensitivities of approximately one trillionth (10⁻¹²) of the Earth's surface gravity, but these are not widely used owing to their cost. These gravimeters do not have a mechanical spring; their functioning principle involves the levitation of a spherical specimen by a magnetic field generated by coils (Goodkind, 1999). The sphere moves up and down in response to changes in gravity, and the coil voltage is adjusted automatically to restore the sphere to its equilibrium position (Amarante, Trabanco, 2016). The voltage changes required to maintain this equilibrium are direct analogues of changes in ambient gravity.

    The important point to note here is that the gravity method is designed to detect relative anomalies, not deviations from an absolute standard. As such, an adequate background sampling must exist against which any anomalies can be contrasted. For geotechnical and exploration work, this requirement remains in force, regardless of whether an absolute or a relative gravimeter is being used. Therefore, gravity needs to be properly sampled not only within the area of interest, but also sufficiently beyond it to provide the necessary body of background readings.

     

    Factors influencing a single gravity measurement

A single measurement of the gravitational attraction on the surface of the Earth is the combined effect of several influences: the latitude at which the measurement is taken; the elevation, or distance from the Earth's centre of mass (centroid); the topography of the surrounding terrain; Earth tides; and density variations in the subsurface (Figure 1). In addition, the zero-length spring in the gravimeter stretches with use and with temperature changes over the course of a day, resulting in instrument drift. The regional subsurface geology also affects measurements, for instance the presence of horst/graben structures and/or variable basement topography, among others.

[Figure 1]

    Latitude

    There is an increase in gravitational intensity with latitude due to both the rotation of the Earth and the bulge at the equator. The centrifugal acceleration due to the Earth's rotation is at a maximum at the equator and zero at the poles, and it opposes the force of gravitational attraction (Telford et al., 1990). In contrast, the flattening at the poles increases the gravity since the measurement point on the surface is closer to the centre of mass. This is counteracted partly by the increased attraction due to the excess mass at the equator (Telford et al., 1990).

The correction rate is at a maximum of approximately 0.81 mGal per kilometre of north-south displacement at a latitude of 45°, and is zero at the equator and poles. This means that to achieve an accuracy of 0.01 mGal, the north-south positions of the gravity stations must be known to an accuracy of ~12 metres.
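The latitude dependence can be sketched numerically. The following is an illustrative helper (function names are hypothetical), assuming the widely used approximation of ~0.811 sin(2λ) mGal per kilometre of north-south displacement:

```python
import math

def latitude_correction_rate(lat_deg):
    """North-south gradient of the latitude correction, in mGal per km.

    Uses the widely quoted approximation 0.811 * sin(2 * latitude) mGal/km,
    which peaks at 45 degrees latitude and vanishes at the equator and poles.
    """
    return 0.811 * math.sin(2.0 * math.radians(lat_deg))

def max_ns_position_error_m(lat_deg, target_accuracy_mgal=0.01):
    """North-south positioning tolerance (m) needed to keep the latitude
    correction error below the target survey accuracy."""
    rate_mgal_per_m = latitude_correction_rate(lat_deg) / 1000.0
    return target_accuracy_mgal / rate_mgal_per_m
```

At 45° latitude this gives a tolerance of roughly 12 m for a 0.01 mGal accuracy target, consistent with the figure quoted above.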

    Elevation

    Gravity varies with the inverse square of the separation distance between the Earth's centroid and a mass near its surface, and it is therefore necessary to correct for changes in elevation between stations and to normalise the field readings to a datum plane. The magnitude of gravity change with elevation is 0.3086 mGal/m. This effect considers only the change in elevation and not the density of the material between the station and datum. This correction is known as the free-air correction. For an accuracy of 0.01 mGal, the usual accuracy of a gravimeter, the elevations must be known to within 3 cm (Telford et al., 1990).

To compensate for the attraction of material between the datum plane and the station elevation (which is ignored in the free-air correction above), a second correction known as the Bouguer correction is required. This correction assumes a slab of constant density, with infinite horizontal extent between the datum and station. For an average crustal density of 2.67 g/cm³, the Bouguer correction changes according to 0.112 mGal/m, where it is subtracted if a station is above the datum (Telford et al., 1990).

    The free-air and Bouguer corrections are usually combined into a single operation.
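As a minimal sketch of this combined operation (the function name is illustrative, and the sign convention assumes a station above the datum), the constants above give:

```python
def elevation_correction_mgal(dh_m, density_g_cm3=2.67):
    """Combined free-air and Bouguer correction, in mGal, for a station
    dh_m metres above the datum (positive upwards).

    Free-air: 0.3086 mGal/m is added back because gravity falls off with
    elevation. Bouguer: 2*pi*G*rho = 0.04193 * rho mGal/m is subtracted to
    remove the attraction of the slab between the station and the datum.
    """
    free_air = 0.3086 * dh_m
    bouguer = 0.04193 * density_g_cm3 * dh_m  # ~0.112 mGal/m at 2.67 g/cm3
    return free_air - bouguer
```

For the average crustal density of 2.67 g/cm³, the net elevation correction works out to approximately 0.197 mGal per metre above the datum.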

    Topography

    If the topography has rapid elevation changes, a complex correction, known as the terrain correction, is required. In such a case, the assumption of the Bouguer slab of infinite horizontal extent is not adequate, and it becomes necessary to compensate for the increased gravitational attraction exerted by the excess mass of hills, ridges, mountain ranges, and other elevated terrain, and also for decreased gravitational attraction from the mass deficits due to valleys, depressions, gullies, excavations, and such.

    In Figure 2, at point "A" (hill) there is excess mass above the elevation-corrected measurement point "P", which reduces the gravity measurement there because it exerts an upward vertical gravitational force component on "P". At "B" (valley), there is absent mass below "P", which reduces the net vertical gravitational force component because it does not attract as strongly downwards.

[Figure 2]

Recall that there is a 0.3086 mGal variation over a 1 m elevation change. A 5 m elevation change will cause a change in gravity of approximately 1.5 mGal, which is 50% more than a typical exploration target of 1 mGal, and 75 times more than a geotechnical target of 0.02 mGal.

For high-resolution surveys, relatively small features close to the survey station, such as culverts, rock dumps, storage tanks, reservoirs, open pits, hillocks, washout gullies, etc., can have a significant effect on the measured gravity and can be a major source of error. Large topographic features tens of kilometres away from the station, and even very large mountain ranges (> 100 km distant), may also need to be included in the correction.

    In cases where there are voids, cavities, or receptacles in an open pit, the changes in terrain due to the benches and highwalls can have a large effect, and an accurate terrain correction will be critical to unmask the anomalies caused by subsurface low-density zones. This aspect will be discussed in more detail shortly.

    Tidal effect

A gravity measurement taken on the surface of the Earth is also affected by the gravitational attractions of the Sun and Moon. The Sun's effect is smaller despite its far greater mass, because the Moon is much closer to the Earth. Collectively, the effects of the Sun and Moon are referred to as tidal effects, and they produce gravitational changes of less than 0.3 mGal. Tides oscillate with a period of approximately 12 hours and 24 minutes: the 12 hours are due to the rotation of the Earth, while the extra 24 minutes result from the daily delay caused by the lunar orbit, which has a cycle of 29.5 days (Amarante, Trabanco, 2016). The tidal effect is greatest at the equator.

    When the Earth, Sun, and Moon are in the same alignment (during full moon and new moon), a phenomenon known as the syzygy tide occurs, where the variation between high and low tide is at a maximum (Amarante, Trabanco, 2016). When the Moon is waxing or waning, and not in alignment, the quadrature tide occurs where the tidal variations are smaller.

    Modern computerised gravimeters automatically calculate and apply the tidal correction to the measured gravity using the method of Longman (1959). Other formulae have been proposed by Bartels (1957), Schureman (1940), and Pettit (1954), but they are essentially the same (Dehlinger, 1978). A recent study by Amarante and Trabanco (2016) has updated some of the constants (such as the mass of the Sun). The results do not vary significantly but the updates will prove increasingly significant as gravimeters of greater precision and sensitivity are developed.

    Instrument drift, repeats, and loops

A gravimeter will usually not record the same result if read repeatedly at the same station, mainly due to creep of the spring over time, which changes its null (reference) reading. The changes in spring properties are usually related to temperature, despite gravimeters being constructed of materials that are relatively insensitive to temperature changes. This drift in modern gravimeters is lower than that of older instruments, but it is still significant, being of the order of ~0.1 mGal per day.

The drift is monitored by making repeat readings at one or more designated base stations. The desired accuracy of the survey determines the allowable time between repeat readings; Milsom (2003) recommends a maximum of 1 to 2 hours. Good field practice dictates at least 10% repeat measurements in each loop (Murray, Tracey, 2001); however, the authors have been involved in surveys where ~40% repeat measurements were taken due to noisy field conditions. Thus, the gravity survey practitioner should be aware of the environmental conditions in which the survey is being conducted and adjust the base station measurement frequency accordingly.

To compute and remove the instrument drift effect, a linear interpolation is applied between successive base station measurements. The drift correction per minute to be applied to each measurement can be calculated using the following formula (Moleleki, 2019):

LI = (B2 − B1) / (Tb2 − Tb1)

where B1 and B2 are the base station readings of observed gravity at times 1 and 2, respectively, while Tb1 and Tb2 are the times (in minutes) of these readings. Ts is the time at each gravity station at which a measurement was taken between those at the base stations B1 and B2. LI is the drift correction/minute to be applied to each measurement, so the correction subtracted from a station reading taken at time Ts is LI × (Ts − Tb1).

    Looping procedures are often applied in the field, which involve measurements at subsets of the survey's stations being taken between a start- and an end reading at a base station for each subset. That is, the survey consists of a series of loops. In such a procedure, a central easy-to-access base station is chosen and repeat readings are made at that station every hour. Sometimes a leapfrogging procedure is used, where one station in a loop is repeated, so effectively two readings are made at that point, then a new base station is selected, and a new loop is surveyed. The risk with this approach is that a systematic error can propagate through the loops, resulting in an apparent tilt in the regional field that is not real, i.e., an unwanted artefact. For large surveys, a better approach is to lay out a base station grid with a master base station.

    More complex looping schemes are often employed, particularly when the survey, because of its large areal extent, requires the use of multiple base stations.
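The linear drift interpolation between two base-station occupations can be sketched as follows (a hypothetical helper consistent with the notation used above, where B1 and B2 are base readings at times Tb1 and Tb2 in minutes):

```python
def drift_corrected(g_station, t_station, b1, t_b1, b2, t_b2):
    """Remove linear instrument drift from a station reading taken between
    two base-station occupations.

    b1, b2     : base-station readings (mGal) at times t_b1, t_b2 (minutes)
    g_station  : raw reading (mGal) at time t_station, t_b1 <= t_station <= t_b2
    """
    drift_per_min = (b2 - b1) / (t_b2 - t_b1)  # "LI" in the text's notation
    # Subtract the drift accumulated since the first base occupation
    return g_station - drift_per_min * (t_station - t_b1)
```

For example, base readings of 978000.100 mGal at t = 0 and 978000.160 mGal at t = 120 minutes imply 0.0005 mGal/min of drift, so a station read at t = 60 minutes is corrected downward by 0.030 mGal.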

     

    Techniques to unmask the gravity anomaly: important considerations

    Non-uniqueness

Potential field measurements of the Earth's gravitational (and magnetic) field are affected by ambiguity and non-uniqueness: the phenomenon whereby many different distributions of density (or magnetic susceptibility) can produce the same geophysical anomaly. A well-known example is shown in Figure 3, where a deep sphere and a shallower lens-like body with the same density contrast can produce the same residual anomaly. Figure 4 shows a famous case study where the minimum of a Bouguer gravity survey in the Moray Firth in NE Scotland was interpreted as being due to a granite pluton (Arkell, 1933). Many years later, after drilling and a seismic reflection survey, it was realised that the cause of the minimum was actually a sedimentary basin (Collette, 1958; Sunderland, 1972).

[Figure 3]

[Figure 4]

    Terrain correction

    The gravity terrain correction (GTC) is especially important where expected gravity anomalies are relatively small and/or where the topography is highly irregular in the near vicinity of the survey area. Historically, there have been cases in dolomite environments where constructions on surface (e.g., runways) and mining infrastructure (e.g., ramps) were sited over voids, the collapse of which had significant adverse consequences, both operational and financial. Such disasters could well have been avoided through the application of the gravity method with comprehensive GTC processing since the anomalies in such cases are typically very small, as will shortly become clear.

Prior to the advent of digital computing, this correction was done manually using the so-called Hammer net method, which involved dividing the area surrounding each measurement station into concentric zones and radial sectors ("compartments") according to a large, superimposed circular net template, averaging the elevations in each compartment, and calculating a gravity correction based on the elevation differences between the station and the average compartment elevations.

    Performed manually, the Hammer net approach is obviously labour-intensive and repetitive, and therefore prone to mistakes. "Terrain corrections can be extremely tedious" (Milsom, 2003). For the same reasons, the method clearly also lends itself well to computerisation.
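For illustration, the contribution of a single compartment can be computed with the standard annular-ring formula used in Hammer-style corrections. This is a sketch only (the function name is hypothetical; zone radii and sector counts would follow a published Hammer chart, and the density is assumed in kg/m³):

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def hammer_compartment_mgal(r1, r2, n_sectors, dz, density=2670.0):
    """Terrain correction (mGal) from one compartment of an annular zone.

    r1, r2    : inner and outer radii of the zone (m)
    n_sectors : number of sectors the zone is divided into
    dz        : mean elevation of the compartment minus station elevation (m)

    The correction is always positive: both excess mass (dz > 0, a hill)
    and missing mass (dz < 0, a valley) reduce the measured gravity.
    """
    bracket = r2 - r1 + math.sqrt(r1**2 + dz**2) - math.sqrt(r2**2 + dz**2)
    gz = (2.0 * math.pi * G * density / n_sectors) * bracket
    return gz * 1e5  # convert m/s^2 to mGal
```

Summing this quantity over all compartments of all zones yields the total terrain correction to be added to the station reading.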

Some GTC computer codes exist, but they are either unobtainable or outdated in respect of both capability and the hardware and/or software platforms on which they will run. Proxy methods that rely on mathematical simplifications of topographical complexities and/or on statistical techniques for estimating the corrections also exist, but these methods invariably lose relevance where the topography becomes increasingly irregular close to the survey area, which is precisely the situation in which the greatest possible correction accuracy is required.

In response to these deficiencies, SRK Consulting has developed its own GTC algorithms from first principles, based on the Hammer net method but with some enhancements, and implemented them first as a statistical analysis software (SAS) script prototype for testing and validation, and later as a standalone Microsoft Windows program. For the latter, the algorithms are optimised for processing speed, making extensive use of multithreaded parallel processing, and large GTC jobs can be farmed out to multiple PCs, each processing a different portion of the survey data. Even so, GTC processing runs can take a considerable time to complete, but tests with real-world data have shown that this effort can add real value in terms of target contrast, resolution, and definition. The required input data consist of survey station positions and elevations, and a terrain digital elevation model (DEM, typically taken from public domain, 1-arcsecond SRTM data) out to 100 km or more in all directions beyond the survey area.

    The following example, while highly idealised, serves well to illustrate the difference that proper GTC processing can make. The example is of a gravity survey done on the floor of a circular opencast mining pit to identify subsurface cavities. The hypothetical pit has five benches and a spherical cavity due east of the centre of the pit floor, halfway between the centre and the pit floor edge. Figure 5 shows the conceptual pit layout and dimensions.

[Figure 5]

In addition, a ridge 10 km to the north of the pit is assumed, which runs east-west for 30 km. It has a symmetric trapezoidal cross-section (i.e., its north-south section) with sides sloping at 45° and is 500 m wide at its base, 200 m high, and 100 m wide at the top. Apart from the interior of the pit itself and this ridge, all other terrain around the survey area is assumed to lie at the same elevation, which is the ground level at the uppermost edge of the pit. All rock densities are assumed to be 2.5 g/cm³ throughout.

    Figure 6 shows a plot of the reduced gravity data for this theoretical survey after all corrections have been applied, except for the terrain correction.

[Figure 6]

    Figure 7 shows a plot of the reduced gravity data for this theoretical survey after all corrections, including the terrain correction, have been applied. The gravity residual values have been shifted by a constant offset to emphasise the narrow range within which they fall. This would be the final output from the gravity survey.

[Figure 7]

    When comparing Figures 6 and 7, it is important to note the ranges of their respective gravity values. In Figure 6, the range is 1.7021 mGal, compared to 0.0685 mGal in Figure 7. This wide disparity in the magnitudes of the ranges underscores the earlier point that cavities or voids such as incipient sinkholes typically produce a very small gravity anomaly compared to other influences, in this case the effect of the near-field topography. The plots should therefore be compared qualitatively in a relative sense without reference to their gravity values per se.

It is immediately clear that there is a reversal of the gravity highs and lows between the two plots: Figure 6 shows the highest values at the centre of the pit and the lowest values at the pit floor's periphery, whereas Figure 7 has the highest values at the pit floor's outer region. This difference is entirely attributable to a cancelling upward gravitational pull by the surrounding benches, where the upward pull is greatest at the pit floor's edge, closest to the bottom bench. The upward component diminishes rapidly towards the centre of the pit floor (as the inverse cube of the distance from the outer edge) and is at a relative minimum at the pit's centre. This effect explains why, in Figure 6, the gravity values are low at the periphery of the pit floor and higher at its centre.

    Figure 6 also highlights a very small gravitational anomaly at "A", which is the result of the subsurface cavity. Without the terrain correction, this could easily escape notice or be ignored as being insignificant. However, the anomaly is the desired target of the gravity survey, and applying the terrain correction brings it into much sharper focus and improved contrast, as is shown in Figure 7.

It is also clear that the east-west ridge mentioned earlier at 10 km to the north of the pit has no discernible effect on the measured gravity values or on the terrain correction, but this does not justify a general reduction of the extent of the surrounding terrain that should be included in the terrain correction processing. A higher, more prominent ridge-like feature such as a mountain range at the same distance would produce a correspondingly more pronounced terrain effect.

    From these considerations it is therefore evident that in the right circumstances, the effect of the near-field terrain can almost completely obscure the subsurface cavity, possibly leaving it undetected, likely to the detriment of the pit operations and personnel. Moreover, since it is often the case that such cavities are for safety reasons sought inside pits or other excavations, similar circumstances to those referred to here can occur in practice.

    On a more abstract note, with regard to the afore-mentioned example, omitting the gravity terrain correction could clearly in this case result in an inaccurate, substandard, and possibly useless report on the presence of cavities in the pit, which is what the survey was intended to reveal. As such, this could prompt the pit operator to question the usefulness of the gravity technique (and perhaps even geophysical exploration methods in their entirety), especially if the cavity were subsequently to collapse. In turn, this loss of confidence does a disservice to gravity survey practitioners and may result in such surveys not being performed at all, or where they are mandatory, them being performed as mere box-ticking exercises.

The preceding illustration and discussion of the gravity terrain correction demonstrate its importance as a further gravity data processing step, especially in situations where gravity anomalies are expected to be subtle and/or where the terrain at, or surrounding, the survey area is highly variable in terms of local elevations. However, a reliable guideline or heuristic for deciding when the terrain correction is warranted, and how far out it needs to be taken, is currently lacking. Future gravity survey work will likely assist with establishing appropriate rules in this context.

    Line lengths to record the regional field

The regional gravity field is the long wavelength gravity caused by the bedrock topography such as horsts, grabens, dipping basements, etc. When the gravity is measured at a point on the surface, it is the combined effect of the local, subsurface density variations (short wavelength), which are usually the target of such studies, and the regional field caused by larger structures. An example is shown in Figure 8, where an air-filled cavity (10 m diameter, density contrast of −2.0 g/cm³) at a depth of around 50 m sits in a host material such as sandstone, which in turn overlies deeper granitic material (relative density contrast of −0.5 g/cm³) dipping to the right of the figure.

[Figure 8]

    The total Bouguer anomaly recorded (Figure 8a) will be the sum of the effects of the cavity, the subsurface material, and a dipping basement. The regional trend caused by the subsurface overlaying the dipping basement is shown in Figure 8b. Subtraction of the regional trend from the Bouguer gravity will reveal the residual anomaly (Figure 8c) caused by the cavity. It should be noted that the areas labelled "Model edge effects" in the figures are identified to emphasise that the apparent gravity reduction occurring towards the model's edges is a modelling artefact, rather than what would be seen in reality. The modelling was done using Grav2D, a freeware program written by Cooper (1998).

    There is no hard-and-fast way to isolate the regional trend, but there will usually be a difference in the lateral scale of the various anomalies. Hence, there is a strict need for survey lines that are long enough to detect regional trends. Prior knowledge of the regional-scale geology can help guide the line length selection.

    Direct estimates of the regional gravity anomaly can be made from an independent dataset, e.g., the regional gravity map of South Africa. However, this requires the local gravity survey to be tied into the regional grid by taking measurements at common base stations, which typically would entail additional costs that survey requestors are usually reluctant to pay. Graphical estimates of the regional trend are based on simply plotting the observations and sketching the interpreter's estimate of the regional gravity attributes. There are several mathematical techniques for estimating regional trends such as moving averages, long wavelength filtering, upward continuation, function fitting, etc., but all of these require a data grid that is large enough and detailed enough to capture the longer wavelength regional variations.
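As the simplest of the mathematical techniques just listed, a first-order (linear) regional trend can be fitted by least squares along a profile and subtracted. This is an illustrative sketch only, since the appropriate model order depends on the regional geology:

```python
def remove_linear_regional(x, g):
    """Fit a straight line (the simplest regional model) to a gravity profile
    by least squares and subtract it, leaving the residual anomaly.

    x : station positions along the profile (m)
    g : Bouguer gravity values at those stations (mGal)
    """
    n = len(x)
    mx = sum(x) / n
    mg = sum(g) / n
    # Closed-form least-squares slope and intercept
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxg = sum((xi - mx) * (gi - mg) for xi, gi in zip(x, g))
    slope = sxg / sxx
    intercept = mg - slope * mx
    return [gi - (slope * xi + intercept) for xi, gi in zip(x, g)]
```

Applied to a synthetic profile consisting of a linear regional plus a small local low, the fitted line absorbs the regional trend and the residual isolates the local anomaly; with real data, the line must be constrained by stations well outside the target area, which is why adequate line length matters.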

    Station spacing

Gravity is a potential field that needs to be sampled on a grid to map relative differences. The station spacing needs to be close enough to define the anomaly, a requirement that depends on the target's size and depth. The sample spacing should be small enough to record at least two points over the anomalous mass and one on either side of it, along a survey line long enough to define the regional effect. The station spacing geometry is illustrated conceptually in Figure 9.

[Figure 9]

    Gravimeter sensitivity and target resolvability

The sensitivity of the gravimeter is also an important consideration, since a survey will not reveal anything useful if the anomaly amplitude is below the detection limit of the instrument. If the realistic sensitivity of the instrument is 0.01 mGal, it will not be possible to detect an anomaly of 0.001 mGal, no matter how close the station spacing.

    To illustrate this point more concretely, the limits of detection of a spherical 10 m radius air-filled cavity at depth increments of 10 m, from 20 to 100 m, were calculated using forward modelling, and are shown in the two figures that follow. The line length in Figure 10 is 500 m, and gravity readings are made at 5 m and 30 m intervals in 10(a) and (b), respectively. The anomalies caused by the cavity at 40 m (-0.010 mGal) and shallower depths are still within detectability limits for both station spacings, provided there is no or only insignificant background noise. The cavities lying deeper than 40 m will not produce gravity anomalies large enough for a standard gravimeter to measure accurately. Realistically, however, only the cavities at 20 m for both station spacings will be reliably uncovered (-0.025 mGal). Note also how the anomaly widens with the wider station spacing.

    In Figure 11, the line length is reduced to 100 m, with the same station spacings. The effect of the shorter line length is that the gravity anomaly does not return close enough to the background level, resulting in smaller overall anomalies over the cavity. (To calculate the anomaly's magnitude, subtract the furthest measurement from the absolute maximum). The cavities at 40 m are now below the limit of detectability (< 0.010 mGal in magnitude). Note also the slightly different scales on the gravity anomaly (vertical) axes of the plots.

Figure 12 shows the modelled gravity response of a spherical cavity of variable radius (5 m to 25 m, in 5 m increments) at depths from 10 m to 100 m (in 10 m increments), sampled along a 500 m line at 5 m station spacing. The 0.01 mGal limit of detectability is shown as a horizontal dashed line. A large cavity with a 25 m radius produces a detectable anomaly at all depths from 10 to 100 m, whereas a 5 m cavity is only detectable at depths from 10 to 20 m; at 30 m depth, it is below the detectability limit. The graph in Figure 13 shows the same permutations of anomaly size with depth, but for a 100 m line length. Note how the detectability of the different cavity sizes drops, with the 25 m radius cavity now only being detectable at depths from 10 to 50 m.

    Note again the slightly different scales on the maximum gravity anomaly (vertical) axes between Figures 12 and 13.

    Achieving an accuracy of 0.01 mGal

    Accuracy and sensitivity are different concepts. Instrument manufacturers state the sensitivity of an instrument, for example, that it responds to field changes of ~0.01 mGal. However, a specified level of accuracy will only be achieved if readings are carefully made, and drift and tidal corrections are diligently applied. The SANS specifications require that the accuracy of reduced observations, on a relative basis, shall be 0.01 mGal or better. This means that the standard deviation of all measurements made at a single base station should be 0.01 mGal or less, which is right at the sensitivity limit of most commercial gravimeters. It is doubtful that such accuracy is ever achieved in dolomite surveys, given the survey designs and field practices in the industry, as well as the level of anthropogenic noise in the urban areas where these surveys are often performed.
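As an illustration of the drift correction and repeatability check implied by that requirement, the sketch below fits and removes a linear instrument drift from repeat readings at a single base station and then evaluates the 0.01 mGal criterion. The readings are synthetic values invented for this example; real reductions would also apply the tidal correction before assessing repeatability.

```python
import numpy as np

# Synthetic repeat readings (mGal) at one base station over a morning,
# contaminated by a slow, roughly linear instrument drift.
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])  # hours since first reading
base = np.array([978123.400, 978123.404, 978123.409,
                 978123.412, 978123.417])

# Estimate the drift rate (mGal/hour) by least squares and remove it.
rate = np.polyfit(t, base, 1)[0]
corrected = base - rate * t

# Repeatability: standard deviation of the drift-corrected base readings
# should be 0.01 mGal or less to meet the SANS-style accuracy requirement.
sd = corrected.std(ddof=1)
```

In a looped survey the same fit is applied per loop, using the opening and closing base readings to interpolate the drift at each field station's reading time.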

     

    Conclusions

    This work provides guidelines, and explains their origins, to aid the geotechnical community in planning and conducting gravity surveys that are tailored to the target type, survey purpose, and geological environment, and that will yield useful data. The key considerations are:

    > As explained in the introduction, the usefulness of the gravity method hinges significantly on adequate sampling, not just in the area of immediate interest but also in its near vicinity. This was illustrated by examples in which the line lengths, station spacings, and target sizes were varied.

    > Looping procedures, often applied in field surveys, run the danger of systematic error propagation, possibly producing a tilt artefact in the apparent regional field. For large surveys, a better approach is to lay out a coarse base station grid with one master base station.

    > Applying the gravity terrain correction (GTC) is especially important where the expected gravity anomalies are relatively small and/or where the topography is highly irregular in the near vicinity of the survey area. The additional value of proper GTC processing is not limited to detecting subsurface cavities; it can improve the results of a gravity survey regardless of its purpose, notwithstanding this paper's title.

    > Simple forward modelling can help to assess whether expected targets are detectable in a given geological environment and for a given survey design.

    > Survey lines must be long enough to capture the regional trends so that they can be subtracted to show residual anomalies. Knowledge of the regional scale geology can help guide the line lengths.

    > Station spacing must be designed with the minimum target size and expected anomaly magnitude in mind. At least two survey points are required over the target, and one on each side of it.

    > The practically achievable gravimeter sensitivity and the ambient background noise must be borne in mind to avoid pointless or substandard surveys.

     

    Acknowledgements

    The authors extend their sincere gratitude to the two anonymous reviewers, whose valuable comments and suggestions added significantly to the quality, substance, and readability of this paper.

     

    References

    Amarante, R.R., Trabanco, J.L.A. 2016. Calculation of the tide correction used in gravimetry. Revista Brasileira de Geofísica, vol. 34, no. 2, pp. 193-206.

    Arkell, W.J. 1933. The Jurassic System in Great Britain. Oxford University Press, Oxford.

    Bartels, J. 1957. Geophysik II / Geophysics II. Handbuch der Physik / Encyclopedia of Physics. Springer-Verlag.

    Collette, R.J. 1958. Structural sketch of the North Sea. Geologie en Mijnbouw, vol. 20, pp. 366-371.

    Cooper, G.R.J. 1998. GRAV2DC for Windows User's Manual (Version 2.05). Geophysics Department, University of the Witwatersrand, Johannesburg.

    Dehlinger, P. 1978. Marine Gravity. Elsevier Oceanography Series, vol. 22. Elsevier.

    Goodkind, J. 1999. The superconducting gravimeter. Review of Scientific Instruments, vol. 70. https://doi.org/10.1063/1.1150092

    Longman, I.M. 1959. Formulas for computing the tidal accelerations due to the Moon and the Sun. Journal of Geophysical Research, vol. 64, no. 12, pp. 2351-2355.

    Milsom, J. 2003. Field Geophysics, 3rd edn. John Wiley & Sons. ISBN 0-470-84347-0.

    Mussett, A.E., Aftab Khan, M. 2000. Looking into the Earth: An Introduction to Geological Geophysics. Cambridge University Press.

    Pettit, J.T. 1954. Tables for the computation of the tidal accelerations of the Sun and Moon. Transactions, American Geophysical Union, vol. 35.

    Schureman, P. 1940. Manual of Harmonic Analysis and Prediction of Tides. Special Publication No. 98, U.S. Department of Commerce.

    Sunderland, J. 1972. Deep sedimentary basin in the Moray Firth. Nature, vol. 236, pp. 24-25.

    Telford, W.M., Geldart, L.P., Sheriff, R.E. 1990. Applied Geophysics, 2nd edn. Cambridge University Press. ISBN 0-521-33938-3.

    Correspondence:
    P. Linzer
    Email: plinzer@srk.co.za

    Received: 9 Jan. 2025
    Revised: 7 Apr. 2025
    Accepted: 16 Jul. 2025
    Published: August 2025

    1 SAS = Statistical Analysis Software (https://www.sas.com)