SciELO RSS <![CDATA[Journal of the Southern African Institute of Mining and Metallurgy]]> vol. 115 num. 1 lang. en

<![CDATA[<b>Danie Krige</b>]]>
<![CDATA[<b>President's Corner</b>]]>
<![CDATA[<b>In memoriam: Professor D.G. Krige FRSSAf</b>]]>
<![CDATA[<b>Criteria for the Annual Danie Krige Medal Award</b>]]>
<![CDATA[<b>Memories of Danie Krige, Geostatistician <i>Extraordinaire</i></b>]]>
<![CDATA[<b>Zibulo Colliery named runner-up in Nedbank Capital Sustainable Business Awards</b>]]>

<![CDATA[<b>Self-similarity and multiplicative cascade models</b>]]>
In his 1978 monograph 'Lognormal-de Wijsian Geostatistics for Ore Evaluation', Professor Danie Krige emphasized the scale-independence of gold and uranium determinations in the Witwatersrand goldfields. It was later established in nonlinear process theory that the original model of de Wijs used by Krige was the earliest example of a multifractal generated by a multiplicative cascade process. Its end product is an assemblage of chemical element concentration values for blocks that are both lognormally distributed and spatially correlated. Variants of de Wijsian geostatistics had already been used by Professor Georges Matheron to explain Krige's original formula for the relationship between block variances, as well as the permanence of frequency distributions of element concentration in blocks of different sizes. Further extensions of this basic approach are concerned with modelling the three-parameter lognormal distribution and the 'sampling error', as well as the 'nugget effect' and 'range' in variogram modelling.
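The multiplicative cascade behind the de Wijsian pattern can be sketched in a few lines. This is an illustrative reconstruction, not code from the monograph: at each subdivision step every block splits into an enriched half (factor 1 + d) and a depleted half (factor 1 - d), with the enriched side chosen at random (the 'random cut' of the generalized model), and the parameter values below are arbitrary.

```python
import numpy as np

def de_wijs_cascade(n_steps, d, c0=1.0, rng=None):
    """De Wijs multiplicative cascade: at every step each block of
    concentration c splits into an enriched half (1 + d)c and a
    depleted half (1 - d)c; which side is enriched is random."""
    rng = np.random.default_rng(rng)
    values = np.array([c0])
    for _ in range(n_steps):
        signs = rng.choice([-1.0, 1.0], size=values.size)  # random cut
        values = np.concatenate([values * (1 + signs * d),
                                 values * (1 - signs * d)])
    return values

blocks = de_wijs_cascade(n_steps=12, d=0.4, rng=42)
# Mass is conserved (the mean stays at c0), while the variance of the
# logarithms grows linearly with the number of subdivision steps --
# the self-similar, approximately lognormal behaviour described above.
```

The 2**12 resulting block values are strongly skewed and spatially ordered by construction, so both the lognormality and the spatial correlation noted in the abstract emerge from this single mechanism.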
This paper is mainly a review of recent multifractal theory, which throws new light on the original findings by Professor Krige on the self-similarity of gold and uranium patterns at different scales for blocks of ore by (a) generalizing the original model of de Wijs to account for random cuts; (b) using an accelerated dispersion model to explain the appearance of a third parameter in the lognormal distribution of Witwatersrand gold determinations; and (c) considering that Krige's sampling error is caused by shape differences between single ore sections and reef areas.

<![CDATA[<b>Improving processing by adaption to conditional geostatistical simulation of block compositions</b>]]>
Exploitation of an ore deposit can be optimized by adapting the beneficiation processes to the properties of individual ore blocks. This can involve switching certain treatment steps in or out, or setting their controlling parameters. Optimizing this set of decisions requires the full conditional distribution of all relevant physical parameters and chemical attributes of the feed, including the concentration of value elements and the abundance of penalty elements. As a first step towards adaptive processing, the mapping of adaptive decisions is explored based on the composition, in value and penalty elements, of the selective mining units. Conditional distributions at block support are derived from cokriging and geostatistical simulation of log-ratios. A one-to-one log-ratio transformation is applied to the data, followed by modelling via classical multivariate geostatistical tools, and subsequent back-transformation of predictions and simulations. Back-transformed point-support simulations can then be averaged to obtain block averages that are fed into the process chain model. The approach is illustrated with a 'toy' example in which a four-component system (a value element, two penalty elements, and some liberable material) is beneficiated through a chain of technical processes.
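The one-to-one log-ratio step can be illustrated with the additive log-ratio (alr) transform, one possible choice of such a transformation (the paper may use a different one). A minimal numpy sketch for a four-part composition like the 'toy' example:

```python
import numpy as np

def alr(comp):
    """Additive log-ratio: log(x_i / x_D) for i = 1..D-1, taking the
    last part as reference; maps the simplex one-to-one onto R^(D-1),
    where classical multivariate geostatistics applies."""
    comp = np.asarray(comp, dtype=float)
    comp = comp / comp.sum(axis=-1, keepdims=True)  # close the composition
    return np.log(comp[..., :-1] / comp[..., -1:])

def alr_inv(y):
    """Inverse alr: back-transform coordinates to a composition summing to 1."""
    expy = np.exp(np.asarray(y, dtype=float))
    denom = 1.0 + expy.sum(axis=-1, keepdims=True)
    return np.concatenate([expy / denom, 1.0 / denom], axis=-1)

# Hypothetical four-part block: value element, two penalty elements, rest.
x = np.array([0.02, 0.005, 0.015, 0.96])
y = alr(x)            # 3 unconstrained coordinates, safe for (co)kriging
x_back = alr_inv(y)   # recovers the closed composition exactly
```

Predictions or simulations made on the alr coordinates are back-transformed with `alr_inv` before averaging to block support, mirroring the workflow in the abstract.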
The results show that a gain function based on full distributions outperforms the more traditional approach of using unbiased estimates.

<![CDATA[<b>Dealing with high-grade data in resource estimation</b>]]>
The impact of high-grade data on resource estimation has been a longstanding topic of interest in the mining industry. Concerns about possible over-estimation of resources in such cases have led many investigators to develop solutions that limit the influence of high-grade data. It is interesting to note that the risk associated with including high-grade data in estimation appears to be one of the most broadly appreciated concepts in resource modelling, understood not only by professionals in the sector but also by the general public. Many consider grade capping or cutting to be the primary approach to dealing with high-grade data; however, other methods and potentially better solutions have been proposed for different stages of the resource modelling workflow. This paper reviews the various methods that geomodellers have used to mitigate the impact of high-grade data on resource estimation. In particular, the methods are organized into three categories depending on the stage of the estimation workflow at which they may be invoked: (1) domain determination; (2) grade capping; and (3) estimation methods and implementation. It is emphasized that any treatment of high-grade data should not lead to undue lowering of the estimated grades, and that limiting the influence of high grades by grade capping should be considered a last resort. A much better approach lies in domain design or in invoking a proper estimation methodology. An example data-set from a gold deposit in Ontario, Canada is used to illustrate the impact of controlling high-grade data in each phase of a study.
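As a minimal illustration of grade capping (the 'last resort' treatment), the following sketch top-cuts a skewed sample at a chosen percentile and reports the effect on the mean. The 95th percentile and the synthetic lognormal grades are assumptions for demonstration only:

```python
import numpy as np

def cap_grades(grades, percentile=95.0):
    """Top-cut grades at the chosen percentile; the drop in the mean
    indicates how much metal the cap removes from the estimate."""
    grades = np.asarray(grades, dtype=float)
    cap = np.percentile(grades, percentile)
    return np.minimum(grades, cap), cap

rng = np.random.default_rng(0)
grades = rng.lognormal(mean=0.0, sigma=1.2, size=500)  # skewed, gold-like
capped, cap = cap_grades(grades, 95.0)
print(f"cap: {cap:.2f}, mean {grades.mean():.2f} -> {capped.mean():.2f}")
```

Because capping always lowers the mean of a positively skewed sample, the abstract's warning applies directly: the choice of percentile should be justified, not used as a default fix.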
We note that the case study is by no means comprehensive; it is used to illustrate the place of each method and the manner in which it can mitigate the impact of high-grade data at various stages of resource estimation.

<![CDATA[<b>Robust and resistant semivariogram modelling using a generalized bootstrap</b>]]>
The bootstrap is a computer-intensive resampling method for estimating the uncertainty of complex statistical models. We expand on an application of the bootstrap for inferring semivariogram parameters and their uncertainty. The model fitted to the median of the bootstrap distribution of the experimental semivariogram is proposed as an estimator of the semivariogram. The proposed application is not restricted to normal data, and the estimator is resistant to outliers. Improvements are most significant for data-sets with fewer than 100 observations, which are those for which semivariogram model inference is most difficult. The application is illustrated by using it to characterize a synthetic random field for which the true semivariogram type and parameters are known.

<![CDATA[<b>Regression revisited (again)</b>]]>
One of the pioneering papers in reserve evaluation was published by Danie Krige in 1951. In that paper he introduced the concept of regression techniques for providing better estimates of stope grades and correcting for what later became known as the 'conditional bias'. In South Africa, the development of this approach led to the phenomenon being dubbed the 'regression effect', and regression techniques ultimately formed the basis of simple kriging in Krige's later papers. In the late 1950s and early 1960s, Georges Matheron (1965) formulated the general theory of 'regionalized variables' and included copious discussion of what he termed the 'volume-variance' effect. Matheron defined mathematically the reason for, and quantification of, the difference in variability between estimated values and the actual unknown values.
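The pointwise-median idea can be sketched for 1-D data as follows. This naive version resamples the observations directly and omits the decorrelation step that makes the generalized bootstrap valid for spatially correlated data, so it is illustrative of the median construction only:

```python
import numpy as np

def experimental_semivariogram(x, z, lags, tol):
    """Classical Matheron estimator: half the mean squared increment
    over pairs whose separation falls within tol of each lag."""
    d = np.abs(x[:, None] - x[None, :])
    sq = 0.5 * (z[:, None] - z[None, :]) ** 2
    gamma = np.empty(len(lags))
    for k, h in enumerate(lags):
        mask = (np.abs(d - h) <= tol) & (d > 0)
        gamma[k] = sq[mask].mean() if mask.any() else np.nan
    return gamma

def bootstrap_semivariogram(x, z, lags, tol, n_boot=200, rng=None):
    """Pointwise median of experimental semivariograms computed on
    bootstrap resamples of the (location, value) pairs."""
    rng = np.random.default_rng(rng)
    boots = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(z), size=len(z))
        boots.append(experimental_semivariogram(x[idx], z[idx], lags, tol))
    return np.nanmedian(np.array(boots), axis=0)

rng = np.random.default_rng(3)
x = np.sort(rng.uniform(0.0, 10.0, 60))
z = np.sin(x) + 0.1 * rng.standard_normal(60)      # synthetic 1-D field
lags = np.array([1.0, 2.0, 3.0])
gamma_med = bootstrap_semivariogram(x, z, lags, tol=0.4, n_boot=100, rng=3)
```

A model (spherical, exponential, etc.) would then be fitted to `gamma_med` rather than to a single experimental semivariogram, which is what gives the estimator its resistance to outliers.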
In 1983, this author published a paper that combined these two philosophies so that the 'regression effect' could be quantified before actual mining block values were available. In 1996, and in some earlier presentations, Krige revisited the regression effect in terms of the conditional bias and suggested two measures that might enable a practitioner of geostatistics to assess the 'efficiency' of the kriging estimator in any particular case. In this article, we revisit the trail from 'regression effect' to 'kriging efficiency' in conceptual terms and endeavour to explain exactly what is measured by these parameters and how to use (or abuse) them in practical cases.

<![CDATA[<b>The use of indirect distributions of selective mining units for assessment of recoverable mineral resources designed for mine planning at Gold Fields' Tarkwa Mine, Ghana</b>]]>
For new mining projects, or for medium- to long-term areas of existing mines, drilling data is invariably on a relatively large grid. Direct estimates for selective mining units (SMUs), and also for much larger block units, will then be smoothed owing to the information effect and the high error variance. The difficulty is that ultimately, during mining, selection will be done on the basis of SMUs using the final close-spaced data grid (grade control), which will then be available; i.e. the actual selection will be more efficient, with fewer misclassifications. However, this ultimate mining position is unknown at the project feasibility stage and therefore has to be estimated. This estimation is required because any cash flow calculations made on the basis of the smoothed estimates will obviously misrepresent the overall economic value of the project; i.e. the average grade of blocks above cut-off will be underestimated, and the tonnage overestimated, for cut-off grades below the average grade of the orebody.
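Krige's two measures are commonly quoted in the following forms (assumed here from the wider geostatistical literature rather than taken from these papers), where BV is the block variance, KV the kriging variance, and mu the Lagrange multiplier of ordinary kriging:

```python
def kriging_quality(block_variance, kriging_variance, lagrange_mu):
    """Two diagnostics for a block estimate, in the forms commonly
    attributed to Krige (1996) in the literature (an assumption here):
        kriging efficiency  KE    = (BV - KV) / BV
        slope of regression slope = (BV - KV + |mu|) / (BV - KV + 2|mu|)
    """
    bv, kv, mu = block_variance, kriging_variance, abs(lagrange_mu)
    ke = (bv - kv) / bv
    slope = (bv - kv + mu) / (bv - kv + 2.0 * mu)
    return ke, slope

# Well-informed block: low kriging variance, so KE and slope approach 1.
ke, slope = kriging_quality(1.0, 0.1, 0.02)
# Poorly informed block: KE collapses (it can even go negative) and the
# slope falls towards 0.5, the danger threshold flagged in this issue.
ke_bad, slope_bad = kriging_quality(1.0, 0.9, 0.3)
```

A slope well below 1 means the estimates are conditionally biased: high estimates overstate and low estimates understate the true block grades, which is precisely the 'regression effect' being revisited.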
Similarly, unsmoothed estimates will be conditionally biased and will give even worse results, particularly in local areas of short- and medium-term planning or mining. This paper presents a case study of indirect post-processing and proportional modelling of recoverable resources designed for medium- and long-term mine planning at Gold Fields' Tarkwa Mine in Ghana. The case study compares long-term indirect recoverable mineral resource estimates based on typical widely spaced feasibility data with the corresponding production grade control model as well as the mine production. The paper also proposes certain critical regression slope and kriging efficiency control limits to avoid inefficient medium- to long-term recoverable estimates, and highlights the danger of accepting block estimates that have a slope of regression of less than 0.5.

<![CDATA[<b>Cokriging of compositional balances including a dimension reduction and retrieval of original units</b>]]>
Compositional data constitute a special class of quantitative measurements involving parts of a whole. The sample space has an algebraic-geometric structure different from that of real-valued data. A subcomposition is a subset of all possible parts. When compositional data values include geographical locations, they are also regionalized variables. In the Earth sciences, geochemical analyses are a common form of regionalized compositional data. Ordinarily, there are measurements only at data locations. Geostatistics has proven to be the standard for spatial estimation of regionalized variables but, in general, the compositional character of geochemical data has been ignored. This paper presents in detail an application of cokriging for the modelling of compositional data using a method that is consistent with the compositional character of the data. The uncertainty is evaluated by a Monte Carlo procedure.
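A Monte Carlo estimate of an exceedance probability of the kind reported in this issue can be read directly off a stack of simulated values. The simulated arsenic concentrations and the 0.01 mg/l threshold below are purely illustrative stand-ins, not values from any study:

```python
import numpy as np

# Hypothetical stack of back-transformed simulations: n_real simulated
# arsenic concentrations (mg/l) at each of n_loc locations.
rng = np.random.default_rng(1)
sims = rng.lognormal(mean=np.log(0.008), sigma=0.9, size=(500, 100))

threshold = 0.01  # illustrative guideline value in mg/l
# Per-location probability of exceedance: the fraction of realizations
# above the threshold at each location.
p_exceed = (sims > threshold).mean(axis=0)
```

Mapping `p_exceed` over the study area gives the kind of uncertainty product the abstract describes, alongside the maps of estimates in original concentration units.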
The method is illustrated for the contents of arsenic and iron in groundwaters in Bangladesh, which have the peculiarity of being measured in milligrams per litre, units for which the sum of all parts does not add to a constant. Practical results include maps of estimates of the geochemical elements in the original concentration units, as well as measures of uncertainty, such as the probability that the concentration exceeds a given threshold. Results indicate that the probabilities of exceedance found in previous studies of the same data are too low.

<![CDATA[<b>Application of Direct Sampling multi-point statistics and sequential Gaussian simulation algorithms for modelling uncertainty in gold deposits</b>]]>
The applicability of a stochastic approach to the simulation of gold distribution to assess the uncertainty of the associated mineralization is examined, and a practical workflow for similar problems is proposed. Two different techniques are explored in this research: a Direct Sampling multi-point simulation algorithm is used to generate realizations of the lithologies hosting the gold mineralization, and sequential Gaussian simulation is applied to generate multiple realizations of gold distributions within the host lithologies. A range of parameters in the Direct Sampling algorithm is investigated to arrive at good reproduction of the patterns found in the training image. These findings are aimed at assisting in the choice of appropriate parameters when undertaking the simulation step. The resulting realizations are analysed in order to assess the combined uncertainty associated with the lithology and gold mineralization.
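The final combination step (overlaying grade realizations on lithology realizations and summarizing the spread) can be sketched as follows, with random stand-ins replacing the Direct Sampling and sequential Gaussian simulation outputs:

```python
import numpy as np

rng = np.random.default_rng(7)
n_real, n_blocks = 200, 1000

# Stand-ins for the two simulation stages: lithology realizations
# (True = mineralized host, False = waste) and in-host gold grades (g/t).
litho = rng.random((n_real, n_blocks)) < 0.3
gold = rng.lognormal(mean=0.5, sigma=0.8, size=(n_real, n_blocks))
grades = np.where(litho, gold, 0.0)  # combined realizations

# Summarize the combined lithology + grade uncertainty: spread of the
# proportion of blocks above cut-off across realizations.
cutoff = 1.0  # g/t, illustrative
above_cutoff = (grades > cutoff).mean(axis=1)
p10, p50, p90 = np.percentile(above_cutoff, [10, 50, 90])
```

The P10/P50/P90 spread of `above_cutoff` is one simple way to report the combined uncertainty the abstract refers to; in practice each stage would come from the named algorithms rather than random draws.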