Scielo RSS <![CDATA[Journal of the Southern African Institute of Mining and Metallurgy]]> vol. 110 num. 6 <![CDATA[<b>Nuggets or Nano Gold</b>]]> <![CDATA[<b>A comparison between the duplicate series analysis method and the heterogeneity test as methods for calculating the sampling constants, <i>K</i> and alpha</b>]]> The compilation of the sampling nomogram is an essential tool in the optimization of a sampling protocol. It allows the operator to track the change in the relative variance of the sampling error (the fundamental sampling error) through the steps of comminution and mass reduction involved in recovering the aliquot for analysis. The data required for the compilation of the nomogram are centred on the sampling constant K and the exponent alpha (α) of the nominal fragment size. Five methods for establishing K have been identified, but only two of them, the duplicate sampling analysis method and the heterogeneity test method, purport to provide this constant. The step-by-step procedure for each of these two methods was examined using broken ores from three mines: Mponeng, Kloof, and Lily. Two of these (Mponeng and Kloof) are typical Witwatersrand-type gold mines, and the third is an Archaean-type shear-related gold deposit situated in the Barberton Greenstone Belt. The results suggest that the factor K derived using the heterogeneity test is very accurate, but applies only to a single size fraction in the spectrum of comminuted materials. The gold grain sizes calculated using this method appear to be too small and do not conform to the results of detailed mineralogical studies. The factors K and alpha derived using the duplicate series analysis are appropriate for use across a wide spectrum of comminution sizes and also yield realistic gold grain sizes, comparable with the equivalent circular diameters of gold grains identified in mineralogical studies of the ores.
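The nomogram construction described above rests on the relation between nominal fragment size, sample mass, and the variance of the fundamental sampling error. As a minimal sketch (not taken from the paper), the simplified Gy relation sigma² = K·d^α / M can be tracked through a hypothetical protocol of alternating mass-reduction and comminution steps; the values of K, α, and the protocol stages below are invented for illustration, and stage variances are simply summed as if the stages were independent.

```python
# Sketch of a sampling nomogram based on the simplified Gy relation
#   sigma2 = K * d**alpha / M
# where sigma2 is the relative variance of the fundamental sampling error,
# d the nominal fragment size (cm), and M the sample mass (g).
# K, alpha, and the protocol stages are hypothetical values for illustration.

def fse_variance(K, alpha, d_cm, mass_g):
    """Relative variance of the fundamental sampling error for one stage."""
    return K * d_cm**alpha / mass_g

K, alpha = 100.0, 1.5          # hypothetical sampling constants
protocol = [                   # (nominal size d in cm, sample mass in g)
    (1.0, 50_000),             # primary sample
    (1.0, 5_000),              # mass reduction (split) at constant d
    (0.1, 5_000),              # comminution at constant mass
    (0.1, 500),                # further split
]

total = 0.0
for d, m in protocol:
    v = fse_variance(K, alpha, d, m)
    total += v                 # stage variances summed as if independent
    print(f"d={d:>4} cm  M={m:>6} g  sigma2={v:.2e}")
print(f"cumulative sigma2 = {total:.2e}, relative std = {total**0.5:.2%}")
```

On log-log axes this traces the familiar nomogram: each split at constant size moves along a line of slope −1 (variance rises as mass falls), while each comminution step at constant mass drops the variance vertically, and a horizontal safety line at the acceptable variance flags offending stages.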
<![CDATA[<b>Sampling mineral commodities - the good, the bad, and the ugly</b>]]> A wide range of drill holes and process streams are sampled for resource estimation, grade control, and contractual purposes in the minerals industry. However, despite the availability of training courses, conferences, and both national and international standards on correct sampling practices, it is still surprising how little attention is often given to ensuring that representative samples are collected for analysis. The reason is that responsibility for sampling is often entrusted to personnel who do not appreciate its significance and importance, with cost being the main driving force rather than whether the sample is representative of the material from which it was extracted. This seriously undermines the precision and accuracy of the analyses subsequently generated, can render the analysis process a total waste of time and money, and can expose mining companies to serious potential financial losses. Company management needs to reverse this situation and ensure that sampling is given the attention it deserves to generate representative samples for analysis. <![CDATA[<b>Grab sampling for underground gold mine grade control</b>]]> Geologists in some underground gold mines collect grab samples from broken ore piles or trucks as a method of grade control, a practice often known as muck sampling. Generally, the goal of grab sampling is to reconcile the mined grade at the ore source with the predicted grade and/or to predict the mill feed grade. The mass of the sample collected is limited by health and safety considerations, as well as by the capacity of the laboratory to process the samples within a given time frame.
In general terms, grab sampling is known to be problematic because samplers tend to oversample the fines and/or pick out high-grade fragments; surface sampling of piles does not test material within the pile; muck piles in development drives/faces are likely to be zoned owing to the blasting sequence; high- or low-grade material may preferentially segregate in the pile during mucking; the five per cent mass reject (oversize) fraction of material in muck piles from underground blasting is very large; some correlation usually exists whereby the larger fragments are enriched or depleted in the critical component of value; and the average error made in estimating the true stockpile grade is likely to be high. The method is prone to chronic fundamental sampling, grouping and segregation, delimitation, and extraction errors. Substantial warnings must be given about the use of grab sampling for grade control in gold mines. The method may appear to work in some cases, which can be attributed to a fine gold particle size and a more disseminated gold distribution. As with all sampling methods, its appropriateness must be determined by ore characterization and heterogeneity testing to ensure that the method suits the ore type. <![CDATA[<b>Bias testing of cross-belt samplers</b>]]> The cross-belt sampler is a device that is very widely used even though it has not been ratified by the standards associations. This paper is concerned with the bias testing of two cross-belt sampler designs against stopped-belt samples and a cross-stream cutter at the belt end. The bias analysis is carried out using the size distribution as the analyte, and conventional t-testing and the Hotelling T² test are compared for their effectiveness. The paper also considers the issue of detectable bias and the statistical planning of bias tests.
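A paired bias test of the kind described in the abstract above can be sketched in a few lines. The Hotelling T² statistic tests all size fractions jointly rather than running one t-test per fraction. Everything below (sample counts, size classes, simulated data) is invented for illustration; note that with compositional data summing to one, one class should be dropped so the covariance matrix stays nonsingular.

```python
import numpy as np
from scipy import stats

def hotelling_t2_paired(x, y):
    """One-sample Hotelling T^2 test on paired differences x - y.

    x, y: (n, p) arrays of n paired observations of p variables
    (e.g. mass fractions in p size classes for two sampling methods).
    Returns (T2, F statistic, p-value) for H0: mean difference = 0.
    """
    d = np.asarray(x, float) - np.asarray(y, float)
    n, p = d.shape
    dbar = d.mean(axis=0)
    S = np.cov(d, rowvar=False)            # covariance of the differences
    t2 = n * dbar @ np.linalg.solve(S, dbar)
    f = (n - p) / (p * (n - 1)) * t2       # F(p, n - p) under H0
    pval = stats.f.sf(f, p, n - p)
    return t2, f, pval

# Invented data: 20 paired collections over 3 size classes.
rng = np.random.default_rng(0)
n, p = 20, 3
belt = rng.normal([0.5, 0.3, 0.2], 0.02, size=(n, p))   # stopped-belt reference
cutter = belt + rng.normal(0.0, 0.01, size=(n, p))      # simulated unbiased cutter
t2, f, pval = hotelling_t2_paired(cutter, belt)
print(f"T2={t2:.2f}  F={f:.2f}  p={pval:.3f}")
```

The joint test controls the overall false-alarm rate across the size fractions, which separate t-tests per fraction do not.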
<![CDATA[<b>Nugget effect, artificial or natural?</b>]]> Knowing the origin of the relevant and irrelevant components of the nugget effect is essential to optimizing the process under study. This paper describes the causes of the nugget effect and its dependence on the sample support, the sampling density, and the QA/QC procedures. Several examples of natural and 'human' nugget effects are given. A method to estimate the magnitude of the relevant and irrelevant components is suggested. Finally, the economic consequences of misunderstanding the nugget effect are analysed. <![CDATA[<b>Statistics or geostatistics? Sampling error or nugget effect?</b>]]> What is a nugget effect? In the early development of geostatistics, the term 'nugget effect' was coined for the apparent discontinuity at the beginning of many semivariogram graphs. The name was chosen to reflect the large differences found between neighbouring samples in 'nuggety' mineralizations such as Wits gold reefs. Geostatistical theory assumes that the difference between a sampled value and a potential repeat sample at the same location is actually zero. Included in this 'nugget effect' would be true variation between contiguous samples due to the nature of the mineralization, micro-fracturing, nugget or crystal size, and so on. Also included would be any 'random' sampling variation arising from the method by which the sample was taken, the adequacy of the sample size, the assaying process, and the like. Arguments were put forward that 'sampling errors' actually exist at zero distance. Some geostatistical schools maintain that the 'nugget effect' is all sampling error, which would imply that 'perfect' sampling would eliminate the nugget effect entirely. There is now a dichotomy both in the geostatistical world and in the software packages provided for geostatistical analyses.
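The discontinuity at the origin can be made concrete with a small sketch, not drawn from any of the cited texts: an experimental semivariogram of an invented 'nuggety' series, and a spherical model with a nugget c0 that, under the conventional definition, returns zero at h = 0 but c0 plus the structured term for any h > 0.

```python
import numpy as np

def experimental_semivariogram(z, max_lag):
    """Experimental semivariogram of a regularly spaced 1-D series z:
    gamma(h) = mean of 0.5 * (z[i+h] - z[i])**2 over all pairs at lag h."""
    z = np.asarray(z, float)
    lags = np.arange(1, max_lag + 1)
    gamma = np.array([0.5 * np.mean((z[h:] - z[:-h]) ** 2) for h in lags])
    return lags, gamma

def spherical_model(h, nugget, sill, range_a):
    """Spherical semivariogram model with nugget.
    Conventional definition: 0 at h == 0, nugget + structured term for h > 0."""
    h = np.asarray(h, float)
    s = np.where(h < range_a,
                 nugget + (sill - nugget) * (1.5 * h / range_a
                                             - 0.5 * (h / range_a) ** 3),
                 sill)
    return np.where(h == 0.0, 0.0, s)

# Invented 'nuggety' grade series: smooth trend plus uncorrelated noise.
# The noise term is what produces the apparent nugget at the origin.
rng = np.random.default_rng(1)
x = np.arange(200)
z = np.sin(x / 15.0) + rng.normal(0.0, 0.5, x.size)
lags, gamma = experimental_semivariogram(z, 20)
print(gamma[:5])   # gamma(1) is already near the noise variance (~0.25)
```

The dichotomy in the text is visible in the model function: estimation software that evaluates the model as c0 at h = 0 treats the nugget as real micro-scale variation, while software that returns 0 there treats a repeat sample at the same location as exact.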
It may seem academic to argue over whether the semivariogram model should take a value of zero, a value equal to the nugget effect, or a partial value at distance zero. However, the decision can have a profound effect both on the estimated resource and on our confidence in that resource. Whereas most geostatistical texts define the semivariogram model as taking the value of zero at zero distance, others imply that the full nugget effect should be used at zero distance. For example: • The nugget effect refers to the nonzero intercept of the variogram and is an overall estimate of error caused by measurement inaccuracy and environmental variability occurring at fine enough scales to be unresolved by the sampling interval³ • Christensen⁴ has shown that the 'nugget effect', or nonzero variance at the origin of the semivariogram, can be reproduced by a measurement error model • The nugget effect is considered random noise and may represent short-scale variability, measurement error, sample rate, etc.⁵ In many training texts and Web courses, the definition of the semivariogram is ambiguous, as the formulae for semivariogram models are not actually specified at zero distance⁶,⁷,⁸. <![CDATA[<b>Theoretical, practical, and economic difficulties in sampling for trace constituents</b>]]> Many industries base their decisions on the assaying of tiny analytical sub-samples. The problem is that, most of the time, several sampling and sub-sampling stages are required before the laboratory produces its ultimate assays using advanced chemical and physical methods of analysis. As long as each sampling and sub-sampling stage is the object of due diligence guided by the theory of sampling, the integrity of the sought-after information is likely to be preserved and the resulting database remains capable of fulfilling its informative mission.
Unfortunately, more often than not, unawareness of the basic properties of heterogeneous materials, combined with unawareness of the stringent requirements set out in the theory of sampling, leads to massive discrepancies between the expensive outcome of a long chain of sampling and analytical custody and reality. No areas are more vulnerable to such misfortune than sampling and assaying for trace amounts of constituents of interest in the environment, in high-purity materials, in precious metals exploration, and in the food, chemical, and pharmaceutical industries. Without the preventive guidance of the theory of sampling, serious difficulties may arise when Gaussian approximations or even lognormal manipulations are made in the subsequent interpretations. A complementary understanding of Poisson processes, injected into the theory of sampling, may greatly help the practitioner understand structural sampling problems and prevent unfortunate mistakes from being repeated over and over until a crisis is reached. This paper presents an overview of the theoretical, practical, and economic difficulties, often vastly underestimated, in quantifying trace amounts of valuable or unwelcome components. <![CDATA[<b>Principles of an image-based algorithm for the quantification of dependencies between particle selections in sampling studies</b>]]> A generalization of Gy's model for the fundamental sampling error introduced a new 'parameter for the dependent selection of particles', denoted Cij. This allows deviations to be modelled from the ideal situation in which the selection of a pair of particles is composed of two independent selections. The generalized model potentially leads to more accurate variance estimates in the case of particle clustering, differences in particle density or size, or repulsive inter-particle forces.
A straightforward and practically applicable method is needed for determining this parameter for miscellaneous mixtures in industrial settings. In this contribution, the feasibility of using digital image analysis to determine the parameter Cij is demonstrated. Line-transect sampling of a digital image was used to construct a transition probability matrix. A new algorithm to derive quantitative estimates of Cij is presented and discussed. The applicability was verified on a photograph of zirconium silicate particles of sizes typical for the pharmaceutical, food/feed, and environmental industries. Conditions affecting the practical applicability are identified, and potential pitfalls are discussed, including how a potentially unrepresentative surface can affect the quality of the estimate of Cij. <![CDATA[<b>Summary of results of ACARP project on cross-belt cutters</b>]]> A project was funded by the Australian coal industry to investigate the mechanisms that might lead to sample bias when using cross-belt cutters, in order to help coal industry personnel make better decisions about their purchase, maintenance, and operation. The project concentrated on DEM modelling of skew cutters, which are set at an angle to the belt with the intention of minimizing disturbance to the non-sampled material. Two mechanisms are likely to cause bias in cross-belt cutters: waves of material are bulldozed off the belt by the upstream side of the body of square cutters, and material is thrown by the leading edges of the cutter blades of all types of cross-belt cutters. These mechanisms cause some parts of the load of material on a belt to be over-represented. Their effects cannot be made negligible, so cross-belt samplers cannot be trusted to produce unbiased samples, especially for segregated streams of material. However, it is possible to place a bound on the maximum likely bias.
The grades of two portions of the stream can be estimated by stopping the belt and shovelling one third of the cross-section of the load, concentrating on the final side of the belt and the top of the load, into one container; the remaining material is put into another container and the difference in grade between the two is determined. The maximum likely bias is typically about 10% of this difference. For a cross-belt cutter, an extraction ratio near 100% is not a reliable indication that the cutter has little or no bias. Some bias mechanisms affecting cross-belt sample cutters make the sample mass too high and some make it too low, so an extraction ratio near 100% can occur even when two opposing bias mechanisms are both active.
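The stopped-belt rule of thumb in the paragraph above reduces to a few lines of arithmetic; the grades below are invented for illustration.

```python
# Sketch of the segregation-based bound on cross-belt cutter bias described
# above: assay the most segregated third of the stopped-belt load and the
# remainder separately, then take ~10% of the grade difference as the
# maximum likely bias. All numbers are hypothetical.

def max_likely_bias(grade_top_third, grade_remainder, factor=0.10):
    """Bound on cutter bias as a fraction of the segregation grade difference."""
    return factor * abs(grade_top_third - grade_remainder)

top = 6.2    # g/t, final-side/top third of the load (hypothetical)
rest = 5.4   # g/t, remaining two thirds (hypothetical)
print(f"maximum likely bias ≈ {max_likely_bias(top, rest):.2f} g/t")  # ≈ 0.08 g/t
```

A small grade difference between the two portions thus indicates a stream too well mixed for the cutter's bias mechanisms to matter much, while a large difference flags a segregated stream for which the cutter's result deserves little trust.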