Journal of the Southern African Institute of Mining and Metallurgy
On-line version ISSN 2411-9717
Current mine production planning, scheduling, and resource allocation are based on mathematical programming models. In practice, the optimized solution cannot be attained without examining all possible combinations and permutations of the extraction sequence. Operations research methods have limited application in large-scale surface mining operations because the number of variables becomes too large. The primary objective of this study is to develop and implement a hybrid simulation framework for the open pit scheduling problem. The paper investigates the dynamics of open pit geometry and the subsequent material movement as a continuous system described by time-dependent differential equations. The continuous open pit simulator (COPS), implemented in MATLAB and based on a modified elliptical frustum, models the evolution of open pit geometry in time and space. The discrete open pit simulator (DOPS) mimics the periodic expansion of the open pit layouts. Function approximation of the discrete simulated push-backs provides the means to convert the set of partial differential equations (PDEs) capturing the dynamics of open pit layouts into a system of ordinary differential equations (ODEs). Numerical integration with the Runge-Kutta scheme yields the trajectory of the pit geometry over time, together with the respective volumes of material and the net present value (NPV) of the mining operation. A case study of an iron ore mine with 114 000 blocks was carried out to verify and validate the model. The optimized pit limit was designed using the Lerchs-Grossmann algorithm. The best-case annual schedule, generated by the shells node in Whittle Four-X, yielded an NPV of $449 million over a 21-year mine life at a discount rate of 10% per annum. The best DOPS scenario out of 2 500 simulation iterations resulted in an NPV of $443 million, and COPS yielded an NPV of $440 million over the same time span.
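The pipeline described above — reduce the pit-geometry PDEs to an ODE system, integrate with a Runge-Kutta scheme, and discount the resulting annual cash flows at 10% per annum — can be sketched in miniature. The sketch below is illustrative only: the logistic growth law for extracted volume, the rate constant `r`, the ultimate pit volume `V_max`, and the cash margin per cubic metre are all assumed stand-ins for the ODE system and economic model actually derived in the paper (the authors' implementation is in MATLAB).

```python
def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h * k1 / 2)
    k3 = f(t + h / 2, y + h * k2 / 2)
    k4 = f(t + h, y + h * k3)
    return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

# Hypothetical pit-expansion dynamics: logistic growth of the extracted
# volume V(t) toward an ultimate pit volume V_max. This is a stand-in
# for the ODE system obtained from the frustum-based PDEs in the paper.
V_max = 1.0e9   # ultimate pit volume, m^3 (assumed)
r = 0.25        # expansion rate constant, 1/year (assumed)
dVdt = lambda t, V: r * V * (1 - V / V_max)

# Integrate over a 21-year mine life with yearly steps.
V = 1.0e7       # initial pit volume, m^3 (assumed)
volumes = [V]
for year in range(21):
    V = rk4_step(dVdt, year, V, 1.0)
    volumes.append(V)

# Annual extracted volumes -> cash flows -> NPV at 10% per annum.
margin = 2.0    # net cash flow per m^3 mined, $/m^3 (assumed)
cash = [margin * (volumes[i + 1] - volumes[i]) for i in range(21)]
npv = sum(cf / 1.10 ** (i + 1) for i, cf in enumerate(cash))
```

Swapping the toy logistic law for the fitted push-back functions would recover the COPS workflow: the integrator traces the pit trajectory, the yearly volume increments give the material movement, and the discounted sum gives the NPV of the schedule.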
The hybrid simulation model is the basis for future research using reinforcement learning based on goal-directed intelligent agents.