SciELO - Scientific Electronic Library Online

 

South African Computer Journal

On-line version ISSN 2313-7835
Print version ISSN 1015-7999

Abstract

THEUNISSEN, Marthinus Wilhelmus; DAVEL, Marelie H. and BARNARD, Etienne. Benign interpolation of noise in deep learning. SACJ [online]. 2020, vol.32, n.2, pp.80-101. ISSN 2313-7835. http://dx.doi.org/10.18489/sacj.v32i2.833.

The understanding of generalisation in machine learning is in a state of flux, in part due to the ability of deep learning models to interpolate noisy training data and still perform appropriately on out-of-sample data, thereby contradicting long-held intuitions about the bias-variance tradeoff in learning. We expand upon relevant existing work by discussing local attributes of neural network training within the context of a relatively simple framework. We describe how various types of noise can be compensated for within the proposed framework in order to allow the deep learning model to generalise in spite of interpolating spurious function descriptors. Empirically, we support our postulates with experiments involving overparameterised multilayer perceptrons and controlled training data noise. The main insights are that deep learning models are optimised for training data modularly, with different regions in the function space dedicated to fitting distinct types of sample information. Additionally, we show that models tend to fit uncorrupted samples first. Based on this finding, we propose a conjecture to explain an observed instance of the epoch-wise double-descent phenomenon. Our findings suggest that the notion of model capacity needs to be modified to consider the distributed way training data is fitted across sub-units.

CATEGORIES: Computing methodologies ~ Machine learning; Computing methodologies ~ Neural networks; Theory of computation ~ Sample complexity and generalisation bounds
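
As a purely illustrative sketch (not the authors' code, data or experimental settings), the following PyTorch snippet sets up the kind of experiment the abstract describes: an overparameterised multilayer perceptron is trained on a synthetic binary task in which a fixed fraction of labels has been flipped, and training accuracy is reported separately for uncorrupted and corrupted samples. The dataset, network widths, noise rate and optimiser settings are arbitrary assumptions; the point is only to show clean samples being fitted before the flipped labels are interpolated.

# Illustrative sketch only: overparameterised MLP with controlled label noise.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic two-class problem: 1000 samples, 20 features (arbitrary choices).
n, d, noise_rate = 1000, 20, 0.2
X = torch.randn(n, d)
true_w = torch.randn(d)
y_clean = (X @ true_w > 0).long()

# Corrupt a random 20% of the labels ("controlled training data noise").
noisy_idx = torch.rand(n) < noise_rate
y = y_clean.clone()
y[noisy_idx] = 1 - y[noisy_idx]

# Overparameterised MLP: far more parameters than training samples.
model = nn.Sequential(
    nn.Linear(d, 512), nn.ReLU(),
    nn.Linear(512, 512), nn.ReLU(),
    nn.Linear(512, 2),
)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(200):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

    if epoch % 20 == 0:
        with torch.no_grad():
            pred = model(X).argmax(dim=1)
            # Fraction of uncorrupted samples fitted vs. fraction of corrupted
            # (flipped) labels interpolated; the former typically rises first.
            acc_clean = (pred[~noisy_idx] == y[~noisy_idx]).float().mean().item()
            acc_noisy = (pred[noisy_idx] == y[noisy_idx]).float().mean().item()
        print(f"epoch {epoch:3d}  loss {loss.item():.3f}  "
              f"fit(clean) {acc_clean:.2f}  fit(noisy) {acc_noisy:.2f}")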

Keywords: deep learning; machine learning; learning theory; generalisation.

        · text in English     · English (pdf)

 

Creative Commons License: All content of this journal, except where otherwise noted, is licensed under a Creative Commons License.