SciELO - Scientific Electronic Library Online

 
    South African Computer Journal

    Online version ISSN 2313-7835 · Print version ISSN 1015-7999

    Abstract

    DAVEL, Marelie H. Using Summary Layers to Probe Neural Network Behaviour. SACJ [online]. 2020, vol.32, n.2, pp.102-123. ISSN 2313-7835. https://doi.org/10.18489/sacj.v32i2.861.

    No framework exists that can explain and predict the generalisation ability of deep neural networks in general circumstances. In fact, this question has not been answered for some of the least complicated of neural network architectures: fully-connected feedforward networks with rectified linear activations and a limited number of hidden layers. For such an architecture, we show how adding a summary layer to the network makes it more amenable to analysis, and allows us to define the conditions that are required to guarantee that a set of samples will all be classified correctly. This process does not describe the generalisation behaviour of these networks, but produces a number of metrics that are useful for probing their learning and generalisation behaviour. We support the analytical conclusions with empirical results, both to confirm that the mathematical guarantees hold in practice, and to demonstrate the use of the analysis process.

    CATEGORIES: Computing methodologies ~ Neural networks · Theory of computation ~ Machine learning theory
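    The paper's precise summary-layer construction is defined in the full text; purely as a rough illustration of the idea described in the abstract, the sketch below builds a fully-connected ReLU network and inserts a hypothetical extra "summary" layer before the output, producing one inspectable scalar per class. All layer shapes, weights, and the choice of an identity output map are assumptions for illustration, not the author's method.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def relu(x):
        return np.maximum(x, 0.0)

    # Hypothetical fully-connected feedforward network with one hidden
    # ReLU layer; shapes are illustrative only.
    W1 = rng.normal(size=(4, 8))   # input (4 features) -> hidden (8 nodes)
    Ws = rng.normal(size=(8, 3))   # hidden -> summary layer (one unit per class)
    Wo = np.eye(3)                 # summary -> logits (identity keeps the summary inspectable)

    def forward(x):
        h = relu(x @ W1)           # hidden ReLU activations
        s = h @ Ws                 # summary layer: one scalar per class
        return s @ Wo, s           # logits, plus the probe-able summary values

    x = rng.normal(size=(1, 4))
    logits, summary = forward(x)
    # The summary values can be probed per sample, e.g. to check whether
    # the correct class's summary dominates all others.
    ```

    The point of the extra layer in this sketch is that each class's score is reduced to a single scalar that can be monitored directly, which is the kind of quantity the abstract describes using to state conditions under which samples are guaranteed to be classified correctly.
    
    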

    Keywords: deep learning; machine learning; learning theory; generalisation.

            · text in English     · English (pdf)