
SAIEE Africa Research Journal

On-line version ISSN 1991-1696
Print version ISSN 0038-2221

SAIEE ARJ, vol. 112, no. 2, Observatory, Johannesburg, June 2021


Ear-based biometric authentication through the detection of prominent contours

 

 

Aviwe Kohlakala I; Johannes Coetzer II

I Department of Mathematical Sciences, Stellenbosch University, Stellenbosch, South Africa (email: avi.kohlakala@gmail.com)
II Department of Mathematical Sciences, Stellenbosch University, Stellenbosch, South Africa (email: jcoetzer@sun.ac.za)


ABSTRACT

In this paper, novel semi-automated and fully automated ear-based biometric authentication systems are proposed. The region of interest (ROI) is manually specified and automatically detected within the context of the semi-automated and fully automated systems, respectively. The automatic detection of the ROI is facilitated by a convolutional neural network (CNN) and morphological postprocessing. The CNN classifies sub-images of the ear in question as either foreground (part of the ear shell) or background (homogeneous skin, hair or jewellery). Prominent contours associated with the folds of the ear shell are detected within the ROI. The discrete Radon transform (DRT) is subsequently applied to the resulting binary contour image for the purpose of feature extraction. Feature matching is achieved by implementing a Euclidean distance measure. A ranking verifier is constructed for the purpose of authentication. In this study experiments are conducted on two independent ear databases, that is (1) the Mathematical Analysis of Images (AMI) ear database and (2) the Indian Institute of Technology (IIT) Delhi ear database. The results are encouraging. Within the context of the proposed semi-automated system, accuracies of 99.20% and 96.06% are reported for the AMI and IIT Delhi ear databases respectively.

Index Terms: ear shell, biometric authentication, convolutional neural network


I. INTRODUCTION

A biometric system performs personal authentication based on a specific physiological or behavioural characteristic of the individual. Biometric systems are increasingly utilised for security purposes, as they are more reliable and secure than most traditional modes of personal authentication, such as access cards, personal identification numbers and passwords. The human ear is one of the most distinctive human biometric traits that can be employed to establish or verify an individual's identity. Furthermore, the human ear constitutes a relatively stable structure that evolves very little with ageing and may be acquired in a non-intrusive manner.

The concept of ear-based biometric authentication emerged relatively recently as an active field of research. Traditional systems rely on the extraction of hand-crafted features, while modern systems are able to learn so-called deep features by employing neural networks. A traditional ear-based biometric authentication system typically involves (1) segmentation of the ear or the detection of the region of interest (ROI), followed by (2) feature extraction and (3) feature matching (recognition or verification).

Different types of ear segmentation techniques, that is, techniques for localising the ear shell within an ear image, have been investigated. A semi-automated ear detection technique based on an improved Adaboost algorithm and an active shape model (ASM) was proposed by Yuan and Mu [1], while an automated ear detection technique based on the combined use of the circular Hough transform and anthropometric ear proportions was presented by Velez, Sanchez, Moreno and Sural [2].

A variety of algorithms have been proposed for extracting discriminative features from ear images. The extracted handcrafted features are generally categorised into the following three types: geometrical features, local appearance-based features, and global features.

A number of geometrical feature extraction techniques that characterise the shape of the ear have been presented. Othman, Alizadeh and Sutherland [3] proposed a novel ear description technique based on a shape context descriptor. Annapurani, Sadiq and Malathy [4] proposed a technique that fuses the shape of the ear shell and tragus in such a way that a feature template is obtained. Omara, Zhang and Zuo [5] proposed a technique that uses the lines of minimum and maximum height associated with the ear's contour image to describe the outer helix.

Numerous techniques have been suggested for extracting local appearance-based features from ear images. The scale invariant feature transform (SIFT) algorithm was employed by Anwar, Ghany and ElMahdy [6], and a robust algorithm for the extraction of local similarity-invariant features was proposed by Galdamez, Arrieta and Ramon [7]. A feature extraction technique based on the fusion of texture-based features (through local binary patterns) and geometric features (through the Laplacian filter) was proposed by Jiddah and Yurtkan [8].

Within the context of global feature extraction, techniques such as principal component analysis as proposed by Querencias-Uceta, Rios-Sanchez and Sanchez-Avila [9], and a technique based on a combination of the wavelet and discrete cosine transforms as proposed by Ying, Debin and Baihuan [10] have been investigated.

A number of feature matching protocols for quantifying the difference between two ears have been proposed. These include the utilisation of the Euclidean distance ([4], [7]) and the Hamming distance [4], as well as a minimum distance classifier [6], a k-nearest neighbour (KNN) classifier [8] and a nearest neighbour classifier based on a weighted distance [10].

As previously mentioned, the extraction of deep features from ear images has been investigated more recently. A number of deep learning techniques have been proposed. Dodge, Mounsef and Karam [11] used deep neural networks for the explicit purpose of feature extraction. Kacar and Kirci [12] introduced a novel architecture called ScoreNet for unconstrained ear recognition; the architecture fuses a modality pool with a learning approach based on deep cascaded score-level fusion. Hansley, Segundo and Sarkar [13] proposed an unconstrained ear recognition framework based on a convolutional neural network (CNN) model for ear normalisation and description, which is subsequently fused with hand-crafted features.

In this paper novel semi-automated and fully automated ear-based biometric authentication systems are developed. Within the context of the fully automated system, a CNN is designed for the purpose of facilitating the automatic detection of a suitable ROI which contains the entire ear shell. Within the context of the semi-automated system, the ROI is manually selected. Robust prominent contours that correspond to the folds of the ear shell are subsequently isolated within the ROI. These contours serve as input for a hand-crafted feature extraction protocol that is based on the calculation of the discrete Radon transform (DRT). A template matching protocol is employed that quantifies the difference between corresponding feature vectors through the calculation of the Euclidean distance. A rank-based verifier is finally constructed for the purpose of establishing the authenticity of a questioned ear image. The aforementioned steps are discussed in more detail in Section II.

The systems proposed in this paper are evaluated on the Mathematical Analysis of Images (AMI) and Indian Institute of Technology (IIT) Delhi databases. A detailed description of these databases is provided in Section III. In Table VII the proficiency of existing systems that were also evaluated on the above-mentioned databases is compared to that of the semi-automated system proposed in this paper.

The paper is structured as follows. Section II details the design of the proposed systems. Section III introduces the data, outlines the experimental protocol, and analyses the results. Possible avenues for future research are laid out in Section IV.

 

II. SYSTEM DESIGN

An overview of the enrollment and authentication stages of the proposed semi-automated and fully automated ear-based biometric authentication systems is presented in Figure 1.

A. Image segmentation

In the case of the semi-automated system a suitable ROI that contains the ear shell is manually selected, while for the fully automated system a suitable ROI is automatically detected. A CNN-based model (see Figure 2) is proposed to facilitate automatic ROI detection. The proposed CNN consists of four convolutional layers, where each of these layers is followed by a batch normalisation (BN), rectified linear unit (ReLU) and/or max pooling layer. The final pooling layer is followed by two fully connected (FC) layers. A detailed description of the mathematical underpinning of the CNN can be found in [14], [15].
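The paper specifies only the layer pattern of this network, not its filter counts or kernel sizes. The following PyTorch sketch therefore illustrates the stated architecture under assumed hyperparameters (channel widths, 3×3 kernels, 2×2 pooling); it is not the authors' exact model.

```python
import torch.nn as nn

class EarPatchCNN(nn.Module):
    """Binary foreground/background classifier for 82x82 ear sub-images:
    four conv layers (each with BN, ReLU and max pooling) and two FC layers."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.BatchNorm2d(16), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(), nn.MaxPool2d(2),
        )
        # An 82x82 patch is halved four times: 82 -> 41 -> 20 -> 10 -> 5.
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 5 * 5, 64), nn.ReLU(),
            nn.Linear(64, 2),  # foreground vs. background
        )

    def forward(self, x):
        return self.classifier(self.features(x))
```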


For each database, ear images from different individuals are used for training, validation and evaluation purposes. The training set (seen data) is used to learn the parameters (weights) for the CNN in question, the validation set is used for avoiding overfitting by enforcing a stopping condition, while the evaluation set is used to measure the performance of the CNN on unseen data.

Each ear image (see Figure 3 (a)) is subdivided into overlapping regions by sliding an 82×82 square window across the image in question (see Figure 3 (b)). Each sub-image in the training and validation set is manually annotated as either positive (foreground) or negative (background). The positive training sub-images (see Figure 4) are representative of the foreground and typically form part of the ear shell, while the negative sub-images (see Figure 5) represent the background and typically contain homogeneous skin, hair or jewellery. The objective of the CNN is to classify each patch within a test image as either foreground or background (see Figure 6).
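A minimal sketch of the sliding-window subdivision is shown below. The paper states that the regions overlap but does not report the window stride, so the stride of 41 pixels (half the window size) is an assumption.

```python
import numpy as np

def extract_patches(image: np.ndarray, win: int = 82, stride: int = 41):
    """Slide a win x win window across a grayscale ear image and return
    the overlapping sub-images together with their top-left offsets."""
    patches, offsets = [], []
    rows, cols = image.shape
    for r in range(0, rows - win + 1, stride):
        for c in range(0, cols - win + 1, stride):
            patches.append(image[r:r + win, c:c + win])
            offsets.append((r, c))
    return np.stack(patches), offsets
```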


The CNN is trained by employing stochastic gradient descent with momentum (SGDM) [16]. The weights of the first convolutional layer are initialised using normally distributed random numbers. The proposed CNN is trained from scratch; no fine-tuning of an existing pre-trained network (transfer learning) is conducted. After each epoch, the accuracy of the network is gauged on a validation set in order to avoid overfitting [17]. Morphological closing is subsequently applied to the resulting binary image (see Figure 6(b) and (d)) in order to reduce noise and render the detected foreground boundaries more regular (see Figure 7).
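The closing step may be sketched as follows with scikit-image; the structuring-element shape and radius are assumptions, since the paper does not report them.

```python
from skimage.morphology import closing, disk

def regularise_mask(cnn_mask, radius=5):
    """Apply morphological closing to the binary foreground mask produced
    by the patch classifier, filling small gaps and smoothing the
    detected ROI boundary."""
    return closing(cnn_mask.astype(bool), disk(radius))
```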


B. Contour detection

After a Gaussian filter is applied to an input ear image (see Figure 8 (a)), a smoothed version (see Figure 8 (b)) is obtained. Canny edge detection is subsequently performed on the preprocessed image in Figure 8 (b), which results in a binary edge image (see Figure 9 (a)). Morphological dilation is applied in order to connect disconnected contours and remove noise (see Figure 9 (b)). Finally, the manually selected or automatically detected ROI is employed as a mask for the purpose of removing all of the edges not associated with ear contours, followed by the removal of the remaining small connected components (see Figure 10).
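The pipeline may be sketched as follows with scikit-image; the smoothing scale, dilation radius and minimum component size are assumed values.

```python
from skimage.feature import canny
from skimage.filters import gaussian
from skimage.morphology import dilation, disk, remove_small_objects

def prominent_contours(gray, roi_mask, sigma=2.0, min_size=100):
    """Gaussian smoothing -> Canny edge detection -> dilation to bridge
    broken contours -> ROI masking -> removal of small components."""
    smoothed = gaussian(gray, sigma=sigma)
    edges = canny(smoothed)              # binary edge image
    edges = dilation(edges, disk(1))     # connect disconnected contours
    edges &= roi_mask.astype(bool)       # discard edges outside the ROI
    return remove_small_objects(edges, min_size=min_size)
```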

In the case of the IIT Delhi ear database, the same protocol as the one followed for the AMI ear database is employed, except that the image borders are also cleared during ROI masking. In Figure 11 the detected prominent contours are presented for the IIT Delhi ear database.


C. Feature extraction and normalisation

Feature vectors are extracted from the prominent contours by applying the DRT to the binary edge image. The DRT is obtained when projections of an image are calculated from equally distributed angles within the interval $\theta \in [0°, 180°)$ [18]. The DRT of the binary image $I(m, n)$ of size $M \times N$ pixels containing the prominent contours associated with the ear shell can be expressed as follows

$$R_j = \sum_{i=1}^{MN} w_{ij} I_i, \qquad j = 1, 2, \ldots, \beta\Theta,$$

where $R_j$ denotes the $j$th beam-sum, which constitutes the cumulative intensity of the pixels that overlap with the $j$th beam, $I_i$ denotes the intensity of the $i$th pixel, $\beta$ denotes the number of non-overlapping beams per angle, $\Theta$ represents the total number of angles and $w_{ij}$ denotes the weight indicative of the contribution of the $i$th pixel towards the $j$th beam-sum. A detailed description of the theory and implementation of the DRT can be found in [18].
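Under the assumption that a standard Radon transform implementation is an acceptable stand-in for the DRT implementation of [18], the feature extraction step may be sketched as follows.

```python
import numpy as np
from skimage.transform import radon

def drt(contours, num_angles=180):
    """Project the binary contour image at equally spaced angles in
    [0, 180); each column of the returned sinogram is one projection
    profile (a vector of beam-sums for one angle)."""
    theta = np.linspace(0.0, 180.0, num_angles, endpoint=False)
    return radon(contours.astype(float), theta=theta, circle=False)
```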

Examples of contour images and their corresponding DRTs within the context of the AMI and IIT Delhi ear databases are depicted in Figures 12 and 13 respectively. In order to ensure translation and scale invariance, the zero-valued components are removed from each projection profile, after which the dimension of each projection profile is adjusted to a predefined value of 160 through linear interpolation. The DRT image intensities are also normalised in such a way that the standard deviation across all features equals one (see Figure 14).
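A sketch of the normalisation protocol follows. It assumes that the zero-valued components are trimmed from the ends of each profile before resampling; whether interior zeros are also discarded is not spelled out in the text.

```python
import numpy as np

def normalise_profile(profile, target_len=160):
    """Trim zero-valued components and linearly resample the projection
    profile to a fixed length (translation and scale invariance)."""
    nz = np.flatnonzero(profile)              # assumes a non-empty profile
    core = profile[nz[0]:nz[-1] + 1]
    old = np.linspace(0.0, 1.0, core.size)
    new = np.linspace(0.0, 1.0, target_len)
    return np.interp(new, old, core)

def normalise_features(sinogram, target_len=160):
    """Normalise every profile, then scale the template so that the
    standard deviation across all features equals one."""
    feats = np.stack([normalise_profile(sinogram[:, j], target_len)
                      for j in range(sinogram.shape[1])])
    return feats / feats.std()
```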

D. Feature matching and verification

The dissimilarity between two feature sets is quantified by the average Euclidean distance between the corresponding normalised feature vectors. The Euclidean distance between a normalised questioned and training feature vector, denoted by $\mathbf{x}$ and $\mathbf{y}$ respectively, is calculated as follows

$$d(\mathbf{x}, \mathbf{y}) = \sqrt{\sum_{i=1}^{n} (x_i - y_i)^2},$$

where $n$ denotes the dimension of the feature vectors.
In order to ensure rotational invariance, the normalised feature vectors associated with a questioned sample are iteratively shifted (with wrap-around) with respect to those belonging to a template. The alignment is deemed optimal when the average Euclidean distance is a minimum.
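A sketch of this matching protocol follows, where each feature set is an array with one normalised projection profile per row.

```python
import numpy as np

def dissimilarity(questioned, template):
    """Average Euclidean distance between corresponding feature vectors,
    minimised over circular (wrap-around) shifts of the questioned set
    so that the optimal rotational alignment is found."""
    best = np.inf
    for shift in range(questioned.shape[0]):
        shifted = np.roll(questioned, shift, axis=0)
        best = min(best, np.linalg.norm(shifted - template, axis=1).mean())
    return best
```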

A questioned sample is compared to a reference sample (known to belong to the claimed individual), as well as to samples belonging to other (ranking) individuals. The dissimilarity between a questioned ear and the reference ear, as well as the respective dissimilarities between the questioned ear and those belonging to the ranking individuals are placed in a list, with the smallest dissimilarity at the top of the list and the largest dissimilarity at the bottom of the list. Verification is subsequently based on the relative position (ranking) of the dissimilarity associated with the reference ear in the list.
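Reusing the dissimilarity function above, the rank-based verifier may be sketched as follows; with max_rank=1 it realises the rank-1 scenario, while a larger value realises the optimal ranking scenario described in Section III.

```python
def verify(questioned, reference, ranking_templates, max_rank=1):
    """Accept the claim when the dissimilarity to the reference sample
    ranks within the best max_rank of all candidate dissimilarities."""
    d_ref = dissimilarity(questioned, reference)
    d_others = [dissimilarity(questioned, t) for t in ranking_templates]
    rank = 1 + sum(d < d_ref for d in d_others)
    return rank <= max_rank
```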

 

III. EXPERIMENTS

A. Data

Experiments are conducted on (1) the AMI ear database and (2) the IIT Delhi ear database. The AMI ear database consists of 700 images from 100 individuals. For each individual, six images of the right ear and one image of the left ear are available, and each ear image was captured at a resolution of 702×492 pixels. The IIT Delhi ear database consists of 375 images that belong to 125 individuals, i.e. three images per individual. Each of these images has a resolution of 272×204 pixels. Figures 15 and 16 depict samples of ear images from both databases.

B. Protocol

In this study three main experiments are conducted to investigate the proficiency of the proposed semi-automated and fully automated ear-based authentication systems. The experiments are structured as follows:

Experiment 1. This experiment investigates the proficiency of the proposed semi-automated ear-based authentication system, where the ROI is manually specified. This experiment is further dichotomized into two sub-experiments, i.e. Experiment 1A and Experiment 1B, which respectively consider the so-called "Rank-1" and "Optimal ranking" scenarios explained later in this section.

Experiment 2. This experiment investigates the proficiency of the proposed automated ROI detection algorithm.

Experiment 3. This experiment investigates the proficiency of the proposed fully automated ear-based authentication system, in which case the ROI is automatically detected through deep learning. Similar to Experiment 1, this experiment is further dichotomized into two sub-experiments, i.e. Experiment 3A and Experiment 3B.

It is assumed that only one positive sample is available for each individual enrolled into the system, which serves as a reference sample for the corresponding individual during template matching. A k-fold cross-validation protocol is employed for each experiment, as outlined below:

Experiment 1A (Rank-1 scenario): In this scenario a questioned ear is only accepted as authentic when the distance associated with the reference sample belonging to the claimed individual is the smallest, in which case the questioned ear has a ranking of one. This is referred to as the rank-1 scenario. For this experiment both of the ear databases are partitioned into two sets, that is the evaluation and ranking sets. For the AMI ear database, a 100-fold cross-validation procedure is conducted as depicted in Figure 17. A similar 125-fold cross-validation protocol is employed within the context of the IIT Delhi ear database (see Figure 18). The proposed data partitioning protocol for the evaluation individuals, within the context of the Rank-1 scenario and the AMI ear database, is depicted in Figure 19. A similar data partitioning protocol is followed within the context of the evaluation individuals for the IIT Delhi ear database.

Experiment 1B (Optimal ranking scenario): In this scenario, the system is rendered more lenient such that a questioned ear is accepted when it has a ranking that is better than or equal to a specific optimal ranking, which may be greater than one. The optimal ranking is estimated by considering a suitable data partitioning protocol. This is referred to as the optimal ranking scenario. Within the context of this experiment both of the ear databases are partitioned into a ranking set, an optimisation set and an evaluation set. The data partitioning and cross-validation protocol for both databases is presented in Figure 20.


As is the case for Experiment 1A, a 100-fold and 125-fold cross-validation procedure are conducted for the AMI and IIT Delhi ear databases respectively. For a specific fold, cross-validation is conducted across the respective optimisation individuals according to the protocol depicted in Figure 19. The estimated optimal ranking, based on both the average error rate (AER) and the equal error rate (EER), is then employed to authenticate the ears associated with the evaluation individuals.
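The estimation of the optimal ranking from the optimisation set may be sketched as follows, under the assumption that the AER is the average of the FAR and FRR and that the search is bounded by an assumed maximum rank.

```python
def select_optimal_rank(genuine_ranks, impostor_ranks, max_rank=20):
    """Sweep candidate acceptance ranks and return the one that minimises
    the average error rate (AER) on the optimisation individuals."""
    best_rank, best_aer = 1, float("inf")
    for r in range(1, max_rank + 1):
        frr = sum(g > r for g in genuine_ranks) / len(genuine_ranks)
        far = sum(i <= r for i in impostor_ranks) / len(impostor_ranks)
        aer = (far + frr) / 2
        if aer < best_aer:
            best_rank, best_aer = r, aer
    return best_rank
```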

Experiment 2 (Automated ROI detection): In this experiment the manually selected (specified) ROI serves as a ground truth for evaluating the proposed automated ROI-detection protocol. For both of the databases, the data is partitioned as depicted in Figure 21.


Experiment 3 (Fully automated ear-based authentication): This experiment evaluates the proficiency of the proposed fully automated ear-based authentication system, where a suitable ROI is automatically detected through deep learning. Both of the ear databases are partitioned into four sets, that is a training set, a validation set, a ranking set and an evaluation set. Within the context of this experiment both the "Rank-1" and "Optimal ranking" scenarios are investigated in Experiment 3A and Experiment 3B, respectively.

C. Results

In this section, the performance of the proposed systems is reported and a comprehensive analysis of the results is presented. Table I lists the statistical measures employed for the purpose of quantifying the proficiency of the proposed systems. The statistical measures employed in this study constitute the most frequently utilised performance parameters for evaluating ear-based biometric systems and include the accuracy (ACC), the false acceptance rate (FAR) and the false rejection rate (FRR) [21]. The results presented in Tables II, III, IV, V, and VI constitute averages across the relevant folds.
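These measures may be computed from the boolean accept/reject decisions as follows; treating the AER as the average of the FAR and FRR is an assumption consistent with common usage.

```python
def performance(genuine_accepted, impostor_accepted):
    """Compute FRR, FAR, AER and ACC from lists of boolean accept
    decisions for genuine and impostor verification attempts."""
    frr = genuine_accepted.count(False) / len(genuine_accepted)
    far = impostor_accepted.count(True) / len(impostor_accepted)
    acc = (genuine_accepted.count(True) + impostor_accepted.count(False)) \
        / (len(genuine_accepted) + len(impostor_accepted))
    return {"FAR": far, "FRR": frr, "AER": (far + frr) / 2, "ACC": acc}
```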


Experiment 1A (Rank-1 scenario): The results are presented in Table II. It is clear that the proposed semi-automated system is more proficient in the case of the AMI ear database, presumably because these images have a higher resolution.

Experiment 1B (Optimal ranking scenario): The AER and EER were investigated as optimisation criteria for selecting the optimal ranking. The results are presented in Table III. For the AMI and IIT Delhi ear databases, only questioned ear images with a ranking of 5 (or better) and a ranking of 7 (or better) are accepted respectively.

Experiment 2 (Automated ROI detection): In Table IV the results for the AMI and IIT Delhi ear databases are presented. The precision, recall, accuracy, and F1 score are employed as performance measures. In order to visually compare the manually selected and automatically detected ROIs, a few examples within the context of the AMI and IIT Delhi ear databases are presented in Figures 22 and 23 respectively.


Experiment 3A (Rank-1 scenario): The results for the proposed fully automated system within the context of the Rank-1 scenario are presented in Table V. The low FAR and high FRR are not unexpected.

Experiment 3B (Optimal ranking scenario): When optimal rankings (which do not necessarily correspond to a ranking of one) are investigated within the context of the proposed fully automated system, a significant improvement in proficiency is achieved (see Table VI). Only questioned ear images with a ranking of 7 (or better) and a ranking of 10 (or better) are accepted within the context of the AMI and IIT Delhi ear databases respectively. A similar improvement in proficiency is evident when the results presented in Table III are compared to those presented in Table II within the context of the proposed semi-automated system.

Table VII places the proficiency of the proposed optimised semi-automated ear-based authentication system into perspective by comparing it to a number of recently developed systems that were also evaluated on either the AMI or IIT Delhi ear databases.

D. Software and hardware employed

The systems proposed in this paper were developed in MATLAB™ (versions R2017b and R2018a). The following toolboxes were employed:

Image Processing Toolbox™ (version R2017b);

Neural Network Toolbox™ (version R2018a); and

Statistics and Machine Learning Toolbox™ (version R2018a).

The algorithms were implemented on an 8th Generation Intel® Core i5 workstation with 8 GB RAM.

E. Conclusion

In the case of the proposed semi-automated system, AERs of 2.4% and 6.59% are reported for the AMI and IIT Delhi ear databases respectively within the context of the Rank-1 scenario. These AERs are reduced to 1.10% and 4.52% respectively by employing optimal rankings. Accuracies of 91% and 88% are reported for the proposed CNN-based ROI detection protocol within the context of the AMI and IIT Delhi ear databases respectively. Within the context of the fully automated system, AERs of 10.81% and 20.85% are reported for the AMI and IIT Delhi ear databases respectively within the context of the Rank-1 scenario. These AERs are significantly improved to 6.67% and 10.46% respectively by employing optimal rankings.

The proficiency of the proposed fully automated end-to-end system, in which the ROI is automatically detected before feature extraction, feature matching and verification are performed, is significantly lower than that of the semi-automated system, in which the ROI is manually specified, for both the rank-1 and optimal ranking scenarios.

F. Contribution

The semi-automated and fully automated systems proposed in this paper employ an ensemble of pattern recognition techniques that has not previously been employed for the purpose of ear-based biometric authentication and may therefore be considered novel. It is therefore reasonable to assume that either of the aforementioned systems will be complementary to any existing state-of-the-art system, since such a system invariably extracts different features or employs different feature matching techniques. It is consequently very likely that, when any of the proposed systems is combined with an existing system of comparable proficiency (see Table VII), a superior combined performance will be attained.

 

IV. FUTURE WORK

Avenues for future research include an investigation into the feasibility of an end-to-end deep learning-based approach for ear-based biometric authentication, as well as an investigation into the feasibility of another machine learning-based approach, such as a suitable support vector machine, for the second part of the fully automated system proposed in this paper. The proposed semi-automated and fully automated systems should also be evaluated on other databases that may become publicly available in the near future.

 

REFERENCES

[1] L. Yuan and Z. Mu, "Ear recognition based on Gabor features and KFDA," The Scientific World Journal, 2014.

[2] J. F. Velez, A. Sanchez, B. Moreno, and S. Sural, "Robust ear detection for biometric verification," IADIS International Journal on Computer Science and Information Systems, vol. 8, no. 1, pp. 31-46, 2013.

[3] R. N. Othman, F. Alizadeh, and A. Sutherland, "A novel approach for occluded ear recognition based on shape context," in Proceedings of the International Conference on Advanced Science and Engineering (ICOASE), 2018, pp. 93-98, IEEE.

[4] K. Annapurani, M. A. K. Sadiq, and C. Malathy, "Fusion of shape of the ear and tragus - a unique feature extraction method for ear authentication system," Expert Systems with Applications, vol. 42, no. 1, pp. 649-656, 2015.

[5] I. Omara, F. Li, H. Zhang, and W. Zuo, "A novel geometric feature extraction method for ear recognition," Expert Systems with Applications, vol. 65, pp. 127-135, 2016.

[6] A. S. Anwar, K. K. A. Ghany, and H. ElMahdy, "Human ear recognition using SIFT features," in Proceedings of the Third World Conference on Complex Systems (WCCS), 2015, pp. 1-6, IEEE.

[7] P. L. Galdamez, A. G. Arrieta, and M. R. Ramon, "Ear recognition using a hybrid approach based on neural networks," in Proceedings of the 17th International Conference on Information Fusion (FUSION), 2014, pp. 1-6, IEEE.

[8] S. M. Jiddah and K. Yurtkan, "Fusion of geometric and texture features for ear recognition," in Proceedings of the 2nd International Symposium on Multidisciplinary Studies and Innovative Technologies (ISMSIT), 2018, pp. 1-5, IEEE.

[9] D. Querencias-Uceta, B. Rios-Sanchez, and C. Sanchez-Avila, "Principal component analysis for ear-based biometric verification," in Proceedings of the International Carnahan Conference on Security Technology (ICCST), 2017, pp. 1-6, IEEE.

[10] T. Ying, Z. Debin, and Z. Baihuan, "Ear recognition based on weighted wavelet transform and DCT," in Proceedings of the 26th Chinese Control and Decision Conference (CCDC), 2014, pp. 4410-4414, IEEE.

[11] S. Dodge, J. Mounsef, and L. Karam, "Unconstrained ear recognition using deep neural networks," IET Biometrics, vol. 7, no. 3, pp. 207-214, 2018.

[12] U. Kacar and M. Kirci, "ScoreNet: deep cascade score level fusion for unconstrained ear recognition," IET Biometrics, vol. 8, no. 2, pp. 109-120, 2018.

[13] E. E. Hansley, M. P. Segundo, and S. Sarkar, "Employing fusion of learned and handcrafted features for unconstrained ear recognition," IET Biometrics, vol. 7, no. 3, pp. 215-223, 2018.

[14] I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning. Cambridge, MA: MIT Press, 2016.

[15] J. Wu, "Introduction to convolutional neural networks," National Key Lab for Novel Software Technology, Nanjing University, China, vol. 5, p. 23, 2017.

[16] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in Advances in Neural Information Processing Systems, 2012, pp. 1097-1105.

[17] Y. LeCun, Y. Bengio, and G. Hinton, "Deep learning," Nature, vol. 521, no. 7553, pp. 436-444, 2015.

[18] J. Coetzer, B. M. Herbst, and J. A. du Preez, "Offline signature verification using the discrete Radon transform and a hidden Markov model," EURASIP Journal on Applied Signal Processing, vol. 2004, no. 4, pp. 559-571, 2004, special issue: Biometric Signal Processing.

[19] E. Gonzalez, L. Alvarez, and L. Mazorra, "AMI ear database," 2012, http://ctim.ulpgc.es/research_works/ami_ear_database/

[20] A. Kumar, "IIT Delhi ear database version 1.0," 2007, http://www4.comp.polyu.edu.hk/~csajaykr/IITD/Database_Ear.htm

[21] A. Abaza, A. Ross, C. Hebert, M. A. F. Harrison, and M. S. Nixon, "A survey on ear biometrics," ACM Computing Surveys (CSUR), vol. 45, no. 2, pp. 1-35, 2013.


Based on "On automated ear-based authentication", by A. Kohlakala and J. Coetzer which was published in the Proceedings of the SAUPEC/RobMech/ PRASA 2020 Conference held in Cape Town, South Africa from 29 to 31 January 2020. ©2020 SAIEE. "We would like to express our sincere gratitude to the Ball family for funding this research."


Aviwe Kohlakala is a Ph.D. student in the Department of Mathematical Sciences at Stellenbosch University. She received a B.Sc. in Mathematics and Applied Mathematics from the University of South Africa in 2015 and an M.Sc. in Applied Mathematics from Stellenbosch University in 2019. Her research interests include machine learning, deep learning, biometric authentication and pattern recognition.


Johannes Coetzer was born in Bloemfontein, South Africa in 1971. He received an M.Sc. in Applied Mathematics from the University of the Free State in 1996 and a Ph.D. in Applied Mathematics from Stellenbosch University in 2005. From 1997 to 1998, he was a Junior Lecturer, and from 1999 to 2001, a Lecturer in Applied Mathematics at the University of the Free State. From 2002 to 2008, he was a Lecturer, and since 2009, a Senior Lecturer in Applied Mathematics at Stellenbosch University. His research interests include machine learning, biometric authentication and classifier combination.
