Abstract
Multi-layer perceptron (MLP) neural network training can be viewed as a special case of function approximation in which no explicit model of the data is assumed. In its simplest form, it amounts to finding a set of weights that minimizes the network's training and generalization errors. A variety of methods can be used to determine these weights, ranging from standard optimization techniques (e.g., gradient-based algorithms) to bio-inspired heuristics (e.g., evolutionary algorithms). Focusing on the problem of finding appropriate weight vectors for MLP networks, this paper proposes the use of an immune algorithm and a second-order gradient-based technique to train MLPs. Results are presented for classification and function approximation tasks, and the approaches are compared with respect to the types of problems for which each is better suited.