Optimization of the Architecture of Feed-forward Neural Networks with Hidden Layers by Unit Elimination
Anthony N. Burkitt
Physics Department, University of Wuppertal,
Gauss-Strasse 20, D-5600 Wuppertal 1, Germany
Abstract
A method for reducing the number of units in the hidden layers of a feed-forward neural network is presented. Starting with an oversized net, the redundant units in the hidden layer are eliminated by introducing an additional cost function on a set of auxiliary linear response units. The extra cost function enables the auxiliary units to fuse together redundant units in the original network, and the auxiliary units serve only as an intermediate construct that vanishes when the method converges. Numerical tests on the Parity and Symmetry problems illustrate the usefulness of this method in practice.
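The end effect described above, redundant hidden units being fused into one, can be illustrated with a minimal sketch. The sketch below is not the paper's auxiliary-unit construction (which adds linear response units and an extra cost term during training); it only shows the final fusion step under a simplifying assumption: two hidden units are redundant when their activation vectors over the data coincide, in which case one unit is dropped and its outgoing weights are folded into the other. The function name and tolerance are hypothetical.

```python
import numpy as np

def fuse_redundant_units(W1, b1, W2, X, tol=1e-6):
    """Merge hidden units of a one-hidden-layer net y = tanh(x W1^T + b1) W2^T.

    Hidden units i and j are treated as redundant when their activation
    vectors over the data X are proportional; unit j is then removed and
    its outgoing weights (scaled) are folded into unit i's.
    """
    H = np.tanh(X @ W1.T + b1)          # hidden activations, shape (n, h)
    keep = list(range(W1.shape[0]))
    W2 = W2.copy()
    i = 0
    while i < len(keep):
        j = i + 1
        while j < len(keep):
            a, b = H[:, keep[i]], H[:, keep[j]]
            denom = a @ a
            if denom > 0:
                c = (a @ b) / denom     # least-squares scale factor b ~ c*a
                if np.max(np.abs(b - c * a)) < tol:
                    # fold unit j's outgoing weights into unit i, then drop j
                    W2[:, keep[i]] += c * W2[:, keep[j]]
                    del keep[j]
                    continue
            j += 1
        i += 1
    return W1[keep], b1[keep], W2[:, keep]

# Demonstration: a 3-unit hidden layer in which unit 2 duplicates unit 0.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((3, 4)); W1[2] = W1[0]
b1 = rng.standard_normal(3);      b1[2] = b1[0]
W2 = rng.standard_normal((1, 3))
X = rng.standard_normal((50, 4))

W1r, b1r, W2r = fuse_redundant_units(W1, b1, W2, X)

# The fused net has one fewer hidden unit and identical outputs.
y_full = np.tanh(X @ W1.T + b1) @ W2.T
y_red  = np.tanh(X @ W1r.T + b1r) @ W2r.T
assert W1r.shape[0] == 2 and np.allclose(y_full, y_red)
```

In the paper's method this fusion is achieved gradually during training rather than by a post-hoc activation comparison, but the invariant is the same: the network's input-output map is preserved while the hidden-layer width shrinks.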