Learning by Minimizing Resources in Neural Networks
Pál Ruján
Mario Marchand
Institut für Festkörperforschung der Kernforschungsanlage Jülich,
Postfach 1913, D-5170 Jülich, Federal Republic of Germany
Abstract
We reformulate the problem of supervised learning in neural nets to include the search for a network with minimal resources. The information processing in feedforward networks is described in geometrical terms as the partitioning of the space of possible input configurations by hyperplanes corresponding to hidden units. The regular partitionings introduced here form a special class of such partitionings. The corresponding architectures can represent any Boolean function using a single layer of hidden units whose number depends on the specific symmetries of the function. Accordingly, we propose a new class of plane-cutting algorithms that construct, in polynomial time, a "custom-made" architecture implementing the desired set of input/output examples. We report the results of our experiments on the storage and rule-extraction abilities of three-layer perceptrons synthesized by a simple greedy algorithm. As expected, simple neuronal structures with good generalization properties emerge only for a strongly correlated set of examples.
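The geometric picture underlying the abstract, with hidden units as hyperplanes whose intersection pattern partitions the space of input configurations, and with planes added greedily until every cell of the partition contains examples of a single output class, can be illustrated by the following minimal sketch. It is not the construction analyzed in this paper: the random candidate search, the pair-counting impurity score, and all function names are assumptions made purely for illustration.

```python
# Illustrative sketch only (not the authors' algorithm): hidden units are
# hyperplanes in input space, added greedily until every cell of the induced
# partition contains examples of a single output class.
import itertools
import random


def cell_signature(x, planes):
    """Which side of each hyperplane (w, b) the input x falls on."""
    return tuple(int(sum(wi * xi for wi, xi in zip(w, x)) > b) for w, b in planes)


def impurity(examples, planes):
    """Number of example pairs with different outputs sharing the same cell."""
    cells = {}
    for x, y in examples:
        counts = cells.setdefault(cell_signature(x, planes), [0, 0])
        counts[y] += 1
    return sum(c0 * c1 for c0, c1 in cells.values())


def greedy_plane_cutting(examples, max_planes=20, candidates=500, seed=0):
    """Greedily keep the random candidate hyperplane that most reduces impurity,
    stopping once the partition is pure (impurity zero) or the budget is spent."""
    rng = random.Random(seed)
    n = len(examples[0][0])
    planes = []
    while impurity(examples, planes) > 0 and len(planes) < max_planes:
        best = min(
            (([rng.uniform(-1.0, 1.0) for _ in range(n)], rng.uniform(-1.0, n))
             for _ in range(candidates)),
            key=lambda p: impurity(examples, planes + [p]),
        )
        planes.append(best)
    return planes


if __name__ == "__main__":
    # 3-bit parity: not linearly separable, so more than one hyperplane is needed.
    n = 3
    examples = [(x, sum(x) % 2) for x in itertools.product((0, 1), repeat=n)]
    planes = greedy_plane_cutting(examples)
    print(len(planes), "hidden hyperplanes, pure partition:",
          impurity(examples, planes) == 0)
```

Once such a pure partition is found, each example's output is a function of its cell signature alone, so a single layer of hidden threshold units (one per hyperplane) followed by an output unit suffices to implement the example set; the greedy impurity reduction is one simple stand-in for the plane-cutting idea described above.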