Maximum Entropy: A Special Case of Minimum Cross-Entropy Applied to Nonlinear Estimation by an Artificial Neural Network
Joseph C. Park
Electronic mail address: josephpark@mindspring.com
2d3D Incorporated,
2003 North Swinton Avenue,
Delray Beach, FL 33444
Salahalddin T. Abusalah
Electronic mail address: sabusala@uwf.edu
University of West Florida,
Department of Electrical and Computer Engineering,
Pensacola, FL 32514
Abstract
The application of cross-entropy optimization to artificial neural network (ANN) training can provide decreased sensitivity to accelerated learning rates, as well as insight into the information-processing structure of the network. To assess the cross-entropy between the desired training goal and the evolving state of network information at each training step, the probability distribution of the network output at each step, as well as that of the desired network output, must be available. However, if the input training data are not expressible as a closed-form function, an analytic representation of the network output distribution may be impossible, precluding the application of cross-entropy measures to many higher-dimensional, real-world problems. In such cases the network may instead be trained by maximizing the entropy of the output training distribution. To illustrate this, a perceptron is detailed that estimates orthogonal basis function coefficients of a highly nonlinear set of oceanographic data based on entropy maximization. Use of the maximum entropy cost function obviates the need for explicit determination of the network output probability distributions, while retaining the desirable functionality of information-theoretic network organization.
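As a brief sketch of the relationship asserted in the title (a standard identity, not derived in the abstract itself): for a discrete output distribution $p = (p_1, \ldots, p_N)$, minimizing the cross-entropy against a uniform reference distribution $q_i = 1/N$ is equivalent to maximizing the Shannon entropy of $p$,

\[
D(p \,\|\, q) \;=\; \sum_{i=1}^{N} p_i \ln \frac{p_i}{q_i} \;=\; \ln N - H(p), \qquad H(p) \;=\; -\sum_{i=1}^{N} p_i \ln p_i ,
\]

so maximum entropy training corresponds to minimum cross-entropy with respect to the least-informative (uniform) prior.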