Gain Variation in Recurrent Error Propagation Networks
Steven J. Nowlan
Computer Science Department, University of Toronto,
10 King's College Road, Toronto, Ontario M5S 1A4, Canada
Abstract
A global gain term is introduced into recurrent analog networks. This gain term may be varied as a recurrent network settles, much as temperature is varied when "annealing" a network of stochastic binary units. An error propagation algorithm is presented which simultaneously optimizes the weights and the gain schedule for a recurrent network. The performance of this algorithm is compared to that of the standard back propagation algorithm on a difficult constraint satisfaction problem. An order of magnitude improvement in the number of learning trials required is observed with the new algorithm. This improvement is obtained by allowing a much larger region of weight space to satisfy the problem. The simultaneous optimization of weights and gain schedule leads to a qualitatively different region of weight space than that reached by optimization of the weights alone.
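To make the gain mechanism concrete, the following is a minimal sketch of a settling loop in which a global gain multiplies the net input to every unit, with the gain increased over the course of settling (low gain gives soft, graded states; high gain sharpens them toward binary values, analogous to lowering temperature during annealing). The sigmoid update, the symmetric weight matrix, and the linear gain schedule are assumptions for illustration; the paper's exact update equations are not given in this excerpt.

```python
import numpy as np

def settle(W, b, y0, gains):
    """Settle a recurrent analog network under a global gain schedule.

    W     : (n, n) recurrent weight matrix
    b     : (n,)   unit biases
    y0    : (n,)   initial analog unit states in (0, 1)
    gains : per-step global gain g(t); plays a role analogous to
            1/temperature in annealing a stochastic binary network.
    (Hypothetical form -- assumed sigmoid update for illustration.)
    """
    y = y0.copy()
    for g in gains:
        net = W @ y + b
        y = 1.0 / (1.0 + np.exp(-g * net))  # gain scales every net input
    return y

# Example: gain ramps from low (soft decisions) to high (hard decisions).
rng = np.random.default_rng(0)
n = 8
W = rng.normal(scale=0.5, size=(n, n))
W = (W + W.T) / 2            # symmetric weights, as in analog Hopfield nets
np.fill_diagonal(W, 0.0)     # no self-connections
b = rng.normal(scale=0.1, size=n)
y0 = np.full(n, 0.5)
schedule = np.linspace(0.2, 5.0, num=20)  # assumed linear gain schedule
print(settle(W, b, y0, schedule))
```

In the algorithm described in the abstract, the per-step gains would be treated as additional free parameters and adjusted by error propagation alongside the weights, rather than fixed in advance as in this sketch.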