Optimum structure of feed forward neural networks by SOM clustering of neuron activations
Date
2007-12
Type
Conference Contribution - published
Abstract
Neural networks can approximate nonlinear functions to a high degree of accuracy owing to the nonlinear processing in their hidden-layer neurons. However, the optimum network structure required for solving a particular problem is still an active area of research. In this paper, a new method based on the correlation of the weighted activations of the hidden neurons, combined with Self-Organising Feature Maps (SOMs), is presented for obtaining the optimum network structure efficiently. In an extensive search for internal consistency of hidden-neuron activation patterns in a network, it was found that the weighted hidden-neuron activations feeding the output neuron(s) displayed remarkably consistent patterns. Specifically, redundant hidden neurons exhibit weighted activation patterns that are highly correlated. The paper therefore proposes identifying hidden neurons whose weighted activation patterns are highly correlated and using one neuron to represent each group of correlated neurons. This process is automated in two steps: 1) map the correlated weighted hidden-neuron activation patterns onto a self-organising map; and 2) form clusters of the SOM neurons themselves to find the most likely number of clusters of correlated activity patterns. The likely number of clusters on the map indicates the number of hidden neurons required to model the data.
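The two-step procedure described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the synthetic redundant activations, the hand-rolled 1-D SOM, its size, and its training schedule are all assumptions made for the sake of a self-contained example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a trained network's hidden layer: 6 hidden
# neurons but only 2 independent activation patterns, so each group of
# 3 neurons is redundant (near-duplicate activations).
n_samples = 200
base = rng.standard_normal((2, n_samples))
groups = [0, 0, 0, 1, 1, 1]
activations = np.array([base[g] + 0.01 * rng.standard_normal(n_samples)
                        for g in groups])
out_w = rng.uniform(0.5, 2.0, size=6)            # hidden-to-output weights
weighted = activations * out_w[:, None]          # weighted activations

# Redundant neurons show highly correlated weighted activation patterns.
corr = np.corrcoef(weighted)

# Step 1: map each neuron's (normalised) weighted activation pattern
# onto a small 1-D SOM, hand-rolled here for self-containment.
patterns = weighted / np.linalg.norm(weighted, axis=1, keepdims=True)
n_units = 4
som = patterns[[0, 3, 1, 4]].copy()              # seed units from samples
for t in range(200):
    lr = 0.5 * (1 - t / 200)                     # decaying learning rate
    radius = max(1.0 * (1 - t / 200), 0.5)       # shrinking neighbourhood
    x = patterns[rng.integers(len(patterns))]
    bmu = int(np.argmin(np.linalg.norm(som - x, axis=1)))
    dist = np.abs(np.arange(n_units) - bmu)      # distance on the map
    h = np.exp(-dist**2 / (2 * radius**2))       # neighbourhood function
    som += lr * h[:, None] * (x - som)

# Step 2: group hidden neurons by their best-matching SOM unit; the
# number of occupied clusters suggests the required hidden-layer size.
bmus = [int(np.argmin(np.linalg.norm(som - p, axis=1))) for p in patterns]
n_required = len(set(bmus))
print(bmus, "->", n_required, "hidden neurons suggested")
```

Note that step 2 is reduced here to counting the occupied SOM units; the paper clusters the SOM neurons themselves, which matters when the map is much larger than the number of groups of correlated activity patterns.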
The paper illustrates the approach with an example and demonstrates its application to two problems, including a realistic problem of predicting river flows in a catchment in New Zealand.
Rights
Copyright © The Authors. The responsibility for the contents of this paper rests upon the authors and not on the Modelling and Simulation Society of Australia and New Zealand Inc.