Combination strategies for finding optimal neural network architecture and weights

chapter
posted on 06.12.2017 by Brijesh Verma, R Ghosh
The chapter presents a novel neural learning methodology that uses different combination strategies for finding architecture and weights. The methodology combines evolutionary algorithms with direct/matrix solution methods, such as Gram-Schmidt orthogonalisation and singular value decomposition, to achieve optimal weights for the hidden and output layers. The proposed method uses evolutionary algorithms in the first layer and the least squares (LS) method in the second layer of the ANN. The methodology also finds the optimum number of hidden neurons and weights using hierarchical combination strategies. The chapter explores all the different facets of the proposed method in terms of classification accuracy, convergence properties, generalization ability, and time and memory complexity. The learning methodology has been tested on many benchmark databases, such as XOR, 10-bit odd parity, handwritten characters from CEDAR, and breast cancer and heart disease from the UCI machine learning repository. The experimental results, detailed discussion and analysis are included in the chapter.
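The hybrid scheme the abstract describes (evolutionary search over the hidden-layer weights, with the output layer solved in closed form by least squares) can be illustrated with a minimal sketch. This is not the authors' implementation: the XOR task is one of the benchmarks the chapter names, but the population size, mutation scheme, and network size here are illustrative assumptions, and NumPy's SVD-based `lstsq` stands in for the direct/matrix methods mentioned.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR benchmark (one of the datasets named in the chapter).
# A ones column gives each hidden unit a bias input.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Xb = np.hstack([X, np.ones((4, 1))])
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def evaluate(W):
    """Fix hidden weights W, then solve the output layer in closed form.
    np.linalg.lstsq performs an SVD-based least-squares solve, standing
    in for the direct methods (Gram-Schmidt, SVD) named in the abstract."""
    H = sigmoid(Xb @ W)                       # hidden-layer activations
    Hb = np.hstack([H, np.ones((4, 1))])      # output-layer bias column
    beta, *_ = np.linalg.lstsq(Hb, y, rcond=None)
    mse = float(np.mean((Hb @ beta - y) ** 2))
    return mse, beta

# Simple elitist evolutionary loop over the hidden-layer weights only;
# the output layer is never evolved, only solved.
n_hidden = 2
pop = [rng.normal(size=(3, n_hidden)) for _ in range(30)]
for gen in range(300):
    pop.sort(key=lambda W: evaluate(W)[0])
    elite = pop[:10]
    pop = elite + [W + 0.3 * rng.normal(size=W.shape)
                   for W in elite for _ in range(2)]

err, beta = evaluate(min(pop, key=lambda W: evaluate(W)[0]))
print(f"XOR mean-squared error after evolution: {err:.4f}")
```

Because the output weights are optimal by construction for any candidate hidden layer, the evolutionary search only has to explore the (much smaller) hidden-weight space, which is the source of the convergence and complexity advantages the chapter analyses.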

Funding

Category 1 - Australian Competitive Grants (this includes ARC, NHMRC)

History

Editor

Rajapakse JC; Wang L

Parent Title

Neural information processing : research and development

Start Page

294

End Page

319

Number of Pages

26

ISBN-10

3540211233

Publisher

Springer

Place of Publication

Germany

Open Access

No

ERA Eligible

Yes

Number of Chapters

25