Abstract
The incremental extreme learning machine (I-ELM) randomly obtains the input weights and hidden-layer neuron biases during training. Some hidden nodes in the ELM play only a minor role in the network output, which may increase network complexity and even reduce the stability of the network. To avoid this issue, this paper proposes an enhanced method, referred to as the improved incremental extreme learning machine (II-ELM). At each learning step of the original I-ELM, an additional offset k is added to the hidden-layer output matrix before the output weight of the new hidden node is computed, and the existence of this offset k is analysed. Compared with several improved ELM algorithms, the advantages of the II-ELM in training time, forecasting accuracy, and stability are verified on several benchmark datasets from the UCI repository.
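The incremental step summarized above (draw a random hidden node, shift its activations by an offset k, then fit a single least-squares output weight to the current residual) can be sketched as follows. This is a minimal illustration under stated assumptions: the function names, the tanh activation, and the exact way k enters the activations are illustrative choices, not the paper's formulation.

```python
import math
import random

random.seed(0)  # reproducible random hidden nodes

def ii_elm_fit(X, y, n_hidden=50, k=0.1):
    """Sketch of incremental ELM regression with an additive offset k
    on each new node's activations (hypothetical reading of II-ELM)."""
    n_features = len(X[0])
    residual = list(y)          # residual error, updated after each node
    nodes = []
    for _ in range(n_hidden):
        a = [random.gauss(0, 1) for _ in range(n_features)]  # random input weights
        b = random.gauss(0, 1)                               # random bias
        # hidden-node output for every sample, shifted by the offset k
        h = [math.tanh(sum(w * xi for w, xi in zip(a, x)) + b) + k for x in X]
        # closed-form 1-D least-squares output weight against the residual
        beta = sum(r * v for r, v in zip(residual, h)) / sum(v * v for v in h)
        residual = [r - beta * v for r, v in zip(residual, h)]
        nodes.append((a, b, beta))
    return nodes

def ii_elm_predict(nodes, X, k=0.1):
    """Sum the weighted, offset activations of all accepted hidden nodes."""
    preds = []
    for x in X:
        s = 0.0
        for a, b, beta in nodes:
            s += beta * (math.tanh(sum(w * xi for w, xi in zip(a, x)) + b) + k)
        preds.append(s)
    return preds
```

Note the appeal of the incremental scheme: each step solves only a one-dimensional least-squares problem against the current residual, so the training-set error is non-increasing as nodes are added, with no retraining of earlier nodes.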
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 308-317 |
| Number of pages | 10 |
| Journal | Systems Science and Control Engineering |
| Volume | 8 |
| Issue number | 1 |
| DOIs | |
| State | Published - Jan 1 2020 |
Keywords
- Extreme learning machine
- incremental algorithm
- random hidden nodes
- single-hidden layer feedforward neural networks
ASJC Scopus subject areas
- Control and Systems Engineering
- Control and Optimization
- Artificial Intelligence