A deep residual compensation extreme learning machine and applications

Authors: Xiaoliang Xie, Tianle Zhang, Muzhou Hou, Jiaxian Bai, Yinghao Chen
Published: 01 September 2020
DOI: 10.1002/for.2663
RESEARCH ARTICLE
Yinghao Chen¹ | Xiaoliang Xie² | Tianle Zhang¹ | Jiaxian Bai³ | Muzhou Hou¹

¹ College of Mathematics and Statistics, Central South University, Changsha, China
² School of Mathematics and Statistics, Hunan University of Technology and Business, Changsha, China
³ College of Finance and Statistics, Hunan University, Changsha, China
Correspondence: Muzhou Hou, College of Mathematics and Statistics, Central South University, Changsha 410083, China. Email: houmuzhou@sina.com
Funding information: The Projects of the National Social Science Foundation of China, Grant/Award Number: 19BTJ011
Abstract
The extreme learning machine (ELM) is a machine learning algorithm for training a single hidden layer feedforward neural network. The weights between the input layer and the hidden layer and the thresholds of the hidden neurons are initialized randomly, after which the output weight matrix of the hidden layer is computed by the least squares method. This efficient learning procedure makes ELM widely applicable to classification, regression, and other tasks. However, because some information in the residual remains unutilized, ELM can incur relatively large prediction errors. In this paper, a deep residual compensation extreme learning machine (DRC-ELM), a multilayer model for regression, is presented. The first layer is a basic ELM layer, which obtains an approximation of the objective function by learning the characteristics of the sample. The remaining layers are residual compensation layers, in which the learned residual is used to correct, layer by layer, the prediction obtained in the previous layer by constructing a feature mapping between the input layer and the output of the upper layer. The model is applied to two practical problems: gold price forecasting and airfoil self-noise prediction. Experiments with DRC-ELM using 50, 100, and 200 residual compensation layers show that DRC-ELM achieves better generalization and robustness than the classical ELM, improved ELM models such as GA-RELM and OS-ELM, and other traditional machine learning algorithms such as the support vector machine (SVM) and the back-propagation neural network (BPNN).
KEYWORDS
airfoil self-noise, deep residual compensation extreme learning machine, extreme learning machine, gold price forecasting, regression problem
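The two-stage procedure described in the abstract — random hidden weights solved by least squares, then a stack of layers each fitting the remaining residual — can be sketched as below. This is a minimal NumPy illustration under stated assumptions, not the authors' implementation: the sigmoid activation, the hidden-layer sizes, and the simplification of fitting every compensation layer directly on the original inputs (rather than on the feature mapping between the input layer and the upper layer's output that the paper constructs) are all assumptions for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class ELM:
    """Basic ELM: random input weights and biases, least-squares output weights."""
    def __init__(self, n_hidden, rng=None):
        self.n_hidden = n_hidden
        self.rng = rng if rng is not None else np.random.default_rng(0)

    def _hidden(self, X):
        # Hidden-layer activations for randomly drawn weights W and biases b.
        return sigmoid(X @ self.W + self.b)

    def fit(self, X, y):
        n_features = X.shape[1]
        # Input weights and hidden thresholds are random and never trained.
        self.W = self.rng.standard_normal((n_features, self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = self._hidden(X)
        # Output weights via the Moore-Penrose pseudoinverse (least squares).
        self.beta = np.linalg.pinv(H) @ y
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta

class DRCELM:
    """Simplified residual-compensation stack: the first ELM fits the target,
    each later ELM fits what the previous layers still got wrong."""
    def __init__(self, n_hidden=40, n_layers=10):
        self.layers = [ELM(n_hidden, np.random.default_rng(i))
                       for i in range(n_layers)]

    def fit(self, X, y):
        residual = np.asarray(y, dtype=float).copy()
        for elm in self.layers:
            elm.fit(X, residual)
            # The residual passed to the next layer is what remains unexplained.
            residual = residual - elm.predict(X)
        return self

    def predict(self, X):
        # The final prediction is the sum of all layer-wise corrections.
        return sum(elm.predict(X) for elm in self.layers)
```

Because each compensation layer projects the current residual onto its hidden-feature space by least squares, the training error is non-increasing as layers are added, which is the intuition behind stacking many residual layers.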
1 | INTRODUCTION
Single hidden layer feedforward neural networks (SLFNs) have been proved able to approximate nonlinear feature mappings with arbitrary precision (Hornik, 1991). However, a traditional single hidden layer neural network trained by the gradient descent algorithm can easily fall into a local minimum and incurs high computational time complexity. In 2004, Huang et al. proposed a new machine learning algorithm to address these problems (Huang, Zhu, & Siew, 2004). As an improvement to the training method of the single hidden layer neural
Received: 13 August 2019 | Accepted: 12 January 2020
© 2020 John Wiley & Sons, Ltd. Journal of Forecasting. 2020;39:986–999. wileyonlinelibrary.com/journal/for
