TY - GEN

T1 - Notice of Removal

T2 - 54th Annual Conference of the Society of Instrument and Control Engineers of Japan, SICE 2015

AU - Nagata, Fusaomi

AU - Inoue, Shota

AU - Fujii, Satoru

AU - Otsuka, Akimasa

AU - Watanabe, Keigo

N1 - Publisher Copyright:
© 2015 The Society of Instrument and Control Engineers-SICE.

PY - 2015/9/30

Y1 - 2015/9/30

N2 - Generally, to make a neural network learn nonlinear relations properly, a desired training set is used. The training set consists of multiple pairs of input and output vectors. Each input vector is given to the input layer for forward calculation, and the corresponding desired output vector is compared with the vector produced by the output layer. The weights are then updated in the backward calculation using a back-propagation algorithm. The time required for the learning process depends on the total number of weights in the network and the number of input-output pairs in the training set. In the proposed learning process, after the learning has progressed for, e.g., 200 iterations, the input-output pairs with the worst errors are extracted from the original training set to form a new temporary set. From the next iteration, the temporary set is applied instead of the original set, so that only the pairs with the worst errors are used to update the weights until the mean error falls below a given level. After learning with the temporary set, the original set is applied again. By alternately applying these two sets during iterative learning, the convergence time is expected to be reduced efficiently. The effectiveness is demonstrated through simulation experiments using a kinematic model of a four-DOF leg.

AB - Generally, to make a neural network learn nonlinear relations properly, a desired training set is used. The training set consists of multiple pairs of input and output vectors. Each input vector is given to the input layer for forward calculation, and the corresponding desired output vector is compared with the vector produced by the output layer. The weights are then updated in the backward calculation using a back-propagation algorithm. The time required for the learning process depends on the total number of weights in the network and the number of input-output pairs in the training set. In the proposed learning process, after the learning has progressed for, e.g., 200 iterations, the input-output pairs with the worst errors are extracted from the original training set to form a new temporary set. From the next iteration, the temporary set is applied instead of the original set, so that only the pairs with the worst errors are used to update the weights until the mean error falls below a given level. After learning with the temporary set, the original set is applied again. By alternately applying these two sets during iterative learning, the convergence time is expected to be reduced efficiently. The effectiveness is demonstrated through simulation experiments using a kinematic model of a four-DOF leg.

KW - Efficient weight tuning

KW - Inverse kinematics

KW - Leg with multi-DOFs

KW - Neural network

KW - Temporary training set

UR - http://www.scopus.com/inward/record.url?scp=84960153753&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=84960153753&partnerID=8YFLogxK

U2 - 10.1109/SICE.2015.7285331

DO - 10.1109/SICE.2015.7285331

M3 - Conference contribution

AN - SCOPUS:84960153753

T3 - 2015 54th Annual Conference of the Society of Instrument and Control Engineers of Japan, SICE 2015

SP - 1042

EP - 1046

BT - 2015 54th Annual Conference of the Society of Instrument and Control Engineers of Japan, SICE 2015

PB - Institute of Electrical and Electronics Engineers Inc.

Y2 - 28 July 2015 through 30 July 2015

ER -