Reinforcement learning based on movement primitives for contact tasks
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Kim, Young-Loul | - |
dc.contributor.author | Ahn, Kuk-Hyun | - |
dc.contributor.author | Song, Jae-Bok | - |
dc.date.accessioned | 2021-08-31T04:50:50Z | - |
dc.date.available | 2021-08-31T04:50:50Z | - |
dc.date.created | 2021-06-19 | - |
dc.date.issued | 2020-04 | - |
dc.identifier.issn | 0736-5845 | - |
dc.identifier.uri | https://scholar.korea.ac.kr/handle/2021.sw.korea/56796 | - |
dc.description.abstract | Recently, deep reinforcement learning has enabled robots to learn a variety of tasks through deep neural networks, without task-specific control or recognition algorithms. However, this learning approach is difficult to apply to robotic contact tasks, because the random exploration of reinforcement learning can exert excessive contact forces. Therefore, when applying reinforcement learning to contact tasks, the contact problem must be handled by an existing force controller. This study proposes a neural-network-based movement primitive (NNMP) that generates a continuous trajectory, which can be transmitted to the force controller, and that is learned through the deep deterministic policy gradient (DDPG) algorithm. In addition, an imitation learning algorithm suited to the NNMP is proposed so that trajectories similar to a demonstration trajectory are generated stably. The performance of the proposed algorithms was verified on a square peg-in-hole assembly task with a tolerance of 0.1 mm. The results confirm that the complicated assembly trajectory can be learned stably by the NNMP through the proposed imitation learning algorithm, and that the assembly trajectory is further improved by training the NNMP with the DDPG algorithm. | - |
dc.language | English | - |
dc.language.iso | en | - |
dc.publisher | PERGAMON-ELSEVIER SCIENCE LTD | - |
dc.subject | ROBOTICS | - |
dc.title | Reinforcement learning based on movement primitives for contact tasks | - |
dc.type | Article | - |
dc.contributor.affiliatedAuthor | Song, Jae-Bok | - |
dc.identifier.doi | 10.1016/j.rcim.2019.101863 | - |
dc.identifier.scopusid | 2-s2.0-85072603131 | - |
dc.identifier.wosid | 000501405400003 | - |
dc.identifier.bibliographicCitation | ROBOTICS AND COMPUTER-INTEGRATED MANUFACTURING, v.62 | - |
dc.relation.isPartOf | ROBOTICS AND COMPUTER-INTEGRATED MANUFACTURING | - |
dc.citation.title | ROBOTICS AND COMPUTER-INTEGRATED MANUFACTURING | - |
dc.citation.volume | 62 | - |
dc.type.rims | ART | - |
dc.type.docType | Article | - |
dc.description.journalClass | 1 | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
dc.relation.journalResearchArea | Computer Science | - |
dc.relation.journalResearchArea | Engineering | - |
dc.relation.journalResearchArea | Robotics | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Interdisciplinary Applications | - |
dc.relation.journalWebOfScienceCategory | Engineering, Manufacturing | - |
dc.relation.journalWebOfScienceCategory | Robotics | - |
dc.subject.keywordPlus | ROBOTICS | - |
dc.subject.keywordAuthor | AI-based methods | - |
dc.subject.keywordAuthor | Force control | - |
dc.subject.keywordAuthor | Deep Learning in robotics and automation | - |
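The abstract above describes pretraining the NNMP by imitation learning so that it reproduces a demonstration trajectory before DDPG refinement. As a rough illustration only (not the paper's algorithm), the sketch below fits a small neural-network trajectory generator to a demonstration by supervised regression; the phase variable, network size, and demonstration trajectory are all hypothetical stand-ins.

```python
import numpy as np

# Rough sketch (not the paper's algorithm): fit a small neural-network
# movement primitive to a demonstration trajectory by supervised regression.
# The phase variable, network size, and demonstration below are hypothetical.

rng = np.random.default_rng(0)

# Phase variable s in [0, 1] and a stand-in 1-D demonstration trajectory
s = np.linspace(0.0, 1.0, 50).reshape(-1, 1)
demo = np.sin(np.pi * s)

# One-hidden-layer MLP: phase -> trajectory point
H = 16
W1 = rng.normal(0.0, 0.5, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0.0, 0.5, (H, 1)); b2 = np.zeros(1)

lr = 0.1
for _ in range(5000):
    h = np.tanh(s @ W1 + b1)        # hidden activations
    y = h @ W2 + b2                 # generated trajectory points
    err = y - demo                  # regression error vs. demonstration
    # Backpropagate the mean-squared-error loss
    gW2 = h.T @ err / len(s); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h**2)
    gW1 = s.T @ dh / len(s); gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(np.mean((np.tanh(s @ W1 + b1) @ W2 + b2 - demo) ** 2))
print(f"final MSE vs demonstration: {mse:.4f}")
```

After this supervised warm start, the paper's pipeline would refine the generator with DDPG against a task reward; that reinforcement-learning stage is omitted here for brevity.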
Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.
(02841) 145 Anam-ro, Seongbuk-gu, Seoul, Republic of Korea | Tel: 02-3290-1114
COPYRIGHT © 2021 Korea University. All Rights Reserved.
Certain data included herein are derived from the © Web of Science of Clarivate Analytics. All rights reserved.
You may not copy or re-distribute this material in whole or in part without the prior written consent of Clarivate Analytics.