Impedance Learning for Robotic Contact Tasks Using Natural Actor-Critic Algorithm
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Kim, Byungchan | - |
dc.contributor.author | Park, Jooyoung | - |
dc.contributor.author | Park, Shinsuk | - |
dc.contributor.author | Kang, Sungchul | - |
dc.date.accessioned | 2021-09-08T04:03:21Z | - |
dc.date.available | 2021-09-08T04:03:21Z | - |
dc.date.created | 2021-06-11 | - |
dc.date.issued | 2010-04 | - |
dc.identifier.issn | 1083-4419 | - |
dc.identifier.uri | https://scholar.korea.ac.kr/handle/2021.sw.korea/116674 | - |
dc.description.abstract | Compared with their robotic counterparts, humans excel at various tasks by adaptively modulating their arm impedance parameters. This ability allows us to perform contact tasks successfully even in uncertain environments. This paper proposes a motor-skill learning strategy for robotic contact tasks based on human motor control theory and machine learning schemes. Our robot learning method employs impedance control based on equilibrium point control theory, together with reinforcement learning to determine the impedance parameters for contact tasks. A recursive least-squares filter-based episodic natural actor-critic algorithm is used to find the optimal impedance parameters. The effectiveness of the proposed method was tested through dynamic simulations of various contact tasks. The simulation results demonstrate that the proposed method optimizes contact-task performance under uncertain environmental conditions. | - |
dc.language | English | - |
dc.language.iso | en | - |
dc.publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC | - |
dc.subject | REINFORCEMENT | - |
dc.subject | PARAMETERS | - |
dc.subject | TORQUE | - |
dc.title | Impedance Learning for Robotic Contact Tasks Using Natural Actor-Critic Algorithm | - |
dc.type | Article | - |
dc.contributor.affiliatedAuthor | Park, Jooyoung | - |
dc.contributor.affiliatedAuthor | Park, Shinsuk | - |
dc.identifier.doi | 10.1109/TSMCB.2009.2026289 | - |
dc.identifier.scopusid | 2-s2.0-77949776001 | - |
dc.identifier.wosid | 000275665300013 | - |
dc.identifier.bibliographicCitation | IEEE TRANSACTIONS ON SYSTEMS MAN AND CYBERNETICS PART B-CYBERNETICS, v.40, no.2, pp.433 - 443 | - |
dc.relation.isPartOf | IEEE TRANSACTIONS ON SYSTEMS MAN AND CYBERNETICS PART B-CYBERNETICS | - |
dc.citation.title | IEEE TRANSACTIONS ON SYSTEMS MAN AND CYBERNETICS PART B-CYBERNETICS | - |
dc.citation.volume | 40 | - |
dc.citation.number | 2 | - |
dc.citation.startPage | 433 | - |
dc.citation.endPage | 443 | - |
dc.type.rims | ART | - |
dc.type.docType | Article | - |
dc.description.journalClass | 1 | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
dc.relation.journalResearchArea | Automation & Control Systems | - |
dc.relation.journalResearchArea | Computer Science | - |
dc.relation.journalWebOfScienceCategory | Automation & Control Systems | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Artificial Intelligence | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Cybernetics | - |
dc.subject.keywordPlus | REINFORCEMENT | - |
dc.subject.keywordPlus | PARAMETERS | - |
dc.subject.keywordPlus | TORQUE | - |
dc.subject.keywordAuthor | Contact task | - |
dc.subject.keywordAuthor | equilibrium point control | - |
dc.subject.keywordAuthor | reinforcement learning | - |
dc.subject.keywordAuthor | robot manipulation | - |
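The abstract describes learning impedance parameters with an episodic natural actor-critic (eNAC) algorithm. The following is a minimal illustrative sketch of that idea, not the paper's implementation: it learns a single stiffness value for a hypothetical toy contact task (match a desired contact force through a fixed penetration depth), using a Gaussian policy whose mean is updated along the natural gradient. The task constants, policy parameterization, and hyperparameters are all assumptions for illustration; the paper itself uses a recursive least-squares filter-based eNAC over multiple impedance parameters in dynamic simulations.

```python
import numpy as np

def run_enac(iters=200, batch=20, theta0=100.0, sigma=30.0, alpha=0.5, seed=0):
    """Toy episodic natural actor-critic for a scalar stiffness parameter.

    Hypothetical contact task: choose a stiffness K (N/m) so that the
    contact force K * penetration matches a desired force, so the optimal
    stiffness here is f_desired / penetration = 500 N/m.
    """
    rng = np.random.default_rng(seed)
    penetration, f_desired = 0.01, 5.0   # assumed task constants
    theta = theta0                       # policy: K ~ N(theta, sigma^2)
    for _ in range(iters):
        # Sample one stiffness per episode and collect episodic returns
        Ks = theta + sigma * rng.normal(size=batch)
        R = -(Ks * penetration + 0.05 * rng.normal(size=batch) - f_desired) ** 2
        # Score (log-likelihood gradient) of the Gaussian policy w.r.t. theta
        psi = (Ks - theta) / sigma**2
        # Episodic NAC step: least-squares fit R ~ psi * w + J.  For this
        # policy class the Fisher information is 1/sigma^2, so the regression
        # coefficient w is exactly the natural-gradient estimate.
        X = np.column_stack([psi, np.ones(batch)])
        w, _J = np.linalg.lstsq(X, R, rcond=None)[0]
        theta += alpha * w               # ascend the natural gradient
    return theta
```

Under these assumptions the policy mean converges near the optimal stiffness of 500 N/m; the same least-squares structure is what the paper's recursive least-squares filter computes incrementally instead of in batch.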
Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.