Multi-Agent Deep Reinforcement Learning for Distributed Resource Management in Wirelessly Powered Communication Networks
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Hwang, Sangwon | - |
dc.contributor.author | Kim, Hanjin | - |
dc.contributor.author | Lee, Hoon | - |
dc.contributor.author | Lee, Inkyu | - |
dc.date.accessioned | 2021-08-30T09:47:00Z | - |
dc.date.available | 2021-08-30T09:47:00Z | - |
dc.date.created | 2021-06-18 | - |
dc.date.issued | 2020-11 | - |
dc.identifier.issn | 0018-9545 | - |
dc.identifier.uri | https://scholar.korea.ac.kr/handle/2021.sw.korea/52026 | - |
dc.description.abstract | This paper studies multi-agent deep reinforcement learning (MADRL) based resource allocation methods for multi-cell wireless powered communication networks (WPCNs), where multiple hybrid access points (H-APs) wirelessly charge energy-limited users in order to collect data from them. We design a distributed reinforcement learning strategy in which the H-APs individually determine time and power allocation variables. Unlike traditional centralized optimization algorithms, which require global information gathered at a central unit, the proposed MADRL technique models each H-AP as an agent that produces its actions based only on its own locally observable states. Numerical results verify that the proposed approach achieves performance comparable to that of centralized algorithms. | - |
dc.language | English | - |
dc.language.iso | en | - |
dc.publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC | - |
dc.subject | ALLOCATION | - |
dc.subject | MAXIMIZATION | - |
dc.subject | INFORMATION | - |
dc.subject | NOMA | - |
dc.title | Multi-Agent Deep Reinforcement Learning for Distributed Resource Management in Wirelessly Powered Communication Networks | - |
dc.type | Article | - |
dc.contributor.affiliatedAuthor | Lee, Inkyu | - |
dc.identifier.doi | 10.1109/TVT.2020.3029609 | - |
dc.identifier.scopusid | 2-s2.0-85096222511 | - |
dc.identifier.wosid | 000589638700143 | - |
dc.identifier.bibliographicCitation | IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, v.69, no.11, pp.14055 - 14060 | - |
dc.relation.isPartOf | IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY | - |
dc.citation.title | IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY | - |
dc.citation.volume | 69 | - |
dc.citation.number | 11 | - |
dc.citation.startPage | 14055 | - |
dc.citation.endPage | 14060 | - |
dc.type.rims | ART | - |
dc.type.docType | Article | - |
dc.description.journalClass | 1 | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
dc.relation.journalResearchArea | Engineering | - |
dc.relation.journalResearchArea | Telecommunications | - |
dc.relation.journalResearchArea | Transportation | - |
dc.relation.journalWebOfScienceCategory | Engineering, Electrical & Electronic | - |
dc.relation.journalWebOfScienceCategory | Telecommunications | - |
dc.relation.journalWebOfScienceCategory | Transportation Science & Technology | - |
dc.subject.keywordPlus | ALLOCATION | - |
dc.subject.keywordPlus | MAXIMIZATION | - |
dc.subject.keywordPlus | INFORMATION | - |
dc.subject.keywordPlus | NOMA | - |
dc.subject.keywordAuthor | Resource management | - |
dc.subject.keywordAuthor | Interference | - |
dc.subject.keywordAuthor | Optimization | - |
dc.subject.keywordAuthor | Uplink | - |
dc.subject.keywordAuthor | Wireless communication | - |
dc.subject.keywordAuthor | Downlink | - |
dc.subject.keywordAuthor | Wireless sensor networks | - |
dc.subject.keywordAuthor | Wireless powered communication networks | - |
dc.subject.keywordAuthor | multi-agent deep reinforcement learning | - |
dc.subject.keywordAuthor | actor-critic method | - |
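The abstract describes each H-AP as an independent agent that chooses its actions from locally observable states, trained with an actor-critic method. A minimal sketch of that idea is below, under loud assumptions: the paper uses deep networks over continuous time/power variables, whereas this toy uses a tabular softmax actor with a scalar critic baseline over a few discrete power levels, and all names (`Agent`, `local_reward`, the power levels) are illustrative, not the authors' implementation.

```python
import math
import random

random.seed(0)

class Agent:
    """One H-AP: a tiny tabular actor-critic over discrete power levels (illustrative)."""
    def __init__(self, n_actions, lr=0.1):
        self.theta = [0.0] * n_actions   # actor: softmax action preferences
        self.value = 0.0                 # critic: running baseline estimate of reward
        self.lr = lr

    def policy(self):
        m = max(self.theta)
        exps = [math.exp(t - m) for t in self.theta]
        s = sum(exps)
        return [e / s for e in exps]

    def act(self):
        # Sample an action from the softmax policy.
        p, r, c = self.policy(), random.random(), 0.0
        for a, pa in enumerate(p):
            c += pa
            if r <= c:
                return a
        return len(p) - 1

    def update(self, action, reward):
        # Critic: move the baseline toward the observed local reward.
        advantage = reward - self.value
        self.value += self.lr * advantage
        # Actor: policy-gradient step; grad log softmax = indicator - pi(a).
        p = self.policy()
        for a in range(len(self.theta)):
            grad = (1.0 if a == action else 0.0) - p[a]
            self.theta[a] += self.lr * advantage * grad

def local_reward(own_action, other_actions, levels):
    """Toy rate-like reward: grows with own power, shrinks with others' interference."""
    interference = sum(levels[a] for a in other_actions)
    return math.log(1.0 + levels[own_action] / (1.0 + interference))

levels = [0.1, 0.5, 1.0]                          # assumed discrete power levels
agents = [Agent(len(levels)) for _ in range(3)]   # three H-APs as agents

for episode in range(2000):
    actions = [ag.act() for ag in agents]
    for i, ag in enumerate(agents):
        # Each agent updates from its own local reward only; no central unit.
        r = local_reward(actions[i], actions[:i] + actions[i + 1:], levels)
        ag.update(actions[i], r)

print([max(range(len(levels)), key=lambda a: ag.policy()[a]) for ag in agents])
```

In this toy reward each agent's rate is monotone in its own power, so the distributed learners settle on the highest power level; the point of the sketch is only the structure (local observation, per-agent actor and critic, no global information exchange), not the equilibrium.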
Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.
Certain data included herein are derived from the © Web of Science of Clarivate Analytics. All rights reserved.
You may not copy or re-distribute this material in whole or in part without the prior written consent of Clarivate Analytics.