TY - GEN
T1 - Reinforcement learning of millimeter wave beamforming tracking over COSMOS platform
AU - Nasim, Imtiaz
AU - Skrimponis, Panagiotis
AU - Ibrahim, Ahmed S.
AU - Rangan, Sundeep
AU - Seskar, Ivan
N1 - Publisher Copyright:
© 2022 ACM.
PY - 2022/10/17
Y1 - 2022/10/17
N2 - Communication over large-bandwidth millimeter wave (mmWave) spectrum bands can provide high data rates by utilizing high-gain beamforming vectors (beams). Real-time tracking of such beams, which is needed to support mobile users, can be accomplished by developing machine learning (ML) models. While computer simulations have shown the success of such ML models, experimental results are still limited. Consequently, in this paper, we verify the effectiveness of mmWave beam tracking over the open-source COSMOS testbed. In particular, we utilize a multi-armed bandit (MAB) scheme, which follows a reinforcement learning (RL) approach. In our MAB-based beam tracking model, beam selection is modeled as the action, while the reward is modeled as the link throughput. Experimental results, conducted over the 60-GHz COSMOS-based mobile platform, show that the MAB-based beam tracking model achieves almost 92% of the throughput of the Genie-aided beams after a few learning samples.
AB - Communication over large-bandwidth millimeter wave (mmWave) spectrum bands can provide high data rates by utilizing high-gain beamforming vectors (beams). Real-time tracking of such beams, which is needed to support mobile users, can be accomplished by developing machine learning (ML) models. While computer simulations have shown the success of such ML models, experimental results are still limited. Consequently, in this paper, we verify the effectiveness of mmWave beam tracking over the open-source COSMOS testbed. In particular, we utilize a multi-armed bandit (MAB) scheme, which follows a reinforcement learning (RL) approach. In our MAB-based beam tracking model, beam selection is modeled as the action, while the reward is modeled as the link throughput. Experimental results, conducted over the 60-GHz COSMOS-based mobile platform, show that the MAB-based beam tracking model achieves almost 92% of the throughput of the Genie-aided beams after a few learning samples.
KW - COSMOS testbed
KW - beamforming tracking
KW - millimeter wave
KW - multi-armed bandit
KW - reinforcement learning
KW - wireless experimentation
UR - http://www.scopus.com/inward/record.url?scp=85144041988&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85144041988&partnerID=8YFLogxK
U2 - 10.1145/3556564.3558242
DO - 10.1145/3556564.3558242
M3 - Conference contribution
AN - SCOPUS:85144041988
T3 - WiNTECH 2022 - Proceedings of the 2022 16th ACM Workshop on Wireless Network Testbeds, Experimental evaluation and CHaracterization, Part of MobiCom 2022
SP - 40
EP - 44
BT - WiNTECH 2022 - Proceedings of the 2022 16th ACM Workshop on Wireless Network Testbeds, Experimental evaluation and CHaracterization, Part of MobiCom 2022
PB - Association for Computing Machinery, Inc
T2 - 16th ACM Workshop on Wireless Network Testbeds, Experimental evaluation and CHaracterization, WiNTECH 2022 - Part of MobiCom 2022
Y2 - 17 October 2022
ER -