Reconfigurable Intelligent Surface-Assisted Aerial Nonterrestrial Networks: An Intelligent Synergy With Deep Reinforcement Learning

Muhammad Umer, Muhammad Ahmed Mohsin, Aryan Kaushik, Qurrat Ul Ain Nadeem, Ali Arshad Nasir, Syed Ali Hassan

Research output: Contribution to journal › Article › peer-review

Abstract

Reconfigurable intelligent surface (RIS)-assisted aerial non-terrestrial networks (NTNs) offer a promising paradigm for enhancing wireless communications in the era of 6G and beyond. By integrating RIS with aerial platforms such as unmanned aerial vehicles (UAVs) and high-altitude platforms (HAPs), these networks can intelligently control signal propagation, extending coverage, improving capacity, and enhancing link reliability. This article explores the application of deep reinforcement learning (DRL) as a powerful tool for optimizing RIS-assisted aerial NTNs. We focus on hybrid proximal policy optimization (H-PPO), a robust DRL algorithm well-suited for handling the complex, hybrid action spaces inherent in these networks. Through a case study of an aerial RIS (ARIS)-aided coordinated multi-point non-orthogonal multiple access (CoMP-NOMA) network, we demonstrate how H-PPO can effectively optimize the system and maximize the sum rate while adhering to system constraints. Finally, we discuss key challenges and promising research directions for DRL-powered RIS-assisted aerial NTNs, highlighting their potential to transform next-generation wireless networks.
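The hybrid action space mentioned in the abstract pairs discrete decisions (e.g., quantized RIS phase-shift indices) with continuous ones (e.g., UAV velocity). The following is a minimal, self-contained sketch of that idea, not the paper's actual H-PPO implementation: a toy linear policy with a categorical head per RIS element and a Gaussian head for UAV motion, combined with the standard PPO clipped surrogate loss. All dimensions, feature definitions, and the reward/advantage are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not from the paper)
N_ELEMENTS = 16   # RIS reflecting elements, each with a discrete phase index
N_PHASES = 4      # 2-bit phase quantization: {0, pi/2, pi, 3*pi/2}
STATE_DIM = 8     # abstract channel/position features

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

class HybridPolicy:
    """Toy hybrid actor: a categorical head per RIS element (discrete
    phase-shift selection) and a Gaussian head for UAV velocity."""
    def __init__(self):
        self.w_disc = rng.normal(scale=0.1, size=(STATE_DIM, N_ELEMENTS * N_PHASES))
        self.w_cont = rng.normal(scale=0.1, size=(STATE_DIM, 2))  # mean of (vx, vy)
        self.log_std = np.zeros(2)

    def act(self, state):
        # Discrete branch: one categorical distribution per RIS element
        logits = (state @ self.w_disc).reshape(N_ELEMENTS, N_PHASES)
        probs = softmax(logits)
        phase_idx = np.array([rng.choice(N_PHASES, p=p) for p in probs])
        # Continuous branch: Gaussian over UAV velocity
        mean = state @ self.w_cont
        vel = mean + np.exp(self.log_std) * rng.standard_normal(2)
        # Joint log-probability = sum of discrete log-probs + Gaussian log-prob
        logp_disc = np.log(probs[np.arange(N_ELEMENTS), phase_idx]).sum()
        logp_cont = -0.5 * (((vel - mean) / np.exp(self.log_std)) ** 2
                            + 2 * self.log_std + np.log(2 * np.pi)).sum()
        return phase_idx, vel, logp_disc + logp_cont

def ppo_clip_loss(logp_new, logp_old, advantage, eps=0.2):
    """Standard PPO clipped surrogate objective (negated for minimization)."""
    ratio = np.exp(logp_new - logp_old)
    return -np.minimum(ratio * advantage,
                       np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage)

policy = HybridPolicy()
state = rng.standard_normal(STATE_DIM)
phases, velocity, logp = policy.act(state)
# Pretend an update shifted the joint log-prob slightly; advantage is a placeholder
loss = ppo_clip_loss(logp + 0.05, logp, advantage=1.0)
```

In an actual H-PPO agent the two heads share a learned state encoder and both contribute to the clipped objective through the joint log-probability, which is what the last two lines imitate.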

Original language: English (US)
Pages (from-to): 55-64
Number of pages: 10
Journal: IEEE Vehicular Technology Magazine
Volume: 20
Issue number: 1
DOIs
State: Published - 2025

Keywords

  • Heuristic algorithms
  • Internet of Things
  • Optimization
  • Reflection
  • Reliability
  • Resource management
  • Satellites
  • Trajectory
  • Vehicle dynamics
  • Wireless networks

ASJC Scopus subject areas

  • Automotive Engineering

