Emergent Communication in Multi-Agent Reinforcement Learning for Future Wireless Networks

Marwa Chafii, Salmane Naoumi, Reda Alami, Ebtesam Almazrouei, Mehdi Bennis, Merouane Debbah

Research output: Contribution to journal › Article › peer-review


In many wireless network scenarios, multiple network entities must cooperate to accomplish a common task with minimal delay and energy consumption. Future wireless networks will require exchanging high-dimensional data in dynamic and uncertain environments, so implementing communication control tasks becomes challenging and highly complex. Multi-agent reinforcement learning with emergent communication (EC-MARL) is a promising approach to high-dimensional continuous control problems with partially observable states, in which cooperating agents build an emergent communication protocol to solve complex tasks. This article articulates the importance of EC-MARL in the context of future 6G wireless networks, which imbue network entities with autonomous decision-making capabilities to solve complex tasks such as autonomous driving, robot navigation, network planning for flying base stations, and smart city applications. An overview of EC-MARL algorithms and their design criteria is provided, along with use cases and research opportunities on this emerging topic.
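To make the EC-MARL setting concrete, the following is a minimal illustrative sketch, not the paper's algorithm: each agent encodes its partial observation into a learned message vector, broadcasts it to its peers, and selects an action conditioned on its own observation plus the received messages. The weights here are random placeholders; in an actual EC-MARL system they would be trained end-to-end with a reinforcement learning objective, and the class and dimension names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

class CommAgent:
    """Toy agent: encodes a partial observation into a message and
    selects an action from its observation plus peers' messages.
    (Illustrative sketch only; weights are random, not learned.)"""
    def __init__(self, obs_dim, msg_dim, n_agents, n_actions):
        # In EC-MARL these weights would be optimized end-to-end so that
        # a useful communication protocol emerges from the task reward.
        self.W_msg = rng.normal(size=(obs_dim, msg_dim))
        self.W_act = rng.normal(
            size=(obs_dim + msg_dim * (n_agents - 1), n_actions))

    def speak(self, obs):
        # Continuous bounded message vector broadcast to peers.
        return np.tanh(obs @ self.W_msg)

    def act(self, obs, peer_msgs):
        # Condition the action on local observation + received messages,
        # compensating for partial observability.
        logits = np.concatenate([obs, *peer_msgs]) @ self.W_act
        return int(np.argmax(logits))

# One communication round among 3 agents with 4-dim partial observations.
n_agents, obs_dim, msg_dim, n_actions = 3, 4, 2, 5
agents = [CommAgent(obs_dim, msg_dim, n_agents, n_actions)
          for _ in range(n_agents)]
observations = [rng.normal(size=obs_dim) for _ in range(n_agents)]

messages = [a.speak(o) for a, o in zip(agents, observations)]
actions = [a.act(o, [m for j, m in enumerate(messages) if j != i])
           for i, (a, o) in enumerate(zip(agents, observations))]
print(actions)  # one discrete action index per agent
```

The design choice illustrated is the two-phase step common to EC-MARL algorithms: a "speak" phase that produces messages and an "act" phase that consumes them, letting agents compensate for partial observability through the exchanged channel.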

Original language: English (US)
Pages (from-to): 18-24
Number of pages: 7
Journal: IEEE Internet of Things Magazine
Issue number: 4
State: Published - Dec 1 2023

ASJC Scopus subject areas

  • Software
  • Computer Networks and Communications
  • Computer Science Applications
  • Hardware and Architecture
  • Information Systems

