Abstract
Non-orthogonal multiple access (NOMA) is a radio access technique that multiplexes several users over the same frequency resource, providing high throughput and fairness among users. Maximizing the minimum data-rate, known as max-min fairness, is a popular approach to ensuring fairness among users. In NOMA, max-min is performed by optimizing the users' transmission powers (power-coefficients). For more than two users, the problem is a constrained non-convex optimization. We propose to solve this problem with Double Deep Q Learning (DDQL), a popular reinforcement learning technique. DDQL employs a Deep Q-Network (DQN) that learns to choose actions that optimize the users' power-coefficients. The model of the Markov Decision Process (MDP) is critical to the success of DDQL, since it shapes how well the DQN learns to take good actions. We propose an MDP model in which the state consists of the power-coefficient values, the users' data-rates, and vectors indicating which power-coefficients can be increased or decreased. An action simultaneously increases the power-coefficient of one user and reduces that of another user by the same amount, where the amount of change can be small or large; the action space contains all possible ways to alter the values of any two users at a time. The DQN consists of a convolutional layer followed by fully connected layers. We compared the proposed method with the sequential least squares programming and trust-region constrained algorithms and found that it produces competitive results.
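As a rough illustration of the MDP described in the abstract, the sketch below encodes the state (power-coefficients, data-rates, and increase/decrease indicator vectors), the pairwise power-shifting action space, and the max-min reward in Python. The user count, channel gains, noise power, step sizes, and the standard downlink NOMA rate formula with successive interference cancellation (SIC) are assumptions made for this example; they are not values taken from the paper.

```python
import itertools
import numpy as np

# Illustrative parameters -- assumptions, not values from the paper.
N_USERS = 3                        # number of multiplexed users
STEPS = (0.01, 0.05)               # "small" and "large" change amounts
GAINS = np.array([1.0, 0.5, 0.2])  # channel gains, sorted strongest first
NOISE = 0.01                       # noise power

# Action space: every ordered user pair (i, j), i != j, combined with a
# step size. Action (i, j, d) adds d to user i's power-coefficient and
# subtracts d from user j's, so total transmit power is conserved.
ACTIONS = [(i, j, d)
           for i, j in itertools.permutations(range(N_USERS), 2)
           for d in STEPS]

def apply_action(p, action):
    """Shift power between two users; clip so coefficients stay in [0, 1],
    then renormalize so the coefficients still sum to one."""
    i, j, d = action
    q = p.copy()
    q[i] = min(q[i] + d, 1.0)
    q[j] = max(q[j] - d, 0.0)
    return q / q.sum()

def data_rates(p):
    """Standard downlink power-domain NOMA rates with SIC: user k cancels
    weaker-channel users' signals, so residual interference comes only
    from the stronger-channel users 0..k-1."""
    rates = np.empty(N_USERS)
    for k in range(N_USERS):
        interference = GAINS[k] * p[:k].sum()
        rates[k] = np.log2(1.0 + GAINS[k] * p[k] / (interference + NOISE))
    return rates

def state(p):
    """State: coefficients, rates, and vectors flagging which
    coefficients can still be increased or decreased."""
    can_up = (p < 1.0).astype(float)
    can_down = (p > 0.0).astype(float)
    return np.concatenate([p, data_rates(p), can_up, can_down])

def reward(p):
    """Max-min objective: the minimum data-rate across users."""
    return data_rates(p).min()

if __name__ == "__main__":
    p = np.full(N_USERS, 1.0 / N_USERS)  # start from equal power
    p = apply_action(p, ACTIONS[0])
    print(state(p), reward(p))
```

Under these assumptions a DDQL agent would observe `state(p)`, pick an index into `ACTIONS` via its DQN, and receive `reward(p)` after the transition; the indicator vectors let the network learn to avoid actions that would push a coefficient outside its feasible range.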
Original language | English (US) |
---|---|
Article number | 9262953 |
Pages (from-to) | 211235-211247 |
Number of pages | 13 |
Journal | IEEE Access |
Volume | 8 |
DOIs | |
State | Published - 2020 |
Keywords
- deep reinforcement learning
- double deep Q learning
- non-convex optimization
- non-orthogonal multiplexing
- power-domain NOMA
ASJC Scopus subject areas
- General Computer Science
- General Materials Science
- General Engineering