
QECO_V1.0


QECO: A QoE-Oriented Computation Offloading Algorithm based on Deep Reinforcement Learning for Mobile Edge Computing

We’re excited to share QECO, a QoE-Oriented Computation Offloading algorithm designed to maximize Quality of Experience (QoE) in Mobile Edge Computing (MEC) systems. The repository includes the key components: the training scripts, the MEC environment simulation, and the D3QN-based network model, all implemented in Python using TensorFlow.

Overview

QECO balances and prioritizes QoE factors based on the requirements of individual mobile devices (MDs) while accounting for the dynamic workloads at the edge nodes (ENs). It captures the dynamics of the MEC environment by integrating a Dueling Double Deep Q-Network (D3QN) with Long Short-Term Memory (LSTM) networks, and addresses the QoE maximization problem by efficiently utilizing the resources of both MDs and ENs.
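
As a rough illustration of the kind of objective at play, the toy function below scores a completed task by trading off completion delay against device energy, with a fixed penalty for missing the deadline. The weights, the drop penalty, and the functional form are assumptions made purely for illustration, not the QoE utility defined in the paper.

# Illustrative only: the paper defines its own QoE utility; the weights and
# the drop penalty here are placeholders, not the actual formulation.
W_DELAY, W_ENERGY = 1.0, 0.5

def qoe_score(delay, energy, deadline, drop_penalty=-1.0):
    # Tasks that miss their deadline receive a fixed penalty; otherwise the
    # score decreases with both completion delay and energy consumption.
    if delay > deadline:
        return drop_penalty
    return -(W_DELAY * delay + W_ENERGY * energy)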

  • D3QN: By integrating double Q-learning with a dueling network architecture, D3QN mitigates overestimation bias in action-value predictions and separates the value of a state from the advantage of each action. This improves the accuracy of the model’s value estimates, providing a foundation for better offloading strategies.

  • LSTM: Incorporating LSTM networks allows the model to continuously estimate the dynamic workloads at the edge servers. This is crucial for dealing with limited global information and adapting to an uncertain MEC environment with multiple MDs and ENs. By predicting the future workload of edge servers, MDs can adjust their offloading strategies to achieve higher QoE. (A minimal sketch combining both components follows this list.)
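
The two components fit together roughly as sketched below: an LSTM encodes a short history of observations, dueling value/advantage heads produce the Q-values, and double Q-learning builds the training targets. This is a minimal TensorFlow illustration under assumed dimensions (history length, observation size, action count), not the repository’s exact model.

import tensorflow as tf
from tensorflow.keras import layers

# Assumed dimensions -- the repository defines its own observation and
# action spaces, so treat these as placeholders.
HISTORY_LEN = 8    # observation history length fed to the LSTM
OBS_DIM = 10       # per-step observation size
NUM_ACTIONS = 5    # e.g., local execution or offloading to one of several ENs

def build_d3qn(num_actions=NUM_ACTIONS):
    history = layers.Input(shape=(HISTORY_LEN, OBS_DIM))
    # The LSTM summarizes recent observations, letting the network track
    # edge-node workload dynamics that a single snapshot cannot reveal.
    h = layers.LSTM(64)(history)
    h = layers.Dense(128, activation="relu")(h)
    # Dueling heads: state value V(s) and per-action advantages A(s, a),
    # combined as Q(s, a) = V(s) + A(s, a) - mean_a' A(s, a').
    v = layers.Dense(1)(h)
    a = layers.Dense(num_actions)(h)
    q = v + a - tf.reduce_mean(a, axis=1, keepdims=True)
    return tf.keras.Model(history, q)

online_net = build_d3qn()
target_net = build_d3qn()
target_net.set_weights(online_net.get_weights())

def double_q_targets(rewards, next_histories, dones, gamma=0.99):
    # Double Q-learning: the online net selects the next action and the
    # target net evaluates it, which curbs overestimation bias.
    next_actions = tf.argmax(online_net(next_histories), axis=1)
    next_q = tf.gather(target_net(next_histories), next_actions,
                       axis=1, batch_dims=1)
    return rewards + gamma * (1.0 - dones) * next_q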

Citation

I. Rahmati, H. Shah-Mansouri, and A. Movaghar, "QECO: A QoE-Oriented Computation Offloading Algorithm based on Deep Reinforcement Learning for Mobile Edge Computing," arXiv preprint arXiv:2311.02525, 2024.

@article{rahmati2024qeco,
  title={QECO: A QoE-Oriented Computation Offloading Algorithm based on Deep Reinforcement Learning for Mobile Edge Computing},
  author={Rahmati, Iman and Shah-Mansouri, Hamed and Movaghar, Ali},
  journal={arXiv preprint arXiv:2311.02525},
  year={2024}
}

Future Directions

  • Addressing single-agent non-stationarity issues by leveraging multi-agent DRL.
  • Accelerating the learning of optimal offloading policies by taking advantage of Federated Learning techniques in the training process. This will allow MDs to collectively contribute to improving the offloading model and enable continuous learning when new MDs join the network.
  • Addressing partial observability by formulating the problem as a decentralized partially observable Markov decision process (Dec-POMDP).
  • Extending the task model to capture interdependencies among tasks, e.g., by incorporating a task call graph representation.
  • Implementation of the D3QN algorithm using PyTorch, focusing on efficient parallelization and enhanced model stability.

Get Involved

We welcome contributions! For bug reports, feature requests, or pull requests, please follow the instructions in the repository’s Contributing section.

Check it out: QECO GitHub Repo

Full Changelog: https://github.com/ImanRHT/QECO/commits/QECO