Published January 1, 2023 | Version v1
Journal article | Open Access

Energy-Efficient RL-Based Aerial Network Deployment Testbed for Disaster Areas

  • 1. Istanbul Technical University, Department of Computer Engineering, Istanbul, Türkiye

Description

The rapid deployment of wireless devices with 5G and beyond has enabled a connected world. However, the surge in demand immediately after a disaster can temporarily paralyze network infrastructure. A continuous flow of information is crucial during disasters to coordinate rescue operations and identify survivors.

Communication infrastructure built for users in disaster areas should provide rapid deployment, increased coverage, and availability. Unmanned aerial vehicles (UAVs) offer a potential solution for rapid deployment, as they are unaffected by traffic jams and physical road damage during a disaster. In addition, ad-hoc WiFi communication allows broadcast domains to be created within a clear channel, which eases one-to-many communication. Moreover, reinforcement learning (RL) helps reduce the computational cost and increase the solution accuracy of the NP-hard aerial network deployment problem.

To this end, a novel flying WiFi ad-hoc network management model is proposed in this paper. The model utilizes deep Q-learning to maintain quality of service (QoS), increase user equipment (UE) coverage, and optimize power efficiency. Furthermore, a testbed is deployed on the Istanbul Technical University (ITU) campus to train the developed model. Training on the testbed yields over 90% packet delivery ratio as the QoS metric, over 97% coverage for the users in the flow tables, and 0.28 kJ/bit average power consumption.
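The abstract describes an RL agent that trades off UE coverage against power consumption. The paper trains a deep Q-network; as a hedged illustration only, the sketch below uses plain tabular Q-learning on a toy grid with assumed user positions, an assumed coverage radius, and an assumed movement-energy penalty — none of these details come from the paper.

```python
import random

# Hypothetical toy setup: one UAV on a 5x5 grid must hover near user
# clusters to maximize coverage while paying an energy cost for moving.
# This illustrates the coverage-vs-power reward shaping described in the
# abstract, NOT the paper's actual deep Q-learning model or testbed.

GRID = 5
USERS = [(1, 1), (1, 2), (3, 3), (4, 3)]              # assumed user positions
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0), (0, 0)]  # move in 4 directions, or hover

def covered(pos, radius=1):
    """Count users within Chebyshev distance `radius` of the UAV."""
    return sum(1 for u in USERS
               if max(abs(u[0] - pos[0]), abs(u[1] - pos[1])) <= radius)

def step(pos, action):
    """Apply an action, clamp to the grid, and return (next_pos, reward)."""
    nx = min(max(pos[0] + action[0], 0), GRID - 1)
    ny = min(max(pos[1] + action[1], 0), GRID - 1)
    move_cost = 0.1 if action != (0, 0) else 0.0      # assumed energy penalty
    return (nx, ny), covered((nx, ny)) - move_cost

def train(episodes=2000, alpha=0.1, gamma=0.9, eps=0.2):
    """Standard epsilon-greedy tabular Q-learning loop."""
    q = {}  # (state, action) -> estimated value
    for _ in range(episodes):
        pos = (0, 0)
        for _ in range(20):
            if random.random() < eps:
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: q.get((pos, x), 0.0))
            nxt, r = step(pos, a)
            best_next = max(q.get((nxt, x), 0.0) for x in ACTIONS)
            q[(pos, a)] = q.get((pos, a), 0.0) + alpha * (
                r + gamma * best_next - q.get((pos, a), 0.0))
            pos = nxt
    return q

random.seed(0)
q_table = train()

# Greedy rollout: the learned policy should settle near a user cluster.
pos = (0, 0)
for _ in range(10):
    pos, _ = step(pos, max(ACTIONS, key=lambda a: q_table.get((pos, a), 0.0)))
print("final position:", pos, "users covered:", covered(pos))
```

In the paper's setting the state and action spaces are far larger (UAV placements over a real campus), which is why a deep Q-network replaces the lookup table; the reward structure sketched here is the part that carries over.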

Files

bib-728ea4a6-a27c-4c35-aa71-7662d52b9b66.txt
