dc.description |
The integration of highly mobile unmanned aerial vehicles (UAVs) into wireless networks offers many advantages, such as on-demand deployment and dynamic three-dimensional movement, which can shorten the response time of search-and-rescue operations. However, with the rapid growth of demands and applications over wireless networks, an unprecedented communication burden has been placed on these networks, owing to scarce channel resources and the large volumes of data to be transmitted.
Vision sensors mounted on UAVs can collaboratively collect huge amounts of data that must be processed or transmitted within the network, and extracting semantic information from such high volumes of raw data is computationally intensive. In addition, emerging over-the-air deep learning models introduce further challenges and opportunities for practical applications. Because of the limited battery capacity of UAVs, research has been conducted on maximising their operating time, including optimised data transmission, computing models, and power-allocation strategies. Unlike traditional wireless sensor networks, in which communication usually consumes the majority of the energy, onboard processing of sensor data in UAV systems can consume a large portion of the energy reserve.
This motivates us to develop novel energy-efficient technologies that jointly consider computational efficiency and communication cost, match the distinct characteristics of UAV-enabled networks, and support more flexible and intelligent practical applications. The key contributions of this thesis are summarised as follows.
First, we develop an efficient video-encoding solution for UAVs that uses global motion information to improve residual-content removal and to augment the encoding algorithm in wireless networks. Unlike existing encoding schemes designed for deterministic networking environments, encoding in resource-limited UAV systems must contend with a dynamic wireless environment and scarce power.
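As a rough illustration of the idea, the sketch below estimates the global (camera) motion between consecutive frames and removes it before encoding the residual. It uses off-the-shelf OpenCV routines rather than the thesis's encoder, and the function and parameter names are illustrative assumptions.

```python
# Minimal sketch (not the thesis implementation): global-motion-compensated
# residual between consecutive UAV video frames, using OpenCV.
import cv2
import numpy as np

def gmc_residual(prev_frame: np.ndarray, curr_frame: np.ndarray) -> np.ndarray:
    """Estimate global (camera) motion and return the motion-compensated residual."""
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY)

    # Track sparse features to estimate the dominant ego-motion of the UAV camera.
    pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                       qualityLevel=0.01, minDistance=8)
    pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                   pts_prev, None)
    good_prev = pts_prev[status.flatten() == 1]
    good_curr = pts_curr[status.flatten() == 1]

    # A homography models the global motion; RANSAC rejects independently
    # moving objects (needs at least 4 tracked points).
    H, _ = cv2.findHomography(good_prev, good_curr, cv2.RANSAC, 3.0)

    # Warp the previous frame onto the current one; what remains after
    # subtraction is the residual content left to encode.
    h, w = curr_gray.shape
    compensated = cv2.warpPerspective(prev_gray, H, (w, h))
    return cv2.absdiff(curr_gray, compensated)
```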
Next, a visual retrieval system combining multi-object detection and descriptor learning is introduced. To further exploit the manoeuvrability of UAVs in real-world applications such as search and rescue, deep learning techniques are employed to discover high-level semantic abstractions of the visual information.
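A minimal sketch of such a pipeline is given below, assuming off-the-shelf torchvision models in place of the thesis's own networks: objects are detected in a UAV frame, each crop is embedded as a descriptor, and gallery items are ranked by cosine similarity.

```python
# Illustrative sketch only: detection + descriptor extraction + retrieval.
import torch
import torch.nn.functional as F
import torchvision

detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
backbone = torchvision.models.resnet18(weights="DEFAULT")
backbone.fc = torch.nn.Identity()   # use penultimate features as descriptors
backbone.eval()

@torch.no_grad()
def frame_descriptors(frame: torch.Tensor, score_thresh: float = 0.5) -> torch.Tensor:
    """frame: (3, H, W) float tensor in [0, 1]. Returns one descriptor per detection."""
    det = detector([frame])[0]
    descs = []
    for box, score in zip(det["boxes"], det["scores"]):
        if score < score_thresh:
            continue
        x1, y1, x2, y2 = box.int().tolist()
        crop = frame[:, y1:y2, x1:x2].unsqueeze(0)
        crop = F.interpolate(crop, size=(224, 224), mode="bilinear")
        descs.append(F.normalize(backbone(crop), dim=1))
    return torch.cat(descs) if descs else torch.empty(0, 512)

@torch.no_grad()
def retrieve(query: torch.Tensor, gallery: torch.Tensor, k: int = 5):
    """Rank gallery descriptors by cosine similarity to one query descriptor."""
    sims = gallery @ query            # descriptors are L2-normalised
    return sims.topk(min(k, len(gallery)))
```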
Finally, in UAV systems massive amounts of data are generated at each vehicle. Because the wireless nodes are dynamic, sharing all local data is infeasible and would cause significant traffic and delay. We therefore propose a federated-learning-based approach that supports partial client participation and model compression, reducing the communication cost and avoiding excessive consumption of channel bandwidth.
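The sketch below shows one plausible form of such a scheme, not the thesis's exact algorithm: a FedAvg-style round with random partial client participation and top-k sparsification of the uploaded model updates. The client-side train_locally call is an assumed interface.

```python
# Minimal, self-contained sketch of partial participation + update compression.
import random
import torch

def topk_compress(delta: torch.Tensor, ratio: float = 0.01) -> torch.Tensor:
    """Keep only the largest-magnitude entries of a model update."""
    k = max(1, int(delta.numel() * ratio))
    _, idx = delta.abs().flatten().topk(k)
    sparse = torch.zeros_like(delta.flatten())
    sparse[idx] = delta.flatten()[idx]
    return sparse.view_as(delta)

def federated_round(global_model, clients, participation: float = 0.1):
    """One communication round with a randomly sampled subset of clients."""
    selected = random.sample(clients, max(1, int(len(clients) * participation)))
    global_params = [p.detach().clone() for p in global_model.parameters()]
    agg = [torch.zeros_like(p) for p in global_params]

    for client in selected:
        local_model = client.train_locally(global_model)  # assumed client API
        # Each client uploads only a sparsified update, never its raw data.
        for a, gp, lp in zip(agg, global_params, local_model.parameters()):
            a += topk_compress(lp.detach() - gp)

    # The server averages the sparse updates and applies them globally.
    with torch.no_grad():
        for p, gp, a in zip(global_model.parameters(), global_params, agg):
            p.copy_(gp + a / len(selected))
```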
|