Summary of the project

The project aims to deliver a complete framework for detecting and tracking moving people and objects in order to extract evidence data (e.g. photos and videos of specific events) in real time (as the event occurs), for tasks such as cinematography, through a reactive Unmanned Aerial Vehicle (UAV). To this end, the edgeAI4UAV project will implement an edge computation node for UAVs. The node will be equipped with a stereoscopic camera, which will provide lightweight stereoscopic depth information to be utilized for evidence detection and UAV locomotion.
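As an illustration, the sketch below shows one lightweight way such stereoscopic depth information could be computed on an embedded processor, using OpenCV's block-matching stereo algorithm; the image files, focal length, and baseline are hypothetical placeholders, not parameters specified by the project.

```python
import cv2
import numpy as np

# Load a rectified stereo pair (placeholder file names; onboard, these would
# be synchronized frames grabbed from the stereoscopic camera).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block matching is cheap enough for edge hardware.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # fixed-point to pixels

# Disparity to metric depth: depth = f * B / d, with an assumed focal length f
# (in pixels) and stereo baseline B (in metres).
f, B = 700.0, 0.12
valid = disparity > 0
depth = np.zeros_like(disparity)
depth[valid] = f * B / disparity[valid]
```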

To achieve this, the edge node of the UAV will be equipped with an embedded processor, giving the UAV edge capabilities, since the information will be processed locally. The edgeAI4UAV project will develop lightweight computer vision and AI (deep learning) algorithms capable of detecting and tracking moving objects, while at the same time ensuring robust UAV localization and reactive navigation behaviour. The results of these algorithms will be exploited by an embedded decision-making module (edge computation), which will accomplish dedicated navigation missions, such as following a specific moving object (e.g. a specific actor or animal), turning to a specific angle of view (e.g. side face, front face), or moving closer to or farther from the target. The embedded decision-making module will thus be able to define the navigation behaviour of the UAV dynamically, in real time, in order to accomplish the envisioned tasks. The UAV will be programmed with a main mission (e.g. following the main actor), which can be temporarily interrupted by subordinate missions (e.g. taking a closer photograph of the front or side face of the main moving object); the main mission will then be dynamically re-adjusted so that the UAV smoothly resumes its main navigation plan without disturbances, ensuring uninterrupted data capture.
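One way this interrupt-and-resume behaviour could be organised is as a mission stack, where subordinate missions pre-empt the main one and control falls back when they finish. The sketch below illustrates the idea; all class names and the mission interface are hypothetical, not part of the project specification.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Mission:
    name: str
    step: Callable[[], bool]  # returns True while the mission still has work to do

@dataclass
class MissionPlanner:
    main: Mission
    subordinate: List[Mission] = field(default_factory=list)

    def interrupt(self, mission: Mission) -> None:
        # A subordinate mission (e.g. a close-up shot) pre-empts the main one.
        self.subordinate.append(mission)

    def tick(self) -> None:
        # Run the most recent subordinate mission first; once it reports it is
        # done, pop it so control falls back to the main mission, which then
        # resumes its navigation plan.
        if self.subordinate:
            if not self.subordinate[-1].step():
                self.subordinate.pop()
        else:
            self.main.step()
```

Calling tick() once per control cycle would execute the close-up shot to completion and then hand control back to the follow-the-actor mission seamlessly.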

Furthermore, the UAV will be equipped with a WiFi module, providing the ability to send specific photographs to a server during flight. Photographs will thus reach a centralized platform in real time, without having to land the UAV, copy the photographs and/or video to removable storage (e.g. a USB stick or memory card), connect that storage to a PC, and upload the files to the server.
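As a rough illustration, such in-flight uploads could be as simple as an HTTP multipart POST over the WiFi link; the endpoint URL and field names below are hypothetical.

```python
import requests

# Hypothetical endpoint of the centralized platform.
UPLOAD_URL = "https://example.org/api/evidence"

def upload_photo(path: str, event_id: str) -> bool:
    """Send a captured photograph to the platform during flight."""
    with open(path, "rb") as f:
        resp = requests.post(
            UPLOAD_URL,
            files={"photo": f},
            data={"event": event_id},
            timeout=10,  # the wireless link may drop mid-flight
        )
    return resp.ok
```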

The project will fuse the already mature technology of industrial UAV applications with the advantages of edge computing to extract semantic scene information. Reactive mission planning will be researched and implemented as an adaptive decision-making system, constituting the UAV's cognitive functionalities.
