Open MULTIDRONE Software
This is a webpage dedicated to publicly available software developed
within the MULTIDRONE project. In order to access this software,
please contact Prof. Ioannis Pitas.
The following sections describe the core functionalities of the MULTIDRONE system, including the Visual Analysis Modules (both onboard & ground station), Gimbal & Camera control, and the setup of autonomous flight missions.
Visual Analysis Modules
Object Detection & Tracking
The system allows for autonomous target tracking based on object detection. Specifically, the Master Visual Analysis node provides a FollowTarget service. When this service is called, a Detect service call is made to the detector node, which returns candidate bounding boxes. Once a candidate to follow has been chosen, the tracker is initialized and the tracking process begins.
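The rospy sketch below summarizes this flow. It is illustrative only: the Detect and FollowTarget service definitions, the candidate selection rule and the tracker construction are placeholders for the actual MULTIDRONE interfaces.

    # Illustrative sketch of the FollowTarget flow; the srv types and the tracker
    # construction are placeholders, not the actual MULTIDRONE definitions.
    import rospy
    from drone_visual_analysis.srv import Detect, FollowTarget, FollowTargetResponse  # assumed srv types

    def init_tracker(frame, box):
        # Placeholder: construct and initialize one of the available trackers
        # (KCF, STAPLE, SiamFC, SiamRPN) with the given frame and bounding box.
        pass

    def handle_follow_target(req):
        detect = rospy.ServiceProxy('detect', Detect)                      # detector node service
        candidates = detect().bounding_boxes                               # candidate bounding boxes
        target_box = max(candidates, key=lambda b: b.width * b.height)     # example selection rule
        init_tracker(req.image, target_box)                                # start the tracking process
        return FollowTargetResponse(success=True)

    rospy.init_node('master_visual_analysis')
    rospy.Service('follow_target', FollowTarget, handle_follow_target)
    rospy.spin()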
- Detector: Can be any object detector, as long as it provides bounding boxes. The implemented detectors are YOLO (based on the original Darknet implementation), SSD (based on the TensorFlow Object Detection API), and our lightweight pyramid-based detector.
- Tracker: Can be any generic visual object tracker. A bounding box and a video frame are required for initialization, and a prediction of the target bounding box is made for each subsequent frame. Currently, the MULTIDRONE system offers implementations of the following state-of-the-art 2D visual target tracking algorithms:
- KCF
- STAPLE
- SiamFC
- SiamRPN
- Verifier: The verifier periodically checks whether the tracked target is still correct, e.g., by comparing against the initialization box or by classifying the depicted target. A tracker-specific MLP classifier is implemented, which classifies a bounding box as corresponding to the target or not.
How to launch:
- Provide an image stream on /drone_n/shooting_camera
- Launch the Visual Analysis Modules: roslaunch drone_visual_analysis face_detection_tracking.launch
- Call the /drone_n/follow_target service to begin tracking.
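For example, once the launch file is running, the tracking request can be issued from a small rospy client. The service type below is an assumption; check the drone_visual_analysis package for the actual FollowTarget definition and its request fields.

    # Minimal client sketch that starts tracking on drone 1; the srv type is an assumption.
    import rospy
    from drone_visual_analysis.srv import FollowTarget   # assumed srv type

    rospy.init_node('follow_target_client')
    rospy.wait_for_service('/drone_1/follow_target')
    follow_target = rospy.ServiceProxy('/drone_1/follow_target', FollowTarget)
    follow_target()                                       # request fields depend on the actual srv definition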
Heatmap-based Crowd Detection
The MULTIDRONE system offers heatmap-based crowd detection, the results of which can be used for safety purposes during UAV missions (e.g., crowd avoidance). The crowd detection node is meant to be executed on a ground computer (an NVIDIA GTX 1080 or better is recommended) and uses a fully convolutional neural network (requires Caffe) to detect crowds in the images provided by the UAV streams.
How to launch:
- Provide an image stream on /drone_n/ground_shooting_camera
- roslaunch ground_visual_analysis crowd_detection.launch
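A hedged sketch of how the detection output could be consumed is given below; the output topic name and message type are assumptions and should be checked against crowd_detection.launch.

    # Sketch of consuming the crowd detection output; the topic name and message type
    # are assumptions, check crowd_detection.launch for the actual interface.
    import rospy
    from sensor_msgs.msg import Image

    def heatmap_callback(msg):
        # Each pixel of the heatmap encodes the estimated crowd density at that location.
        rospy.loginfo('Received crowd heatmap: %dx%d', msg.width, msg.height)

    rospy.init_node('crowd_heatmap_listener')
    rospy.Subscriber('/drone_1/crowd_heatmap', Image, heatmap_callback)   # assumed topic
    rospy.spin()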
Semantic Map Manager
The MULTIDRONE system uses the heatmaps generated by the crowd detection node to determine the location of the detected crowd and can create an octomap in which the crowd location is appropriately marked.
How to launch:
- Launch the crowd detection node (see above)
- roslaunch ground_visual_analysis semantic_map_manager.launch
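The sketch below illustrates the underlying idea of mapping image-space detections to world coordinates. It is a simplified stand-in for the actual node: it assumes a pinhole camera, a flat ground plane and a plain 2D grid instead of an octomap, and every name in it is illustrative.

    # Simplified stand-in for the semantic map construction: project crowd pixels onto a
    # flat ground plane and mark cells of a 2D grid.
    import numpy as np

    def project_pixel_to_ground(u, v, K, R, t):
        """Intersect the camera ray through pixel (u, v) with the ground plane z = 0."""
        ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])   # ray in camera coordinates
        ray_world = R @ ray_cam                               # rotate into the world frame
        t = np.asarray(t, dtype=float)                        # camera position in the world
        scale = -t[2] / ray_world[2]                          # distance to the z = 0 plane
        return (t + scale * ray_world)[:2]                    # (x, y) on the ground

    def mark_crowd_cells(heatmap, K, R, t, grid, resolution=1.0, threshold=0.5):
        """Mark grid cells lying under image locations with a high crowd score."""
        for v, u in zip(*np.where(heatmap > threshold)):
            x, y = project_pixel_to_ground(u, v, K, R, t)
            grid[int(y / resolution), int(x / resolution)] = 1   # no bounds checking here
        return grid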
Visualization tools
- Object detection and tracking visualization: rosrun drone_visual_analysis visualizer.py
- Auto-focus assist (focus peaking):
- Begin video streaming on drone_n
- roslaunch ground_visual_analysis focus_n.launch
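Focus peaking can be understood as highlighting the high-frequency (in-focus) content of a frame. The OpenCV sketch below only illustrates this principle; the actual focus node may use a different sharpness measure, threshold and overlay.

    # Focus-peaking sketch: highlight the sharpest (in-focus) regions of a frame.
    import cv2
    import numpy as np

    def focus_peaking(frame_bgr, threshold=60):
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        edges = cv2.Laplacian(gray, cv2.CV_16S, ksize=3)   # second derivative responds to fine detail
        mask = np.abs(edges) > threshold                   # keep only strong (in-focus) responses
        overlay = frame_bgr.copy()
        overlay[mask] = (0, 0, 255)                        # paint in-focus pixels red
        return overlay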
Autonomous Mission Planning Modules
Mission Controller
This is the core of the MULTIDRONE system. It receives missions from the Director’s Dashboard, plans them, and sends the corresponding tasks to each drone while monitoring the execution. It can be divided into different parts:
- Interface with Dashboard: The Mission Controller periodically sends a system status message to the Dashboard, and it is able to receive the following commands:
- Event enrolment
- Mission enrolment
- Validate missions
- Select mission and sequence roles
- Trigger events
- Clear events and missions
- Abort mission
- High-level planning: The received mission needs to be transformed into specific tasks for each drone. The high-level planning performs this task allocation taking into account drone constraints (e.g. battery, starting position...) and scenario constraints (e.g. safe landing sites, no-fly zones...). It will try to maximize the shooting time.
- Mission and events manager: This module receives the events from the Dashboard and sends them to the drones. It also monitors the status of the drones, looking at their battery level, the actions they are performing and their position.
How to launch:
- Launch Mission Controller:
- roslaunch multidrone_planning mission_controller.launch
- Arguments to define: list of active drones, geodetic coordinates of the origin.
- Launch ROStful to interface with Dashboard through REST web services:
- roslaunch dashboard_interface rostful.launch
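As an example of the Dashboard interface, a command such as an event trigger can be sent to the Mission Controller through the ROStful REST bridge. The host, port, route and payload below are placeholders and must be adapted to the routes actually exposed by rostful.launch.

    # Example of commanding the Mission Controller through the ROStful REST bridge.
    # Host, port, route and payload are placeholders; check the routes exposed by rostful.
    import requests

    BRIDGE = 'http://localhost:8080'                     # machine running rostful (assumption)
    response = requests.post(
        BRIDGE + '/mission_controller/trigger_event',    # hypothetical route
        json={'event_id': 'race_start'},                 # hypothetical payload
        timeout=5,
    )
    print(response.status_code, response.text)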
Onboard Scheduler
The Onboard Scheduler receives the list of actions corresponding to the drone from the Mission Controller.
Then, it is in charge of executing them sequentially, synchronizing their start and end, and calling the Action Executer for the actual execution of the different navigation or shooting actions. This module reacts to different alarms and emergencies, such as a low drone battery, and can command a safe path to a landing site thanks to an included path planner. It also reports the drone status to the Mission Controller.
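Conceptually, the scheduler behaves like the loop sketched below. All names are illustrative (the real module works with ROS actions and services), but the structure reflects the sequential execution, the emergency handling and the status reporting described above.

    # Conceptual sketch of the scheduler loop; all names are illustrative.
    def run_mission(actions, execute_action, battery_low, go_to_safe_landing, report_status):
        """Execute the received actions sequentially, aborting safely on emergencies."""
        for action in actions:
            if battery_low():             # alarm/emergency check before each action
                go_to_safe_landing()      # path planner commands a safe path to a landing site
                return
            execute_action(action)        # actual execution delegated to the Action Executer
            report_status(action)         # drone status reported to the Mission Controller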
How to launch:
- Launch Onboard Scheduler for drone X:
- roslaunch onboard_scheduler onboard_scheduler.launch drone_id:=X
UAV Abstraction Layer
The UAL (UAV Abstraction Layer) is in charge of abstracting the user from the drone hardware, interfacing with the autopilot.
This module provides drone positioning in a standard format regardless of the underlying autopilot,
and receives and executes navigation commands such as land, take-off, go to waypoint, set velocity, etc.
This module is available in a different public repository: https://github.com/grvcTeam/grvc-ual
How to launch:
- Launch MAVROS for drone X:
- roslaunch multidrone_experiments drone_X_mavros.launch
- Launch UAL for drone X:
- roslaunch multidrone_experiments drone_X_ual.launch
- Argument to define: geodetic coordinates of the origin.
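A minimal usage sketch of the UAL services is shown below. The service names and request fields are assumed from the uav_abstraction_layer package of the grvc-ual repository, and the drone namespace is an example; verify both against your setup.

    # Minimal UAL usage sketch; namespace, service names and request fields are assumptions.
    import rospy
    from geometry_msgs.msg import PoseStamped
    from uav_abstraction_layer.srv import TakeOff, GoToWaypoint, Land

    rospy.init_node('ual_example')
    ns = '/drone_1/ual'                                  # example namespace for drone 1
    rospy.wait_for_service(ns + '/take_off')

    take_off = rospy.ServiceProxy(ns + '/take_off', TakeOff)
    go_to = rospy.ServiceProxy(ns + '/go_to_waypoint', GoToWaypoint)
    land = rospy.ServiceProxy(ns + '/land', Land)

    take_off(height=5.0, blocking=True)                  # climb to 5 m and wait

    wp = PoseStamped()                                   # waypoint 10 m ahead at 5 m altitude
    wp.header.frame_id = 'map'
    wp.pose.position.x, wp.pose.position.y, wp.pose.position.z = 10.0, 0.0, 5.0
    wp.pose.orientation.w = 1.0
    go_to(waypoint=wp, blocking=True)

    land(blocking=True)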
Drone, Gimbal and Camera Control
The modules listed below are executed on-board each drone of the MULTIDRONE system during a shooting mission.
They are responsible for controlling the drone,
the gimbal and the camera according to the individual shooting mission specifications received from the
Onboard Scheduler. Drone control relies on the underlying UAL module.
Action Executer
The action executer runs onboard each drone and is responsible for executing the navigation and shooting
actions as they are received from the Onboard Scheduler. It can be divided into three parts:
- Drone Control: According to the type of action (e.g. lateral, flyby, orbit) and based on the estimates of the target’s position, the Action Executer computes reference trajectories on-the-fly. These are tracked using a hierarchical controller with an inner-outer loop control structure. The outer-loop controller is implemented in the Action Executer and provides linear velocity references that are sent to the UAL to feed the inner-loop control system (minimal sketches of this and the following control laws are given after this list).
- Gimbal Control: When the shooting action is specified, the user determines whether the gimbal tracks a GPS or a visual target.
In GPS tracking, position measurements for both the drone and target together with attitude measurements for the gimbal are used to compute the relative attitude error,
whereas tracking based on vision uses the image error directly. For both cases, a feedback law based on the
attitude error is used to compute angular velocity commands sent to the Gimbal Camera Interface module.
- Camera Control: This submodule is responsible for remotely changing the camera focus and zoom settings. Focus updates are triggered by either the Auto-Focus
Assist module or manually by the director through the dashboard. In contrast, the zoom controller is always running when the gimbal is performing visual tracking.
It is a simple on-off controller based on the bounding box size error provided by the Visual Shot Analysis module.
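The following sketch illustrates possible forms of the three control laws described above (outer-loop velocity control, gimbal attitude feedback and on-off zoom). Gains, thresholds and interfaces are placeholders, not the tuned controllers of the Action Executer.

    # Illustrative control laws for the three sub-modules above; gains, thresholds and
    # interfaces are placeholders.
    import numpy as np

    def outer_loop_velocity(reference_position, drone_position, kp=0.8, v_max=5.0):
        """Drone control: linear velocity reference toward the current trajectory reference."""
        velocity = kp * (np.asarray(reference_position) - np.asarray(drone_position))
        speed = np.linalg.norm(velocity)
        return velocity if speed <= v_max else velocity * (v_max / speed)   # saturate

    def gimbal_angular_velocity(attitude_error, kp=1.5):
        """Gimbal control: angular velocity command from the relative attitude error."""
        return -kp * np.asarray(attitude_error)

    def zoom_command(bbox_size, desired_size, deadband=0.1):
        """Camera control: on-off zoom decision from the bounding box size error."""
        error = (desired_size - bbox_size) / desired_size
        if error > deadband:
            return 'zoom_in'       # target appears too small
        if error < -deadband:
            return 'zoom_out'      # target appears too large
        return 'hold'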
Gimbal Camera Interface
The Gimbal Camera Interface implements the communication bridge between the Action Executer and the gimbal hardware. It connects to the gimbal through a serial (UART) link, sending the velocity commands and retrieving its status: motor angles and speeds, gimbal orientation, as well as other low-level parameters. For compactness, it comprises the interactions with both the gimbal and the camera. If desired, the two nodes can also be launched separately.
At any moment during the mission, the gimbal backup pilot can take control of the gimbal to change any camera setting or to manually control the gimbal, with the possibility of switching back to automatic mode.
Microcontroller Board for Gimbal Camera Interface
A Teensy LC board provides the core of the hardware pipeline for the camera and gimbal, channeling the commands originating from the onboard computer and from the RC to the gimbal and camera. It is also responsible for switching between manual (pilot) and automatic control.
How to launch:
- Launch Action Executer for drone X:
- roslaunch action_executer action_executer.launch drone_id:=X
- Launch Gimbal Camera Interface:
- roslaunch gimbal_camera_interface gimbal_camera_interface.launch drone_id:=X
- Flash the code once and power the Teensy board through USB.
In order to access this software, please contact Prof. Ioannis Pitas.