Autonomous Drone Software E04: Depth Estimation, Octomap and Path Planning
In this tutorial we will walk through the process of using a stereo camera and Octomap for environment perception, and A* for path finding, in an unknown environment.
We suggest developers install the GAAS mirror to use the project:
In the previous tutorials, we talked about how to control your drone with Python and how to enable it to fly in GPS-denied environments, which is a big step towards an autonomous drone. Now, we will talk about how to enable your drone to fly from point A to point B in an unknown environment, with the help of only a stereo camera pair, in the Gazebo simulator. This tutorial is organized as follows:
- 1. Obstacle distance estimation with a stereo camera;
- 2. Octomap as a way to represent the environment;
- 3. A* path finding in 3D space;
- 4. Simple path pruning.
Again, I have to stress that you will need some extra work before going to a field test, and the algorithms mentioned here are far from optimal.
Before continuing, remember to update local GAAS and copy related files to corresponding folder:
git pull origin master
cp -r (GAAS_PATH)/simulator/launch/* (PX4_FIRMWARE_PATH)/Firmware/launch
cp -r (GAAS_PATH)/simulator/worlds/* ~/catkin_ws/src/Firmware/Tools/sitl_gazebo/worlds/
I mentioned in the last tutorial that it is not possible to measure absolute scale with a single camera without the help of other sensors. RGBD cameras are sensitive to sunlight and noise, and they are expensive, so we will be using a stereo camera as the sensor for distance estimation (for the mathematical background, check Reading Materials).
A C++ package based on StereoBM and the WLS disparity filter can be found in GAAS/software/Obstacle_Map. We will not go into details, but the workflow can be summarized as:
- 1. generate a disparity map with OpenCV's StereoBM method;
- 2. map the disparity map to 3D points in the camera frame;
- 3. map the 3D points from the camera frame to the drone body frame.
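Steps 2 and 3 can be sketched for a single pixel as follows. Note that the intrinsics, baseline, and camera mounting below are illustrative assumptions, not the values used by Obstacle_Map; the real package does this densely for the whole disparity image:

```python
import numpy as np

# Hypothetical camera intrinsics and stereo baseline (illustrative only)
fx = fy = 376.0          # focal length in pixels
cx, cy = 376.0, 240.0    # principal point
baseline = 0.06          # distance between the two cameras, in meters

def disparity_to_camera_frame(u, v, disparity):
    """Step 2: reproject one pixel with known disparity to a 3D point
    in the camera frame (x right, y down, z forward)."""
    z = fx * baseline / disparity          # depth from stereo geometry
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])

# Step 3: a fixed rotation from the camera frame to the drone body frame
# (x forward, y left, z up), assuming a forward-looking camera.
R_body_cam = np.array([[ 0.0,  0.0, 1.0],
                       [-1.0,  0.0, 0.0],
                       [ 0.0, -1.0, 0.0]])

def camera_to_body_frame(p_cam):
    return R_body_cam @ p_cam

p_cam = disparity_to_camera_frame(u=376.0, v=240.0, disparity=7.52)
p_body = camera_to_body_frame(p_cam)
print(p_body)  # the principal-point pixel maps to a point straight ahead of the drone
```

With these numbers the depth is fx * baseline / disparity = 376 * 0.06 / 7.52 = 3 m, so the pixel at the principal point becomes the body-frame point (3, 0, 0): three meters straight ahead.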
Before the build:
sudo apt-get install libpopt-dev
To build it:
After building it, in order to visualize how it works, let's first launch the simulation by:
roslaunch px4 path_planning.launch
A Gazebo window will pop up and you will see a 3x3-meter brick wall in front of the drone. This is the obstacle you want to avoid while flying to the target, which is defined as 70 meters in front of the drone.
Fig 1, Gazebo Simulation Environment
Next, you can use:
to generate the disparity map, and then use:
to transform the point cloud from the camera frame to the drone body frame. You can visualize the result in RVIZ by selecting "add"->"by topic"->"/cloud_in".
Fig 2, select topic for RVIZ visualization
Fig 3, Point Cloud in RVIZ
You can see that a dense point cloud appears, although it is not a perfect square like the brick wall itself. You can modify parameters found in config.yaml to tune the performance and try to find a better result, some of which are:
SADWindowSize: 9 #Matched block size. It must be an odd number >=1
MinDisparity: 3 #Minimum possible disparity value.
Next, we can move on to Octomap.
Octomap is an octree-based package for modeling the environment. It can drastically decrease the number of points you need to store, for memory efficiency and real-time performance. We will use Octomap to represent the environment. For more information, check Reading Materials.
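The memory saving comes from storing occupied cells at a fixed resolution instead of every raw point. A minimal sketch of that idea (Octomap additionally keeps per-cell occupancy probabilities in an octree, which this toy voxel grid does not model):

```python
import numpy as np

# Hypothetical dense point cloud: 100,000 points sampled on a 3x3 m wall
rng = np.random.default_rng(0)
points = np.column_stack([
    np.full(100_000, 5.0),                # x: wall 5 m ahead of the drone
    rng.uniform(-1.5, 1.5, 100_000),      # y: 3 m wide
    rng.uniform(0.0, 3.0, 100_000),       # z: 3 m tall
])

resolution = 0.1  # leaf size in meters, like the Octomap resolution used below
# Snap every point to the voxel it falls into and keep one entry per voxel.
voxel_keys = np.unique(np.floor(points / resolution).astype(int), axis=0)

print(len(points), "raw points ->", len(voxel_keys), "occupied voxels")
```

A 3x3 m wall at 0.1 m resolution occupies at most 30 x 30 = 900 cells, so 100,000 raw points collapse to under a thousand stored voxels.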
To use Octomap:
sudo apt-get install ros-kinetic-octomap-*
Before launching Octomap, use rosed to set the resolution to 0.1 and the max range to 10, keep the remap as from="cloud_in" to="cloud_in", and change the frame_id to "map":
rosed octomap_server octomap_mapping.launch
or if you feel like using gedit:
roscd octomap_server && cd launch
sudo gedit octomap_mapping.launch
so your launch file looks like:
<launch>
  <!--
    Example launch file for octomap_server mapping:
    Listens to incoming PointCloud2 data and incrementally builds an octomap.
    The data is sent out in different representations.

    Copy this file into your workspace and adjust as needed, see
    www.ros.org/wiki/octomap_server for details
  -->
  <node pkg="octomap_server" type="octomap_server_node" name="octomap_server">
    <param name="resolution" value="0.1" />
    <!-- fixed map frame (set to 'map' if SLAM or localization running!) -->
    <param name="frame_id" type="string" value="/map" />
    <!-- maximum range to integrate (speedup!) -->
    <param name="sensor_model/max_range" value="10" />
    <!-- data source to integrate (PointCloud2) -->
    <!-- <remap from="cloud_in" to="/narrow_stereo/points_filtered2" /> -->
    <remap from="cloud_in" to="cloud_in" />
  </node>
</launch>
Next, start point cloud generation depicted in the last section and launch Octomap by:
roslaunch octomap_server octomap_mapping.launch
in RVIZ, choose to visualize the following topic:
Fig 4, topic for Octomap
and the corresponding Octomap representation of the environment can be shown as:
Fig 5, Generated Octomap
You can tune parameters such as Octomap resolution (the size of each cell) and Max range by:
rosed octomap_server octomap_mapping.launch
Next, we will talk about how to find a path with Navigator.
Navigator provides a 3D A* algorithm for path finding. A* performs poorly in 3D space, but it is a good starting point. For more information, check Reading Materials.
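To make the idea concrete, here is a minimal A* on a 3D voxel grid with 6-connectivity. This is a sketch of the algorithm only, not GAAS's Navigator implementation (which additionally reads occupancy from the Octomap and commands the drone):

```python
import heapq
from itertools import count

def a_star_3d(start, goal, occupied, bounds):
    """Minimal 3D A* over integer voxel indices.
    occupied: set of blocked voxels; bounds: (nx, ny, nz) grid size."""
    def h(p):  # Manhattan distance: admissible for unit-cost axis moves
        return sum(abs(p[i] - goal[i]) for i in range(3))

    moves = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]
    tie = count()  # tiebreaker so the heap never compares nodes directly
    open_set = [(h(start), next(tie), 0, start, None)]
    came_from, g_best = {}, {start: 0}
    while open_set:
        _, _, g, cur, parent = heapq.heappop(open_set)
        if cur in came_from:           # already expanded with a better cost
            continue
        came_from[cur] = parent
        if cur == goal:                # walk parents back to the start
            path = [cur]
            while came_from[path[-1]] is not None:
                path.append(came_from[path[-1]])
            return path[::-1]
        for dx, dy, dz in moves:
            nxt = (cur[0]+dx, cur[1]+dy, cur[2]+dz)
            if nxt in occupied or any(not 0 <= nxt[i] < bounds[i] for i in range(3)):
                continue
            if g + 1 < g_best.get(nxt, float("inf")):
                g_best[nxt] = g + 1
                heapq.heappush(open_set, (g + 1 + h(nxt), next(tie), g + 1, nxt, cur))
    return None                        # goal unreachable

# A 5x5x5 grid with a wall at x == 2, except for one gap at (2, 4, 0):
wall = {(2, y, z) for y in range(5) for z in range(5)} - {(2, 4, 0)}
path = a_star_3d((0, 0, 0), (4, 0, 0), wall, (5, 5, 5))
```

Because the wall is only passable through the gap, the returned path detours through (2, 4, 0), much like the drone detouring around the brick wall in the simulation.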
In this part, you will need to keep the first two parts up and running, and don't forget to set the following two topics in RVIZ so you can visualize the drone's current pose and the path found:
Fig 6, topic for drone position
Fig 7, topic for planned path
Navigator is located at:
to use it, you will need to specify a target position in Navigator.py; the current target is set to 70 meters in front of the drone's starting point. After setting the target, first take off the drone using:
Next, start path finding by:
A red flight path will appear and the drone will fly according to the flight path.
Fig 8, A* planned Path
Another thing we need to take a look at is path pruning. The path A* generates is a series of waypoints with an incremental step size, and usually the path is too complex, meaning there are many extra and unnecessary waypoints. We implemented two path-pruning methods:
- 1. collinearity check: for a path from A to B, if any consecutive points are collinear, remove the middle ones and keep only the first and last;
- 2. Bresenham3D obstacle detection: for a path from A to B, create lines between A and A+1, A and A+2, and so on; check whether each line intersects an obstacle; if it does, keep the last collision-free point and start over from it.
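The collinearity check can be sketched as follows. This is an illustration of method 1 only, not GAAS's implementation; the Bresenham3D variant would additionally ray-cast each candidate segment through the Octomap, which is omitted here:

```python
import numpy as np

def collinear(p, q, r, eps=1e-9):
    """True if r lies on the straight line through p and q."""
    return np.linalg.norm(np.cross(q - p, r - p)) < eps

def prune_collinear(path):
    """Drop every waypoint that is collinear with the last kept point
    and its successor, so only corner points (and endpoints) survive."""
    pruned = [path[0]]
    for i in range(1, len(path) - 1):
        if not collinear(pruned[-1], path[i], path[i + 1]):
            pruned.append(path[i])
    return pruned + [path[-1]]

# A raw A*-style path: straight along x, then a 90-degree corner, then along y.
raw = [np.array([i, 0.0, 0.0]) for i in range(4)] + \
      [np.array([3.0, j, 0.0]) for j in range(1, 4)]
short = prune_collinear(raw)
print([p.tolist() for p in short])  # only the start, the corner, and the end survive
```

Seven raw waypoints collapse to three: the start, the corner at (3, 0, 0), and the goal, exactly the "keep only corner points" behavior described above.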
You can visualize the result by:
Fig 9, path pruning algorithms
In the image, the red cube represents an obstacle; the bottom red path is the raw path, consisting of many redundant waypoints; the green line is the path generated by the collinearity check, where only corner points are preserved; the blue line is generated by the Bresenham3D obstacle check, where points are preserved according to a preset obstacle distance.
After path pruning, a shorter and simpler path is generated, and the drone is able to fly from A to B more efficiently.