The detect-and-avoid problem is the “holy grail” for small aircraft and drones that need to fly beyond line of sight. Delivery drones in particular must maintain self-separation from other aircraft to guarantee safety. While it may seem that aircraft could be detected via transponders, many aircraft do not carry them, and even when they do, regulations do not require them to be switched on at all times. Additionally, other flying objects such as birds, balloons, and other drones carry no transponders at all. Detecting and avoiding these objects is therefore necessary for fully autonomous flight. Currently, the only effective sensor for aircraft detection is radar, but it is too heavy and expensive for small drones, which operate under size, weight, and power (SWaP) constraints. These constraints also limit LiDAR range to roughly 100m. For high-speed obstacle avoidance in dynamic environments, objects must be detected at long range (>= 500m) to allow sufficient reaction time. The aim of this project is therefore to create a vision-based aircraft detection and tracking system that focuses primarily on long-range detection.
Model-based control techniques for systems such as legged robots and unmanned aerial vehicles have the ability to explicitly reason about the nonlinearity and uncertainty in the robots' dynamics and potentially even provide guarantees on their safety. However, a fundamental and outstanding challenge is their limited ability to reason about rich sensory inputs such as depth images or vision. Model-based control techniques often treat the robot's perceptual system as a black box and make unrealistic assumptions about the perceptual system's output. The goal of this project is to address these challenges by leveraging data-driven approaches for learning dynamical models of task-relevant perceptual features extracted from rich sensory inputs and using these models for agile and safe robot navigation.
p-ACE, a probabilistic extension of ACE, is a lightweight state estimation algorithm for planetary rovers with kinematically constrained articulated suspension systems. ACE's conservative safety checks can lead to over-pessimism: feasible states are often reported as infeasible, resulting in frequent false positives. p-ACE estimates probability distributions over states instead of deterministic bounds, providing a more flexible and less pessimistic worst-case evaluation with probabilistic safety guarantees.
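The contrast between a deterministic worst-case bound and a probabilistic one can be sketched as follows. This is a minimal illustration of the idea, not the published p-ACE algorithm: the tilt limit, risk tolerance, and Gaussian state model are all assumptions made for the example.

```python
# Illustrative sketch (not the actual p-ACE implementation): instead of a
# deterministic worst-case bound on rover tilt, model the state as a
# Gaussian and declare a pose safe when the probability of exceeding the
# tilt limit stays below a small risk tolerance.
import math

def deterministic_check(tilt_upper_bound_deg, tilt_limit_deg=30.0):
    """ACE-style check: reject whenever the conservative bound exceeds the limit."""
    return tilt_upper_bound_deg <= tilt_limit_deg

def probabilistic_check(tilt_mean_deg, tilt_std_deg, tilt_limit_deg=30.0, risk=0.01):
    """p-ACE-style check: accept if P(tilt > limit) <= risk under a Gaussian model."""
    z = (tilt_limit_deg - tilt_mean_deg) / tilt_std_deg
    p_exceed = 0.5 * math.erfc(z / math.sqrt(2.0))  # Gaussian tail probability
    return p_exceed <= risk

# A state whose conservative bound (mean + 3*sigma) crosses the limit is
# rejected deterministically, but accepted probabilistically because the
# actual exceedance probability is tiny.
mean, std = 22.0, 3.0
print(deterministic_check(mean + 3 * std))  # False: bound 31 deg > 30 deg limit
print(probabilistic_check(mean, std))       # True: P(tilt > 30) ~ 0.4% <= 1%
```

The same state is thus rejected by the hard bound but accepted once the low probability of the worst case is taken into account, which is exactly the source of ACE's over-pessimism that p-ACE removes.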
ACE is a lightweight collision detection algorithm for motion planning of planetary rovers with articulated suspension systems. Solving the exact collision detection problem for such systems requires simulating the vehicle settling on the terrain, which involves an inverse-kinematics problem with iterative nonlinear optimization under geometric constraints. We propose the Approximate Clearance Evaluation (ACE) algorithm, which obtains conservative bounds on vehicle clearance, attitude, and suspension angles without iterative computation.
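The bound-instead-of-settle idea can be illustrated on a deliberately simplified 2D two-wheel vehicle. This is a hypothetical toy model, not the published ACE algorithm: the wheel geometry, the interval propagation, and the belly-clearance formula are all assumptions made for the sketch.

```python
# Illustrative sketch of the conservative-bound idea behind ACE on a
# hypothetical 2D two-wheel vehicle: instead of solving the nonlinear
# vehicle-settling problem, propagate terrain-height intervals under each
# wheel into worst-case bounds on body pitch and a lower bound on clearance.
import math

def conservative_bounds(front_h, rear_h, wheelbase, nominal_clearance):
    """front_h, rear_h: (min, max) terrain-height intervals under each wheel."""
    # Worst-case pitch: one wheel at its extreme high, the other at its extreme low.
    pitch_hi = math.atan2(front_h[1] - rear_h[0], wheelbase)
    pitch_lo = math.atan2(front_h[0] - rear_h[1], wheelbase)
    # Conservative clearance: lowest possible belly height minus the highest
    # possible terrain point between the wheels.
    belly_min = min(front_h[0], rear_h[0]) + nominal_clearance
    terrain_max = max(front_h[1], rear_h[1])
    clearance_lb = belly_min - terrain_max
    return (pitch_lo, pitch_hi), clearance_lb

# One interval evaluation replaces an iterative settling simulation; a
# negative clearance lower bound flags the state as (conservatively) unsafe.
(pitch_lo, pitch_hi), clearance = conservative_bounds(
    front_h=(0.1, 0.3), rear_h=(0.0, 0.2), wheelbase=1.0, nominal_clearance=0.25)
```

Because every quantity is bounded by its worst case, the check is conservative in exactly the sense the paragraph describes: it never accepts an unsafe state, at the cost of sometimes rejecting a safe one.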
We introduce an approach to Joint Perception and Planning (JPP) using stereo vision, which performs disparity checks on demand, only as necessary while searching on a planning graph. Furthermore, obstacle checks for navigation planning do not require full 3D reconstruction: in this paper we show how obstacle queries can be decomposed into a sequence of confident positive stereo matches and confident negative stereo matches, which are significantly faster to compute than the exact depth of points.
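A single on-demand query of this kind can be sketched as follows. This is a simplified illustration of the confident-match idea, not the paper's implementation: the patch size, SAD cost, and threshold are assumptions made for the example.

```python
# Illustrative sketch of an on-demand obstacle query: instead of computing a
# full disparity map, test one pixel/depth hypothesis by comparing small
# patches at the disparity that depth would induce. A low matching cost is a
# "confident positive" (a surface exists at that depth); a high cost is
# evidence for a "confident negative" (free space at that depth).
import numpy as np

def patch(img, u, v, r=2):
    """Square (2r+1)x(2r+1) patch centered at pixel (u, v)."""
    return img[v - r:v + r + 1, u - r:u + r + 1].astype(np.float32)

def confident_positive(left, right, u, v, depth, fx, baseline, tau=10.0):
    """Is there a surface at depth `depth` along the ray through left pixel (u, v)?"""
    d = int(round(fx * baseline / depth))  # disparity induced by this depth
    cost = float(np.mean(np.abs(patch(left, u, v) - patch(right, u - d, v))))
    return cost < tau  # SAD cost below threshold => confident match
```

Each query touches only two small patches, which is why running a handful of them along a planner's candidate edges is far cheaper than reconstructing the whole scene.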