Publications

Journal Article

Adversarial Sampling-Based Motion Planning

Nichols, Hayden; Jimenez, Mark; Goddard, Zachary; Sparapany, Michael J.; Boots, Byron; Mazumdar, Anirban

There are many scenarios in which a mobile agent may not want its path to be predictable. Examples include preserving privacy or confusing an adversary. However, this desire for deception can conflict with the need for a low path cost. Optimal plans such as those produced by RRT* may have low path cost, but their optimality makes them predictable. Similarly, a deceptive path that features numerous zig-zags may take too long to reach the goal. We address this trade-off by drawing inspiration from adversarial machine learning. We propose a new planning algorithm, which we title Adversarial RRT*. Adversarial RRT* attempts to deceive machine learning classifiers by incorporating a predicted measure of deception into the planner cost function. It considers both path cost and a measure of predicted deceptiveness in order to produce a trajectory with low path cost that still has deceptive properties. We demonstrate the performance of Adversarial RRT*, with two measures of deception, using a simulated Dubins vehicle. We show how Adversarial RRT* can decrease cumulative recurrent neural network (RNN) accuracy across paths to 10%, compared to 46% cumulative accuracy on near-optimal RRT* paths, while keeping path length within 16% of optimal. We also present an example demonstration in which the Adversarial RRT* planner attempts to safely deliver a high-value package while an adversary observes the path and tries to intercept it.
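
The abstract describes Adversarial RRT* as adding a predicted measure of deception to the planner cost function. The sketch below is only a rough illustration of what such a combined cost could look like; the weighting term lam, the classifier.predict_proba interface, and the goal-probability penalty are assumptions made for illustration and are not the authors' formulation.

```python
# Minimal sketch (not the paper's implementation): an RRT*-style path cost
# that blends geometric path length with a classifier-based deception
# penalty. All helper names and the classifier interface are hypothetical.
import math


def path_length(path):
    """Euclidean length of a piecewise-linear path given as (x, y) tuples."""
    return sum(math.dist(a, b) for a, b in zip(path, path[1:]))


def deception_penalty(path, classifier, true_goal):
    """Penalty proportional to how confidently a learned classifier predicts
    the true goal from the observed path (higher = less deceptive)."""
    # predict_proba is a hypothetical interface returning a mapping from
    # candidate goals to predicted probabilities given the path so far.
    probs = classifier.predict_proba(path)
    return probs[true_goal]


def adversarial_cost(path, classifier, true_goal, lam=1.0):
    """Combined cost used to rank candidate paths: shorter paths and lower
    predicted probability of the true goal both reduce the cost."""
    return path_length(path) + lam * deception_penalty(path, classifier, true_goal)
```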