
Maze Escaping Mobile Robot

PID Control

As an introduction to building robot simulators, I implemented the physical dynamics and a servo controller for a simple one-degree-of-freedom frictionless pendulum with a rigid, massless rod and an idealized motor. For the physical simulation, I developed several numerical integrators, including Euler's method and velocity Verlet. For motion control, I used a proportional-integral-derivative (PID) controller to command the system's motor to a desired angle. The same PID logic was later reused to build a velocity controller for the turtlebot.
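The idea can be sketched as follows. The gains, pendulum constants, and integrator settings below are illustrative assumptions, not the values used in the project:

```python
import math

class PID:
    """Minimal PID controller; gains here are illustrative, not tuned values."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measurement, dt):
        error = setpoint - measurement
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def simulate(target=math.pi / 4, steps=5000, dt=0.001):
    """Drive a frictionless pendulum (massless rod, idealized motor, unit mass
    and inertia) to a target angle using semi-implicit Euler integration."""
    g, length = 9.81, 1.0
    theta, omega = 0.0, 0.0
    controller = PID(kp=40.0, ki=50.0, kd=10.0)
    for _ in range(steps):
        torque = controller.update(target, theta, dt)
        alpha = -(g / length) * math.sin(theta) + torque  # angular acceleration
        omega += alpha * dt   # semi-implicit Euler: update velocity first...
        theta += omega * dt   # ...then position, using the updated velocity
    return theta
```

The integral term is what removes the steady-state error: gravity applies a constant torque at the target angle, and only the accumulated integral can cancel it once the proportional error has shrunk to zero.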

A-Star Search

The A-star search algorithm is a variant of Dijkstra's shortest-path algorithm that uses a heuristic to speed up the search. This implementation considers locations in a uniformly spaced, 8-connected grid. To avoid planning paths near obstacles, I build an obstacle-distance grid using a brushfire algorithm and include each node's obstacle distance in its path cost. To keep the heuristic admissible, I use the Euclidean distance to the goal as the heuristic cost. Here is a demonstration of my A-star algorithm in a 2D simulated environment. The same algorithm is used to navigate the turtlebot to unexplored frontiers.
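A minimal sketch of this combination, assuming a grid of 0 (free) and 1 (obstacle) cells; the `obstacle_weight` penalty shape is an illustrative choice, not the project's exact cost function:

```python
import heapq
import math
from collections import deque

def brushfire(grid):
    """Obstacle-distance grid: multi-source BFS outward from every obstacle."""
    rows, cols = len(grid), len(grid[0])
    dist = [[math.inf] * cols for _ in range(rows)]
    queue = deque()
    for r in range(rows):
        for c in range(cols):
            if grid[r][c]:
                dist[r][c] = 0
                queue.append((r, c))
    while queue:
        r, c = queue.popleft()
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and dist[nr][nc] == math.inf:
                    dist[nr][nc] = dist[r][c] + 1
                    queue.append((nr, nc))
    return dist

def astar(grid, start, goal, obstacle_weight=2.0):
    """A* on an 8-connected grid; Euclidean heuristic stays admissible because
    every step cost is at least the Euclidean step length."""
    dist = brushfire(grid)
    rows, cols = len(grid), len(grid[0])
    h = lambda n: math.hypot(n[0] - goal[0], n[1] - goal[1])
    g_cost, came_from = {start: 0.0}, {start: None}
    open_set = [(h(start), start)]
    while open_set:
        f, node = heapq.heappop(open_set)
        if node == goal:                      # reconstruct path back to start
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        if f - h(node) > g_cost[node] + 1e-9:
            continue                          # stale heap entry, skip
        r, c = node
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if dr == 0 and dc == 0:
                    continue
                nr, nc = r + dr, c + dc
                if not (0 <= nr < rows and 0 <= nc < cols) or grid[nr][nc]:
                    continue
                # Step length plus a penalty that grows near obstacles.
                step = math.hypot(dr, dc) + obstacle_weight / (1.0 + dist[nr][nc])
                ng = g_cost[node] + step
                if ng < g_cost.get((nr, nc), math.inf):
                    g_cost[(nr, nc)] = ng
                    came_from[(nr, nc)] = node
                    heapq.heappush(open_set, (ng + h((nr, nc)), (nr, nc)))
    return None  # no path exists
```

Because the obstacle penalty only ever adds to the true step cost, the plain Euclidean heuristic still underestimates the total cost, which is exactly the admissibility condition.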

Differential Drive Odometry

Odometry is the use of data from motion sensors to estimate change in position over time. Wheeled robots use it to estimate their position relative to a starting location. If we approximate the robot's motion as a circular arc over a small time step and assume negligible wheel slip, the change in the robot's pose can be expressed as a few transformations composed from the distances traveled by each wheel. To test the odometry model, I drove the turtlebot in a perfect square and measured the final pose error. To accomplish this, I added another layer of PID controllers to navigate the turtlebot to specified setpoints. Once properly tuned, this odometry model served as the action model for the particle filter.
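The arc approximation reduces to a short update rule. This is a generic differential-drive sketch (the midpoint-heading form), not the project's exact code; the wheelbase value is an assumption:

```python
import math

def odometry_update(pose, d_left, d_right, wheelbase):
    """One odometry step for a differential-drive robot.
    pose is (x, y, theta); d_left/d_right are wheel travel distances."""
    x, y, theta = pose
    d = 0.5 * (d_left + d_right)            # distance traveled by robot center
    dtheta = (d_right - d_left) / wheelbase # change in heading over the arc
    # Midpoint approximation: translate along the average heading of the arc.
    x += d * math.cos(theta + 0.5 * dtheta)
    y += d * math.sin(theta + 0.5 * dtheta)
    return (x, y, theta + dtheta)
```

Equal wheel distances give straight-line motion; opposite distances rotate the robot in place without translating it, which is a quick sanity check for the model.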

SLAM

MAPPING: 

For the mapping stage of the pipeline, I used an occupancy grid to represent the environment. To update the map from Lidar scans, I implemented Bresenham's algorithm, an efficient way to trace a ray across a grid.
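The ray trace and map update can be sketched as follows, assuming a log-odds occupancy grid; the `miss`/`hit` increments are illustrative, not tuned values:

```python
def bresenham(x0, y0, x1, y1):
    """Integer Bresenham line: every grid cell from (x0, y0) to (x1, y1)."""
    cells = []
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx - dy
    while True:
        cells.append((x0, y0))
        if (x0, y0) == (x1, y1):
            break
        e2 = 2 * err
        if e2 > -dy:
            err -= dy
            x0 += sx
        if e2 < dx:
            err += dx
            y0 += sy
    return cells

def update_map(log_odds, robot_cell, hit_cell, miss=-1, hit=3):
    """Update a log-odds occupancy grid for one Lidar ray."""
    ray = bresenham(*robot_cell, *hit_cell)
    for x, y in ray[:-1]:
        log_odds[y][x] += miss  # cells the ray passed through: likely free
    hx, hy = ray[-1]
    log_odds[hy][hx] += hit     # ray endpoint: likely occupied
```

Accumulating log-odds rather than raw probabilities keeps the per-ray update to a single addition per cell, and repeated scans naturally sharpen the map.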


PARTICLE FILTER:

To localize the robot while mapping, I implemented a particle filter. In a particle filter, the belief distribution over the robot's pose is represented by a set of particles, each an individual pose hypothesis. Ideally, these particles would be sampled from the true belief distribution, but doing so directly is not possible. Instead, they are first sampled from a proposal distribution: the odometry model with normally distributed noise. The proposal is then corrected according to the Bayes filter. In practice, this means weighting each particle according to the sensor model and then resampling the particles in proportion to their weights. For my sensor model, I used a simplified likelihood field: for each particle, the endpoints of the Lidar rays are checked for how well they correspond to map boundaries, and the stronger the correspondence, the higher the weight.
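The propagate-and-resample skeleton can be sketched as below. The noise levels are illustrative, the sensor model is abstracted into the `weights` input, and the resampler shown is the standard low-variance (systematic) scheme; whether the project used this exact resampler is an assumption:

```python
import math
import random

def propagate(particles, forward, dtheta, trans_std=0.02, rot_std=0.01):
    """Proposal distribution: apply the odometry delta (forward distance and
    heading change, in the robot frame) with Gaussian noise to each particle."""
    new = []
    for x, y, theta in particles:
        th = theta + dtheta + random.gauss(0.0, rot_std)
        d = forward + random.gauss(0.0, trans_std)
        new.append((x + d * math.cos(th), y + d * math.sin(th), th))
    return new

def low_variance_resample(particles, weights):
    """Systematic resampling: one random offset, then evenly spaced draws,
    so particles survive in proportion to their weights."""
    n = len(particles)
    step = sum(weights) / n
    r = random.uniform(0.0, step)
    out, cumulative, i = [], weights[0], 0
    for m in range(n):
        u = r + m * step
        while u > cumulative:
            i += 1
            cumulative += weights[i]
        out.append(particles[i])
    return out
```

Low-variance resampling uses a single random draw for the whole particle set, which reduces sampling variance compared with drawing each survivor independently.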


FRONTIER EXPLORATION:

To fully explore the maze, I used a frontier exploration policy. Frontiers are defined as the boundary between free space and unexplored space. The robot plans paths to nearby frontiers using A-star search.
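Frontier detection itself is a simple scan over the map. A minimal sketch, assuming the occupancy grid has been thresholded into free, occupied, and unknown cells (the 0/1/-1 encoding is an illustrative choice):

```python
FREE, OCCUPIED, UNKNOWN = 0, 1, -1

def find_frontiers(grid):
    """Frontier cells: free cells with at least one unknown 8-neighbor."""
    rows, cols = len(grid), len(grid[0])
    frontiers = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != FREE:
                continue
            neighbors = ((r + dr, c + dc)
                         for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                         if (dr, dc) != (0, 0))
            if any(0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == UNKNOWN
                   for nr, nc in neighbors):
                frontiers.append((r, c))
    return frontiers
```

Exploration then loops: detect frontiers, pick a nearby one, plan a path to it, and repeat until no frontiers remain, at which point the map is complete.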


Below is the full demo. The robot is marked by the green arrow. The particles are marked in red (they are partially obscured by the green Lidar rays). The path returned by A-star is also marked in green. The map consists of three colors: white for free space, black for boundaries, and purple for frontiers.
