Q is an unmanned ground vehicle that competes in the annual Intelligent Ground Vehicle Competition (IGVC). The robot must navigate outdoor courses through GPS waypoints and designated pathways while avoiding obstacles and impassable terrain. It does so using CPU and FPGA processors, a GPS receiver, image and orientation sensors, and a laser proximity scanner. As an undergraduate, I developed navigation, feedback control, and pattern recognition algorithms for the robot. I completed:
- Path planning algorithms using vector field histograms
- Image processing algorithms for edge detection and object recognition
- Hardware–software interfaces
- Integration of the sensory system: a GPS receiver and compass, a CMUcam, and a SICK laser scanner
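To give a flavor of the path planning work, here is a minimal sketch of the vector field histogram idea: obstacle points are binned into a polar histogram of obstacle density around the robot, and the steering heading is taken from the widest low-density sector. The function name, parameters, and weighting are illustrative assumptions, not the competition code.

```python
import numpy as np

def vfh_steering(obstacles, n_sectors=36, threshold=0.3):
    """Pick a heading from a polar obstacle-density histogram (simplified VFH).

    obstacles: iterable of (x, y) points in the robot frame (robot at origin).
    Returns the heading (radians) at the center of the widest free sector run.
    (Illustrative sketch: real VFH also smooths the histogram and handles
    wrap-around; both are omitted here for brevity.)
    """
    obstacles = np.asarray(obstacles, dtype=float).reshape(-1, 2)
    hist = np.zeros(n_sectors)
    if obstacles.size:
        angles = np.arctan2(obstacles[:, 1], obstacles[:, 0])       # -pi..pi
        dists = np.hypot(obstacles[:, 0], obstacles[:, 1])
        sectors = ((angles + np.pi) / (2 * np.pi) * n_sectors).astype(int) % n_sectors
        # Closer obstacles contribute more density to their sector.
        np.add.at(hist, sectors, 1.0 / np.maximum(dists, 1e-6))
    free = hist < threshold
    # Longest run of consecutive free sectors.
    best_start, best_len, start = 0, 0, None
    for i, f in enumerate(free):
        if f and start is None:
            start = i
        if (not f or i == n_sectors - 1) and start is not None:
            end = i + 1 if f else i
            if end - start > best_len:
                best_start, best_len = start, end - start
            start = None
    center = best_start + best_len / 2.0
    return center / n_sectors * 2 * np.pi - np.pi   # sector index back to radians

# A small wall of obstacles straight ahead steers the heading away from 0.
heading = vfh_steering([(2.0, 0.0), (2.0, 0.2), (2.0, -0.2)])
```

With no obstacles at all, the whole circle is one free run and the returned heading is 0 (straight ahead).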
My engineering senior research project involved developing and implementing a stereo vision (3D computer vision) algorithm that efficiently tackles the correspondence problem: which pixels in two images of the same scene, taken from different viewpoints, depict the same point? Solving the correspondence problem allows stereoscopic depth to be computed from the images of two cameras at separate vantage points. We implemented the algorithm on a Valde Systems image processor and used a Point Grey Bumblebee stereo camera to produce dense, accurate disparity maps in real time. Q can use this data to build a 3D model of its navigating environment.
Here's a description of our techniques:
- Stereo Vision System for 3D Surface Reconstruction [pdf]
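As a toy illustration of the correspondence problem, here is block matching with a sum-of-absolute-differences (SAD) cost on rectified images: for each left-image patch, search horizontally shifted right-image patches and record the shift with the lowest cost as the disparity. This is a generic baseline sketch, not the algorithm from the paper; depth then follows from disparity as depth = focal_length * baseline / disparity.

```python
import numpy as np

def disparity_sad(left, right, max_disp=16, window=5):
    """Dense disparity map by SAD block matching on rectified grayscale images.

    left, right: 2D float arrays of identical shape. For each left pixel,
    candidate right patches shifted 0..max_disp pixels leftward are compared,
    and the shift with minimal sum of absolute differences wins.
    (Brute-force baseline for illustration; real-time systems use smarter
    cost aggregation and hardware acceleration.)
    """
    h, w = left.shape
    half = window // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            best_cost, best_d = np.inf, 0
            for d in range(max_disp + 1):
                cand = right[y - half:y + half + 1, x - d - half:x - d + half + 1]
                cost = np.abs(patch - cand).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp

# Synthetic check: shift a random image 3 pixels to the right to make the
# "left" view, then recover that disparity in the interior.
rng = np.random.default_rng(0)
right = rng.random((20, 30))
left = np.zeros_like(right)
left[:, 3:] = right[:, :-3]
d = disparity_sad(left, right, max_disp=8, window=5)
```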
Another line of research that I pursued was the use of orthogonal transform algorithms for time series analysis and system identification. Specifically, I used the fast orthogonal search (FOS) method to predict the boundaries of a fast-changing, narrow navigational boundary. For an irregular time series, FOS identifies frequencies much more accurately than discrete Fourier analysis.
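The core idea can be sketched with a simplified greedy frequency search: at each step, add the candidate sinusoid pair that most reduces the least-squares residual of the fit. This captures the spirit of FOS on irregularly sampled data, but it is not Korenberg's FOS proper, which orthogonalizes the candidate functions so each step is cheap; the exhaustive refitting below is for clarity only, and all names and parameters are illustrative.

```python
import numpy as np

def greedy_frequency_search(t, y, candidate_freqs, n_terms=2):
    """Greedy spectral fit for an irregularly sampled time series.

    At each step, tries every remaining candidate frequency f, fits the
    current model plus cos/sin terms at f by least squares, and keeps the f
    that minimizes the squared residual. This mirrors the term-selection
    logic of fast orthogonal search without its orthogonalization speed-up.
    Returns the list of chosen frequencies.
    """
    t, y = np.asarray(t, float), np.asarray(y, float)
    columns = [np.ones_like(t)]            # start with a DC term
    chosen = []
    for _ in range(n_terms):
        best = None
        for f in candidate_freqs:
            if f in chosen:
                continue
            trial = np.column_stack(
                columns + [np.cos(2 * np.pi * f * t), np.sin(2 * np.pi * f * t)]
            )
            coef, *_ = np.linalg.lstsq(trial, y, rcond=None)
            resid = np.sum((y - trial @ coef) ** 2)
            if best is None or resid < best[0]:
                best = (resid, f)
        chosen.append(best[1])
        columns += [np.cos(2 * np.pi * best[1] * t),
                    np.sin(2 * np.pi * best[1] * t)]
    return chosen

# Irregular samples of a 1.3 Hz sine plus a 3.7 Hz cosine; the greedy search
# should recover both frequencies from a 0.1 Hz candidate grid.
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 10, 200))
y = np.sin(2 * np.pi * 1.3 * t) + 0.5 * np.cos(2 * np.pi * 3.7 * t)
freqs = greedy_frequency_search(t, y, np.arange(0.1, 5.0, 0.1), n_terms=2)
```

Because the samples are fit directly at their actual times, nothing requires a uniform grid, which is why this family of methods handles irregular series more gracefully than the discrete Fourier transform.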