chc170/CarND-Advanced-Lane-Lines
Advanced Lane Finding Project

Overview

The goal of this project is to identify lane lines in a video using computer vision techniques and mark them in the output video. The pipeline built for tackling the problem is the following:

  • Compute the camera calibration matrix and distortion coefficients given a set of chessboard images.
  • Apply a distortion correction to raw images.
  • Apply a perspective transform to rectify the image ("birds-eye view").
  • Use color transforms to create a thresholded binary image.
  • Detect lane pixels and fit to find the lane boundary.
  • Determine the curvature of the lane and vehicle position with respect to center.
  • Warp the detected lane boundaries back onto the original image.
  • Output visual display of the lane boundaries and numerical estimation of lane curvature and vehicle position.

Rubric Points

Here I will consider the rubric points individually and describe how I addressed each point in my implementation.


Camera Calibration

Calculate Camera Matrix and Distortion Coefficients Using Chessboard Images

Multiple chessboard pictures taken by the same camera from different angles are provided in the camera_cal directory. We process them to obtain a camera matrix and distortion coefficients for undistorting our images. I first prepare a (6 x 9) x 3 array obj_pt representing the calibration pattern points in 3D space. Whenever chessboard corners are detected in a processed image, a copy of obj_pt is appended to obj_pts and the detected corners are appended to img_pts. The corners are found by the cv2.findChessboardCorners function. We then feed these two sets of points to the cv2.calibrateCamera function to compute our matrix and coefficients. The result is stored in a pickle file for future use.

[calibration result image]

Pipeline (single images)

1. Distortion Correction

With the camera matrix and distortion coefficients from the previous step, we can simply apply them to an image using the cv2.undistort function.

[undistortion example image]

2. Perspective Transform to Birds-eye View

This is a manual process. I eyeballed two points on each of the two lines in a straight-lane image, then fixed the x positions of the points on the same line to the same value. This gives four new points, which become the points in the destination image. Use cv2.getPerspectiveTransform to get the transform matrix, then feed the image and the matrix to cv2.warpPerspective to obtain a warped image.

[birds-eye view example image]

3. Thresholded Binary Image

This process creates a binary image in which, ideally, only the pixels of interest are set to 1. The pixels we are interested in are, of course, the lane lines.

Histogram Equalization

To improve the thresholding result, histogram equalization can be very useful in certain situations (e.g. when the road surface is too dark or too bright). This process emphasizes the visual changes in the image. (http://docs.opencv.org/3.2.0/d5/daf/tutorial_py_histogram_equalization.html)

[histogram equalization example image]

Threshold Using Different Color Spaces

In this process, I used 3 different color spaces to threshold the colors. HLS can identify yellow and white by setting different ranges of values. The L channel in the LUV color space performs well at picking up white, and the B channel in the LAB color space performs well at identifying yellow. I discarded the Sobel gradient threshold because its results were poor.

The following code filters the image by the given range for each channel:

```python
import numpy as np
import cv2

# Lower/upper bounds for the three channels of the chosen color space
color_min = np.array([channel1_min, channel2_min, channel3_min])
color_max = np.array([channel1_max, channel2_max, channel3_max])
# Pixels within the bounds become 255; everything else becomes 0
color_mask = cv2.inRange(image, color_min, color_max)
```

[color threshold example image]

4. Find Lane Lines From Binary Image

There are two ways of finding points in the binary image. One is starting from scratch. The other is starting from the previously found line.

Find Points From Scratch

To begin this process, we create a histogram over the bottom half of the image, counting the hot pixels in each x column. The index with the maximum count on the left half of the histogram becomes the starting point of the left line, and the index with the maximum count on the right half becomes the starting point of the right line. Then, we split the image horizontally into 7 bars and find the possible lane-line region in each bar, starting from the bottom bar. In each subsequent bar, we look for the index of the maximum count within a range not too far from the previous bar. All the pixels with value 1 in these regions are then used to fit a second-order polynomial. The result looks like the following:

[sliding-window fit example image]
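The search described above can be sketched as follows. The 7-bar split comes from the text; the margin and minpix values, and the re-centering on the mean pixel position, are illustrative simplifications:

```python
import numpy as np

def fit_lane_lines(binary, n_bars=7, margin=100, minpix=50):
    """Histogram base points, then a bar-by-bar search, then a polynomial fit."""
    h, w = binary.shape
    # Column-wise counts of hot pixels in the bottom half of the image
    histogram = np.sum(binary[h // 2:, :], axis=0)
    left_x = int(np.argmax(histogram[:w // 2]))
    right_x = int(np.argmax(histogram[w // 2:])) + w // 2

    ys, xs = binary.nonzero()
    fits = []
    for base in (left_x, right_x):
        x_current = base
        keep = np.zeros(len(xs), dtype=bool)
        bar_h = h // n_bars
        for bar in range(n_bars):
            y_lo, y_hi = h - (bar + 1) * bar_h, h - bar * bar_h
            inside = ((ys >= y_lo) & (ys < y_hi) &
                      (xs >= x_current - margin) & (xs < x_current + margin))
            keep |= inside
            # Re-center the next bar on the pixels found in this one
            if inside.sum() > minpix:
                x_current = int(xs[inside].mean())
        # Second-order polynomial x = f(y) through the collected pixels
        fits.append(np.polyfit(ys[keep], xs[keep], 2))
    return fits  # [left_fit, right_fit], each as (a, b, c)
```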

Find Points From the Previously Found Line

In order to reduce the processing time, we can take the previously found polynomial line and buffer it with a margin to create an area in which the points of the next line are likely to lie.

[search-around-previous-fit example image]
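A sketch of that shortcut, assuming the previous fit is a coefficient triple (a, b, c) for x = f(y); the margin value is illustrative:

```python
import numpy as np

def search_around_fit(binary, fit, margin=100):
    """Keep only the hot pixels within +/- margin of the previous polynomial."""
    ys, xs = binary.nonzero()
    x_fit = np.polyval(fit, ys)          # expected x for each hot pixel's y
    inside = np.abs(xs - x_fit) < margin
    return np.polyfit(ys[inside], xs[inside], 2)
```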

5. Draw Boundary and Reversed Perspective Transform

The boundary can be drawn by applying all y values (from 0 to the image height) to the fitted polynomial equation to get the corresponding x values. The reversed perspective transform is obtained simply by swapping the source and destination points when computing the matrix. The result will look like the following:

[lane drawing example image]

6. Calculate Radius of Curvature and Vehicle Position With Respect to Center

First, we have to measure the length in meters per pixel in the y and x dimensions. The y factor is measured with the length of a dashed lane line, while the x factor is measured with the width of the lane.

```python
ym_per_pix = 3 / 115    # meters per pixel in y (length of a dashed lane line)
xm_per_pix = 3.7 / 700  # meters per pixel in x (width of the lane)
```

The radius of curvature is calculated by the following formula:

```
x = a*y^2 + b*y + c
y_eval = y * ym_per_pix
radius_of_curvature = (1 + (2*a*y_eval + b)^2)^1.5 / |2*a|
```

The center position of the camera is assumed to be at the center of the image. The center position of the lane is the average position of the two detected lane lines. The difference between these two numbers is the distance of the vehicle from the center of the lane.

```
vehicle_position = ((line1_pos + line2_pos)/2 - image_width/2) * xm_per_pix
```
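In code, the two measurements might look like the sketch below. It transcribes the formulas above directly; the function names and the default image size are illustrative:

```python
import numpy as np

YM_PER_PIX = 3 / 115    # meters per pixel in y (dashed lane line)
XM_PER_PIX = 3.7 / 700  # meters per pixel in x (lane width)

def radius_of_curvature(fit, y_px):
    """Radius at pixel row y_px for a fit (a, b, c), per the formula above."""
    a, b, _ = fit
    y_eval = y_px * YM_PER_PIX
    return (1 + (2 * a * y_eval + b) ** 2) ** 1.5 / abs(2 * a)

def vehicle_offset(left_fit, right_fit, image_width=1280, image_height=720):
    """Signed distance in meters between the lane center and the image center."""
    line1_pos = np.polyval(left_fit, image_height)
    line2_pos = np.polyval(right_fit, image_height)
    return ((line1_pos + line2_pos) / 2 - image_width / 2) * XM_PER_PIX
```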

Pipeline (video)

Line validation

  1. Coefficient comparison: I compare the second-order coefficient between frames. According to my experiments, the difference should be smaller than 0.0005; otherwise, we flag this frame as "lane line not detected".
  2. Distance between the two lines: the distance between the two lines should always be in the range of 3.7 +/- 0.5 meters. The line position is calculated by evaluating the fitted polynomial at the bottom of the image (y = 720).
  3. Curvature comparison: this check shares the same issue as check 2.
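Checks 1 and 2 might be combined like this. The 0.0005 and 3.7 +/- 0.5 thresholds come from the list above; the function signature and the per-line structure are illustrative:

```python
import numpy as np

XM_PER_PIX = 3.7 / 700  # meters per pixel in x

def lines_valid(left_fit, right_fit, prev_left_fit, prev_right_fit, y_bottom=720):
    """Sanity-check a new pair of fits against the previous frame's fits."""
    # 1. The second-order coefficient should change slowly between frames
    if abs(left_fit[0] - prev_left_fit[0]) > 0.0005:
        return False
    if abs(right_fit[0] - prev_right_fit[0]) > 0.0005:
        return False
    # 2. Lane width at the bottom of the image should be 3.7 +/- 0.5 meters
    width = (np.polyval(right_fit, y_bottom) - np.polyval(left_fit, y_bottom)) * XM_PER_PIX
    return abs(width - 3.7) <= 0.5
```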

Smoothing

Smoothing is done by averaging the polynomial coefficients of the last 10 frames in which the lines were detected.
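The smoothing can be sketched with a bounded deque of recent fits. The 10-frame window comes from the text; the class name and the fall-back behavior for rejected frames are illustrative:

```python
from collections import deque

import numpy as np

class LineSmoother:
    """Average the polynomial coefficients of the most recent detected frames."""

    def __init__(self, window=10):
        self.fits = deque(maxlen=window)

    def update(self, fit, detected):
        if detected:
            self.fits.append(np.asarray(fit, dtype=float))
        # When the current frame was rejected, fall back to the last average
        return np.mean(self.fits, axis=0) if self.fits else None
```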

Here's a link to my video result and challenge video.


Discussion

From my perspective, creating the binary image and fitting the lines are the two most difficult and important processes. If the thresholding performs well, we can easily fit the points to the polynomial, but that is very hard when the environment is noisy, as in the challenge videos. The noise pushes the difficulty onto the line-fitting process. A small number of outliers can strongly affect the result, so we might have to apply an additional technique like RANSAC to reduce the effect of outliers. However, most such computer vision algorithms are not time efficient, so I did not use them in this project.
