
MVAV

Bringing autonomous vehicles to students!

The Minimum Viable Autonomous Vehicle project is just starting! Please reach out to me at [email protected] if you are interested or have suggestions!

This repository has code and instructions for building a $50 autonomous vehicle. This car can be used in courses and by hobbyists focusing on control systems, mobile robotics, machine vision, autonomous robotics, and related concepts.

Several labs have been developed which teach these concepts (several do not require the car). See https://mvav.github.io for more information.

Thanks for stopping by! - Avery Sturzl

Setting up your environment

Prerequisites

  • Python3
  • pip3
  • OpenCV
    sudo apt update
    sudo apt install python3-pip
    sudo apt install python3-opencv
    sudo -H pip3 install bpython

Set up

  • Get the code
    git clone https://github.com/sturzl/mvav.git
  • Test your setup
    python3 setup_test.py
  • You should see an image of a mock track and it should print some text. The script contains the following code:
    import cv2 as cv
    image = cv.imread('images/straight_nobackground.jpg')
    cv.imshow('image',image)
    cv.waitKey(0)
    cv.destroyAllWindows()
    print('Great Success!')

Generating Control Signals from Images

  • The general idea is to find the color of your track, then determine whether the car is on the track or has drifted left or right. We'll use red for the target track:
    • Read the image into python
    • Remove all colors except for red
    • Find the center of the red blob in the image
    • Determine how far the red blob's center is from the center of the image
  • The images folder contains the following images. Open them to see what they contain. The naming is as follows:
    • straight.jpg - car should go straight
    • left.jpg - car should turn left
    • right.jpg - car should turn right
  • Open which_way.py to see some example code. Edit the image being used to see different output. Run it as follows (hit enter to close the image window):
    python3 which_way.py

TODO

  • Verify whether any of these steps are worth it, i.e. start with a working real-world car and data, then experiment. Calibration may be necessary for good performance; without it, reliable thresholding may be difficult, or we may simply need to get better at thresholding. Weigh processing time/latency in the performance trade-off as well as the actual car/control-signal performance
  • Investigate ways to smooth noise/predict control based on images/control-signal output with less image processing (maybe right/left/straight is good enough with a Kalman filter or something)?
  • Camera Calibration
  • Color Correction
  • Color & Contrast Correction
  • More contrast enhancement, etc.
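One cheap option for the smoothing TODO above, short of a full Kalman filter, is an exponential moving average over successive steering offsets. This is a sketch of that idea only; the class name and default `alpha` are illustrative, not anything in the repo:

```python
class OffsetSmoother:
    """Exponential moving average over successive steering offsets.

    alpha near 1 trusts each new measurement; alpha near 0 smooths
    more heavily (and reacts more slowly)."""

    def __init__(self, alpha=0.3):
        self.alpha = alpha
        self.value = None

    def update(self, measurement):
        # Seed with the first measurement, then blend each new one in.
        if self.value is None:
            self.value = measurement
        else:
            self.value = self.alpha * measurement + (1 - self.alpha) * self.value
        return self.value
```

Feeding each frame's offset through `update()` before generating a control signal would damp single-frame noise at the cost of a little steering lag.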

Reading images from the raspi

  • Google for reading images from an IP camera
  • The stream needs to be JPEG, or .mjpg for the easiest version
  • check out this code sample that uses OpenCV
    import cv2
    cap = cv2.VideoCapture("http://192.168.18.37:8090/test.mjpeg")

