Intersection Controller
Interim Report
Southeastern Louisiana University
ET 493, Fall 2018
By: Anthony Sanchez and Hayley Gilson
Advisor: Dr. Koutsougeras
Professor: Dr. Koutsougeras

Table of Contents
Abstract
Introduction
Materials
Design
Progress and Accountability
Self-Reflection
Works Cited
Form
Appendix

Abstract:
The overall goal of our project is to develop a general-purpose intersection controller that is low-cost and based on microcontroller and imaging technology. Ideally, it should be able to handle both a 4-way intersection and an intersection with a train, reducing wait times while still maintaining safety by assigning right-of-way to lanes as needed.

Introduction:
The main objective of this project is to create a low-cost sensor, consisting of a microcontroller and a camera, capable of controlling the traffic lights of a 4-way intersection based on the images it receives through the camera, allowing for the best flow of traffic. To build the sensor for the intersection we will be using a Raspberry Pi 3 and a Raspberry Pi camera, with the OpenCV libraries making the best use of the camera. Using OpenCV we will be able to analyze images to differentiate objects, such as telling the difference between a car and a pedestrian. Image processing via the OpenCV libraries will be used to examine the pictures and tell which lanes have the highest priority to be passed along.

A separate system, however, will be needed to deal with an intersection between a railway and the road. For this, the OpenCV libraries will be used as with the four-way intersection, but this time with an Arduino board and an Arduino-compatible camera instead. Since trains, for obvious reasons, cannot stop at an intersection, this system requires that a train be spotted with the camera, and recognized as such, with enough time to stop cars from crossing before the train arrives.

Materials
For the sake of the project, two systems are to be set up.
For the most part, the systems will be near-identical, consisting of a camera connected to a microprocessor. As mentioned earlier, the biggest difference between the two systems will be the processors used: one system will be built around a Raspberry Pi, while the other will use an Arduino Uno. Aside from the processors, the goal of each system is the same, meaning any differences in function will come solely from the capabilities of each board. Both systems will have a serial camera connected to the board, sending a stream of images to it.

The piece that plays the largest role in the project, however, is the image processing software. As the goal is to get a traffic light system to react to the traffic within the lanes, image processing software is necessary to identify cars without outside input. With that in mind, we did research and settled on OpenCV, an extensive image processing library that is compatible with both Arduino and Raspberry Pi. OpenCV was chosen over other computer vision libraries because it has all the algorithms and methods needed to detect objects, and it is reliably fast, an important trait for a sensor to work well. OpenCV also had the best code examples to help us write our own code and adapt existing code to the requirements of the project. Having such a large library enables different approaches to reaching the required goal in the end.

Design

Raspberry Pi

Figure 1-1 - Diagram of Raspberry Pi connected with the Raspberry Pi camera

The Raspberry Pi 3 was outfitted with Raspbian, then updated and upgraded to make sure the Pi was up to date. Afterward the camera was installed as in Figure 1-1, with the end of the camera's ribbon cable connected to the CSI connector on the Pi.
To enable the camera on the Pi, the camera must be enabled in the config and the Pi rebooted. This is all that is needed to have the camera working. It can take a picture and save it using the command raspistill -o image.jpg, where image.jpg is the name given to the image. To do image processing with the Pi, OpenCV must be installed so that its libraries can be used, and several steps must be done beforehand.

Before installing OpenCV, the following packages must be installed on the Raspberry Pi: pkg-config, libjpeg8-dev, libtiff4-dev, libjasper-dev, libpng12-dev, libgtk2.0-dev, libavcodec-dev, libavformat-dev, libswscale-dev, libv4l-dev, python2.7-dev, libatlas-base-dev, and gfortran. Once these packages are installed, pip must be installed using the command “wget ” and then “sudo python get-pip.py”, and imutils installed using the pip install command. Afterwards numpy can be installed with “pip install numpy”. Finally, OpenCV can be installed (see Appendix, item 1). Afterwards the cv2.so and cv.py files must be placed inside the directory /usr/local/lib/python2.7/site-packages/. Now OpenCV is functional and ready.

The next important step was installing a GUI so that images from the Raspberry Pi can be shown on the screen. After researching, I found that Xming was the best launcher for this. No settings needed to be changed; after installation it is ready to run. To activate Xming, port forwarding must be turned on in PuTTY.

The basic image processing I decided to do was to find the center of a shape, using the contour and sides of the shape to work out the center. A Python script with the code to do this was placed on the Raspberry Pi, along with a picture of shapes whose centers are to be found, saved as shapes_and_colors.png. To run the script using the GUI, the command “gksudo” must be run first to activate the Xming GUI.
Then the command “python center_of_shape.py --image shapes_and_colors.png” is run, where center_of_shape.py is the Python script and shapes_and_colors.png is the name of the picture the script loads. The script takes the picture, and the center of each shape is then found one keystroke at a time; Figure 1-2 shows the result after finding the centers of all the shapes.

Figure 1-2: Center of shape found using Raspberry Pi

Arduino

Hardware System

For the Arduino end, meanwhile, we already had most of the components. While we started with an Arduino Uno, an Arduino Yun was picked further along in development due to its better processor and built-in memory. Fortunately, breadboards and wires were quickly found, and the Arduino software was already installed on a computer, so once the system was set up, testing of the program could happen quickly. However, the camera – an Adafruit TTL Serial camera, picked due to its high ratings and the extensive tutorials available for it – still needed to be ordered and delivered. During the wait for the camera, the software end of the project – as well as the installation of the VC0706 software – got more attention so that the wait would not be wasted.

Once the camera arrived, it came with no wires installed, so the next step was to solder some in. The cables are soldered in the order shown in Figure 2-1, with the wires carrying the input of data, the output of data, ground, and power, respectively. The first two holes are designated for NTSC output functionality and aren't as necessary to the function of the camera as the other four. Keep in mind that the wires needed are quite thin, and if you're not used to soldering – or to detailed soldering – you may want to ask someone more experienced for help on this step to avoid potential damage to the camera. The camera can then be connected to the Arduino as shown in Figure 2-2.
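The wiring in Figure 2-2 includes a pair of resistors that divide the Arduino's 5 V logic level down to a level the camera can tolerate. The divider arithmetic can be checked with a short sketch; the 10 kΩ/20 kΩ values below are hypothetical illustrations, not necessarily the exact resistors used in this build:

```python
def divider_out(v_in, r_top, r_bottom):
    """Output voltage of a simple two-resistor divider:
    Vout = Vin * R_bottom / (R_top + R_bottom)."""
    return v_in * r_bottom / (r_top + r_bottom)

# Hypothetical 10 kOhm / 20 kOhm pair dividing a 5 V signal
print(round(divider_out(5.0, 10_000, 20_000), 2))  # -> 3.33
```

Any resistor pair with a 1:2 ratio gives the same ~3.3 V result; the absolute values mainly trade off current draw against noise immunity.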
The resistors are attached to the circuit because, while the Arduino typically outputs 5 V, the camera can't handle a voltage of that level. The resistors therefore divide the signal down to a much more manageable 3.3 V.

Figure 2-1: Diagram of TTL Serial camera, with wires soldered

Figure 2-2: TTL Serial Camera, wired to Arduino and breadboard

Once the camera was connected to the Arduino, a test needed to be done to verify the camera was functioning as intended. For this part, no code is needed on the Arduino – just upload a blank sketch and keep the board plugged into the computer. First, VC0706 needs to be launched, although keep in mind that since the software was last updated in 2008, you will most likely need to run it in compatibility mode to get it functioning properly. From there, you'll need to open the port the Arduino is using (if you're not sure which port it is, check the Arduino software). The quickest way to check whether everything is working is to click the “Get Version” button. If the camera is functioning properly with the rest of the system, you should get the version of the program you're using; otherwise (for example, if the camera's wiring is shoddy and a steady current isn't getting through), you'll get “error: cmd time out.”

Once you've confirmed the system is functional, you should be free to take some test pictures with the camera. Click on “FBUF Ctrl”, and then click “Sel File” to create or designate a file to save the camera feed to. From there, clicking “Stop FBuf” and “Read” will stop the frame buffer to save it to the file and show the image, respectively.

Software

For the software part, keep in mind that OpenCV is a library mainly built for C++ and Python. While there are libraries for Arduino that use a simplified form of OpenCV, these libraries aren't compatible with an Arduino Yun.
Therefore, while the code to run the camera could be developed in the Arduino IDE, separate C++ code would be needed, both to receive the images to be processed and to send commands back to the Arduino. To get this to work, we picked Visual Studio 2017, an IDE that supports both C++ and Arduino code.

For the sake of the project, we only needed the free version of Visual Studio, Visual Studio Community, so set-up of the software was thankfully quick, although support for C++ and Arduino had to be downloaded separately. Next, OpenCV had to be set up with Visual Studio. Overall, this meant opening a command prompt to add the library to the computer as an environment variable. Typing “setx -m OPENCV_DIR C:\Users\hayle\Documents\opencv\build\x64\vc15” did the trick, though the file location should be changed to reflect where the bin folder is located. For each C++ project using OpenCV, however, three settings need to be changed for the program to run properly, all of which can be found in the project properties. First, under the C++ options, the library's include files (“C:\Users\hayle\Documents\opencv\build\include,” in my case) need to be added to “Additional Include Directories”. Secondly, under the Linker options, the library files (“C:\Users\hayle\Documents\opencv\build\x64\vc15\lib”) need to be added to “Additional Library Directories”. Finally, still under the Linker section, the library dependency file (which should be opencv_world400d.lib) needs to be added to “Additional Dependencies”.

Once those options are changed, setting up the code should be easy. Before jumping into the code itself, though, it's advised that you first read and run the tutorials on the OpenCV website, listed in the sources section. The tutorial code has been set up and run by the developers, and at the very least it makes a good way to verify that any errors won't be the result of improper set-up of the OpenCV library.
Once that's done, the project should be ready to run the code found in the appendix. For the sake of the project, an unedited form of Figure 1-2 was saved to the project folder and is accessed by the program to show that the results (Figure 2-3) are similar, if not outright identical, to the Python results.

Figure 2-3: The image used (left), and the results from the C++ code (right)

Issues

In summary, we have made less progress on the project than we had hoped at this point, though much of the delay was out of our control. Among other things, cameras that could be connected to the microprocessors were vital to the project, so a lack of progress while those were ordered and delivered was inevitable. The Raspberry Pi camera installation went almost perfectly, since the camera simply plugs into the Raspberry Pi, except that the initial camera received was faulty and could not make a connection. After another camera was received, it worked fine.

The biggest issue remaining on the Arduino side at the time of this report is, unfortunately, with one of the most vital pieces of the project. While most of the other hardware issues (including improper wiring and VC0706 needing to be run in compatibility mode) were resolved relatively quickly, the wiring of the camera has continued to be a recurring problem. After numerous weeks spent resoldering the wires in the hope that the camera would come online and stay online, the best that could be accomplished was a few minutes of function before error messages and faulty signals popped up – obviously, not an ideal situation for a camera that should theoretically run 24/7. In the end, while we continued trying to troubleshoot the camera, attention ultimately had to turn solely to OpenCV, if only to have visible results.

Meanwhile, the main issue on the Raspberry Pi side, after a new camera had been received, was installing the OpenCV program.
The Raspberry Pi can do basic image processing on photos taken with the camera, which is easy to do using OpenCV version 2.4.10. However, as the current goal of the project is object detection, OpenCV version 3.3.0 had to be downloaded to meet the requirements. Unfortunately, the 3.3.0 installation is not being picked up by the import of cv2 in the Python script, and it still needs a lot of troubleshooting, which is being worked on.

Progress and Accountability

On the hardware side, the camera works fine on the Raspberry Pi and can take pictures normally on its own. The code is now able to take a picture, save it to the microSD card, and then do basic image processing on the picture it just took. The basic image processing being done is that the Raspberry Pi uses the contours of the shapes in the picture to point out their centers: it converts the picture to grayscale, uses the contrast between shades to highlight each shape, and then labels the center of the shape.

The Arduino system, on the other hand, was less successful, though that was less due to operator error and more due to camera problems. As mentioned in the Issues section above, the camera wires were quite mercurial, prompting a lot of hardware work with nothing to show for it. The coding, thankfully, went much better, with an image processing program that found the contours and centroids of the shapes and – the centroids of triangles aside – was quite successful.

Self-Reflection

Anthony:
By doing this Raspberry Pi project I was able to improve my skills with both the Raspberry Pi and Python coding, and I got to use a wider range of Python libraries to do more things. It was also my first time using computer vision, both in coding and in general. Seeing the huge support available for the Pi makes me realize how great the board is and how many things you are able to do with it if you just attempt to.
Microcontrollers are extremely helpful, and I would not mind using them further on in my career, and the same goes for Python coding.

Hayley:
By doing this Arduino project, I was able to improve in my use of Arduino and C++. At the start of the project, my skill with coding was mediocre due to switching from mechanical to computer engineering so late in my time at school, and I feared that I would not be able to contribute much to the project, never mind do well with it. Thankfully, as the project went on, I found a lot of libraries, hardware, and other helpful things that made the work easier. I certainly didn't expect to do so much with images in C++, and I'm happy to have been proven wrong on that front. Overall, I'm feeling more confident in my capability to be an engineer, and I look forward to furthering both my learning and my career.

Works Cited:
Sources regarding project and goals:
How-To:
OpenCV website:
TTL serial camera tutorial:
How to set up OpenCV with Visual Studio 2017:
How to set up OpenCV on the Raspberry Pi:

Appendix
Raspberry Pi Information

1. Raspberry Pi input for OpenCV
$ wget -O opencv-2.4.10.zip
$ unzip opencv-2.4.10.zip
$ cd opencv-2.4.10
2. Python code for the center_of_shape.py script

# import the necessary packages
import argparse
import imutils
import cv2
from picamera.array import PiRGBArray
from picamera import PiCamera
import time

# initialize the camera and grab a reference to the raw camera capture
camera = PiCamera()
rawCapture = PiRGBArray(camera)

# allow the camera to warm up
time.sleep(0.1)

# grab an image from the camera and save it to the directory
camera.capture(rawCapture, format="bgr")
image = rawCapture.array

# display the image on screen, save it, and wait for a keypress
cv2.imshow("Image", image)
cv2.imwrite('shapes_and_colors.png', image)
cv2.waitKey(0)

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", required=True,
                help="/home/pi/shapes_and_colors.png")
args = vars(ap.parse_args())

# load the image, convert it to grayscale, blur it slightly,
# and threshold it
image = cv2.imread(args["image"])
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (5, 5), 0)
thresh = cv2.threshold(blurred, 60, 255, cv2.THRESH_BINARY)[1]

# find contours in the thresholded image
cnts = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL,
                        cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if imutils.is_cv2() else cnts[1]

# loop over the contours
for c in cnts:
    # compute the center of the contour
    # (the small epsilon avoids division by zero for degenerate contours)
    M = cv2.moments(c)
    cX = int(M["m10"] / (M["m00"] + 1e-7))
    cY = int(M["m01"] / (M["m00"] + 1e-7))

    # draw the contour and center of the shape on the image
    cv2.drawContours(image, [c], -1, (0, 255, 0), 2)
    cv2.circle(image, (cX, cY), 7, (255, 255, 255), -1)
    cv2.putText(image, "center", (cX - 20, cY - 20),
                cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255), 2)

    # show the image
    cv2.imshow("Image", image)
    cv2.waitKey(0)

C++ Code

#include "pch.h"
#include "opencv2/imgcodecs.hpp"
#include "opencv2/highgui.hpp"
#include "opencv2/imgproc.hpp"
#include <iostream>

using namespace cv;
using namespace std;

Mat src_gray;
int thresh = 100;
RNG rng(12345);
void thresh_callback(int, void*);

int main(int argc, char** argv)
{
    Mat src = imread("Shapes.jpeg");
    if (src.empty())
    {
        cout << "Could not open or find the image!\n" << endl;
        cout << "Usage: " << argv[0] << " <Input image>" << endl;
        return -1;
    }
    cvtColor(src, src_gray, COLOR_BGR2GRAY);
    blur(src_gray, src_gray, Size(3, 3));
    const char* source_window = "Source";
    namedWindow(source_window);
    imshow(source_window, src);
    const int max_thresh = 255;
    createTrackbar("Canny thresh:", source_window, &thresh, max_thresh, thresh_callback);
    thresh_callback(0, 0);
    waitKey();
    return 0;
}

void thresh_callback(int, void*)
{
    Mat canny_output;
    Canny(src_gray, canny_output, thresh, thresh * 2);
    vector<vector<Point> > contours;
    vector<Vec4i> hierarchy;
    findContours(canny_output, contours, hierarchy, RETR_TREE, CHAIN_APPROX_SIMPLE);
    Mat drawing = Mat::zeros(canny_output.size(), CV_8UC3);
    for (size_t i = 0; i < contours.size(); i++)
    {
        Scalar color = Scalar(rng.uniform(0, 256), rng.uniform(0, 256), rng.uniform(0, 256));
        drawContours(drawing, contours, (int)i, color, 2, LINE_8, hierarchy, 0);
        // centroid from image moments (epsilon avoids division by zero)
        Moments mu = moments(contours[i]);
        Point2f mc(static_cast<float>(mu.m10 / (mu.m00 + 1e-5)),
                   static_cast<float>(mu.m01 / (mu.m00 + 1e-5)));
        circle(drawing, mc, 4, color, -1);
    }
    imshow("Contours", drawing);
}
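Both listings locate shape centers from image moments, cX = M10/M00 and cY = M01/M00 (with a small epsilon guarding against zero-area contours). As a sanity check, the same arithmetic can be reproduced on a synthetic binary mask without OpenCV; this NumPy sketch and its square test image are illustrative only, not part of the project code:

```python
import numpy as np

def centroid(mask):
    """Centroid from raw image moments: cX = M10/M00, cY = M01/M00.
    M00 is the number of set pixels; M10/M01 sum the x/y coordinates."""
    ys, xs = np.nonzero(mask)
    m00 = xs.size
    if m00 == 0:
        return None  # empty mask has no centroid (the scripts use an epsilon)
    return xs.sum() / m00, ys.sum() / m00

# Synthetic test image: a white 10x10 square at (10, 10) on a 30x30 canvas
mask = np.zeros((30, 30), dtype=np.uint8)
mask[10:20, 10:20] = 255
cx, cy = centroid(mask)
print(cx, cy)  # -> 14.5 14.5 (the square's geometric center)
```

For a solid shape the moment centroid coincides with the geometric center, which is why the scripts can label "center" directly at (cX, cY).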