


Computer Vision: Human Motion Detection
28 November 2017
Jackie Davis
Michael Ledet
Advisor: Cris Koutsougeras
College of Computer Science and Industrial Technology
Southeastern Louisiana University
500 W University Ave, Hammond, LA 70402

Table of Contents
Abstract
Introduction
Objective
Progress
Deliverables
References

Abstract
The concept for our project was to design a device that detects human presence using a small, inexpensive microcontroller with a camera and open-source computer vision libraries. The purpose of this project is to incorporate concepts from previous courses and to demonstrate skills gained at Southeastern Louisiana University.

Introduction
Surveillance becomes a more popular topic every day. Most of us think daily about protecting our families, our valuables, and ourselves. However, security options for the home and office are not always affordable or convenient. Most security cameras require a human to watch the feed and spot a prohibited object or person, while automated systems can run to thousands of dollars once equipment, installation, and monthly monitoring fees are counted. With the development of open-source computer vision libraries, it is possible to build a human-motion detection device on a budget. This is exactly what our solution does: our design achieves human-motion detection for a fraction of the cost of most advanced security cameras.
Utilizing the Raspberry Pi Zero in conjunction with OpenCV, an open-source library of computer vision functions, allowed us to create a very small device that can detect human presence and automatically notify you. To complete this project, our team had to overcome a wide variety of challenges. It is important to note that all of our knowledge of computer vision concepts and Linux commands is self-taught. Every environment we used was new to us, including the operating system on the Pi Zero, the OpenCV library, and the Python language. Learning Linux systems, including the commands for operation and control, was a prerequisite for working in the Pi Zero's operating system: Debian 8 (commonly referred to as "Jessie"). Additionally, to use the OpenCV library, we had to learn the fundamentals of computer vision. Computer vision is a specific field of computing in which devices are designed to process and analyze digital images and videos; in short, these devices try to automate tasks that the human visual system can accomplish, such as video tracking and object recognition. In our case, we focused on learning about human-motion detection.

Objective
The objective is to make an optical, motion-detecting device that is small, efficient, and cost-effective, using a single-chip microcontroller. The purpose of this project is to use the knowledge we have gained to create a low-cost motion detection device that detects human presence. The components of our project are a Raspberry Pi Zero W, a Raspberry Pi camera module, and a 16GB microSD card that serves as the mounting point for the Zero's operating system. After running into several issues with various operating systems, our team eventually settled on Debian 8: Jessie Lite. Jessie is one version behind the most recent release of the Debian operating system.
The "Lite" designation means that this edition of Jessie contains no graphical interface and is controlled purely through the command line. The key to this project was OpenCV, an open-source computer vision library. OpenCV offers many features for real-time computer vision applications, such as facial recognition, object identification, motion tracking, and augmented reality. It is written in C++, but has bindings for Python, Java, and MATLAB.

As far as responsibilities are concerned, we divided the work equally. Jacaqueta oversaw practical interfacing with the Raspberry Pi Zero and learning its capabilities. She also oversaw capturing test images using the camera module, both with and without OpenCV. Michael oversaw choosing the optical sensor for the Raspberry Pi and learning the capabilities and limitations of the camera. He also oversaw installing all the dependencies for the OpenCV library compilation, as well as setting up the live feed for the camera. Most of the other responsibilities were shared. As a team, we unanimously decided that using the Raspberry Pi Zero W in conjunction with OpenCV was the best option for completing the project. Furthermore, both team members worked on interfacing with the optical sensor; we both needed to know how the Raspberry Pi Zero W works in unison with the Pi camera, since this is the root of our entire project. Together we successfully installed OpenCV, programmed the motion detection, and optimized our system.

Figure 1. Raspberry Pi Zero W, camera module, and microSD card. This is the base of our system.
Progress

Microcontroller and Camera Analysis
After laying out the concept of our project, we needed to decide on materials. The first objective was to choose a microprocessor and an optical sensor. It was important to decide on these two simultaneously, because they needed to be compatible with each other. After a great deal of research, we narrowed our options to three, all with similar features embedded in the underlying concept of our project: small, cost-effective, and sufficient. Originally, we chose the Adafruit Trinket-Mini and an Adafruit mini spy camera, mainly because they were the most cost-efficient for the project. However, they did not have the computational power we needed to process images. Ultimately, we decided on the Raspberry Pi Zero W (figure 1) due to its higher processing power and adaptability.

Figure 2. Raspberry Pi Zero W model

The Raspberry Pi Zero W, as with all Raspberry Pi systems, ships running many programs and background processes we had no use for. We removed the Debian 9: Stretch operating system and installed the older Debian 8: Jessie Lite operating system. Jessie Lite runs only the command line, Python 2.7 or 3, and wireless network connectivity. This cut the amount of RAM used for basic functions by 82%. The Pi Zero W weighs 0.3 ounces and measures just 2.6 in x 1.2 in x 0.2 in. It has 512MB of RAM and a single-core 1GHz processor, and costs only $10.00. For such a small component, this microcontroller packs a lot of power and features.
In comparison to the much bigger Raspberry Pi 3, the Pi Zero W was an excellent choice for this project. The Pi 3, costing $35.00, does not have the built-in wireless networking that the Pi Zero W has. It weighs 1.58 ounces, measures 3.4 in x 2.2 in x 0.7 in, and is about three times the size of the Pi Zero W. The Pi 3 has a quad-core 1.2 GHz processor and 1 GB of RAM. It has full-size HDMI and USB ports, whereas the Pi Zero W has mini-HDMI and micro-USB ports. A Wi-Fi adapter for the Pi 3 costs anywhere from $5.00 to $25.00. Both microcontrollers have a microSD card slot, a micro-USB power source, and a VideoCore IV GPU.

There are several reasons why this microcontroller was chosen over the other candidates. Firstly, the controller is simple in design and smaller than most of the other microcontrollers we evaluated, although it is significantly bigger than the Trinket-Mini we originally planned to use. It has enough functionality to work within the parameters of our concept design and enough computing power to handle the image analysis at the root of our project, yet it is not so powerful as to be overkill. Furthermore, this controller is very compatible with the optical sensor we chose. The dimensions of the sensor's casing are 0.98 in x 0.90 in x 0.35 in, and it weighs 3.4 grams. It is a high-resolution module capable of 1080p video and photos up to 3280 x 2464 pixels, costing $21.00 and bringing our overall project cost to $31.00. The purpose of this optical sensor is to record video to be analyzed by the OpenCV installation on our Pi Zero W. Overall, like our chosen microcontroller, this camera offers a decent variety of options in a very small package.
There are several reasons why we ultimately went with this optical sensor over the other candidates. Firstly, it was the smallest and the most flexible: it can be arranged in a variety of ways and fits into a very small space, which is perfect for the parameters of our project. It is also very simple, which typically equates to maximum control through our own program design. It stores its photos as JPEG, a common and simple file format, and also provides a raw video feed. This gave us far more resources for designing our program than if the sensor had used a less common format.

Figure 3. Raspberry Pi Camera v2.1

Initial Testing and Preparation
Next, we were ready to begin working with the Zero and the Pi camera. The first thing we needed to do on the Zero was expand the filesystem to use all the available memory on the microSD card. This was done by accessing the main configuration menu via the command sudo raspi-config. Here, sudo grants elevated privileges, like "Run as administrator" in Windows. Next, we enabled the Pi camera module, which is located under "Interfacing Options" in the configuration menu. From here, we wanted to test the camera module before proceeding any further. Using the built-in commands for the camera module on the Raspberry Pi, we could capture some images. The command used for image capture is sudo raspistill -o image01.jpg, where image01.jpg is the name of the picture file; the picture format for this process is JPEG. Before expanding the filesystem, however, we had to format the SD card and wipe any data from it that was not related to our Raspberry Pi or Python.
To complete this, we needed a microSD-to-SD card adapter to connect the microSD card to a computer. From there, we used the standard SD card utility provided by Windows to format the card to FAT32. Then we used a program called Windows Flash Tool to write the image file containing the Jessie Lite OS to the SD card for use on the Pi Zero W. Once the OS was installed onto the SD card, we booted the Pi Zero W and installed all the missing updates and upgrades by entering the commands sudo apt-get update and sudo apt-get upgrade, respectively, on the command line.

Installing OpenCV
The next objective was to begin prepping the system for the OpenCV installation. OpenCV has a great many dependencies, which means the setup must be very precise or failure will ensue. The required packages for OpenCV are as follows:

- GCC 4.4.x or later
- CMake 2.6 or higher
- Git
- GTK+ 2.x or higher, including headers (libgtk2.0-dev)
- pkg-config
- Python 2.6 or later and NumPy 1.5 or later, with developer packages (python-dev, python-numpy)
- ffmpeg or libav development packages: libavcodec-dev, libavformat-dev, libswscale-dev
- [optional] libtbb2, libtbb-dev
- [optional] libdc1394 2.x
- [optional] libjpeg-dev, libpng-dev, libtiff-dev, libjasper-dev, libdc1394-22-dev

It is worth noting that these can be installed using the basic sudo apt-get commands available by default on the Raspberry Pi. It is equally important to recognize that all of this software is open source and free for anyone to use for educational and personal purposes. Starting at the top of the list: GCC is the GNU Compiler Collection. A compiler is software that translates the source code written by a programmer into assembly or machine language. The GCC collection contains compilers for the C, C++, Objective-C, Fortran, Java, and Ada programming languages. In our case, this is necessary because OpenCV is written in C++.
Therefore, we need GCC to compile the library, as well as the scripts we write for OpenCV. The next primary dependency for OpenCV is CMake, a cross-platform, open-source tool that manages the build process. The key feature of CMake is its ability to build a directory tree outside of the source tree, which means we can delete builds without removing the source files. CMake is used in conjunction with GCC, creating a solid starting point for installing and running OpenCV. The only required dependency for CMake is a C++ compiler, which in our case is GCC. Git is primarily used as a source code manager in software development, but it can also track variations within any file system. GTK+, or the GIMP Toolkit, is a multi-platform toolkit for creating graphical interfaces. pkg-config is used for querying the system for installed libraries; it essentially tells the user what is on the system and whether all dependencies are present. There are many versions of Python, but OpenCV requires 2.6 or newer. Python is a high-level programming language widely used for a variety of applications. People like it because it is user-friendly and its syntax can accomplish multiple tasks in just a few lines; it is more concise than Java or C++ in this respect. Python is the language in which our OpenCV scripts are written. Ultimately, we went with Python 2.7 because it was much more lightweight than newer versions, yet it had the capabilities we needed. Creating a lightweight environment was of critical importance because OpenCV is process-intensive; compiling and installing OpenCV is a big job for the Pi Zero on its own. Similarly, we installed OpenCV 3.1 instead of 3.3.0 because the difference in size was significant, yet the newer version contained nothing extra that we needed.
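For reference, the required (non-optional) packages listed above could be pulled in with apt-get along the following lines. This is a sketch under our setup's assumptions, not a transcript of our exact session; in particular, build-essential is assumed here as the package that provides GCC, and exact package names can vary between Debian releases:

```shell
# Refresh the package index first
sudo apt-get update

# Developer tools: compiler, build system, version control, library query tool
sudo apt-get install -y build-essential cmake git pkg-config

# Image I/O libraries for the formats OpenCV can read and write
sudo apt-get install -y libjpeg-dev libtiff5-dev libjasper-dev libpng12-dev

# Video I/O libraries
sudo apt-get install -y libavcodec-dev libavformat-dev libswscale-dev libv4l-dev

# GTK headers for OpenCV's GUI module, plus Python headers and NumPy
sudo apt-get install -y libgtk2.0-dev python-dev python-numpy
```

Each of these maps onto one of the dependency groups in the list above; the optional packages can be added with further apt-get install lines in the same way.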
After covering all the dependencies for OpenCV, we were ready to install them on our machine to prep for the OpenCV installation. We began with the developer tools: CMake, Git, and pkg-config. Then we installed the image I/O packages we thought we might use: libjpeg-dev, libtiff5-dev, libjasper-dev, and libpng12-dev. These are libraries that allow us to capture and manipulate different image formats. Likewise, we needed the I/O packages for video: libavcodec-dev, libavformat-dev, libswscale-dev, libv4l-dev, libxvidcore-dev, and libx264-dev. Next, we installed the GTK development library, libgtk2.0-dev; as stated previously, this is used for OpenCV's GUI interface. Then Python 2.7 was installed. At this point, we were ready to grab the OpenCV source code from GitHub:

wget -O opencv.zip opencv.zip

Then we began to set up Python for the OpenCV build. This required installing pip, a Python package manager:

wget python get-pip.py

Once this was done, we needed to create a virtual environment in which our images and videos would be processed and fed.
This Python virtual environment was installed to keep dependencies in separate places by creating an independent Python environment for each project:

sudo pip install virtualenv virtualenvwrapper
sudo rm -rf ~/.cache/pip

We then updated our profile file, ~/.profile, to include the libraries and files for the virtual environments:

# virtualenv and virtualenvwrapper
export WORKON_HOME=$HOME/.virtualenvs
source /usr/local/bin/virtualenvwrapper.sh

These lines were appended with:

echo -e "\n# virtualenv and virtualenvwrapper" >> ~/.profile
echo "export WORKON_HOME=$HOME/.virtualenvs" >> ~/.profile
echo "source /usr/local/bin/virtualenvwrapper.sh" >> ~/.profile

Next, we reloaded the profile with the command source ~/.profile to make sure the changes took effect. Now we could create the virtual computer vision environment by entering:

mkvirtualenv cv -p python2.7

If you see (cv) preceding your prompt, you are working in the cv virtual environment. If not, enter the following:

source ~/.profile
workon cv

Since we were now fully working in our cv virtual environment, we needed to install a Python dependency called NumPy, the fundamental package for scientific computation in Python. This took only one line:

pip install numpy

From here, we were in the final stage: installing OpenCV itself. Up to this point, we had been installing dependencies that would support the OpenCV installation. We needed to make sure we were in the cv virtual environment (workon cv), otherwise the build would install incorrectly and crash. The process was as follows:

cd ~/opencv-3.1.0/
mkdir build
cd build
cmake -D CMAKE_BUILD_TYPE=RELEASE \
    -D CMAKE_INSTALL_PREFIX=/usr/local \
    -D INSTALL_PYTHON_EXAMPLES=OFF \
    -D OPENCV_EXTRA_MODULES_PATH=~/opencv_contrib-3.1.0/modules \
    -D BUILD_EXAMPLES=ON ..

The configured build was then compiled with make and installed with sudo make install. The installation took about five to six hours, so time and patience were key during this portion of the process.
The very last step in the installation was to create a symbolic link between the OpenCV bindings and our cv virtual environment for Python 2.7:

cd ~/.virtualenvs/cv/lib/python2.7/site-packages/
ln -s /usr/local/lib/python2.7/site-packages/cv2.so cv2.so

After this, we rebooted the system and entered the following to make sure our OpenCV build was complete and correctly installed:

source ~/.profile
workon cv
python
>>> import cv2
>>> cv2.__version__
'3.1.0'
>>>

Figure 4. Confirming the OpenCV installation by checking the installed version

Analysis of Resolved Issues with OpenCV
It took three tries to complete the installation. Twice, after nineteen hours, we got a failure at 82% that stopped the compilation. After running into these issues across various operating systems, our team eventually settled on Debian 8: Jessie Lite. Jessie is one version behind the most recent release of Debian, and the "Lite" designation means that this edition contains no graphical interface and is controlled purely through the command line.

Figure 5. Output from the OpenCV compile

However, this solution was not apparent at first. In fact, we went through about a month's worth of trials before figuring out the solution to our problems. One failed trial involved VisualGDB, a cross-platform compiler: we planned to install the VisualGDB add-ons within Visual Studio, compile OpenCV 3.1.0 on a standard Windows operating system, and then place the binaries onto the Pi Zero. After several fruitless tries in this environment, we decided to try other solutions.
Another failed trial involved trying to compile various versions of OpenCV on the original operating system, Stretch. The idea was to free up as much of the system as possible for the compilation of OpenCV, which meant minimizing all other processes and freeing memory. These trials failed because the variable we were overlooking was the operating system itself. After searching for many potential solutions, our team realized that running a lighter version of the operating system might be the answer. This is how we came across Debian 8: Jessie Lite. It is critically important to note that the error we kept hitting occurred during the compilation step, not the installation step. A lighter operating system is less taxing on the physical system than a heavier one like Debian 9: Stretch. With the removal of the GUI in the Lite editions, we could minimize the footprint of our entire system: the "Lite" version of Debian comes as bare-bones as possible in terms of software, allowing us to start with a clean slate and a minimal footprint.

Figure 6. Successful compilation of OpenCV

Testing the Live Feed
After installing OpenCV, the first thing we checked was that our camera still worked properly with the Raspberry Pi. Upon confirming that the camera could still take pictures, it was time to write some Python code to test the live video stream. It is important to remember that this video stream is generated through OpenCV, and it uses a PiCamera array to store the raw image data.
Once this test completes successfully, we will have confirmed that OpenCV is working properly and can capture and display a live video feed.

Figure 7. test_video.py

The first thing the video testing code does is import the necessary packages (lines 3-5, figure 7). If you have been following our documentation so far, you should already have PiCamera installed. From the picamera.array package, we specifically import the PiRGBArray class, which lets us store our color video capture (hence RGB) in an array designed for handling raw images. Last, and most important, we import cv2; this package contains all the OpenCV functions needed to complete this project.

Next, we initialize the camera and create a reference to the video capture (lines 8-11, figure 7). First, we create an instance of the PiCamera class, which our code simply calls camera. In the next few lines, we set the framerate and resolution. It is best to start these very low and work upward until you reach the limit of the hardware. Our team was able to show a video feed in high quality, but keep in mind that the system will be under more strain once the human-detection program is complete. On line 14, we let the system sleep for half a second, which reduces the processing strain on the hardware. After initializing our variables, we create a continuous for loop (lines 17-31, figure 7). Here we use camera.capture_continuous, part of the PiCamera package, which loops over the camera feed continuously until the loop is broken. One parameter passed to camera.capture_continuous is camCapture, which refers to our PiRGBArray, the array that holds the information for the video stream.
Also, the format="bgr" argument is passed to camera.capture_continuous. BGR is similar to RGB, the standard for color images, but we use BGR here because that is how OpenCV was designed to work with raw images. Next, we create an instance of our frame array called image. This is the parameter passed to cv2.imshow, which uses a GUI to display the video feed. Since we mostly accessed our Raspberry Pi through PuTTY, we used Xming as our GUI overlay. Xming is designed to forward graphical interfaces over SSH connections; it is used here because PuTTY has no GUI by default, but PuTTY in conjunction with an Xming server allows us to display our live video feed. The video streams continuously to the Xming client until a keypress breaks the loop; in this case, we used the letter "q" to quit the video streaming process. Additionally, if you are doing this over PuTTY as we have, you will need to make sure that your computer and Raspberry Pi are on the same network. Remember that in large networks, the wireless network is often separated from the wired network.

Figure 8. Results from test_video.py

Developing Code for Human Detection
Now that we have completed testing the video feed, we know that OpenCV is working properly and that we can successfully capture a live video feed with it. This is the foundation necessary to begin developing a program for human detection. It is important to note that this is the simplest program that can be created to detect human motion: a "bare-bones" human-detection program using OpenCV. As a result, the system will not be flawless. Now, let us look at lines 1-50 of the program used to detect human motion:

Figure 9. Lines 1-50.
Program for Human Detection

Other than a few newly imported packages, our program does not vary much from test_video.py until you get inside the camera.capture_continuous loop (lines 1-8, figure 9). There are, however, some important changes prior to that loop. The new packages are datetime and imutils. The first is self-explanatory, but imutils is not: imutils is a convenience package created by Adrian Rosebrock. It includes functions that let you achieve things more simply than using OpenCV on its own, such as easy-to-understand functions for rotating, resizing, sorting contours, and detecting edges. The functions Adrian has created are much more convenient than manipulating the images without the imutils package.

Now let us look at some new variables that are incredibly important when optimizing the system (lines 12-17, figure 9). The first, and possibly most important, is delta_thresh. The delta_thresh variable holds the minimum difference between the current frame and our running average frame for a pixel to count as changed; the changes depicted within the threshold image are these pixel changes. Essentially, the threshold image is where our program detects motion. Smaller values of delta_thresh allow more motion to be detected, and larger values allow less. The next important variable is min_area, the minimum area, in pixels, for which motion will be reported. This filters out small changes in the environment: the wind may blow leaves within the camera frame, but these changes go unnoticed by the program because small movements are filtered out. Smaller values of min_area allow more areas of the frame to be "marked for motion", and larger values allow fewer.

Figure 10. Lines 50-98.
Program for Human Detection

From here, we can skip down to the camera.capture_continuous loop, because the rest of the code prior to this point is nearly identical to the video testing code (lines 31-94, figures 9 and 10). This portion of the program is the largest and contains everything that allows for human detection. Let us start at line 40. We use the imutils package to resize the image to a width of 500 pixels. On line 41, we convert the frame to grayscale using OpenCV's cv2.cvtColor function, which, as you may have guessed, is a color converter. These steps are required before OpenCV can apply a Gaussian blur to the image. If you know anything about computer vision, you will know that the Gaussian blur is very common in image analysis: it reduces detail, but in doing so it also reduces noise within the image.

Figure 11. Grayscale image processed with Gaussian blur

Next, we look at lines 45-49. Here we create a gray copy of our feed to be the basis for comparison, stored in the variable named avg. Jumping down to lines 53 and 54, we use cv2.accumulateWeighted to compare gray and avg. gray is our image after grayscale conversion and Gaussian blur; avg is the basis for comparison and contains a running gray copy of the feed. The cv2.accumulateWeighted function calculates a weighted running average of the input image, gray, accumulated into avg. The cv2.absdiff function then generates an image that is the difference between the two images passed to it: the frame delta. The outputs from cv2.accumulateWeighted and cv2.absdiff are used as parameters to our cv2.threshold function. Thresholding in image analysis is a straightforward concept: if a pixel value is greater than the threshold value, the pixel receives a "1", a white dot in the case of an image.
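The running-average, frame-delta, and thresholding steps just described can be sketched in plain NumPy. This is a conceptual sketch of what the OpenCV calls compute, not the code from our program: the function names, the tiny 4x4 "frame", and the 0.5 accumulation weight are our own illustrative choices.

```python
import numpy as np

def update_average(avg, gray, weight=0.5):
    # Weighted running average, conceptually like cv2.accumulateWeighted:
    # avg = weight * gray + (1 - weight) * avg
    return weight * gray + (1.0 - weight) * avg

def frame_delta_threshold(avg, gray, delta_thresh=5):
    # Absolute per-pixel difference, conceptually like cv2.absdiff
    delta = np.abs(gray.astype(np.int16) - avg.astype(np.int16))
    # Binary threshold: pixels that changed by more than delta_thresh become 255
    return np.where(delta > delta_thresh, 255, 0).astype(np.uint8)

# A tiny 4x4 "frame" whose bottom-right 2x2 corner brightened sharply
avg = np.zeros((4, 4), dtype=np.float32)
gray = np.zeros((4, 4), dtype=np.float32)
gray[2:, 2:] = 200.0

thresh = frame_delta_threshold(avg, gray, delta_thresh=5)
avg = update_average(avg, gray)

print(thresh)  # 255 in the changed corner, 0 everywhere else
```

Raising delta_thresh in this sketch has the same effect described above: fewer pixels survive the threshold, so less motion is reported.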
The first argument of the cv2.threshold function is the source image, and the second is the threshold value. Our threshold value is delta_thresh, a variable we declared and discussed earlier in this document. Again, smaller values of delta_thresh allow more motion to be detected. The results of lines 53-63 can be seen in figure 12.

Figure 12. Frame delta

Next, we jump down to lines 66-69, which loop over the contours within the image. The contours are found in OpenCV using cv2.findContours. A contour is defined as a curve along the boundary of an object with the same color and intensity; contours are used in computer vision to recognize or categorize objects. In our program, if a contour's area is smaller than our min_area value, it is ignored. This filters out small contours within the image. If a contour's area is greater than min_area, the program recognizes it as detected motion.

Figure 13. Threshold image

The next step is to draw a box around the contour. Here we use cv2.boundingRect to draw a rectangle around the contour (lines 78-82). In addition, our team decided to put some text on the final output frame showing the motion detection status and the universal time. From here, the program continues to loop over this process until a key, q, is pressed to quit the loop (lines 92-94). After the break from the loop, we destroy all windows so that the GUI displayed in Xming closes cleanly and we return to the Raspberry Pi Zero command line.
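The min_area filter and the bounding rectangle can also be sketched without OpenCV. For the simplified case of a single white blob in a binary mask, the sketch below computes the same (x, y, w, h) rectangle that cv2.boundingRect would return for that blob's contour, and applies a min_area cutoff in pixels; the function name and the example mask are our own illustrative choices, and real cv2.findContours handles multiple blobs at once.

```python
import numpy as np

def bounding_rect(mask, min_area=4):
    """Return (x, y, w, h) around the white region of a binary mask,
    or None if the region covers fewer than min_area pixels.
    A single-blob stand-in for cv2.findContours + cv2.boundingRect."""
    ys, xs = np.nonzero(mask)          # row and column indices of white pixels
    if xs.size < min_area:             # too small: treat as noise, like min_area
        return None
    x, y = xs.min(), ys.min()
    w = xs.max() - x + 1
    h = ys.max() - y + 1
    return (int(x), int(y), int(w), int(h))

mask = np.zeros((10, 10), dtype=np.uint8)
mask[2:6, 3:8] = 255                   # a 4x5 region "marked for motion"

print(bounding_rect(mask))             # -> (3, 2, 5, 4)
print(bounding_rect(np.zeros((10, 10), dtype=np.uint8)))  # -> None (no motion)
```

In the real program, the rectangle returned for each sufficiently large contour is what cv2.rectangle draws onto the output frame.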
Deliverables

Description | Start | Finish | Responsibility | Status
Finish research and decide on a specific microcontroller | August | September | Jacaqueta, Michael | Complete
Practical interfacing with Raspberry Pi Zero; learning its capabilities and limitations | August | September | Jacaqueta | Complete
Decide on optical sensor for motion detection | August | September | Michael | Complete
Interfacing with optical sensor | September | October | Jacaqueta, Michael | Complete
Successfully capture images using Raspberry Pi Zero and camera | September | October | Jacaqueta | Complete
Install dependencies and prep system for OpenCV compilation | September | October | Michael | Complete
Successfully complete installation of OpenCV | October | November | Jacaqueta, Michael | Complete
Test camera interfacing options through OpenCV | October | November | Jacaqueta | Complete
Establish live feed for video streaming | October | November | Michael | Complete
Programming for motion detection | November | December | Jacaqueta, Michael | Complete
Setup wireless access | November | December | Michael | Complete
Finalize code and optimize system | November | December | Jacaqueta | Complete

References
Adafruit
Python 2.7
Requirements for OpenCV
Raspberry Pi Forums
Raspberry Pi Model Comparison
PiCamera Documentation