DRIVER SLEEP DETECTION USING IMAGE PROCESSING

A Project Report Submitted to the SCIS in Partial Fulfillment and Completion of the AOS Course in the Degree of Master of Technology in Artificial Intelligence

By
Rishav Paudel (19MCMI23)
Krishna Shah (19MCMI35)

School of Computer and Information Sciences
University of Hyderabad
Gachibowli, Hyderabad - 500 046
Telangana, India

June 2019

CERTIFICATE

This is to certify that the thesis entitled "Driver Sleep Detection using Image Processing", submitted by Rishav Paudel and Krishna Shah, bearing Reg. Nos. 19MCMI23 and 19MCMI35 respectively, in partial fulfillment of the requirements for the award of Master of Technology in Artificial Intelligence, is a bonafide work carried out by them under my supervision and guidance. The thesis has not been submitted previously, in part or in full, to this or any other university or institution for the award of any degree or diploma.

M. Nagamani, Supervisor, School of CIS, University of Hyderabad
Prof. Kavi Narayana Murthy, Dean, School of CIS, University of Hyderabad

ACKNOWLEDGEMENTS

A year-long effort, rummaging the trails of research, has finally come to a close. Let me take this opportunity to remember and thank those who made this possible. This has been a transformative period in my life, and a substantial part of it can be attributed to the influence of one person: my project guide, M. Nagamani. I thank her for all the gems I gathered just by being in her vicinity. I am grateful to our Dean, Prof. Kavi Narayana Murthy, for providing the required research environment, and especially for allowing 24/7 access to the lab facilities. I am also thankful to our Image Processing faculty, Mr. Arun Agrawal, for his help in academic and administrative work. The thesis in your hand is the result of weeks of endless toil.
While caffeine was a regular companion through those sleepless nights, there were people willing to extend help. Let me also remember my friends for all the days we spent together in the labs. Finally, the three people in my life without whom this would not have happened in the first place: my mom, dad and sisters.

19MCMI23 RISHAV PAUDEL
19MCMI35 KRISHNA SHAH

ABSTRACT

Drivers who do not take regular breaks when driving long distances run a high risk of becoming drowsy, a state which, according to experts, they often fail to recognize early enough. The system uses a small security camera that points directly towards the driver's face and monitors the driver's eyes in order to detect fatigue. When fatigue is detected, a warning signal is issued to alert the driver. The system uses information obtained from the image to find the edges of the face, which narrows down the area where the eyes may exist. Once the face area is found, the eyes are located by computing the horizontal averages in that area, exploiting the fact that eye regions present large intensity changes within the face. Once the eyes are located, measuring the distance between the intensity changes in the eye area determines whether the eyes are open or closed: a large distance corresponds to eye closure. If the eyes are found closed for 5 consecutive frames, the system concludes that the driver is falling asleep and issues a warning signal.
The system is also able to detect when the eyes cannot be found, and works under reasonable lighting conditions.

Contents

Chapter 1: Introduction
  1.1 Abstract
  1.2 Scope
  1.3 Purpose
  1.4 Tools and Technologies
Chapter 2: Literature Review
  2.1 Project Planning and Scheduling
Chapter 3: Theory
  3.1 User Characteristics
  3.2 Hardware and Software Requirements
  3.3 Intended Use
  3.4 System Requirements
Chapter 4: Research Design
  4.1 Feasibility Study
  4.2 Functions of System
    4.2.1 Use Case Diagram
  4.3 Data Modeling
    4.3.1 Sequence Diagram
  4.4 Functions and Behavior Modeling
    4.4.1 System Flow Diagram
Chapter 5: Analysis
  5.1 Techniques for Detecting Drowsiness
  5.2 System Configuration
  5.3 Eye Detection Function
  5.4 Drowsiness Detection Function
  5.5 Test Cases
  5.6 Code
Chapter 6: Conclusion

List of Figures

Figure 4.1 Use Case Diagram
Figure 4.2 Sequence Diagram
Figure 4.3 System Flow Diagram
Figure 5.1 Detecting the Eyes
Figure 5.2 Algorithm for Eye Blink
Figure 5.3 Face Top and Width Detection
Figure 5.4 Histogram of Eyes
Figure 5.5 Algorithm Implementation
Figure 5.6 Block Diagram of System
Figure 5.7 Spiral Model
Figure 5.8 Awake in High Light
Figure 5.9 Awake in Low Light
Figure 5.10 Slept in Low Light
Figure 5.11 Awake in Mid Light
Figure 5.12 Slept in Mid Light
Figure 5.13 Slept in High Light

Chapter 1: Introduction

1.1 Abstract:

Drivers who do not take regular breaks when driving long distances run a high risk of becoming drowsy, a state which, according to experts, they often fail to recognize early enough. The system uses a small security camera that points directly towards the driver's face and monitors the driver's eyes in order to detect fatigue. When fatigue is detected, a warning signal is issued to alert the driver. The system uses information obtained from the image to find the edges of the face, which narrows down the area where the eyes may exist.
Once the face area is found, the eyes are located by computing the horizontal averages in that area, exploiting the fact that eye regions present large intensity changes within the face. Once the eyes are located, measuring the distance between the intensity changes in the eye area determines whether the eyes are open or closed: a large distance corresponds to eye closure. If the eyes are found closed for 5 consecutive frames, the system concludes that the driver is falling asleep and issues a warning signal. The system is also able to detect when the eyes cannot be found, and works under reasonable lighting conditions.

1.2 Scope:

Studies show that around one quarter of all serious motorway accidents are attributable to sleepy drivers in need of a rest, meaning that drowsiness causes more road accidents than drink-driving. Driver sleep detection is a car safety technology which helps prevent accidents when the driver is getting drowsy. The analysis of face images is a popular research area with applications such as face recognition, virtual tools, and human identification security systems. This project focuses on the localization of the eyes: looking at the entire image of the face and determining the position of the eyes with a self-developed image-processing algorithm. Once the position of the eyes is located, the system determines whether the eyes are open or closed, and detects fatigue.

1.3 Purpose:

Driver fatigue is a significant factor in a large number of vehicle accidents. The development of technologies for detecting or preventing drowsiness at the wheel is a major challenge in the field of accident avoidance systems. The aim of this project is to develop a prototype sleep detection system.
The focus is on designing a system that will accurately monitor the open or closed state of the driver's eyes in real time. By monitoring the eyes, it is believed that the symptoms of driver fatigue can be detected early enough to avoid a car accident. Detection of fatigue involves a sequence of images of a face and the observation of eye movements and blink patterns.

1.4 Tools and Technologies:

Language: Python
IDE: PyCharm
Other technologies: Image Processing
Libraries: OpenCV, Pygame, Dlib, NumPy, imutils, scipy.spatial, threading

Chapter 2: Literature Review

2.1 Project Planning and Scheduling:

Project Development Approach:
- Creating a lightweight and sleek design.
- Creating a logical flow for the system.
- Modularizing the system.
- Merging the modules to act as a whole system.

Project Plan:
- Analysis of the system.
- Organizing the project.
- Hardware and software requirements of the system.
- Implementation of the task.
- Monitoring and reporting mechanism.

The Basic Idea:

A video camera placed inside the car continuously films the driver's face during the ride. A detection system analyses the video frame by frame and determines whether the driver's eyes are open or shut.
If the eyes are shut for more than 1/4 of a second (longer than a normal blink period), the system beeps to alert the driver.

Chapter 3: Theory

3.1 User Characteristics:

Driver: the driver is the end user of the system, whose eyes are monitored while driving and who is alerted by the alarm.
Admin: the admin manages all the functionalities of the system.

3.2 Hardware and Software Requirements:

Hardware requirements: camera, buzzer.
Software requirements: Windows or Linux.

3.3 Intended Use:

Driver safety is nowadays one of the most sought-after features in a car for avoiding accidents, and the objective of this project is to provide such a safety system. To enhance safety, we detect the driver's eye blinks, estimate the driver's state, and trigger the alarm system accordingly. This report describes how to find the eyes, and also how to determine whether the eyes are open or closed.

3.4 System Requirements:

The requirements for an effective drowsy driver detection system are as follows:
- A monitoring system that will not distract the driver.
- A real-time monitoring system, to ensure accuracy in detecting drowsiness.
- A system that will work in both daytime and nighttime conditions.

The above requirements are also the aims of this project. The project will consist of a concept-level system that meets all of them.

Chapter 4: Research Design

4.1 Feasibility Study:

4.1.1 Technical Feasibility: There are no technical hazards in the development and implementation of this application. All the required tools and technologies fulfilling the basic hardware and software requirements are available.

4.1.2 Economical Feasibility: This project is economically feasible.
The tools and technologies used are free and open source.

4.1.3 Operational Feasibility: After conducting a small survey of the possible users of this application, it appears that they are quite willing to use it, both for its core function and to view its other features and information.

4.2 Functions of System:

4.2.1 Use Case Diagram:

Figure 4.1 Use Case Diagram

4.3 Data Modeling:

4.3.1 Sequence Diagram:

Figure 4.2 Sequence Diagram

4.4 Functions and Behavior Modeling:

4.4.1 System Flow Diagram:

Figure 4.3 System Flow Diagram

Chapter 5: Analysis

5.1 Techniques for Detecting Drowsiness:

Possible techniques for detecting drowsiness in drivers can be broadly divided into the following categories: sensing of physiological characteristics, sensing of driver operation, sensing of vehicle response, and monitoring the response of the driver. A video camera placed inside the car continuously films the driver's face during the ride. A detection system analyses the video frame by frame and determines whether the driver's eyes are open or shut. If the eyes are shut for more than 1/4 of a second (longer than a normal blink period), the system beeps to alert the driver.

5.2 System Configuration:

Background and Ambient Light: Because the eye tracking system is based on intensity changes on the face, it is crucial that the background does not contain any object with strong intensity changes. A highly reflective object behind the driver can be picked up by the camera and consequently mistaken for the eyes. Since this design is a prototype, a controlled lighting area was set up for testing. Low surrounding (ambient) light is also important, since the only significant light illuminating the face should come from the drowsy driver system. If there is a lot of ambient light, the effect of the light source diminishes.
The testing area included a black background and low ambient light (in this case, the ceiling light was physically high and hence provided low illumination). This setup is somewhat realistic, since inside a vehicle there is no direct light and the background is fairly uniform.

Camera: The drowsy driver detection system consists of a CCD camera that takes images of the driver's face. This type of drowsiness detection system is based on image processing technology that can accommodate individual driver differences. The camera is placed in front of the driver, approximately 30 cm away from the face, and must be positioned such that the following criteria are met:
1. The driver's face takes up the majority of the image.
2. The driver's face is approximately in the center of the image.

Light Source: For conditions when ambient light is poor (night time), a light source must be present to compensate. Initially, an infrared light source built from infrared LEDs was going to be used, but it was later found that at least 50 LEDs would be needed to create a source able to illuminate the entire face. To cut down cost, a simple desk light was used instead. The desk light alone could not work, since its bright light is blinding if looked at directly and so could not be pointed at the face. However, light from light bulbs, and even daylight, contains infrared light; using this fact, it was decided that placing an infrared filter over the desk lamp would protect the eyes from a strong and distracting light while still providing light strong enough to illuminate the face.
A wideband infrared filter was placed over the desk lamp, providing an excellent method of illuminating the face.

5.3 Eye Detection Function:

Figure 5.1 Detecting the Eyes

An explanation of the eye detection procedure is given here:
1. After inputting a facial image, pre-processing is first performed by binarizing the image.
2. The top and sides of the face are detected to narrow down the area in which the eyes exist.
3. Using the sides of the face, the center of the face is found, which will be used as a reference when comparing the left and right eyes.
4. Moving down from the top of the face, horizontal averages (the average intensity value for each y coordinate) of the face area are calculated. Large changes in the averages are used to define the eye area.

The following explains the eye detection procedure in the order of the processing operations. All images were generated in MATLAB using the Image Processing Toolbox.

Algorithm for eye-blink detection:

Figure 5.2 Algorithm for Eye Blink

Binarization: The first step in localizing the eyes is binarizing the picture, i.e. converting the image to a binary image. A binary image is an image in which each pixel assumes only one of two discrete values; in this case the values are 0 and 1, with 0 representing black and 1 representing white. With the binary image it is easy to distinguish objects from the background. The greyscale image is converted to a binary image via thresholding: the output binary image has the value 0 (black) for all pixels in the original image with luminance less than the threshold level, and 1 (white) for all other pixels. Thresholds are often determined based on surrounding lighting conditions and the complexion of the driver. After observing many images of different faces under various lighting conditions, a threshold value of 150 was found to be effective.
The criterion used in choosing the correct threshold was that the binary image of the driver's face should be mostly white, allowing a few black blobs from the eyes, nose and/or lips.

Face Top and Width Detection:

Figure 5.3 Face Top and Width Detection

The next step in the eye detection function is determining the top and sides of the driver's face. This is important because finding the outline of the face narrows down the region in which the eyes lie, which makes it computationally easier to localize the position of the eyes. The first step is to find a starting point on the face, followed by decrementing the y-coordinates until the top of the face is detected. Assuming that the person's face is approximately in the center of the image, the initial starting point used is (100, 240). The starting x-coordinate of 100 was chosen to ensure that the starting point is a black pixel (not on the face). The following algorithm describes how to find the actual starting point on the face, which will be used to find the top of the face:

1. Starting at (100, 240), increment the x-coordinate until a white pixel is found. This is considered the left side of the face.
2. If the initial white pixel is followed by 25 more white pixels, keep incrementing x until a black pixel is found.
3. Count the number of black pixels following the pixel found in step 2; if a series of 25 black pixels is found, this is the right side.
4. The new starting x-coordinate value (x1) is the midpoint of the left side and the right side.

Once the top of the driver's head is found, the sides of the face can also be found. Below are the steps used to find the left and right sides of the face:

1. Increment the y-coordinate of the top (found above) by 10. Label this y1 = top + 10.
2. Find the center of the face using the following steps:
   i. At point (x1, y1), move left until 25 consecutive black pixels are found; this is the left side (lx).
   ii.
At point (x1, y1), move right until 25 consecutive black pixels are found; this is the right side (rx).
   iii. The center of the face (in the x-direction) is (lx + rx)/2. Label this x2.
3. Starting at the point (x2, y1), find the top of the face again. This results in a new y-coordinate, y2.
4. Finally, the edges of the face can be found using the point (x2, y2).

Removal of Noise: The removal of noise in the binary image is straightforward. Starting at the top, (x2, y2), move left one pixel at a time by decrementing x2, and set each y value to white (for 200 y values). Repeat the same for the right side of the face. The key is to stop at the left and right edges of the face; otherwise the information about where the edges of the face are will be lost.

Finding Intensity Changes on the Face: The next step in locating the eyes is finding the intensity changes on the face. This is done using the original image, not the binary image. The first step is to calculate the average intensity for each y-coordinate. This is called the horizontal average, since the averages are taken along the horizontal direction. The valleys (dips) in the plot of the horizontal averages indicate intensity changes. When the horizontal averages were initially plotted, it was found that there were many small valleys, which do not represent intensity changes but rather result from small differences in the averages. To correct this, a smoothing algorithm was implemented. The smoothing algorithm eliminates any small changes, resulting in a smoother, cleaner graph. After obtaining the horizontal average data, the next step is to find the most significant valleys, which indicate the eye area.
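The horizontal-average and valley-finding steps can be sketched in a few lines of NumPy. The synthetic face, the V-shaped dips and the smoothing window of 5 are all assumptions made for illustration, since the report does not give its smoothing parameters:

```python
import numpy as np

# Synthetic face image: uniform bright rows with two V-shaped dark bands,
# standing in for the eyebrow (around row 31) and the upper eye edge (row 46).
face = np.full((100, 60), 220.0)
for offset, value in [(-2, 180), (-1, 120), (0, 80), (1, 120), (2, 180)]:
    face[31 + offset, :] = value  # "eyebrow" dip
for offset, value in [(-2, 180), (-1, 120), (0, 100), (1, 120), (2, 180)]:
    face[46 + offset, :] = value  # "eye" dip

# Horizontal averages: the mean intensity for each y-coordinate (each row).
row_avg = face.mean(axis=1)

# Moving-average smoothing; the window size of 5 is an assumed parameter.
smooth = np.convolve(row_avg, np.ones(5) / 5, mode="same")

# Valleys: points where the slope changes from negative to positive.
slope = np.diff(smooth)
valleys = [i + 1 for i in range(len(slope) - 1)
           if slope[i] < 0 and slope[i + 1] > 0]

print(valleys)  # [31, 46]: the eyebrow first, then the upper eye edge
```

The two detected valleys match the expected order described next: the first significant intensity change is the eyebrow, the second is the upper edge of the eye.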
Assuming that the person has a uniform forehead (i.e., little hair covering the forehead), this is based on the notion that, moving down from the top of the head, the first significant intensity change is the eyebrow and the second is the upper edge of the eye. Valleys are found where the slope changes from negative to positive, and peaks where the slope changes from positive to negative. The size of a valley is determined by the distance between the peak and the valley. Once all the valleys are found, they are sorted by their size.

5.4 Drowsiness Detection Function:

Figure 5.4 Histogram of Eyes

Determining the State of the Eyes: The state of the eyes (open or closed) is determined by the distance between the first two intensity changes found in the step above. When the eyes are closed, the distance between the y-coordinates of the intensity changes is larger than when the eyes are open.

Algorithm Implementation:

Figure 5.5 Algorithm Implementation

The real-time system includes a few more functions when monitoring the driver, in order to make the system more robust. There is an initialization stage in which, for the first 4 frames, the driver's eyes are assumed to be open, and the distance between the y-coordinates where the intensity changes occur is set as a reference. After the initialization stage, the distances calculated are compared with the reference found during initialization. If a larger distance is found (a difference of between 5 and 80 pixels), the eye is determined to be closed.

Another addition to the real-time system is a comparison of the left and right eyes. The left and right eyes are found separately, and their positions are compared. Assuming that the driver's head is not tilted, the y-coordinates of the left and right eye should be approximately the same.
If they are not, the system determines that the eyes have not been found and outputs a message indicating that it is not monitoring the eyes; it then continues to try to find the eyes. This addition is also useful in cases where the driver is out of the camera's sight, in which case the system should indicate that no drowsiness monitoring is taking place. If the eyes are not found for 5 or more consecutive frames, an alarm goes off. This accounts for the case where the driver's head has dropped down completely, and hence an alarm is needed to alert the driver.

Block Diagram of System:

Figure 5.6 Block Diagram of System

Conclusion: A non-invasive system to localize the eyes and monitor fatigue was developed. Information about the head and eye positions is obtained through various self-developed image processing algorithms. During monitoring, the system is able to decide whether the eyes are open or closed. When the eyes have been closed for too long, a warning signal is issued. In addition, during monitoring, the system is able to automatically detect any eye localization error that might have occurred.
In case of such an error, the system is able to recover and properly localize the eyes. The following conclusions were drawn:
- Image processing achieves highly accurate and reliable detection of drowsiness.
- Image processing offers a non-invasive approach to detecting drowsiness, without annoyance or interference to the driver.
- A sleep detection system built around image processing judges the driver's alertness level on the basis of continuous eye closures.

5.5 Test Cases

Software testing is the process of verifying the correctness of software by considering all of its attributes (reliability, scalability, portability, re-usability, usability) and evaluating the execution of software components to find bugs, errors or defects. Software testing provides an independent and objective view of the software and gives assurance of its fitness. It involves testing all components under the required services to confirm whether they satisfy the specified requirements. The process also provides the client with information about the quality of the software. Testing is mandatory, because it would be dangerous if the software failed at any time due to a lack of testing; without testing, software cannot be deployed to the end user. Software testing also helps to identify errors, gaps or missing requirements relative to the actual requirements. It can be done either manually or using automated tools, and is often divided into white box and black box testing. Testing is important because software bugs can be expensive or even dangerous: they can potentially cause monetary and human loss, and history is full of such examples.

Typically, testing is classified into three categories:
1. Functional Testing
2. Non-Functional Testing or Performance Testing
3.
Maintenance Testing (regression and maintenance)

In simple terms, software testing means verification of the Application Under Test (AUT). Various models or approaches are used in the software development process, each with its own advantages and disadvantages; choosing a particular model depends on the project deliverables and the complexity of the project:
1. Waterfall Model
2. V Model
3. Agile Model
4. Spiral Model
5. Rapid Application Development

Spiral Model: The spiral model is a risk-driven software development process model. Based on the unique risk patterns of a given project, the spiral model guides a team to adopt elements of one or more process models, such as incremental, waterfall, or evolutionary prototyping. The spiral model is a combination of an iterative development process model and a sequential linear development model (i.e., the waterfall model) with a very high emphasis on risk analysis. It allows incremental releases of the product, or incremental refinement through each iteration around the spiral. The spiral model has four phases, through which a software project passes repeatedly in iterations called spirals:
1. Identification
2. Design
3. Construct or Build
4.
Evaluation and Risk Analysis

Life Cycle of the Spiral Model:

Figure 5.7 Spiral Model

Test case 1: Driver is awake in high light
Figure 5.8 Awake in High Light

Test case 2: Driver is awake in low light
Figure 5.9 Awake in Low Light

Test case 3: Driver is sleeping in low light
Figure 5.10 Slept in Low Light

Test case 4: Driver is awake in mid light
Figure 5.11 Awake in Mid Light

Test case 5: Driver is sleeping in mid light
Figure 5.12 Slept in Mid Light

Test case 6: Driver is sleeping in high light
Figure 5.13 Slept in High Light

5.6 Code:

    import threading

    import cv2
    import dlib
    import numpy as np
    import pygame as pg
    from imutils import face_utils
    from scipy.spatial import distance as dist

    # Haar cascades for face and eye detection (local cascade files)
    face_cascade = cv2.CascadeClassifier('harcascadeface.xml')
    eye_cascade = cv2.CascadeClassifier('harcascadeeye.xml')


    def start_sound():
        # Play the alarm sound once (called from a daemon thread)
        pg.mixer.init()
        pg.mixer.music.load("alarm.wav")
        pg.mixer.music.play()


    def resize(img, width=None, height=None, interpolation=cv2.INTER_AREA):
        # Resize while preserving aspect ratio; the global `ratio` is kept
        # so landmark coordinates can be mapped back to the original frame.
        global ratio
        h, w = img.shape[:2]
        if width is None and height is None:
            return img
        if width is None:
            ratio = height / h
            dim = (int(w * ratio), height)
        else:
            ratio = width / w
            dim = (width, int(h * ratio))
        return cv2.resize(img, dim, interpolation=interpolation)


    def shape_to_np(shape, dtype="int"):
        # Convert the dlib landmark object to a NumPy array; only
        # landmarks 36-47 (the two eyes) are actually filled in.
        coords = np.zeros((68, 2), dtype=dtype)
        for i in range(36, 48):
            coords[i] = (shape.part(i).x, shape.part(i).y)
        return coords


    def eye_aspect_ratio(eye):
        # Euclidean distances between the two pairs of vertical eye landmarks
        A = dist.euclidean(eye[1], eye[5])
        B = dist.euclidean(eye[2], eye[4])
        # Euclidean distance between the horizontal eye landmarks
        C = dist.euclidean(eye[0], eye[3])
        # Eye aspect ratio (EAR)
        return (A + B) / (2.0 * C)


    camera = cv2.VideoCapture(0)
    predictor_path = 'shape_predictor_68_face_landmarks.dat_2'
    detector = dlib.get_frontal_face_detector()
    predictor = dlib.shape_predictor(predictor_path)
    (lStart, lEnd) = face_utils.FACIAL_LANDMARKS_IDXS["left_eye"]
    (rStart, rEnd) = face_utils.FACIAL_LANDMARKS_IDXS["right_eye"]

    total = 0
    alarm = False

    while True:
        ret, frame = camera.read()
        if not ret:
            print('Failed to capture frame from camera. '
                  'Check the camera index in cv2.VideoCapture(0)')
            break

        # Haar-cascade face and eye detection, drawn onto the frame
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, 1.3, 5)
        for (i, j, k, l) in faces:
            cv2.rectangle(frame, (i, j), (i + k, j + l), (255, 255, 255), 2)
            roi_gray = gray[j:j + l, i:i + k]
            roi_color = frame[j:j + l, i:i + k]
            for (ei, ej, ek, el) in eye_cascade.detectMultiScale(roi_gray):
                cv2.rectangle(roi_color, (ei, ej),
                              (ei + ek, ej + el), (255, 255, 255), 2)

        frame_resized = resize(gray, width=200)

        # Ask the dlib detector to find the bounding boxes of each face.
        # The second argument of 1 upsamples the image once, which makes
        # everything bigger and allows more faces to be detected.
        dets = detector(frame_resized, 1)
        for k, d in enumerate(dets):
            shape = shape_to_np(predictor(frame_resized, d))
            leftEye = shape[lStart:lEnd]
            rightEye = shape[rStart:rEnd]
            ear = (eye_aspect_ratio(leftEye) + eye_aspect_ratio(rightEye)) / 2.0

            cv2.drawContours(frame, [cv2.convexHull(leftEye)], -1, (0, 255, 0), 1)
            cv2.drawContours(frame, [cv2.convexHull(rightEye)], -1, (0, 255, 0), 1)

            if ear > 0.25:
                # Eyes open: reset the consecutive-closed counter
                total = 0
                alarm = False
                cv2.putText(frame, "Driver awake", (10, 30),
                            cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)
            else:
                total += 1
                if total > 5:
                    if not alarm:
                        alarm = True
                        t = threading.Thread(target=start_sound)
                        t.daemon = True
                        t.start()
                        print("Drowsiness detected")
                    cv2.putText(frame, "Drowsiness detected", (250, 30),
                                cv2.FONT_HERSHEY_SIMPLEX, 1.7, (0, 0, 0), 4)
                cv2.putText(frame, "Driver slept", (10, 30),
                            cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)

            # Draw the landmarks back on the full-size frame
            for (x, y) in shape:
                cv2.circle(frame, (int(x / ratio), int(y / ratio)),
                           1, (255, 255, 255), -1)

        cv2.imshow("img", frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            cv2.destroyAllWindows()
            camera.release()
            break

Chapter 6: Conclusion

Drowsiness detection was implemented in Python, comprising the following steps: successful runtime capture of video with the camera; division of the captured video into frames, with each frame analyzed; and successful detection of the face, followed by detection of the eyes. If eye closure is detected for successive frames, the state is classified as sleepy; otherwise it is regarded as a normal blink, and the loop of capturing images and analyzing the driver's state is carried out again. In this implementation, during the drowsy state the eye is not surrounded by a square (it is not detected) and a corresponding message is shown; if the driver is not drowsy, the eye is identified.
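The alarm decision in the code reduces to a small piece of state logic: compare the eye aspect ratio (EAR) against the 0.25 threshold and count consecutive closed frames. A minimal, self-contained sketch of just that logic, with an invented EAR stream for illustration:

```python
# The threshold of 0.25 and the count of 5 frames follow the report's code.
EAR_THRESHOLD = 0.25
CONSEC_FRAMES = 5

def update_state(ear, closed_count):
    """Return (new_closed_count, alarm) for one frame's EAR reading."""
    if ear > EAR_THRESHOLD:      # eyes open: reset the counter
        return 0, False
    closed_count += 1            # eyes closed this frame
    return closed_count, closed_count > CONSEC_FRAMES

# Simulated EAR stream: a normal two-frame blink, then a long closure.
stream = [0.30, 0.20, 0.20, 0.31, 0.32] + [0.18] * 8

count, alarms = 0, []
for ear in stream:
    count, alarm = update_state(ear, count)
    alarms.append(alarm)

print(alarms.index(True))  # 10: the sixth consecutive closed frame fires the alarm
```

Note that the short blink never triggers the alarm, because the counter resets as soon as one open-eye frame is seen; only a sustained closure of more than 5 frames does.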
Future Work

This project can be extended to vehicle automation: detecting accidents using various sensors and reporting them, along with location coordinates, to the nearest police station and hospital, so that the person involved can be rescued by the nearest rescue team. This would be especially valuable in remote areas at night, where a lack of timely rescue can be fatal. We therefore plan to extend this project using IoT and implement the full system as vehicle automation combining driver sleep detection and accident detection with alerts.