


CHAPTER 1 INTRODUCTION

1.1 IMPORTANCE OF FACE RECOGNITION

The information age is quickly revolutionizing the way transactions are completed. Everyday actions are increasingly handled electronically rather than with pencil and paper or face to face. This growth in electronic transactions has created a greater demand for fast and accurate user identification and authentication. Access to buildings, bank accounts, and computer systems often relies on PINs for identification and security clearance. Entering the correct PIN grants access, but the user of the PIN is not verified. When credit and ATM cards are lost or stolen, an unauthorized user can often come up with the correct personal codes. Despite warnings, many people continue to choose easily guessed PINs and passwords: birthdays, phone numbers, and social security numbers. Recent cases of identity theft have heightened the need for methods to prove that someone is truly who he or she claims to be. Face recognition technology may solve this problem, since a face is undeniably connected to its owner, except in the case of identical twins, and it is not transferable. The system can compare scans against records stored in a central or local database, or even on a smart card. The need for user-friendly systems that can secure our assets and protect our privacy without losing our identity in a sea of numbers is obvious.

1.2 FACE RECOGNITION TECHNOLOGY

1.2.1 INTRODUCTION TO BIOMETRICS

A biometric is a unique, measurable characteristic of a human being that can be used to automatically recognize an individual or verify an individual's identity. Biometrics can measure both physiological and behavioral characteristics: "any automatically measurable, robust and distinctive physical characteristic or personal trait that can be used to identify an individual or verify the claimed identity of an individual." This definition requires elaboration. Measurable means that the characteristic or trait can be easily presented to a sensor, located by it, and converted into a quantifiable, digital format. This measurability allows matching to occur in a matter of seconds and makes it an automated process. Biometric identifiers are the distinctive, measurable characteristics used to label and describe individuals, and they are often categorized as physiological versus behavioral. Physiological characteristics are related to the shape of the body; examples include, but are not limited to, fingerprint, palm veins, face, DNA, palm print, hand geometry, iris, retina, and odor/scent. Behavioral characteristics are related to the pattern of behavior of a person, including but not limited to typing rhythm and gait.
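The distinction between verifying a claimed identity (one-to-one matching) and identifying an unknown individual (one-to-many search) recurs throughout this report. The following Java sketch illustrates the difference; the BiometricMatcher class, the similarity() function, and the threshold values are hypothetical placeholders introduced only for this example and are not part of any SDK used later.

    import java.util.HashMap;
    import java.util.Map;

    // Minimal sketch contrasting biometric verification (1:1) with identification (1:N).
    // The similarity measure and thresholds are illustrative assumptions only.
    public class BiometricMatcher {
        private final Map<String, double[]> enrolled = new HashMap<>(); // userId -> stored template

        public void enroll(String userId, double[] template) {
            enrolled.put(userId, template);
        }

        // Verification: does the live sample match the template of the claimed user?
        public boolean verify(String claimedId, double[] sample, double threshold) {
            double[] stored = enrolled.get(claimedId);
            return stored != null && similarity(stored, sample) >= threshold;
        }

        // Identification: search all enrolled templates for the best match above the threshold.
        public String identify(double[] sample, double threshold) {
            String bestId = null;
            double bestScore = threshold;
            for (Map.Entry<String, double[]> e : enrolled.entrySet()) {
                double s = similarity(e.getValue(), sample);
                if (s >= bestScore) {
                    bestScore = s;
                    bestId = e.getKey();
                }
            }
            return bestId; // null means "rejected / unknown"
        }

        // Toy similarity: Euclidean distance mapped to (0, 1]. A real system would use
        // features produced by the face recognition pipeline described later.
        private double similarity(double[] a, double[] b) {
            double d = 0;
            for (int i = 0; i < a.length; i++) d += (a[i] - b[i]) * (a[i] - b[i]);
            return 1.0 / (1.0 + Math.sqrt(d));
        }
    }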
CHAPTER 2 LITERATURE SURVEY

2.1 NEUROSCIENCE ISSUES RELEVANT TO FACE RECOGNITION

Human recognition processes utilize a broad spectrum of stimuli, obtained from many, if not all, of the senses (visual, auditory, olfactory, tactile, etc.). In many situations contextual knowledge is also applied; for example, surroundings play an important role in recognizing faces in relation to where they are supposed to be located. It is futile to attempt to develop a system, using existing technology, that mimics the remarkable face recognition ability of humans. However, the human brain has its limitations in the total number of persons that it can accurately "remember." A key advantage of a computer system is its capacity to handle large numbers of face images. In most applications the images are available only in the form of single or multiple views of 2D intensity data, so the inputs to computer face recognition algorithms are visual only. For this reason, the literature reviewed in this section is restricted to studies of human visual perception of faces. Many studies in psychology and neuroscience have direct relevance to engineers interested in designing algorithms or systems for machine recognition of faces.

2.2 FACE RECOGNITION FROM STILL IMAGES

As illustrated in Figure 1, the problem of automatic face recognition involves three key subtasks: (1) detection and rough normalization of faces, (2) feature extraction and accurate normalization of faces, and (3) identification and/or verification. These subtasks are not always totally separate. For example, the facial features (eyes, nose, mouth) used for face recognition are often also used in face detection, so face detection and feature extraction can be achieved simultaneously, as indicated in Figure 1. Depending on the nature of the application (for example, the sizes of the training and testing databases, clutter and variability of the background, noise, occlusion, and speed requirements), some of the subtasks can be very challenging. Though a fully automatic face recognition system must perform all three subtasks, research on each individual subtask is critical, not only because the techniques used for each subtask need to be improved, but also because each subtask is critical in many other applications.
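The three subtasks can be pictured as stages of a pipeline. The Java sketch below shows one way such a pipeline might be wired together; the interfaces and the FaceRegion and FeatureVector types are hypothetical and are not part of the Qualcomm SDK used later in this report.

    import java.util.List;
    import java.util.Optional;

    // Hypothetical types standing in for detector output and extracted features.
    class FaceRegion { int x, y, width, height; }
    class FeatureVector { double[] values; }

    interface FaceDetector     { List<FaceRegion> detect(byte[] image); }              // subtask (1)
    interface FeatureExtractor { FeatureVector extract(byte[] image, FaceRegion r); }  // subtask (2)
    interface FaceMatcher      { Optional<String> identify(FeatureVector features); }  // subtask (3)

    // Sketch of a still-image recognition pipeline built from the three subtasks.
    class StillImagePipeline {
        private final FaceDetector detector;
        private final FeatureExtractor extractor;
        private final FaceMatcher matcher;

        StillImagePipeline(FaceDetector d, FeatureExtractor e, FaceMatcher m) {
            this.detector = d;
            this.extractor = e;
            this.matcher = m;
        }

        // Returns the identity of the first recognized face, if any.
        Optional<String> recognize(byte[] image) {
            for (FaceRegion region : detector.detect(image)) {             // detection and rough normalization
                FeatureVector features = extractor.extract(image, region); // feature extraction
                Optional<String> id = matcher.identify(features);          // identification / verification
                if (id.isPresent()) return id;
            }
            return Optional.empty();
        }
    }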
2.3 FACE RECOGNITION FROM IMAGE SEQUENCES

A typical video-based face recognition system automatically detects face regions, extracts features from the video, and, if a face is present, recognizes its identity. Face recognition and identification from a video sequence is an important problem in surveillance, information security, and access control applications. Video-based recognition is preferable to using still images because, as has been demonstrated, motion helps in the recognition of (familiar) faces even when the images are negated, inverted, or thresholded. It has also been demonstrated that humans can recognize animated faces better than randomly rearranged images from the same set. Though recognition of faces from video sequences is a direct extension of still-image-based recognition, in our opinion true video-based face recognition techniques that coherently use both spatial and temporal information emerged only a few years ago and still need further investigation. Significant challenges for video-based recognition still exist.

2.4 EVALUATION OF FACE RECOGNITION SYSTEMS

Given the numerous theories and techniques that are applicable to face recognition, evaluation and benchmarking of these algorithms is crucial. Previous work on the evaluation of OCR and fingerprint classification systems provides insight into how the evaluation of algorithms and systems can be performed efficiently. One of the most important lessons from these evaluations is that large sets of test images are essential for adequate evaluation. It is also extremely important that the samples be statistically as similar as possible to the images that arise in the application being considered. Scoring should be done in a way that reflects the costs of recognition errors, and reject error behavior should be studied, not just forced recognition. In planning an evaluation, it is important to keep in mind that the operation of a pattern recognition system is statistical, with measurable distributions of success and failure. These distributions are very application-dependent, and no theory seems to exist that can predict them for new applications. This strongly suggests that an evaluation should be based as closely as possible on a specific application.
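To make the scoring and reject-error considerations concrete, the sketch below tallies correct identifications, false accepts, and rejects over a labeled test set at a given confidence threshold. The RecognitionResult class and the threshold are illustrative assumptions for this report, not part of any standard benchmark protocol.

    import java.util.List;

    // One evaluation trial: the true identity, the identity the system returned
    // (null = reject), and the confidence score assigned. Illustrative only.
    class RecognitionResult {
        final String trueId;
        final String predictedId;
        final double confidence;
        RecognitionResult(String trueId, String predictedId, double confidence) {
            this.trueId = trueId;
            this.predictedId = predictedId;
            this.confidence = confidence;
        }
    }

    class EvaluationSketch {
        // Tallies correct identifications, false accepts, and rejects at a confidence
        // threshold, so that reject behavior is measured rather than forcing a
        // recognition decision on every trial.
        static void report(List<RecognitionResult> trials, double threshold) {
            int correct = 0, falseAccept = 0, reject = 0;
            for (RecognitionResult r : trials) {
                if (r.predictedId == null || r.confidence < threshold) {
                    reject++;                                   // system declined to decide
                } else if (r.predictedId.equals(r.trueId)) {
                    correct++;                                  // correct identification
                } else {
                    falseAccept++;                              // wrong identity accepted
                }
            }
            int n = trials.size();
            System.out.printf("n=%d correct=%.1f%% falseAccept=%.1f%% reject=%.1f%%%n",
                    n, 100.0 * correct / n, 100.0 * falseAccept / n, 100.0 * reject / n);
        }
    }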
CHAPTER 3 EXISTING SYSTEM

As one of the most successful applications of image analysis and understanding, face recognition has received significant attention, especially during the past several years. At least two reasons account for this trend: the first is the wide range of commercial and law enforcement applications, and the second is the availability of feasible technologies after 30 years of research. Even though current machine recognition systems have reached a certain level of maturity, their success is limited by the conditions imposed by many real applications. For example, recognition of face images acquired in an outdoor environment with changes in illumination and/or pose remains a largely unsolved problem. In other words, current systems are still far from matching the capability of the human perception system.

The still-image problem has several inherent advantages and disadvantages. For applications such as drivers' licenses, the controlled nature of the image acquisition process makes the segmentation problem rather easy. However, if only a static picture of an airport scene is available, automatic location and segmentation of a face can pose serious challenges to any segmentation algorithm, and the small size and low image quality of faces captured from video can significantly increase the difficulty of recognition. Recognizing a 3D object from its 2D images poses many challenges. The illumination and pose problems are two prominent issues for appearance- or image-based approaches; many approaches have been proposed to handle them, the majority of which exploit domain knowledge.

Face recognition has received increased attention and has advanced technically. Many commercial systems for still-image face recognition are now available. Recently, significant research effort has been focused on face modeling/tracking, recognition, and system integration. New datasets have been created, and evaluations of recognition techniques using these databases have been carried out. It is not an overstatement to say that face recognition has become one of the most active applications of pattern recognition, image analysis, and understanding.

DISADVANTAGES:
- Duplications are possible.
- Low image quality increases the difficulty of recognition.
- Changes in illumination or pose remain a large, unsolved problem.
- Detecting faces in a live stream is difficult.

CHAPTER 4 PROPOSED SYSTEM

A new technique called Hull Point Analysis is used to detect and recognize faces more efficiently than other recognition techniques. As new algorithms are proposed and more systems are built, measuring the performance of new and existing systems becomes very important. Face recognition has received significant attention during the past several years and has a wide range of commercial and law enforcement applications, so recognition systems should be highly efficient and secure. Even though current recognition systems have reached a certain level of maturity, their success is limited by the conditions imposed by many real applications. This report provides an up-to-date critical survey of face recognition research and addresses the unsolved problem of recognizing face images acquired in outdoor environments with changes in illumination or pose.

Face recognition here is done through Hull Point Analysis for accuracy. Hull Point Analysis plots six points in the shape of a hexagon together with a center point; these points are connected with lines, and the resulting values are stored in a database along with the users' names. We have also added live recognition, where the stored images can be identified live, individual entries can be deleted, and the album can be reset. To improve accuracy, the technique also calculates a smile value and an eye-blink value to avoid duplications, so the application can be used for security purposes; a small sketch of this record is given at the end of this chapter.

Human recognition processes utilize a broad spectrum of stimuli, obtained from many, if not all, of the senses (visual, auditory, olfactory, tactile, etc.). In many situations contextual knowledge is also applied; for example, surroundings play an important role in recognizing faces in relation to where they are supposed to be located. It is futile to attempt to develop a system, using existing technology, that mimics the remarkable face recognition ability of humans. However, the human brain has its limitations in the total number of persons that it can accurately "remember." A key advantage of a computer system is its capacity to handle large numbers of face images. In most applications the images are available only in the form of single or multiple views of 2D intensity data, so the inputs to computer face recognition algorithms are visual only. In this project we have added live recognition, where saved users' details are displayed live whenever those users are in the frame. Many studies in psychology and neuroscience have direct relevance to engineers interested in designing algorithms or systems for machine recognition of faces. For example, findings in psychology about the relative importance of different facial features have been noted in the engineering literature. On the other hand, machine systems provide tools for conducting studies in psychology and neuroscience. For example, a possible engineering explanation of the bottom-lighting effects studied by Johnston is as follows: when the actual lighting direction is opposite to the usually assumed direction, a shape-from-shading algorithm recovers incorrect structural information and hence makes recognition of faces harder.
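A minimal sketch of the geometric part of the idea follows: six landmark points arranged roughly as a hexagon plus a center point, with the six center-to-point distances, together with the smile and eye-blink values, stored as the record that is later matched. The class and field names are illustrative assumptions; the actual landmark extraction is performed by the Snapdragon SDK processing described in later chapters.

    // Sketch of the per-user record produced by Hull Point Analysis, assuming six
    // hexagon landmarks and one center point measured in pixel coordinates.
    class HullPointSignature {
        final String userName;
        final double[] centerDistances = new double[6]; // distance of each hexagon point to the center
        final double smileValue;                        // liveness cues used to reject duplicate entries
        final double eyeBlinkValue;

        HullPointSignature(String userName, double[] xs, double[] ys,
                           double centerX, double centerY,
                           double smileValue, double eyeBlinkValue) {
            this.userName = userName;
            this.smileValue = smileValue;
            this.eyeBlinkValue = eyeBlinkValue;
            for (int i = 0; i < 6; i++) {
                double dx = xs[i] - centerX;
                double dy = ys[i] - centerY;
                centerDistances[i] = Math.sqrt(dx * dx + dy * dy); // pixel distance to the center point
            }
        }

        // Simple comparison: mean absolute difference of the six distances below a tolerance.
        boolean matches(HullPointSignature other, double tolerancePx) {
            double sum = 0;
            for (int i = 0; i < 6; i++) {
                sum += Math.abs(centerDistances[i] - other.centerDistances[i]);
            }
            return (sum / 6.0) <= tolerancePx;
        }
    }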
CHAPTER 5 COMPONENTS OF THE SYSTEM

5.1 QUALCOMM SNAPDRAGON

Snapdragon is a suite of system-on-chip (SoC) semiconductor products designed and marketed by Qualcomm for mobile devices. The Snapdragon central processing unit (CPU) uses the ARM RISC instruction set, and a single SoC may include multiple CPU cores, a graphics processing unit (GPU), a wireless modem, and other software and hardware to support a smartphone's global positioning system (GPS), camera, gesture recognition, and video. Snapdragon semiconductors are embedded in devices of various systems, including Google Android and Windows Phone devices. Benchmark tests of the Snapdragon 800's processor by PC Magazine found that its processing power was comparable to similar products from Nvidia. Benchmarks of the Snapdragon 805 found that the Adreno 420 GPU delivered roughly a 40 percent improvement in graphics processing over the Adreno 330 in the Snapdragon 800, though there were only slight differences in processor benchmarks. Snapdragon processors enable next-level user experiences. Each one is a comprehensive all-in-one system designed to enable best-in-class mobile experiences with all-day battery life, advanced connectivity, strong graphics, and powerful, efficient processing and multitasking.

5.1.1 FEATURES

CPU: up to 2.3 GHz quad-core (4x Qualcomm Krait 400)
GPU: Qualcomm Adreno 330, up to OpenGL ES 3.0
DSP: Qualcomm Hexagon DSP
Camera: up to 21 MP, dual Image Sensor Processor (ISP)
Video: up to 4K Ultra HD capture and playback, H.264 (AVC)
Display: up to 2K display on device; 1080p and 4K external display support
Charging: Qualcomm Quick Charge 2.0
LTE connectivity: 4G LTE Advanced World Mode; LTE Category 4 (up to 150 Mbps downlink, up to 50 Mbps uplink); downlink features: 2x10 MHz carrier aggregation, 64-QAM; uplink features: 1x20 MHz, 16-QAM
Global mode: LTE FDD and TDD; WCDMA (DC-HSDPA, DC-HSUPA); TD-SCDMA; EV-DO and CDMA 1x; GSM/EDGE
Additional features: LTE Broadcast, HD Voice over VoLTE and 3G
Wi-Fi: 1-stream 802.11n/ac

Fig 5.1: Snapdragon 805 Processor

5.2 CAMERA

A front-facing camera is a feature of cameras, mobile phones, and similar mobile devices that allows taking a self-portrait photograph or video while looking at the display of the device, usually showing a live preview of the image. A facial recognition system is a computer application capable of identifying or verifying a person from a digital image or a video frame from a video source. One way to do this is by comparing selected facial features from the image against a facial database. It is typically used in security systems and can be compared with other biometrics such as fingerprint or iris recognition systems.

CHAPTER 6 SYSTEM DESIGN ANALYSIS AND MODULES

6.1 IMAGE CAPTURING

The smartphone camera is used to capture images live, recognize faces, and store them in the database. The camera input is fed into the Snapdragon processor.

6.2 QUALCOMM SDK MODULE

The processor formulates the values using its high processing power and GPU. Isolating the face in the live preview, excluding the background and noise, is done by the Qualcomm SDK, which here uses the Hull Point Analysis algorithm.

6.3 USE OF THE HULL POINT ANALYSIS ALGORITHM

Hull Point Analysis marks points on the outer surface of the face. The distance of each point is measured in pixels for accuracy, the most deviated angle difference is noted, and the points are connected to the center (circum) point of the face. The calculated distances are then sent to the Android app for ease of use.

6.4 PARSING MODULE

The data from the Android app are parsed, and the values are stored in SQLite under the corresponding user with the help of a unique ID. Updating a user follows the same steps, and the data obtained are updated with the help of the ID, as sketched in the storage example below.

Fig 6.1: Flow Diagram
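A minimal sketch of how the parsing module could persist the parsed values in SQLite on Android follows. The table name, column names, and the FaceRecordStore helper class are illustrative assumptions for this report; only the android.database.sqlite classes used are standard Android APIs.

    import android.content.ContentValues;
    import android.content.Context;
    import android.database.sqlite.SQLiteDatabase;
    import android.database.sqlite.SQLiteOpenHelper;

    // Sketch of the parsing module's storage: hull point distances, smile and blink
    // values are saved in SQLite under a unique user ID. The schema is an assumption.
    public class FaceRecordStore extends SQLiteOpenHelper {

        public FaceRecordStore(Context context) {
            super(context, "face_records.db", null, 1);
        }

        @Override
        public void onCreate(SQLiteDatabase db) {
            db.execSQL("CREATE TABLE users (" +
                    "person_id INTEGER PRIMARY KEY, " +   // unique ID from the recognition album
                    "name TEXT NOT NULL, " +
                    "d1 REAL, d2 REAL, d3 REAL, d4 REAL, d5 REAL, d6 REAL, " + // hull point distances
                    "smile REAL, blink REAL)");
        }

        @Override
        public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
            db.execSQL("DROP TABLE IF EXISTS users");
            onCreate(db);
        }

        // Insert or update the parsed values for one user, keyed by the unique ID.
        public void saveUser(int personId, String name, double[] distances, double smile, double blink) {
            ContentValues values = new ContentValues();
            values.put("person_id", personId);
            values.put("name", name);
            for (int i = 0; i < 6; i++) {
                values.put("d" + (i + 1), distances[i]);
            }
            values.put("smile", smile);
            values.put("blink", blink);
            getWritableDatabase().insertWithOnConflict("users", null, values,
                    SQLiteDatabase.CONFLICT_REPLACE); // the update path reuses the same ID
        }
    }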
CHAPTER 7 DEVELOPMENT ENVIRONMENT AND FEATURES

7.1 ANDROID STUDIO

Android Studio is the official IDE for Android app development, based on IntelliJ IDEA. On top of IntelliJ's powerful code editor and developer tools, Android Studio offers additional features that enhance productivity when building Android apps, such as:
- A flexible Gradle-based build system
- Build variants and multiple APK file generation
- Code templates to help you build common app features
- A rich layout editor with support for drag-and-drop theme editing
- Lint tools to catch performance, usability, version compatibility, and other problems
- Code shrinking with ProGuard and resource shrinking with Gradle
- Built-in support for Google Cloud Platform, making it easy to integrate Google Cloud Messaging and App Engine

7.1.1 DEVELOPER SERVICES

Android Studio supports enabling these developer services in your app:
- Ads using AdMob
- Analytics using Google Analytics
- Authentication using Google Sign-In
- Notifications using Google Cloud Messaging

Enabling a developer service adds the required dependencies and, when applicable, also modifies the related configuration files. To activate the service, you must perform service-specific updates, such as loading an ad in the MainActivity class for ad display. To enable an Android developer service, select the File > Project Structure menu option and click a service under the Developer Services sub-menu. The service configuration page appears; click the service check box to enable the service and click OK. Android Studio updates your library dependencies for the selected service and, for Analytics, updates the AndroidManifest.xml and other tracker configuration files.

7.1.2 GRADLE

Android Studio uses Gradle as the foundation of the build system, with more Android-specific capabilities provided by the Android Plugin for Gradle. This build system runs as an integrated tool from the Android Studio menu and independently from the command line. You can use the features of the build system to:
- Customize, configure, and extend the build process.
- Create multiple APKs for your app with different features using the same project and modules.
- Reuse code and resources across source sets.
The flexibility of Gradle enables you to achieve all of this without modifying your app's core source files.

7.1.3 MEMORY AND CPU MONITORING

Android Studio provides a memory and CPU monitor view so you can more easily monitor your app's performance and memory usage, track CPU usage, find deallocated objects, locate memory leaks, and track the amount of memory the connected device is using. With your app running on a device or emulator, click the Android tab in the lower left corner of the runtime window to launch the Android runtime window, then click the Memory or CPU tab.

Fig 7.1: CPU Monitoring

7.1.4 MANAGING THE AVD MANAGER

The AVD Manager is a tool you can use to create and manage Android virtual devices (AVDs), which define device configurations for the Android Emulator.

HARDWARE OPTIONS
If you are creating a new AVD, you can specify the following hardware options for the AVD to emulate (characteristic: description, default value, property):
- Device RAM size: the amount of physical RAM on the device, in megabytes; default "96" (hw.ramSize)
- Touch-screen support: whether there is a touch screen on the device; default "yes" (hw.touchScreen)
- Trackball support: whether there is a trackball on the device; default "yes" (hw.trackBall)
- Keyboard support: whether the device has a QWERTY keyboard; default "yes" (hw.keyboard)
- DPad support: whether the device has DPad keys; default "yes" (hw.dPad)
- GSM modem support: whether there is a GSM modem in the device; default "yes" (hw.gsmModem)
- Camera support: whether the device has a camera; default "no" (hw.camera)
- Maximum horizontal camera pixels: default "640" (hw.camera.maxHorizontalPixels)
- Maximum vertical camera pixels: default "480" (hw.camera.maxVerticalPixels)
- GPS support: whether there is a GPS in the device; default "yes" (hw.gps)
- Battery support: whether the device can run on a battery; default "yes" (hw.battery)
- Accelerometer: whether there is an accelerometer in the device; default "yes" (hw.accelerometer)
- Audio recording support: whether the device can record audio; default "yes" (hw.audioInput)
- Audio playback support: whether the device can play audio; default "yes" (hw.audioOutput)
- SD Card support: whether the device supports insertion/removal of virtual SD Cards; default "yes" (hw.sdCard)
- Cache partition support: whether a /cache partition is used on the device; default "yes" (disk.cachePartition)
- Cache partition size: default "66MB" (disk.cachePartition.size)
- Abstracted LCD density: sets the generalized density characteristic used by the AVD's screen; default "160" (hw.lcd.density)
Default value is "yes".hw.gsmModemCamera supportWhether the device has a camera. Default value is "no".hw.cameraMaximum horizontal camera pixelsDefault value is "640".hw.camera.maxHorizontalPixelsMaximum vertical camera pixelsDefault value is "480".hw.camera.maxVerticalPixelsGPS supportWhether there is a GPS in the device. Default value is "yes".hw.gpsBattery supportWhether the device can run on a battery. Default value is "yes".hw.batteryAccelerometerWhether there is an accelerometer in the device. Default value is "yes".hw.accelerometerAudio recording supportWhether the device can record audio. Default value is "yes".hw.audioInputAudio playback supportWhether the device can play audio. Default value is "yes".hw.audioOutputSD Card supportWhether the device supports insertion/removal of virtual SD Cards. Default value is "yes".hw.sdCardCache partition supportWhether we use a /cache partition on the device. Default value is "yes".disk.cachePartitionCache partition sizeDefault value is "66MB".disk.cachePartition.sizeAbstracted LCD densitySets the generalized density characteristic used by the AVD's screen. Default value is "160".hw.lcd.density7.2 SQLITE SQLite is a software library that implements a?self-contained,?serverless,?zero-configuration,?transactionalSQL database engine. SQLite is the?most widely deployed?database engine in the world.7.2.1 FEATURES Transactions?are atomic, consistent, isolated, and durable (ACID) even after system crashes and power failures.Zero-configuration?- no setup or administration needed.Full SQL implementation?with advanced features like?partial indexes?and?common table expressions. (Omitted features)A complete database is stored in a?single cross-platform disk file. Great for use as an?application file format.Supports terabyte-sized databases and gigabyte-sized strings and blobs. (See?limits.html.)Small code?footprint: less than 500KiB fully configured or much less with optional features omitted.Simple, easy to use?API.Written in ANSI-C.?TCL bindings?included. Bindings for dozens of other languages available separately.Well-commented source code with?100% branch test coverage.Available as a?single ANSI-C source-code file?that is?easy to compile?and hence is easy to add into a larger project.Self-contained: no external dependencies.Cross-platform: Android, *BSD, iOS, Linux, Mac, Solaris, VxWorks, and Windows (Win32, WinCE, WinRT) are supported out of the box. Easy to port to other systems.Sources are in the?public domain. Use for any es with a standalone?command-line interface?(CLI) client that can be used to administer SQLite databases.7.2.2 USES Database For The Internet Of Things.?SQLite is popular choice for the database engine in cellphones, PDAs, MP3 players, set-top boxes, and other electronic gadgets. SQLite has a small code footprint, makes efficient use of memory, disk space, and disk bandwidth, is highly reliable, and requires no maintenance from a Database Administrator.Application File Format.?Rather than using fopen() to write XML, JSON, CSV, or some proprietary format into disk files used by your application, use an SQLite database. You'll avoid having to write and troubleshoot a parser, your data will be more easily accessible and cross-platform, and your updates will be transactional. 
Website database. Because it requires no configuration and stores information in ordinary disk files, SQLite is a popular choice as the database backing small to medium-sized websites.

Stand-in for an enterprise RDBMS. SQLite is often used as a surrogate for an enterprise RDBMS for demonstration or testing purposes. SQLite is fast and requires no setup, which takes a lot of the hassle out of testing and makes demos perky and easy to launch.

ZERO CONFIGURATION
SQLite does not need to be "installed" before it is used. There is no "setup" procedure. There is no server process that needs to be started, stopped, or configured. There is no need for an administrator to create a new database instance or assign access permissions to users. SQLite uses no configuration files. Nothing needs to be done to tell the system that SQLite is running. No actions are required to recover after a system crash or power failure. There is nothing to troubleshoot. SQLite just works. Other, more familiar database engines run well once you get them going, but the initial installation and configuration can be intimidatingly complex.

SERVERLESS
Most SQL database engines are implemented as a separate server process. Programs that want to access the database communicate with the server using some kind of interprocess communication (typically TCP/IP) to send requests to the server and to receive back results. SQLite does not work this way. With SQLite, the process that wants to access the database reads and writes directly from the database files on disk; there is no intermediary server process. There are advantages and disadvantages to being serverless. The main advantage is that there is no separate server process to install, set up, configure, initialize, manage, and troubleshoot. This is one reason SQLite is a "zero-configuration" database engine: programs that use SQLite require no administrative support for setting up the database engine before they are run, and any program that is able to access the disk is able to use an SQLite database. On the other hand, a database engine that uses a server can provide better protection from bugs in the client application, since stray pointers in a client cannot corrupt memory on the server. And because a server is a single persistent process, it is able to control database access with more precision, allowing finer-grained locking and better concurrency. Most SQL database engines are client/server based. Of those that are serverless, SQLite is the only one that this author knows of that allows multiple applications to access the same database at the same time.

SINGLE DATABASE FILE
An SQLite database is a single ordinary disk file that can be located anywhere in the directory hierarchy. If SQLite can read the disk file then it can read anything in the database; if the disk file and its directory are writable, then SQLite can change anything in the database. Database files can easily be copied onto a USB memory stick or emailed for sharing. Other SQL database engines tend to store data as a large collection of files, often in a standard location that only the database engine itself can access. This makes the data more secure, but also harder to access. Some SQL database engines provide the option of writing directly to disk and bypassing the file system altogether.
This provides added performance, but at the cost of considerable setup and maintenance complexity.

COMPACT
When optimized for size, the whole SQLite library with everything enabled is less than 500 KiB in size (as measured on an ix86 using the "size" utility from the GNU compiler suite). Unneeded features can be disabled at compile time to further reduce the size of the library to under 300 KiB if desired. Most other SQL database engines are much larger than this. IBM boasts that its recently released CloudScape database engine is "only" a 2 MiB jar file, an order of magnitude larger than SQLite even after it is compressed. Firebird boasts that its client-side library is only 350 KiB; that is as big as SQLite and does not even contain the database engine. The Berkeley DB library from Oracle is 450 KiB, and it omits SQL support, providing the programmer with only simple key/value pairs.

CHAPTER 8 IMPLEMENTATION OF FACIAL RECOGNITION TECHNOLOGY

8.1 FACIAL PROCESSING

Transform your apps with the ability to profile faces. The Snapdragon SDK makes it possible to detect a smile, determine where the eyes are looking, and detect blinking. Using this, you can create new interactions and enhance the user experience by doing things like integrating with a real-time camera preview or analyzing photos in a photo album. You can track a variety of facial properties in each frame:
- Blink detection: measure how open each eye is
- Gaze tracking: assess where the subject is looking
- Smile value: estimate the degree of the smile
- Face orientation: track the yaw, pitch, and roll of the head
These capabilities work with both real-time and stored images or videos, so your app can integrate them for different kinds of uses.

8.2 FACIAL RECOGNITION

Go beyond face detection and perform real-time face analysis to identify people. You can use these Snapdragon SDK for Android capabilities to develop apps that add users to an internal database through face registration and then identify users based on facial analysis. These features do not use any cloud-based recognition and are performed entirely offline. They allow your app to interact with a user in more ways:
- Profiles: enable per-user settings and preferences
- Turn-based gaming: allow players to take turns by setting up the UI specific to each player
- Photo apps: run automated framing or facial processing to set up preferred users for picture taking
This feature set works with both real-time and stored images or videos. A short usage sketch is given after the requirements listed below.

REQUIREMENTS
Hardware requirements: Snapdragon S4, 200, 400, 600, or 800
Software requirements: Android 4.0.3 and up
Sample devices: Nexus 4, Samsung Galaxy S4 (quad-core), LG Optimus G, HTC One, Sony Xperia Z Tablet
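The sketch below shows roughly how the facial-processing values (smile, blink) and a recognition result could be read for a detected face, using the SDK classes already imported in the appendix code. The FaceData accessor names follow the SDK documentation as best recalled and should be treated as assumptions; the bundled SDK reference is the authoritative source for the exact signatures.

    import android.graphics.Bitmap;
    import com.qualcomm.snapdragon.sdk.face.FaceData;
    import com.qualcomm.snapdragon.sdk.face.FacialProcessing;

    // Sketch of reading facial-processing values and a recognition result for one frame.
    // Method names such as setBitmap(), getSmileValue(), getLeftEyeBlink() and
    // getPersonId() are assumptions based on the SDK documentation; verify them
    // against the SDK reference shipped with the project.
    public class FrameAnalysisSketch {

        public static void analyze(FacialProcessing faceObj, Bitmap frame) {
            if (!faceObj.setBitmap(frame)) {          // hand the still frame to the SDK
                return;                               // no processing possible
            }
            FaceData[] faces = faceObj.getFaceData(); // one entry per detected face
            if (faces == null) {
                return;
            }
            for (FaceData face : faces) {
                int smile = face.getSmileValue();         // degree of smile
                int leftBlink = face.getLeftEyeBlink();   // how closed the left eye is
                int rightBlink = face.getRightEyeBlink();
                int personId = face.getPersonId();        // enrolled album entry, or a "not recognized" marker
                // The smile and blink values are the liveness cues used by the proposed
                // system to reject duplicated (e.g., photographed) faces.
                System.out.println("person=" + personId + " smile=" + smile
                        + " blink=" + leftBlink + "/" + rightBlink);
            }
        }
    }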
INTRODUCTION

Face detection from an image is a key problem in human-computer interaction studies and in pattern recognition research, and many approaches to automatic face detection have been proposed in recent years. These approaches can be divided into several categories. Feature-based approaches require the detection and measurement of salient facial points; they use geometrical distances and angles between primary facial features such as the eyes, nose, and mouth to classify faces, giving an economical representation of the face whose elements are based on the relative positions and sizes of those features. A template-matching strategy, based on earlier work, uses feature-based templates of the mouth, eyes, and nose in addition to whole-face templates. It has been suggested that the expected shape of geometric features can be used to construct deformable templates, in which templates are translated, rotated, and deformed to fit the best representation of their shape present in the image. However, low-level computer vision algorithms such as feature-based approaches are not powerful enough to find all possible face regions and are unlikely to perform well on small faces or low-quality images. Deformable templates are computationally expensive and not robust to everyday variation. Also, although PCA is a very efficient method designed specifically to characterize the face region, it is not invariant to image transformations such as scaling, shift, or rotation in its original form, and it requires complete relearning of the training data in order to add new individuals to the database. Although the reported performance of pattern-based approaches is quite good, and some of them can detect non-frontal faces, these approaches are extremely computationally expensive.

This work addresses face recognition by overcoming the disadvantages of the existing methods, and an effective proposal is made to achieve face recognition through the Qualcomm SDK. Since this is an industrial project, it is primarily used for an attendance and payroll tracking system. The system provides good results at lower computational cost than other detection techniques. The face candidate is found by skin and hair color, and the face is confirmed by the intersection relationship (ICH) between a convex hull of the skin color regions (SCH) and a convex hull of the hair color regions (HCH). Algorithms that construct hulls of various objects have a broad range of applications in mathematics and computer science. In computational geometry, numerous algorithms have been proposed for computing the hull of a finite set of points, with various computational complexities. Computing the hull means that a non-ambiguous and efficient representation of the required convex shape is constructed; a small sketch of one standard construction is given below.

Fig 8.2: Hull Point Analysis
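As an illustration of hull construction over a finite point set, here is a compact Java sketch of the Andrew monotone chain algorithm, which builds the convex hull in O(n log n) time. This is a generic textbook construction, not the SDK's internal implementation; the Point class is defined locally for the example.

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.Comparator;
    import java.util.List;

    // Andrew's monotone chain convex hull: O(n log n), returns hull vertices in
    // counter-clockwise order. Generic sketch; Point is a local helper class.
    class Point {
        final double x, y;
        Point(double x, double y) { this.x = x; this.y = y; }
    }

    class ConvexHull {
        // Cross product of (o->a) x (o->b); > 0 means a counter-clockwise turn.
        private static double cross(Point o, Point a, Point b) {
            return (a.x - o.x) * (b.y - o.y) - (a.y - o.y) * (b.x - o.x);
        }

        static List<Point> hull(List<Point> pts) {
            List<Point> p = new ArrayList<>(pts);
            p.sort(Comparator.comparingDouble((Point q) -> q.x).thenComparingDouble(q -> q.y));
            int n = p.size();
            if (n < 3) return p;

            Point[] h = new Point[2 * n];
            int k = 0;
            for (int i = 0; i < n; i++) {                      // build the lower hull
                while (k >= 2 && cross(h[k - 2], h[k - 1], p.get(i)) <= 0) k--;
                h[k++] = p.get(i);
            }
            for (int i = n - 2, lower = k + 1; i >= 0; i--) {  // build the upper hull
                while (k >= lower && cross(h[k - 2], h[k - 1], p.get(i)) <= 0) k--;
                h[k++] = p.get(i);
            }
            return new ArrayList<>(Arrays.asList(h).subList(0, k - 1)); // last point equals the first
        }
    }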
8.3 METHODOLOGY

The implementation of face recognition technology includes the following stages: data acquisition, input processing, and face image and decision making.

8.3.1 DATA ACQUISITION

The input can be a recorded video of the speaker or a still image. A sample of 1 second duration consists of a 25-frame video sequence. More than one camera can be used to produce a 3D representation of the face and to protect against the use of photographs to gain unauthorized access.

8.3.2 INPUT PROCESSING

A pre-processing module locates the eye positions and takes care of the surrounding lighting conditions and color variance. First, the presence of a face or faces in the scene must be detected. Once a face is detected, it must be localized, and a normalization process may be required to bring the dimensions of the live facial sample into alignment with the one in the templates. Some facial recognition approaches use the whole face, while others concentrate on facial components and/or regions such as the lips and eyes. The appearance of the face can change considerably during speech and due to facial expressions; in particular, the mouth undergoes fundamental changes but is also a very important source for discriminating faces. An approach to person recognition is therefore developed based on spatio-temporal modeling of features extracted from the talking face. Models are trained specific to a person's speech articulation and the way that the person speaks. Person identification is performed by tracking the mouth movements of the talking face and estimating the likelihood that each model generated the observed sequence of features. The model with the highest likelihood is chosen as the recognized person.

BLOCK DIAGRAM
Talking face -> Lip tracker -> Normalization -> Thresholding -> Alignment -> Score and decision -> Accept / Reject
Fig 8.3: Block Diagram

8.3.3 FACE IMAGE AND DECISION MAKING

Synergetic computers are used to classify the optical and audio features, respectively. A synergetic computer is a set of algorithms that simulate synergetic phenomena. In the training phase, the BioID system creates a prototype called a faceprint for each person. A newly recorded pattern is preprocessed and compared with each faceprint stored in the database. As comparisons are made, the system assigns a value to each comparison on a scale of one to ten; if the score is above a predetermined threshold, a match is declared.

Face image and lip movement -> face extraction -> synergetic computers -> decision strategy
Fig 8.4: Detection Flow

From the image of the face, a particular trait is extracted. It may measure various nodal points of the face, such as the distance between the eyes or the width of the nose. This is fed to the synergetic computer, which consists of algorithms to capture, process, and compare the sample with the one stored in the database. The system can also track lip movements, which are likewise fed to the synergetic computer. By observing the likelihood of each sample against the one stored in the database, the system can accept or reject the sample.

CHAPTER 9 REQUIREMENTS

9.1 HARDWARE REQUIREMENTS
Processor: Qualcomm Snapdragon, 400 MHz or higher
RAM: minimum 64 MB primary memory
Hard disk: minimum 1 GB hard disk space
Monitor: preferably a color monitor (16-bit color or above)
Web camera

9.2 SOFTWARE REQUIREMENTS
Operating system: Windows
Languages: Java, Structured Query Language (SQL), and PHP
Front-end tool: Android Studio emulator
Back-end tool: MySQL Alpha version 6.0
JDK: JDK 1.5 and above

QUALCOMM SDK 800 SPECIFICATION (MSM8000)
- Application processors: Qualcomm SDK 8000 cores up to 1.2 GHz; 64-bit processor; quad core with 512 KB L2 cache; primary boot processor
- Memory support: system memory via EBI; external memory via SDC1; LPDDR3 SDRAM, 32-bit wide, up to 533 MHz; eMMC v4.5/SD flash devices
- Configurable GPIOs: 12 GPIO ports (GPIO_0 to GPIO_121); input configurations: pull up, pull down, keeper, or no pull; output configurations: programmable drive current
- Camera interfaces: general camera features; pixel manipulation, image effects, and post-processing techniques, including defective pixel correction; I2C control

Step 1: Connect the PC to the SDK board via an Ethernet cable.
Step 2: Connect the JTAG through WARP 1 to the system's USB port.
Step 3: Open the Android Studio software.
Step 4: Type the Java code for all the modules and check it using the Android Studio emulator.
Step 5: Download the Java net bit streams to the SDK boards; the bit stream can be downloaded either via an external JTAG cable or a Compact Flash card.
Step 6: After downloading the code onto the board, create the .APK file using the Build option in Android Studio.

CHAPTER 10 CONCLUSION

Thus, an effective approach for face recognition is presented that overcomes the disadvantages of the existing methods, and an effective proposal is realized to achieve face recognition through the Qualcomm SDK.
Since this is an industrial project, it is primarily used for an attendance and payroll tracking system, and it provides good results at lower computational cost than other detection techniques. Prisoners' photos can also be taken and stored; in case of an escape, they can be tracked through signal cameras, as Hull Point Analysis helps in identifying the right person instantly. A hash map technique is used to avoid duplications. Live recognition allows the stored images to be identified in a live view; entries can also be deleted and the album can be reset.

CHAPTER 11 APPENDIX

11.1 CODING

package com.qualcomm.snapdragon.sdk.recognition.sample;

import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

import com.qualcomm.snapdragon.sdk.face.FacialProcessing;
import com.qualcomm.snapdragon.sdk.face.FacialProcessing.FEATURE_LIST;
import com.qualcomm.snapdragon.sdk.face.FacialProcessing.FP_MODES;

import android.os.Bundle;
import android.os.Vibrator;
import android.annotation.SuppressLint;
import android.app.Activity;
import android.app.AlertDialog;
import android.content.Context;
import android.content.DialogInterface;
import android.content.Intent;
import android.content.SharedPreferences;
import android.util.Log;
import android.view.Menu;
import android.view.View;
import android.widget.AdapterView;
import android.widget.GridView;
import android.widget.Toast;

public class FacialRecognitionActivity extends Activity {

    private GridView gridView;
    public static FacialProcessing faceObj;
    public final String TAG = "FacialRecognitionActivity";
    public final int confidence_value = 58;
    public static boolean activityStartedOnce = false;
    public static final String ALBUM_NAME = "serialize_deserialize";
    public static final String HASH_NAME = "HashMap";
    HashMap<String, String> hash;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_facial_recognition);
        hash = retrieveHash(getApplicationContext()); // Retrieve the previously saved Hash Map.
        if (!activityStartedOnce) { // Make sure the FacialProcessing object is not created multiple times.
            activityStartedOnce = true;
            // Check if the Facial Recognition feature is supported on the device.
            boolean isSupported = FacialProcessing.isFeatureSupported(FEATURE_LIST.FEATURE_FACIAL_RECOGNITION);
            if (isSupported) {
                Log.d(TAG, "Feature Facial Recognition is supported");
                faceObj = (FacialProcessing) FacialProcessing.getInstance();
                loadAlbum(); // De-serialize a previously stored album.
                if (faceObj != null) {
                    faceObj.setRecognitionConfidence(confidence_value);
                    faceObj.setProcessingMode(FP_MODES.FP_MODE_STILL);
                }
            } else { // If the facial recognition feature is not supported, display an alert box.
                Log.e(TAG, "Feature Facial Recognition is NOT supported");
                new AlertDialog.Builder(this)
                        .setMessage("Your device does NOT support Qualcomm's Facial Recognition feature.")
                        .setCancelable(false)
                        .setNegativeButton("OK", new DialogInterface.OnClickListener() {
                            public void onClick(DialogInterface dialog, int id) {
                                FacialRecognitionActivity.this.finish();
                            }
                        }).show();
            }
        }

        // Vibrator for button press.
        final Vibrator vibrate = (Vibrator) FacialRecognitionActivity.this
                .getSystemService(Context.VIBRATOR_SERVICE);
        gridView = (GridView) findViewById(R.id.gridview);
        gridView.setAdapter(new ImageAdapter(this));
        gridView.setOnItemClickListener(new AdapterView.OnItemClickListener() {
            public void onItemClick(AdapterView<?> parent, View v, int position, long id) {
                vibrate.vibrate(85);
                switch (position) {
                case 0: // Adding a person
                    addNewPerson();
                    break;
                case 1: // Updating an existing person
                    updateExistingPerson();
                    break;
                case 2: // Identifying a person
                    identifyPerson();
                    break;
                case 3: // Live recognition
                    liveRecognition();
                    break;
                case 4: // Resetting the album
                    resetAlbum();
                    break;
                case 5: // Deleting an existing person
                    deletePerson();
                    break;
                }
            }
        });
    }

    /** Method to handle adding a new person to the recognition album */
    private void addNewPerson() {
        Intent intent = new Intent(this, AddPhoto.class);
        intent.putExtra("Username", "null");
        intent.putExtra("PersonId", -1);
        intent.putExtra("UpdatePerson", false);
        intent.putExtra("IdentifyPerson", false);
        startActivity(intent);
    }

    /* Method to handle updating of an existing person from the recognition album */
    private void updateExistingPerson() {
        Intent intent = new Intent(this, ChooseUser.class);
        intent.putExtra("DeleteUser", false);
        intent.putExtra("UpdateUser", true);
        startActivity(intent);
    }

    /* Method to handle identification of an existing person from the recognition album */
    private void identifyPerson() {
        Intent intent = new Intent(this, AddPhoto.class);
        intent.putExtra("Username", "Not Identified");
        intent.putExtra("PersonId", -1);
        intent.putExtra("UpdatePerson", false);
        intent.putExtra("IdentifyPerson", true);
        startActivity(intent);
    }

    /* Method to handle deletion of an existing person from the recognition album */
    private void deletePerson() {
        Intent intent = new Intent(this, ChooseUser.class);
        intent.putExtra("DeleteUser", true);
        intent.putExtra("UpdateUser", false);
        startActivity(intent);
    }

    /* Method to handle live identification of people */
    private void liveRecognition() {
        Intent intent = new Intent(this, LiveRecognition.class);
        startActivity(intent);
    }

    /* Method to handle resetting of the recognition album */
    private void resetAlbum() {
        // Alert box to confirm before resetting the album.
        new AlertDialog.Builder(this)
                .setMessage("Are you sure you want to RESET the album? All the photos saved will be LOST")
                .setCancelable(true)
                .setNegativeButton("No", null)
                .setPositiveButton("Yes", new DialogInterface.OnClickListener() {
                    public void onClick(DialogInterface dialog, int id) {
                        boolean result = faceObj.resetAlbum();
                        if (result) {
                            HashMap<String, String> hashMap = retrieveHash(getApplicationContext());
                            hashMap.clear();
                            saveHash(hashMap, getApplicationContext());
                            saveAlbum();
                            Toast.makeText(getApplicationContext(), "Album Reset Successful.",
                                    Toast.LENGTH_LONG).show();
                        } else {
                            Toast.makeText(getApplicationContext(), "Internal Error. Reset album failed",
                                    Toast.LENGTH_LONG).show();
                        }
                    }
                }).show();
    }

    @Override
    public boolean onCreateOptionsMenu(Menu menu) {
        // Inflate the menu; this adds items to the action bar if it is present.
        getMenuInflater().inflate(R.menu.facial_recognition, menu);
        return true;
    }

    protected void onPause() {
        super.onPause();
    }

    protected void onDestroy() {
        super.onDestroy();
        Log.d(TAG, "Destroyed");
        if (faceObj != null) { // If the FacialProcessing object has not been released, release it and set it to null.
            faceObj.release();
            faceObj = null;
            Log.d(TAG, "Face Recog Obj released");
        } else {
            Log.d(TAG, "In Destroy - Face Recog Obj = NULL");
        }
    }

    @Override
    protected void onStop() {
        super.onStop();
    }

    protected void onResume() {
        super.onResume();
    }

    @Override
    public void onBackPressed() { // Destroy the activity to avoid stacking of Android activities.
        super.onBackPressed();
        FacialRecognitionActivity.this.finishAffinity();
        activityStartedOnce = false;
    }

    /* Function to retrieve a HashMap from the shared preferences. */
    protected HashMap<String, String> retrieveHash(Context context) {
        SharedPreferences settings = context.getSharedPreferences(HASH_NAME, 0);
        HashMap<String, String> hash = new HashMap<String, String>();
        hash.putAll((Map<? extends String, ? extends String>) settings.getAll());
        return hash;
    }

    /* Function to store a HashMap to the shared preferences. */
    protected void saveHash(HashMap<String, String> hashMap, Context context) {
        SharedPreferences settings = context.getSharedPreferences(HASH_NAME, 0);
        SharedPreferences.Editor editor = settings.edit();
        editor.clear();
        Log.e(TAG, "Hash Save Size = " + hashMap.size());
        for (String s : hashMap.keySet()) {
            editor.putString(s, hashMap.get(s));
        }
        editor.commit();
    }

    /* Function to retrieve the serialized album byte array from the shared preferences. */
    public void loadAlbum() {
        SharedPreferences settings = getSharedPreferences(ALBUM_NAME, 0);
        String arrayOfString = settings.getString("albumArray", null);
        byte[] albumArray = null;
        if (arrayOfString != null) {
            String[] splitStringArray = arrayOfString.substring(1, arrayOfString.length() - 1).split(", ");
            albumArray = new byte[splitStringArray.length];
            for (int i = 0; i < splitStringArray.length; i++) {
                albumArray[i] = Byte.parseByte(splitStringArray[i]);
            }
            faceObj.deserializeRecognitionAlbum(albumArray);
            Log.e("TAG", "De-Serialized my album");
        }
    }

    /* Function to serialize the recognition album and store it in the shared preferences. */
    public void saveAlbum() {
        byte[] albumBuffer = faceObj.serializeRecogntionAlbum();
        SharedPreferences settings = getSharedPreferences(ALBUM_NAME, 0);
        SharedPreferences.Editor editor = settings.edit();
        editor.putString("albumArray", Arrays.toString(albumBuffer));
        editor.commit();
    }
}

// Separate excerpt (partial) from the camera preview handling code:
if (result) {
    int numFaces = faceObj.getNumFaces();
    if (numFaces == 0) {
        Log.d("TAG", "No Face Detected");
        if (drawView != null) {
            preview.removeView(drawView);
            drawView = new DrawView(this, null, false);
            preview.addView(drawView);
        }
    } else {
        faceArray = faceObj.getFaceData();
        if (faceArray == null) {
            Log.e("TAG", "Face array is null");

11.2 IMAGE PROCESSING AND RESULTS

In this project the architecture for the facial recognition system is implemented using the Qualcomm SDK board. In the proposed system the Hull Point Analysis algorithm is applied, with a PHP back-end database, to improve the security of the database and produce good quality images.

Fig 11.1: Adding a user using the Add User module.
After the person's face is saved through the Add User module, it is stored in the system, as shown in Fig 11.2.
Fig 11.2: Updating a user using the Update Existing User module.
After the user's identification is updated with changes in the database, the system can identify the user, which is generally used in the attendance and payroll system, as shown in Fig 11.3.
Fig 11.3: Identifying persons using the Identify People module.
After persons are identified, the facial recognition system can perform the live recognition process used in the real-time system, as shown in Fig 11.4.
Fig 11.4: Identifying persons using the Live Recognition module.
As the facial recognition application is backed by SQL programs behind the modules, it is easy to add or delete a user, as shown in Fig 11.5.
Fig 11.5: Removing a user using the Delete Existing User module.