DESIGN OF CMOS SCHMITT TRIGGER



CHAPTER 1INTRODUCTIONIntroductionTaking attendance in a class or in an organization is very important. Maintain those attendance plays a crucial role for an institution or an organization. Important thing is how to take Attendance. Conventional way of taking attendance in schools is by calling of names by teacher, students responding on their roll numbers and putting ‘A’ or ‘P’ on log book accordingly. Other methods of taking attendance are RFID cards, biometric identifiers like fingerprint, face recognition, palm print, hand geometry, iris recognition etc. But conventional method looks better as it is cheaper and more reliable as observer is human being itself. Biometric recognition has the potential to become an irreplaceable part of many identification systems used for evaluating the performance of those people working within the organization. Although biometric technologies are being applied in many fields it has not yet delivered its promise of guaranteeing automatic human recognition. Face recognition is a technique of biometric recognition. It is considered to be one of the most successful applications of image analysis and processing; that is the main reason behind the great attention it has been given in the past several years.The human body has the privilege of having features that are unique and exclusive to each individual. This exclusivity and unique characteristic has led to the field of biometrics and its application in ensuring security in various fields with various embedded controllers and embedded computers. Biometrics has gained popularity and has proved itself to be a reliable mode of ensuring privacy, maintaining security and identifying individuals. It has wide acceptance throughout the globe and now is being used at places like airports, hospitals, schools, colleges, corporate offices etc. Biometrics is the study of identifying a person by their physical traits that are inherent and unique to only the person concerned. Biometric identification include fingerprint verification, palm geometry, face recognition, iris recognition, etc. The above mentioned techniques work with different levels of functionality and accuracy. Accuracy and reliability are the two most important parameters when it comes to biometric applications and that too with advanced embedded computers. Fingerprint verification is one of the oldest known biometric techniques known but still is the most widely used because of its simplicity and good levels of accuracy. It’s a well-known fact that every human being is born with a different pattern on the fingers and this feature is exploited to identify and differentiate between two different persons that is what the factor which helped to initiate the model.1.2. Scope of Attendance Management SystemThe proposed system provides alternative to the existing problems. The manual business process of taking class attendance at a university is the scope of the project. This inconsistent process is the area where the proposed system will add benefit to an organization. The automation of this manual process will allow for more class time to be spent on teaching and less on this mandatory process. The opportunity in this project is the ability to start classes on time without delay. The aim of the project is to dedicate more time for the learning.The proposed system will process the following types of data: 1. Student information This will include specific information in regards to each student such as, student id, Name (First, Middle, and last). 2. 
Class Enrolments information This will include the list of registered students for a specific class.3. ReportsReports can be generated for the organization, instructors, as well as students interested in their personal attendance; how many times have they missed class, or been late. 4. User Interaction: Students (normal operation) -The interface for students will be through facial recognition. This interface for the student will be seamless. They only need to enter the room to be counted as present in the class. Students (reporting operation) –Students would interface with the system via a Web camera. The ability to view their own attendance data will be controlled via account privileges. Schools, companies, organizations and other offices --Since the Attendance tracking system is a location based system using facial recognition, the application can be accessed using the set up. 6. Interface with Other Systems: The Attendance Tracking system will interface with facial recognition of user’s unique verification card to deploy the application. Data will include person’s attendance history, logging in and login out timings, follow up, and other future scope services. Interaction with other systems: There is no need for other systems to be integrated with our system; our system itself is internally integrated by different technologies ranging from hardware to software (backend database system).7. System Interface: Locations served by the system. The locations served by the system are local in nature. The University personnel and students with valid credentials would be able to access system via the set up arranged near them.8. Users served by system: It could be used for organizations ranging from handful to thousands of employees. It can be applied in any field that needs an automated attendance management system.10. Interaction with other systems: There is no need for other systems to be integrated with our system; our system itself is internally integrated by different technologies ranging from hardware (web camera) to software (backend database system).Authentication of student will be done by admin or teacher. Wait to accept attendance for student present in given location i.e. Suppose student is seating in classroom and coordinate already stored for classroom in database with specific range will match with student sending its coordinates. If match found marked as 'Present' or else found and 'Absent'.In any case if teacher need to find that student is not in class then through location one can find the exact location where the student is.Also all 'Absent' student parents will get SMS about its absentee in class.All teachers and admin can generate different results as per requirement.1.3. MotivationTaking attendance in a class or in an organization is very mandatory in almost all institutions. Maintaining those attendance records is a very big part for the organization. Important thing is how to take Attendance. Conventional way of taking attendance in schools is by calling of names by teacher, students responding on their roll numbers and putting ‘A’ or ‘P’ on log book accordingly. Other methods of taking attendance are RFID cards, biometric identifiers like fingerprint, palm print, hand geometry, iris recognition etc. But conventional method looks better as it is cheaper and more reliable as observer is human being itself. Although biometric technologies are being applied in many fields it has not yet delivered its promise of guaranteeing automatic human recognition. 
Face recognition is a technique of biometric recognition. It is considered to be one of the most successful applications of image analysis and processing; that is the main reason behind the great attention it has been given in the past several years.The human body has the privilege of having features that are unique and exclusive to each individual. This exclusivity and unique characteristic has led to the field of biometrics and its application in ensuring security in various fields with various embedded controllers and embedded computers. Biometrics has gained popularity and has proved itself to be a reliable mode of ensuring privacy, maintaining security and identifying individuals. It has wide acceptance throughout the globe and now is being used at places like airports, hospitals, schools, colleges, corporate offices etc. Biometrics is the study of identifying a person by their physical traits that are inherent and unique to only the person concerned. Biometric identification includes fingerprint verification, palm geometry, face recognition, iris recognition, etc. The above-mentioned techniques work with different levels of functionality. They all have different levels of accuracy. Accuracy and reliability are the most important factors that are to be considered when it comes to biometric applications.Face recognition is a very big sought out for the problem called biometrics. It has got variety of applications in the modern-day life. The problems that are encountered during the research of the biometrics were not addressed properly. They are addressed properly in the case of facial recognition. The problems that are faced in the traditional system paved a way for the facial recognition. Facial recognition is the very crucial authentication system which is very robust. Facial recognition is one of the newly arrived biometric techniques known but still is the most widely used because of its simplicity and good levels of accuracy. It’s a well-known fact that every human being is born with the different features in the face and this feature is exploited to identify and differentiate between two different persons that is what the factor which helped to initiate the model.1.4. Problem DefinitionTaking attendance is a long process and takes lot of effort and time, especially in case of a class with huge number of students. It is also problematic when an exam is held and it also causes a lot of disturbance for the class. Moreover, the attendance sheet may subject to any kind of damage and loss of data may occur while being passed on between different students or teaching staff. And when the number of students enrolled in a certain course is huge, the lecturers tend to call the names of students randomly which is not fair student evaluation process either. This process could be easy and effective with a small number of students but on the other hand dealing with the records of a large number of students often leads to human error. This error is replaced by this technique which is easy and effective for noting down the attendance of the students in the hall. 1.5. ObjectivesThis approach functions based on the uniqueness of each person and it integrating the biometric device to transmit the information obtained in this approach they are using feature extraction and matching algorithm and they maintaining the database to authenticate the person who approaching for the access in the organisations like airports, hospitals, schools, colleges, corporate offices etc. 
Biometrics is the study of identifying a person by their physical traits that are inherent and unique to only the person concerned. Biometric identification includes fingerprint verification, palm geometry, face recognition, iris recognition, etc. The above-mentioned techniques work with different levels of functionality and accuracy. Accuracy and reliability are the two most important parameters when it comes to biometric applications. Facial recognition is the newly arrived techniques in the biometric techniques known but still is the most widely used because of its simplicity and good levels of accuracy. It’s a well-known fact that every human being is born with the different features on the face and this feature is exploited to identify and differentiate between two different persons that is what the factor which helped to initiate the model.The conventional method of taking attendance by calling names or signing on paper is very time consuming and insecure, hence inefficient. Facial recognition based attendance system is one of the solutions to address this problem.This system can be used to take attendance for student in school, college, and university. It also can be used to take attendance for workers in working placesTo replace the current existing student attendance system process to fully-computerized and automated student attendance system. To develop a desktop-based application that obtains the student facial recognition every time they attend the classes for attendance marking purpose. To generate reports regarding to the student attendance in order to assist the lecturer/staff in analyse and tracking the student attendance. To eliminate the chances for student to ask their buddy sign attendance for them through the implementation of fingerprint attendance system. 1.6. Organisation of the reportChapter 1. Introduction- This chapter gives a brief explanation about the introduction of the proposed system, objectives and scope of the system.Chapter 2. Literature survey- This chapter explains about the literature survey done.Chapter 3. Block Diagram and Flow of Execution – This Chapter gives brief description about Block diagram consisting of components and its flow of execution.Chapter 4. Results – This chapter explains about the code output explained through G codeChapter 5. Conclusion and Future Scope – This chapter describes about the conclusion and future scope of the proposed report.Chapter 2LITERATURE SURVEYSonam Shukla, Pradeep Mishra suggested increasing the Accuracy of an Existing Face Recognition System Using Adaptive Technique, in this approach developer mainly focusing on Integrated Automated Face Identification Service (IAFIS) of the most famous police agencies[1]. They extracted fingerprint pattern is characterized by a set of ridgelines that often flow in parallel, but intersect and terminate at some points. The uniqueness of a fingerprint is determined by the local ridge characteristics and their relationships. Main drawback of this model is this approach is not so apt for real time applications but the accuracy of system is highly adaptable. Most automatic systems for fingerprint comparison are based on minutiae matching. 
Le Hoang Thai and Ha Nhat Tam in 2010 suggested face recognition using standardized face model, now a days, and face recognition is one of the most important biometric technologies based on facial points possessing distinctiveness[2].Jia-Jun Wong and Siu-Yeung Cho in their paper stated that recognizing human emotions from partial facial features is quite hard to achieve reasonable accuracy. In this paper, we propose to use a tree structure representation to simulate as human perceiving the real human face and both the entities and relationship could contribute to the facial expression features. Moreover, a new structural connectionist architecture based on a probabilistic approach to adaptive processing of data structures is presented to generalize the Face emotion tree structures (FEETS)[3]. We demonstrated the robustness of our proposed system in recognizing the correct emotion based on partial face features. The system yields an accuracy of about 90% for subjects with partial face covered by artifacts.In this approach they focused on improving the quality of images acquisition. In face recognition process, the important step which effects on system accuracy is matching between template and query face. This approach functions based on the uniqueness of each person and it integrating the biometric device to transmit the information obtained in this approach they are using face extraction and matching algorithm and they maintaining the database to authenticate the person who approaching for the access through the online web page created in the local server.In the process of system development, literature reviews conducted to understand the theory, methods and technologies associated with systems that have been developed. Background research on the organization and comparative studies of existing systems is also done to understand the system requirements before the system was present.Pengpeng Yu and Yinjie Cao in his paper elevated Computer Vision (CV)-A new research and development tool, appeared for few years, may help to realize the function of face detection and so on[4]. Based on the result of face detection, choose appropriate area to trace using optical flow method, the movement of human head can be recognized.Rupali L. Telgad and Almas Siddiqui explained about the methods of three biometric characteristics are used i.e. Fingerprint, Face, Iris at score level of Fusion. For finger print images two methods are used i.e. Minutiae Extraction and Gabor filter approach. For Iris recognition system Gabor wavelet is used for feature selection[5]. For Face biometric system P.C.A. is used for feature selection. The match count of every trait is calculated. Then the generated result of match and non-match is utilized for the sum score level fusion. Then decision is find out for persons recognition. The system is tested on std. Dataset and KVK data set. On KVK dataset it generates an the results as 99.7 % with FAR of 0.02% and FRR of 0.1% and for FVC 2004 dataset and MMU dataset it gives the result as 99.8 % with FAR of 0.11% and FRR of 0.09%.Pankaj Wasnik and Kiran B. Raja detailed about Applicability of the face recognition for smartphone-based authentication applications is increasing for different domains such as banking and e-commerce[6]. The unsupervised data capture of face characteristics in biometric applications on smartphones presents the vulnerability to attack the systems using artefact samples. 
The threat of presentation attacks (aka spoofing attacks) need to be handled to enhance the security of the biometric system. In this work, we present a new approach of using the raw sensor data. We first obtain the residual image corresponding to noise by subtracting the median filtered version of raw data and then computing simple energy value to detect the artefact based presentations. The presented approach uses simple threshold and thereby overcomes the need for learning complex classifiers which are challenging to work on unseen attacks. The proposed method is evaluated using a newly collected database of 390 live presentation attempts of face characteristics and 1530 attack presentations consisting of electronic screen attacks and printed attacks on the iPhone 6S smartphone. Significantly lower average classification error (<; 3%) achieved demonstrates the applicability of proposed approach for detecting the presentation attacks.Techniques for Face RecognitionTraditional3-dimensional recognitionSkin texture analysisThermal cameras TraditionalSome face recognition algorithms identify facial features by extracting landmarks, or features, from an image of the subject's face. For example, an algorithm may analyze the relative position, size, and/or shape of the eyes, nose, cheekbones, and jaw. These features are then used to search for other images with matching features. Other algorithms normalize a gallery of face images and then compress the face data, only saving the data in the image that is useful for face recognition. A probe image is then compared with the face data. One of the earliest successful systems is based on template matching techniques applied to a set of salient facial features, providing a sort of compressed face representation.Recognition algorithms can be divided into two main approaches, geometric, which looks at distinguishing features, or photometric, which is a statistical approach that distills an image into values and compares the values with templates to eliminate variances. Popular recognition algorithms include principal component analysis using Eigen faces. 3-dimensional recognitionThree-dimensional face recognition technique uses 3D sensors to capture information about the shape of a face. This information is then used to identify distinctive features on the surface of a face, such as the contour of the eye sockets, nose, and chin. One advantage of 3D face recognition is that it is not affected by changes in lighting like other techniques. It can also identify a face from a range of viewing angles, including a profile view. Three-dimensional data points from a face vastly improve the precision of face recognition. 3D research is enhanced by the development of sophisticated sensors that do a better job of capturing 3D face imagery. The sensors work by projecting structured light onto the face. Up to a dozen or more of these image sensors can be placed on the same CMOS chip—each sensor captures a different part of the spectrum. Even a perfect 3D matching technique could be sensitive to expressions. For that goal a group at the Technical ion applied tools from metric geometry to treat expressions as isometries. A new method is to introduce a way to capture a 3D picture by using three tracking cameras that point at different angles; one camera will be pointing at the front of the subject, second one to the side, and third one at an angle. 
All these cameras will work together so it can track a subject’s face in real time and be able to face detect and recognize Skin texture analysisAnother emerging trend uses the visual details of the skin, as captured in standard digital or scanned images. This technique, called skin texture analysis, turns the unique lines, patterns, and spots apparent in a person’s skin into a mathematical space.Tests have shown that with the addition of skin texture analysis, performance in recognizing faces can increase 20 to 25 percent.Thermal camerasA different form of taking input data for face recognition is by using thermal cameras, by this procedure the cameras will only detect the shape of the head and it will ignore the subject accessories such as glasses, hats, or make up. A problem with using thermal pictures for face recognition is that the databases for face recognition are limited. Diego Socolinsky, and Andrea Selinger (2004) research the use of thermal face recognition in real life, and operation sceneries, and at the same time build a new database of thermal face images. The research uses low-sensitive, low-resolution ferro-electric electrics sensors that are capable of acquire long wave thermal infrared (LWIR). The results show that a fusion of LWIR and regular visual cameras has the greater results in outdoor probes. Indoor results show that visual has a 97.05% accuracy, while LWIR has 93.93%, and the Fusion has 98.40%, however on the outdoor proves visual has 67.06%, LWIR 83.03%, and fusion has 89.02%. The study used 240 subjects over the period of 10 weeks to create the new database. The data was collected on sunny, rainy, and cloudy days. Advantages of Facial RecognitionMany published works mention numerous applications in which face recognition technology is already utilized including entry to secured high‐risk spaces such as border crossings as well as access to restricted resources. On the other hand, there are other application areas in which face recognition has not yet been used. The potential application areas of face recognition technology can be outlined as follows: Automated surveillance, where the objective is to recognize and track people.Monitoring closed circuit television (CCTV), the facial recognition capability can be embedded into existing CCTV networks, to look for lost children or other missing persons or tracking known or suspected criminals.Image database investigations, searching image databases of licensed drivers, benefit recipients and finding people in large news photograph and video collections as well as searching in the Facebook social networking web site.Multimedia environments with adaptive human computer interfaces (part of ubiquitous or context aware systems, behavior monitoring at childcare or centers for old people, recognizing customers and assessing their needs).Airplane‐boarding gate, the face recognition may be used in places of random checks merely to screen passengers for further investigation. Similarly, in casinos, where strategic design of betting floors that incorporates cameras at face height with good lighting could be used not only to scan faces for identification purposes, but possibly to afford the capture of images to build a comprehensive gallery for future watch‐list, identification and authentication tasks .Sketch‐based face reconstruction, where law enforcement agencies in the world rely on practical methods to help crime witnesses reconstruct likenesses of faces. 
These methods range from sketch artistry to proprietary computerized composite systems.Forensic applications, where a forensic artist is often used to work with the eyewitness in order to draw a sketch that depicts the facial appearance of the culprit according to his/her verbal description. This forensic sketch is used later for matching large facial image databases to identify the criminals. Yet, there is no existing face recognition system that can be used for identification or verification in crime investigation such as comparison of images taken by CCTV with available database of mugshots. Thus, utilizing face recognition technology in the forensic applications.Face spoofing and anti‐spoofing, where a photograph or video of an authorized person's face could be used to gain access to facilities or services. Hence, the spoofing attack consists in the use of forged biometric traits to gain illegitimate access to secured resources protected by a biometric authentication system. It is a direct attack to the sensory input of a biometric system, and the attacker does not need previous knowledge about the recognition algorithm. Research on face spoof detection has recently attracted an increasing attention, introducing few number of face spoof detection techniques. Thus, developing a mature anti‐spoofing algorithm is still in its infancy and further research is needed for face spoofing applications].There have been envisaged many applications for face recognition, but most of commercial ones exploit only superficially the great potential of this technology. Most of the applications are notable limited in their ability to handle pose, lighting changes or aging.In reference to access control, face verification during face‐based PC logon has become feasible, but seems to be very limited. Naturally, such PC verification system can be extended in the future for authentic single sign‐on to multiple networked services or transaction authorisation or even for access to encrypted files. For example, banking sector is rather conservative in deploying such a biometrics. They estimated high risk in losing customers disaffected by being falsely rejected than they might gain in fraud prevention. It is the reason for robust passive acquisition systems development with low false rejection.The most of physical access control systems uses face recognition combination with other biometrics, for example speaker identification and lip motion.One of the most interest in face recognition in application domain is associated with surveillance. Regarding to the generous type of information it contains, video is the medium of choice for surveillance. For applications that require identification, face recognition is the best biometric for video data. The biggest advantage of this approach is passive participation of subject (human). The whole process of recognition and identification can be carried out without the person's knowledge.Although the development of face recognition surveillance systems has already begun, the technology seems to not accurate enough. It also brings additional problems concerning highly extensive perception in the data gathering and computing side of such complex solutions.Another future domain, where face recognition is expected to become important, is area of pervasive or ubiquitous computing. Computing devices equipped with sensors become more widespread in reference to together networking. 
Such approach will allow envisage a future where the most of everyday objects are going to have some computational power, allowing to precisely adapt their behaviour to various factors including time, user, user control or host. This vision assumes easy information exchange, also including images between devices of different types.Currently, the most of devices have simple user interface, controlled only by active commands on the part of the user. Some of the devices are able to sense environment and acquire information about the physical word and the people within their region of interest. One of the crucial part of smart devices of human awareness is knowing the identity of the users close to a device, even currently implemented in several smartphones with different results. It is important when contributed with other biometrics regarding to passive nature of face recognition. Chapter 3BLOCK DIAGRAM AND FLOW OF EXECUTION3.1. IntroductionA facial recognition system is a computer application capable of identifying or verifying a person from a digital image or a video frame from a video source. One of the ways to do this is by comparing selected facial features from the image and a face database.It is typically used in security systems and can be compared to other biometrics such as fingerprint or eye iris recognition systems.[1] Recently, it has also become popular as a commercial identification and marketing tooAmong the different biometric techniques, face recognition has one key advantage that it does not require the cooperation of the test subject to work. Properly designed systems installed in airports, multiplexes, and other public places can identify individuals among the crowd, without passers-by even being aware of the system. Other biometrics like fingerprints, iris scans, and speech recognition cannot perform this kind of mass identification. The rapid development of society and technology, the demand of security insurance brought by the identification is gradually increasing. Faces have uniqueness just as fingerprints, and people can use them to identify a person’s identity accurately. Face recognition has become a technology which is widely applied in many fields. So, the technology of face recognition has great research value and application prospect. This project studied on design scheme for face recognition system. The system uses MATLAB and LabVIEW together to build a software program to achieve the purpose of accurate recognition of human faces. The system combines the advantages of LabVIEW and MATLAB. Matlab code is used as a MathScript node in LabVIEW which enables LabVIEW to compile and execute Matlab code in it. It can detect and recognize the face in real time, and has the advantages of simple structure, friendly interface, easy to use and so on.3.2. Block diagramFig.3.1 shows the block diagram of the Attendance system using Face recognition by LabVIEW. Blocks contained are web camera, laptop, myRIO, GSM and buzzer. These blocks are explained in the following.Fig 3.1 Block diagram3.2.1. Webcam:179895523691850 A webcam is a video camera that feeds or streams its image in real time to or through a computer to a computer network. When “captured" by the computer, the video stream may be saved, viewed or sent on to other networks via systems such as the internet, and emailed as an attachment. When sent to a remote location, the video stream may be saved, viewed or on sent there. 
Unlike an IP Camera (which connects using Ethernet or Wi-Fi), a webcam is generally connected by a USB cable, or similar cable, or built into computer hardware, such as laptops. The basic webcam is shown below in the fig3.2.Fig 3.2. WebcamThe term "webcam" (a clipped compound) may also be used in its original sense of a video camera connected to the web continuously for an indefinite time, rather than for a particular session, generally supplying a view for anyone who visits its web page over the internet. Some of them, for example, those used as online traffic cameras, are expensive, rugged professional video cameras.3.2.1.1. Characteristics:Webcams are known for their low manufacturing cost and their high flexibility, making them the lowest-cost form of video telephony. Despite the low cost, the resolution offered at present (2015) is rather impressive, with low-end webcams offering resolutions of 320×240, medium webcams offering 640×480 resolution, and high-end webcams offering 1280×720 or even 1920×1080 resolution. 3.2.1.2. Uses:The most popular use of webcams is the establishment of video links, permitting computers to act as videophones or videoconference stations. Other popular uses include security surveillance, computer vision, video broadcasting, and for recording social videos. The video streams provided by webcams can be used for a number of purposes, each using appropriate software:3.2.1.3. Health care:Most modern webcams are capable of capturing arterial pulse rate by the use of a simple algorithmic trick. Researchers claim that this method is accurate to ±5 bpm.3.2.1.4. Video monitoring:Webcams may be installed at places such as childcare centers, offices, shops and private areas to monitor security and general activity.3.2.1.5. Commerce:Webcams have been used for augmented reality experiences online. One such function has the webcam act as a "magic mirror" to allow an online shopper to view a virtual item on themselves. The webcam social shopper is one example of software that utilizes the webcam in this manner. 3.2.1.6. Video calling and videoconferencing:Webcam can be added to instant messaging, text chat services such as AOL instant messenger, and VoIP services such as skype, one-to-one live video communication over the Internet has now reached millions of mainstream PC users worldwide. Improved video quality has helped webcams encroach on traditional video conferencing systems. New features such as automatic lighting controls, real-time enhancements (retouching, wrinkle smoothing and vertical stretch), automatic face tracking and autofocus, assist users by providing substantial ease-of-use, further increasing the popularity of webcams.Webcam features and performance can vary by program, computer operating system, and also by the computer's processor capabilities. Video calling support has also been added to several popular instant messaging programs.3.2.1.7. Video security:Webcams can be used as security cameras. Software is available to allow PC-connected cameras to watch for movement and sound, recording both when they are detected. These recordings can then be saved to the computer, e-mailed, or uploaded to the Internet. In one well-publicized case, a computer e-mailed images of the burglar during the theft of the computer, enabling the owner to give police a clear picture of the burglar's face even after the computer had been stolen.3.2.1.8. Video clips and stills:Webcams can be used to take video clips and still pictures. 
Various software tools in wide use can be employed for this, such as Pic master (for use with windows operating systems), photo booth (Mac), or cheese (with UNIX systems). 3.2.1.9. Input control devices:Special software can use the video stream from a webcam to assist or enhance a user's control of applications and games. Video features, including faces, shapes, models and colors can be observed and tracked to produce a corresponding form of control. For example, the position of a single light source can be tracked and used to emulate a mouse pointer, a head-mounted light would enable hands-free computing and would greatly improve computer accessibility. This can be applied to games, providing additional control, improved interactivity and impressiveness.Free track is a free webcam motion-tracking application for Microsoft windows that can track a special head-mounted model in up to six degrees of freedom and output data to mouse, keyboard, joystick and Free-Track support games. By removing the IR filter of the webcam, IR LEDs can be used, which has the advantage of being invisible to the naked eye, removing a distraction from the user. TrackIR is a commercial version of this technology.Small webcam-based PC games are available as either standalone executables or inside web browser windows using Adobe flash. 3.2.1.10. Astro photography:With very-low-light capability, a few specific models of webcams are very popular to photograph the night sky by astronomers and Astor photographers. Mostly, these are manual-focus cameras and contain an old CCD array instead of comparatively newer CMOS array. The lenses of the cameras are removed and then these are attached to telescopes to record images, video, still, or both. In newer techniques, videos of very faint objects are taken for a couple of seconds and then all the frames of the video are "stacked" together to obtain a still image of respectable contrast.3.2.1.11. Laser beam profiling:A webcam's CCD response is linear proportional to the incoming light. Therefore, webcams are suitable to record laser beam profiles, after the lens is removed. The resolution of a laser beam profiler depends on the pixel size. Commercial webcams are usually designed to record color images. The size of a webcam's color pixel depends on the model and may lie in the range of 5 to 10?m. However, a colour pixel consists of four black and white pixels each equipped with a colour filter. Although these colour filters work well in the visible, they may be rather transparent in the near infra-red. By switching a webcam into the Bayer-mode it is possible to access the information of the single pixels and a resolution below 3?m was possible.3.2.2 MY RIO:3.2.2.1. Introduction:MyRIO is a real-time embedded evaluation board made by National Instruments. It is used to develop applications that utilize its on-board FPGA and microprocessor. It requires LabVIEW. Fig 3.3. MyRIOIt features a 667 MHz dual-core ARM Cortex-A9 programmable processor and a customizable Xilinx field programmable gate array (FPGA). The NI myRIO device features the Zynq-7010. All Programmable system on a chip (SoC) to unleash the power of NI LabVIEW. System design software both in a real-time (RT) application and on the FPGA level. Rather than spending copious amounts of time debugging code syntax or developing user interfaces, students can use the LabVIEW graphical programming paradigm to focus on constructing their systems and solving their design problems without the added pressure of a burdensome tool. 
LabVIEW differs from most other development platforms in the regard that it depends on ‘visual programming’ more than actual coding. In simple words, you use either predefined or custom components to complete programming tasks. For example, to create a reiterating series, all you need to do is drag and drop the corresponding loop functionality onto the block diagram. Add your conditions, outputs, connections, and you’re good to go!The language used in LabVIEW is also called ‘G’, which is completely different from the numeric control programming language G-code or G programming language. G is a dataflow programming language, which means that it models its programs on directed paths. Or simply, it is a dynamic path based language that changes outputs depending on variables in the program flow, which can be changed to reflect output changes. It is a highly versatile programming language that reflects changes in real time, suitable for real world applications.3.2.2.2. Hardware Overview:The NI myRIO-1900 provides analog input (AI), analog output (AO), digital input and output (DIO), audio, and power output in a compact embedded device. The NI myRIO-1900 connects to a host computer over USB and wireless?Fig 3.4. NI myRIO-1900 Hardware Block Diagram3.2.2.3. Connector Pinouts:NI myRIO-1900 Expansion Port (MXP) connectors A and B carry identical sets of signals. The signals are distinguished in software by the connector name, as in Connector A/DIO1 Connector B/DIO1 and using signals. The following figure and table show the signals on MXP connectors A and B. Fig 3.5. Primary and secondary signals of MyRIO3.2.2.4. Analog Input Channels:The NI myRIO-1900 has analog input channels on myRIO Expansion Port (MXP) connectors A and B, Mini System Port (MSP) connector C, and a stereo audio input connector. The analog inputs are multiplexed to a single analog-to-digital converter (ADC) that samples all channels. MXP connectors A and B have four single-ended analog input channels per connector, AI0-AI3, which you can use to measure 0-5 V signals. MSP connector C has two high-impedance, differential analog input channels, AI0 and AI1, which you can use to measure signals up to ±10 V. The audio inputs are left and right stereo line-level inputs with a ±2.5 V full-scale range.3.2.2.5. Analog Output Channels:The NI myRIO-1900 has analog output channels on myRIO Expansion Port (MXP) connectors A and B, Mini System Port (MSP) connector C, and a stereo audio output connector. Each analog output channel has a dedicated digital-to-analog converter (DAC), so they can all update simultaneously. The DACs for the analog output channels are controlled by two serial communication buses from the FPGA. MXP connectors A and B share one bus, and MSP connector C and the audio outputs share a second bus. Therefore, the maximum update rate is specified as an aggregate figure in the MXP connectors A and B have two analog output channels per connector, AO0 and AO1, which you can use to generate 0-5 V signals. MSP connector C has two analog output channels, AO0 and AO1, which you can use to generate signals up to ±10 V. The audio outputs are left and right stereo line-level outputs capable of driving headphones. Fig 3.6. pin configuration in MyRIO3.2.2.6. DIO Lines:The NI myRIO-1900 has 3.3 V general-purpose DIO lines on the MXP and MSP connectors. MXP connectors A and B have 16 DIO lines per connector. On the MXP connectors, each DIO line from 0 to 13 has a 40 k? pullup resistor to 3.3 V, and DIO lines 14 and 15 have 2.2 k? 
pullup resistors to 3.3 V. MSP connector C has eight DIO lines. Each MSP DIO line has a 40 k? pulldown resistor to ground. DGND is the reference for all the DIO lines. You can program all the lines individually as inputs or outputs. Secondary digital functions include Serial Peripheral.3.2.2.7. Ground Connections: The USB connector shields and the mounting holes are connected together internally to form chassis ground. Chassis ground is shorted to digital ground near the USB Host connector. When connecting the NI myRIO-1950 to external devices, ensure that stray ground currents do not use the NI myRIO-1950 as a return path. Significant stray currents can cause device failure. After final assembly of your system, use a current probe to compare the current flowing out of the power connector with the current flowing into the power connector. Investigate and remove any current differences.3.2.3 GSM Module:GSM is a mobile communication modem; it is stands for global system for mobile communication (GSM). The idea of GSM was developed at Bell Laboratories in 1970. ?It is widely used mobile communication system in the world. GSM is an open and digital cellular technology used for transmitting mobile voice and data services operates at the 850MHz, 900MHz, 1800MHz and 1900MHz frequency bands.GSM system was developed as a digital system using time division multiple access (TDMA) technique for communication purpose. A GSM digitizes and reduces the data, then sends it down through a channel with two different streams of client data, each in its own particular time slot. The digital system has an ability to carry 64 kbps to 120 Mbps of data rates.There are various cell sizes in a GSM system such as macro, micro, pico and umbrella cells. Each cell varies as per the implementation domain. There are five different cell sizes in a GSM network macro, micro, pico and umbrella cells. The coverage area of each cell varies according to the implementation environment.center23114000 Fig 3.7. GSM module3.2.3.1. Time Division Multiple Access:TDMA technique relies on assigning different time slots to each user on the same frequency. It can easily adapt to data transmission and voice communication and can carry 64kbps to 120Mbps of data rate.3.2.3.2. GSM Architecture:A GSM network consists of the following components:A Mobile Station:? It is the mobile phone which consists of the transceiver, the display and the processor and is controlled by a SIM card operating over the network.Base Station Subsystem: It acts as an interface between the mobile station and the network subsystem. It consists of the Base Transceiver Station which contains the radio transceivers and handles the protocols for communication with mobiles. It also consists of the Base Station Controller which controls the Base Transceiver station and acts as a interface between the mobile station and mobile switching work Subsystem: It provides the basic network connection to the mobile stations. The basic part of the Network Subsystem is the Mobile Service Switching Centre which provides access to different networks like ISDN, PSTN etc. It also consists of the Home Location Register and the Visitor Location Register which provides the call routing and roaming capabilities of GSM. It also contains the Equipment Identity Register which maintains an account of all the mobile equipments wherein each mobile is identified by its own IMEI number. IMEI stands for International Mobile Equipment Identity.3.2.3.3. 
GSM module Pin diagram:User can power on the GPRS module by pulling down the PWR button or the P pin of control interface for at least 1 second and release. This pin is already pulled up to 3V in the module internal, so external pull up is not necessary. When power on procedure is completed, GPRS module will send following URC to indicate that the module is ready to operate at fixed baud rate.Fig.3.8. GSM module Pin diagram3.2.3.4. Features of GSM Module:Improved spectrum efficiencyInternational roamingCompatibility with integrated services digital network (ISDN)Support for new services.SIM phonebook managementFixed dialing number (FDN)Real time clock with alarm managementHigh-quality speechUses?encryption?to make phone calls more secureShort message service (SMS)The security strategies standardized for the GSM system make it the most secure telecommunications standard currently accessible. Although the confidentiality of a call and secrecy of the GSM subscriber is just ensured on the radio channel, this is a major step in achieving end-to- end security.3.2.4 BuzzerA buzzer or beeper is a signaling device, usually electronic, typically used in automobiles, household appliances such as a microwave oven, or game shows.It most commonly consists of a number of switches or sensors connected to a control unit that determines if and which button was pushed or a preset time has lapsed, and usually illuminates a light on the appropriate button or control panel, and sounds a warning in the form of a continuous or intermittent buzzing or beeping sound. Initially this device was based on an electromechanical system which was identical to an electric bell without the metal gong (which makes the ringing noise). Often these units were anchored to a wall or ceiling and used the ceiling or wall as a sounding board. Another implementation with some AC-connected devices was to implement a circuit to make the AC current into a noise loud enough to drive a loudspeaker and hook this circuit up to a cheap 8-ohm speaker. Nowadays, it is more popular to use a ceramic-based piezoelectric sounder like a Sonalert which makes a high-pitched tone. Usually these were hooked up to "driver" circuits which varied the pitch of the sound or pulsed the sound on and off.The word "buzzer" comes from the rasping noise that buzzers made when they were electromechanical devices, operated from stepped-down AC line voltage at 50 or 60 cycles. Other sounds commonly used to indicate that a button has been pressed are a ring or a beep. Some systems, such as the one used on Jeopardy make no noise at all, instead using light.Fig.3.9. Buzzer3.3 Connection DiagramWebcam is connected through USB to the laptop as acquisition is done using Vision acquisition. Myrio is connected to the laptop using the USB wire for deployment of code. Connections between MyRio and GSM module , Myrio and Buzzer are as shown in the Fig.3.9Fig.3.10 Connection diagramReal connections of Webcam, Laptop, MyRIO, GSM module and Buzzer are shown in the Fig.3.10 and Fig.3.11Fig.3.11. Connection of MyRIO, GSM module and Buzzer3.4 Flow of executionAs each block is explained in the block diagram. Here, in flow of execution, how the code flow goes will be explained.Once the code is started, we set all the controls and indicators in the front panel are all initialized to the set default values. Later face will be acquired from the web camera either for storing images in database or for face recognition. 
The acquired image is processed using the available predefined or user defined VI’s. Thus image processing will take place.If the image is for face acquisition then image will be stored in the desired location of the database. Else, if the image is for face recognition, the image will be stored in a temporary location. This location is called Test Database. This Test Database serves as a storing location and also as an input to the recognition process. In face recognition, Eigen values for the temporary image are generated along with the generation of the Eigen values for each image in the Train Database, which is the main database. Once the generation of these Eigen values generation is done, temporary image Eigen values are compared with each and every image Eigen values in the Train Database. All the images in Train Database are converted from 2D matrix of images to 1D matrix of Eigen vectors by using a process of Vectorization. For this process of Vectorization and also for face recognition Matlab is used in the form of MathScript Node in Labview. Eigen value of the image from Test Database is matched within a threshold and a limit with the Eigen values of the images in Train database. Output of this will be whether the image is found or not. If the image is found, a phone number or an Email ID linked with that image will be loaded from the database and message will be sent to them to inform that their ward has attended the college. Once all the process is done, the code stops. But this is a loop code, where each loop is executed for a single image. On the other side, to notify the student that their attendance has been posted, a buzzer is used to give a beep signal. The advantage of this code lies in memory management. If the flow of input images is too high, we can store all these temporary images in the test database and once the flow reduces, all the images in the test database undergoes process of face recognition individually. Where the process mentioned above will be repeated for every image. Fig 3.12. FlowchartChapter 4RESULTS4.1 Front panelFront panel for the “Attendance system using Face recognition by LabVIEW” is shown below in Fig.4.1Fig.4.1. Front panel Complete code depends on the enum control shown in the front panel. Enum has five items declared in it. They are waiting state, Acquisition for train database, Acquisition for test database, Recognition state and Msg state. Depending upon the choosen iten in the Enum Other control and indicators work.‘Save?’ control is used in Acquisition for train database and Acquisition for test database cases. When it is pressed one of the stream of incoming images through webcam will be stored to the declared place.‘Roll.no’ control is used in the Acquisition for train database and Acquisition for test database. The input given through this is the name used to store the image in the folder.‘Mobile number’ control is used before recognition state as Msg state follows Recognition state continuously. So once the recognition is done, message will be sent to the mentioned mobile number in the control.‘Capture’ control will be used in the Acquisition for the train database and Acquisition for test database. 
When the Boolean is given an input the acquisition through camera will be stopped.‘Stop’ is the final control and main one, as its usage is to stop the code at any stage to do modifications or whenever there occurs an error.‘Image out’ is the only indicator which will be given input only in the states Acquisition for train database and Acquisition for test database. Hence before image is going to be stored in the file, it will be displayed through this.4.2. Waiting stageInitially when the program is run, it has to wait until an event is occurred by the user, hence this stage is used for the code to wait for an input. Here we use an event structure, where it depends only on the enum that decides the state to run. So when user selects the state program moves from waiting state to selected state. Fig 4.2. Waiting state in front panel4.2 Acquisition for train databaseFirstly train database is the database, where 2 images of each person are stored. These images are used to identify the person whose attendance has to be registered. Here we acquire images using webcam. When we press the acquisition save button by giving number, with which image has to be saved, then image will be stored in the train database. Here advatage is user can store more than two images of a particular person in the train database, which gives more precise output.Once user decides that storing images is done, he/she can switch to either waiting state or any other remaining states using enum local variable.Fig 4.3. Images in train databaseComing to code we use vision acquisition icon to acquire images. Storing images can be done directly but to have more accurate results of recognition we convert the images into black and white, and then store. To convert image into black and white, we have followed two steps.Step-1: Supply acquired image to NI_Vision_Development_Module.lvlib:IMAQ Cast Image. This converts the current image type to the image type specified by image type. Here choosen image type is RGB(U32).Step-2: Image acquired in step 1 is supplied as input to NI_Vision_Development_Module.lvlib:IMAQ ExtractSingleCilorPlane. It extracts a single plane from a color image, which depends on the input type of color plane. Here color plane used is luminance.All the acquired images go through these steps but only the permitted ones using acquisition save are saved into train database.Fig 4.4. Acquisition for train database state in front panel4.3 Acquisition for test databaseIn this state, image is acquired same as in acquisition for train database, that is we use vision acquisition icon to acquire images. Here only one is saved into test database from a steam of acquired images using recognition save button. Test database is the database which contains images that has to be compared with train database for recognition. Fig 4.5. Images in test databaseAdvantage of the test database used here is if the recognition failed because of any error, we can again start recognition process with the image stored in the test database without causing the person problem of standing there for a long time. Otherwise we can directly start recognition process after image is stored into test database.Image that is stored into test database is also a black and white one. Hence to convert the stream of acquired colour images into black and white, we follow the below two steps.Step-1: Supply acquired image to NI_Vision_Development_Module.lvlib:IMAQ Cast Image. This converts the current image type to the image type specified by image type. 
Here chosen image type is RGB(U32).Step-2: Image acquired in step 1 is supplied as input to NI_Vision_Development_Module.lvlib:IMAQ ExtractSingleCilorPlane. It extracts a single plane from a colour image, which depends on the input type of colour plane. Here colour plane used is luminance.Another advantage is same as train database, we can store multiple images of single person in test database if necessary. Once user decides that storing images is done, he/she can switch to either waiting state or any other remaining states using enum local variable.Fig 4.6. Acquisition for test database state in front panel4.4 Recognition stateIn the recognition state, we get a popup window to choose the image stored from the test database that has to recognize. Here recognition is done using MATLAB. Once we enter the image name, which we gave during acquisition for test database state, it scans through the train database that matches the image. If the matching image that is face is recognized, then test image and matched image pops up showing them.Once recognition is done, it moves onto the message state. Fig 4.7. Recognition state in front panel4.5 Message stateOnce the face recognition is done, a message is sent to the desired person whose phone number is given through the front panel mobile number control. Here a GSM module is used to send message. Once the message is sent to the person, a buzzer gives small sound as signal indicating recognition is done.Now it’s time to wait for the new user to come. So we go back to waiting state.When message is sent along with that a buzzer at the camera will make sound to notify the person in front of it that the attendance is taken.Fig 4.8. Message state in front panelOnce this state runs a message is sent to the mentioned mobile number as showin in the Fig.4.8Fig.4.9. Message is sent to the mobile CHAPTER 5CONCLUSION AND FUTURE SCOPE5.1 Conclusion:In this project, a face recognition system is accomplished by using the software of virtual instrument software LabVIEW and MATLAB. LabVIEW has simple structure, convenient programming, and friendly interface and is easy to use. MATLAB has rich functions to complete the complex algorithm. The advantages of the two software’s are integrated in this system. This system is simple and reliable to realize the face image recognition. The objectives that are mentioned in the beginning of the report are achieved and results are verified in the real time environment. 5.2 Future scope:Face recognition systems used today work very well under constrained conditions, although all systems work much better with frontal mug-shot images and constant lighting. All current face recognition algorithms fail under the vastly varying conditions under which humans need to and are able to identify other people. Next generation person recognition systems will need to recognize people in real-time and in much less constrained situations.Cameras and microphones today are very small, light-weight and have been successfully integrated with wearable systems. Audio and video based recognition systems have the critical advantage that they use the modalities humans use for recognition. 
4.6 Message state
Once face recognition is done, a message is sent to the desired person, whose phone number is given through the mobile number control on the front panel. A GSM module is used to send the message. Once the message is sent, a buzzer at the camera gives a short sound to notify the person in front of it that the attendance has been taken. The program then returns to the waiting state to wait for the next user.

Fig 4.8. Message state in front panel

Once this state runs, a message is sent to the mentioned mobile number, as shown in Fig. 4.8.

Fig 4.9. Message is sent to the mobile

CHAPTER 5
CONCLUSION AND FUTURE SCOPE

5.1 Conclusion:
In this project, a face recognition system is accomplished using the virtual instrument software LabVIEW together with MATLAB. LabVIEW has a simple structure, convenient programming, and a friendly interface, and is easy to use. MATLAB has rich functions for implementing complex algorithms. The advantages of the two software packages are integrated in this system, which is simple and reliable for realizing face image recognition. The objectives mentioned at the beginning of the report were achieved, and the results were verified in a real-time environment.

5.2 Future scope:
Face recognition systems used today work very well under constrained conditions, although all systems work much better with frontal mug-shot images and constant lighting. All current face recognition algorithms fail under the vastly varying conditions under which humans need to, and are able to, identify other people. Next-generation person recognition systems will need to recognize people in real time and in much less constrained situations. Cameras and microphones today are very small and lightweight and have been successfully integrated with wearable systems. Audio- and video-based recognition systems have the critical advantage that they use the modalities humans use for recognition. Finally, researchers are beginning to demonstrate that unobtrusive audio- and video-based person identification systems can achieve high recognition rates without requiring the user to be in highly controlled environments.

APPENDIX

MATLAB
The name MATLAB stands for Matrix Laboratory. MATLAB was originally written to provide easy access to matrix software developed by the LINPACK (linear system package) and EISPACK (eigensystem package) projects. MATLAB is a high-performance language for technical computing. It integrates computation, visualization, and a programming environment. Furthermore, MATLAB is a modern programming language environment: it has sophisticated data structures, contains built-in editing and debugging tools, and supports object-oriented programming. These factors make MATLAB an excellent tool for teaching and research. MATLAB has many advantages compared to conventional computer languages (e.g., C, FORTRAN) for solving technical problems. MATLAB is an interactive system whose basic data element is an array that does not require dimensioning. The software package has been commercially available since 1984 and is now considered a standard tool at most universities and industries worldwide.

The major tools within or accessible from the desktop are:
- The Command Window
- The Command History
- The Workspace
- The Current Directory
- The Help Browser
- The Start button

imread:
The imread function supports four general syntaxes, described below, as well as several other format-specific syntaxes. A = imread(filename,fmt) reads a grayscale or color image from the file specified by the string filename, where the string fmt specifies the format of the file. If the file is not in the current directory or in a directory on the MATLAB path, specify the full pathname of the location on your system. If imread cannot find a file named filename, it looks for a file named filename.fmt. imread returns the image data in the array A. If the file contains a grayscale image, A is a two-dimensional (M-by-N) array. If the file contains a color image, A is a three-dimensional (M-by-N-by-3) array. The class of the returned array depends on the data type used by the file format. For most file formats, the color image data returned uses the RGB color space. For TIFF files, however, imread can return color data that uses the RGB, CIELAB, ICCLAB, or CMYK color spaces. If the color image uses the CMYK color space, A is an M-by-N-by-4 array.

Syntax:
A = imread(filename,fmt)
[X,map] = imread(filename,fmt)
[...] = imread(filename)
[...] = imread(URL,...)
[...] = imread(...,idx) (CUR, GIF, ICO, and TIFF only)
[...] = imread(...,'PixelRegion',{ROWS, COLS}) (TIFF only)
[...] = imread(...,'frames',idx) (GIF only)
[...] = imread(...,ref) (HDF only)
[...] = imread(...,'BackgroundColor',BG) (PNG only)
[A,map,alpha] = imread(...) (ICO, CUR, and PNG only)
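As a quick illustration (with a hypothetical file name), a single call is enough to load an image and inspect its dimensions:

% Read a colour image and check its size.
A = imread('face1.png');   % hypothetical file in the current directory
size(A)                    % e.g. ans = 480 640 3 for an RGB image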
uigetdir:
uigetdir displays a dialog box enabling the user to browse through the directory structure and select a directory. directory_name = uigetdir opens a dialog box in the current directory displaying the default title. directory_name = uigetdir('start_path') opens a dialog box in the directory specified by start_path. directory_name = uigetdir('start_path','dialog_title') opens a dialog box with the specified title. directory_name = uigetdir('start_path','dialog_title',x,y) positions the dialog box at position [x,y], where x and y are the distances in pixel units from the left and top edges of the screen. This feature is only supported on UNIX platforms.

Syntax:
directory_name = uigetdir
directory_name = uigetdir('start_path')
directory_name = uigetdir('start_path','dialog_title')
directory_name = uigetdir('start_path','dialog_title',x,y)

title:
title(txt) adds the specified title to the axes or chart returned by the gca command. Reissuing the title command causes the new title to replace the old title. title(target,txt) adds the title to the axes, legend, or chart specified by target. title(___,Name,Value) modifies the title appearance using one or more name-value pair arguments. For example, 'FontSize',12 sets the font size to 12 points. Specify name-value pair arguments after all other input arguments. Modifying the title appearance is not supported for all types of charts. t = title(___) returns the object used for the title. Use t to make future modifications to the title.

Syntax:
title(txt)
title(target,txt)
title(___,Name,Value)
t = title(___)

strcat:
s = strcat(s1,...,sN) horizontally concatenates s1,...,sN. Each input argument can be a character array, a cell array of character vectors, or a string array. If any input is a string array, then the result is a string array. If any input is a cell array, and none are string arrays, then the result is a cell array of character vectors. If all inputs are character arrays, then the result is a character array. For character array inputs, strcat removes trailing ASCII white-space characters: space, tab, vertical tab, newline, carriage return, and form feed. For cell and string array inputs, strcat does not remove trailing white space.

Syntax:
s = strcat(s1,...,sN)
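In this project's context, these two functions combine naturally: one can let the user pick the database folder and then assemble an image file name. The folder and file names below are hypothetical.

% Ask the user for the train database folder, then build a file name.
db   = uigetdir(pwd, 'Select the train database folder');
file = strcat(db, '/person1_1.png')   % e.g. '/home/user/train_db/person1_1.png'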
imshow:
imshow(I) displays the grayscale image I in a figure. imshow optimizes figure, axes, and image object properties for image display. imshow(I,[low high]) displays the grayscale image I, specifying the display range as a two-element vector, [low high]. For more information, see the Display Range parameter. imshow(I,[]) displays the grayscale image I, scaling the display based on the range of pixel values in I. imshow uses [min(I(:)) max(I(:))] as the display range; it displays the minimum value in I as black and the maximum value as white. imshow(RGB) displays the truecolor image RGB in a figure. imshow(BW) displays the binary image BW in a figure; for binary images, imshow displays pixels with the value 0 (zero) as black and 1 as white. imshow(X,map) displays the indexed image X with the colormap map. A colormap matrix can have any number of rows, but it must have exactly 3 columns. Each row is interpreted as a color, with the first element specifying the intensity of red, the second green, and the third blue. Color intensity can be specified on the interval 0.0 to 1.0. imshow(filename) displays the image stored in the graphics file specified by filename. imshow(___,Name,Value) displays an image, using name-value pairs to control aspects of the operation. himage = imshow(___) returns the image object created by imshow.

Syntax:
imshow(I)
imshow(I,[low high])
imshow(I,[])
imshow(RGB)
imshow(BW)
imshow(X,map)
imshow(filename)
imshow(___,Name,Value)
himage = imshow(___)
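Continuing the hypothetical example above, displaying a stored image alongside its grayscale version takes only a few lines:

% Show an image and its grayscale version in two figures.
rgb = imread('face1.png');   % hypothetical file
imshow(rgb)                  % truecolor display
figure
imshow(rgb2gray(rgb))        % grayscale display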
LabVIEW
LabVIEW is a graphical programming environment you can use to quickly and efficiently create applications with professional user interfaces. Millions of engineers and scientists use LabVIEW to develop sophisticated measurement, test, and control system applications using intuitive icons and wires. In addition, the LabVIEW platform is scalable across different targets and operating systems (OSs). LabVIEW offers integration with thousands of hardware devices and provides hundreds of built-in libraries for advanced analysis and data visualization, so you can create virtual instruments customized to your needs. Because LabVIEW programs imitate the appearance and operation of physical instruments, such as oscilloscopes and multimeters, LabVIEW programs are called virtual instruments or, more commonly, VIs. VIs have front panels and block diagrams. The front panel is the user interface; the block diagram is the programming behind the user interface. After you build the front panel, you add code using graphical representations of functions to control the front panel objects. The code on the block diagram is graphical code, also known as G code or block diagram code.

In contrast to text-based programming languages like C++ and Visual Basic, LabVIEW uses icons instead of lines of text to create applications. In text-based programming, instructions determine the order of program execution; LabVIEW instead uses graphical dataflow programming, in which the flow of data through the nodes on the block diagram determines the execution order. Graphical programming and dataflow execution are the two major ways LabVIEW differs from most other general-purpose programming languages. LabVIEW lets you create data acquisition applications using three steps: acquire, analyze, and present. You can develop applications on a Windows, Mac OS, or Linux system, and you can deploy LabVIEW applications to a variety of real-time and FPGA targets. Navigating the LabVIEW environment involves using its menus, toolbars, palettes, tools, help, and common dialog boxes; running a VI requires a general understanding of the front panel and block diagram.

LabVIEW Characteristics
LabVIEW programs have the following characteristics:
- A graphical and compiled nature.
- Dataflow and/or event-based programming.
- Multi-target and platform capabilities.
- Object-oriented flexibility.
- Multi-threading possibilities.

1. Graphical and Compiled
While represented graphically, with icons and wires instead of text, G code on the block diagram contains the same programming concepts found in most traditional languages. For example, G code includes data types, loops, event handling, variables, recursion, and object-oriented programming. LabVIEW compiles G code directly to machine code so that the computer processors can execute it; you do not have to compile G code in a separate step.

2. Dataflow and Event-Driven Programming
LabVIEW programs execute according to dataflow programming rules instead of the procedural approach found in most text-based programming languages such as C and C++. Dataflow execution is data-driven, or data-dependent: the flow of data between nodes in the G code determines the execution order. Event-driven programming features extend the LabVIEW dataflow environment to allow the user's direct interaction with the program without the need for polling. Event-based programming also allows other asynchronous activity to influence the execution of G code on the block diagram.

MathScript Module
The LabVIEW MathScript Module adds textual math to the LabVIEW development environment with a native compiler for the .m files you have developed in MATLAB® or GNU Octave software. You can blend textual and graphical approaches for algorithm development, signal processing, control design, and data analysis tasks, and deploy your .m code to real-time hardware without extra code-generation steps. LabVIEW MathScript RT is an add-on module for the LabVIEW Full and Professional Development Systems. It is designed to natively add text-based signal processing, analysis, and math to the graphical development environment of LabVIEW. With more than 800 built-in functions, LabVIEW MathScript RT gives you the ability to either run your existing custom .m files or create them from scratch. Using this native solution for text-based math, you can combine graphical and textual programming within LabVIEW, because the text-based engine is part of the LabVIEW environment. With LabVIEW MathScript RT, you can choose whether graphical or textual programming is most appropriate for each aspect of your application.

MathScript RT Module – the add-on product for the LabVIEW development system; it contains the technologies listed below.

MathScript – the engine that accepts general .m file syntax and translates it into the G language of LabVIEW. The MathScript engine does much of the behind-the-scenes work discussed later in this article.

MathScript Interactive Window – one of two methods for interacting with the MathScript engine. It is a floating window accessed from the LabVIEW toolbar and is intended for developing your .m files.

MathScript Node – the other method for interacting with the MathScript engine. The MathScript Node is a structure on the LabVIEW block diagram and is accessed from the Functions palette. Although sufficiently useful for developing your .m files, the primary function of the MathScript Node is to execute your .m files in line with LabVIEW G code.
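As a small illustration of what such in-line textual math looks like, the following .m fragment could sit inside a MathScript Node. In the node, x would arrive from an input terminal and y would leave through an output terminal; here x is simulated so the fragment runs standalone, and the signal names are hypothetical.

% Smooth a noisy input signal with a 5-point moving average.
x = sin(2*pi*0.05*(0:99)) + 0.2*randn(1,100);   % stand-in for a wired-in signal
y = filter(ones(1,5)/5, 1, x);                  % 5-point moving average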
MathScript allows you to reuse your existing .m files without having to rewrite them
Simplifying IP reuse is quickly becoming a must-have in any modern-day software application. Every software environment has strengths and weaknesses relative to the others, and today's user is increasingly adept at using multiple environments within the same application. Most .m file environments, such as The MathWorks, Inc. MATLAB® software and Digiteo Scilab, are great tools for algorithm development, and the .m file has become a general syntax used by many different environments. As with many companies, you probably have a library of IP that you (or someone else at your company) have spent years developing and perfecting. There is no reason to reimplement that IP in a different language: the LabVIEW MathScript RT Module lets you simply import your existing .m files and run them as part of your LabVIEW program.

MathScript allows you to perform your analysis while you are acquiring your data
Raw data from the real world does not always immediately convey useful information. Usually, you must transform the signal, remove noise disturbances, correct for data corrupted by faulty equipment, or compensate for environmental effects such as temperature and humidity. For that reason, signal processing, which is the analysis, interpretation, and manipulation of signals, is a fundamental need in virtually all engineering applications. Most vendors of data acquisition hardware provide some sort of interface that gives you the ability to acquire and save your data to a file. Whether that interface is a proprietary software product or a DLL with function calls from ANSI C or C++, the process is generally straightforward for an experienced programmer. Likewise, most math packages provide the built-in functions needed to fully analyze your data, whether that requires filtering, transforms, or noise reduction. However, the problem generally lies in the movement of data between these applications, because you cannot perform the analysis of the signal while you are acquiring it. This might seem trivial, but it matters when you need to act on the results of that analysis or correlate anomalies in the data with events in the real world. The LabVIEW MathScript RT Module gives you the power to run your .m files in line with the acquisition of data, meaning the analysis happens as you acquire the data, providing results in real time.

LabVIEW provides a built-in graphical user interface for your .m files
A challenge that users of traditional .m file environments face is the development of graphical user interfaces (GUIs). A GUI adds interaction to algorithm development, giving you the ability to add a simple knob or slider to see how your algorithm responds to varying input variables. LabVIEW contains a comprehensive collection of drag-and-drop controls and indicators, so you can quickly and easily create user interfaces for your application and effectively visualize results without integrating third-party components or building views from scratch. The quick drag-and-drop approach does not come at the expense of flexibility: power users can customize the built-in controls via the Control Editor and programmatically control UI elements to create highly customized user experiences.
Deploy custom .m files to embedded hardware
The LabVIEW MathScript RT Module delivers the ability to deploy .m files directly to real-time hardware, with no code rewrites and no translating to ANSI C. This is significant because there is currently no other direct methodology for doing this. Many scientists and engineers who develop mathematical algorithms do so in one of several .m file environments. A primary challenge of these highly abstract .m file languages is that they lack some key characteristics necessary for deployment to embedded hardware. These languages are loosely typed, which means that the data type of a variable can change at run time without explicit casting. Although this can be valuable in a desktop environment where memory is abundant, dynamically changing a variable's data type during an operation introduces jitter, which could violate the application's timing constraints in a real-time scenario. The lack of explicit resource management functions and timing constructs further complicates deployment to embedded hardware.
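A two-line .m fragment shows the kind of loose typing at issue; nothing in the language prevents a variable from changing type mid-program:

% In loosely typed .m code, the type of x is decided at run time:
x = 3.14;      % x is a double scalar
x = 'hello';   % x is now a char array, with a different memory footprint
% On an embedded target, such re-typing can introduce jitter.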
The main LabVIEW structures and VIs used in this project are described below.

1. While Loop: Repeats the code within its subdiagram until a specific condition occurs. A While Loop always executes at least once. The While Loop is located on the Structures palette. Select the While Loop from the palette and then use the cursor to drag a selection rectangle around the section of the block diagram you want to repeat. When you release the mouse button, a While Loop boundary encloses the section you selected.

2. Case Structure: Has two or more subdiagrams, or cases. Only one subdiagram is visible at a time, and the structure executes only one case at a time. An input value determines which subdiagram executes. The Case structure is similar to switch statements or if...then...else statements in text-based programming languages. The case selector label at the top of the Case structure contains the name of the selector value that corresponds to the case in the center, with decrement and increment arrows on each side.

3. Flat Sequence Structure: Consists of one or more subdiagrams, or frames, that execute sequentially. Use the Flat Sequence structure to ensure that a subdiagram executes before or after another subdiagram. Data flow for the Flat Sequence structure differs from data flow for other structures: frames execute from left to right, and each frame executes when all data values wired to it are available. The data leaves each frame as the frame finishes executing, which means the input of one frame can depend on the output of another frame.

4. Wait (ms): Waits the specified number of milliseconds and returns the value of the millisecond timer. Wiring a value of 0 to the milliseconds to wait input forces the current thread to yield control of the CPU. This function makes asynchronous system calls, but the nodes themselves function synchronously; therefore, it does not complete execution until the specified time has elapsed.

5. Vision Acquisition Express VI: Launch the NI Vision Acquisition Express VI through LabVIEW by creating a block diagram, opening the Functions palette, and selecting the Vision Acquisition Express VI from the Vision and Motion menu. Placing the NI Vision Acquisition Express VI on the block diagram launches the NI Vision Acquisition Wizard. After you have configured the acquisition, you can double-click the NI Vision Acquisition Express VI to edit it. Use the Select Acquisition Source step to select the device to use for an acquisition; based on the type of device you choose at this step, the settings and options available during the Configure Acquisition Settings step will vary. Use the Configure Acquisition Settings step to configure your acquisition. Use the following tools to perform different functions on your images:
Image Display: allows users to view the acquired images.
Test: configures the acquisition using the selected acquisition type and settings.
Stop: stops a continuous acquisition once it has started.
Image Number: allows you to scroll through the acquired images from a finite acquisition.
Zoom to Fit: scales the image to fit the Image Display window.
Zoom 1:1: displays the image without any zooming.
Zoom In: enlarges the image.
Zoom Out: reduces the size of the image.
Acquisition Status String: updates information about the status of the acquisition and the acquired image.

6. IMAQ Create: Creates a temporary memory location for an image. Use IMAQ Create in conjunction with the IMAQ Dispose VI to create or dispose of NI Vision images in LabVIEW.

7. IMAQ Cast Image: Converts the current image type to the image type specified by Image Type. If you specify a lookup table, the IMAQ Cast Image VI converts the image using the lookup table. If converting from a 16-bit image to an 8-bit image, the VI executes the conversion by shifting the 16-bit pixel values to the right by the specified number of shift operations and then truncating to get an 8-bit value.

8. IMAQ ExtractSingleColorPlane: Extracts a single plane from a color image. Image Src is the reference to a color image that has one of its color planes extracted. If Image Dst is not connected, the source image is converted to an image that contains the extracted plane. Color Plane defines the color plane to extract. Image Dst Out is a reference to the destination image. If Image Dst is connected, Image Dst Out is the same as Image Dst; otherwise, Image Dst Out refers to the image referenced by Image Src.

9. Concatenate Strings: Concatenates input strings and 1D arrays of strings into a single output string. For array inputs, this function concatenates each element of the array. Add inputs to the function by right-clicking an input and selecting Add Input from the shortcut menu, or by resizing the function.

10. Build Path: Creates a new path by appending a name or a relative path to an existing path. A relative path describes the location of a file or directory relative to an arbitrary location in the file system; an absolute path describes the location starting from the top level of the file system.

11. IMAQ Write File 2: Writes the image to a file in the selected format. Use the pull-down menu to select an instance of this VI. Color Palette is used to apply a color palette to an image. Color Palette is an array of clusters constructed by the user or supplied by the IMAQ GetPalette VI. This palette is composed of 256 elements for each of the three color planes (red, green, and blue). A specific color is the result of applying a value between 0 and 255 to each of the three color planes. If the three planes have identical values, a gray level is obtained (0 specifies black and 255 specifies white). If the image type requires a color palette and one is not supplied, a grayscale color palette is generated and written.
12. Event Structure: Waits until an event occurs, then executes the appropriate case to handle that event. The Event structure has one or more subdiagrams, or event cases, exactly one of which executes when the structure executes to handle an event. This structure can time out while waiting for notification of an event. Wire a value to the Timeout terminal at the top left of the Event structure to specify the number of milliseconds the Event structure waits for an event; the default is -1, which indicates never to time out. You can configure a single event case to handle multiple events, but only one of these events within the event case can occur at a time. You must place the Event structure in a While Loop to handle multiple events. A single case in the Event structure cannot handle both notify and filter events; a case can handle multiple notify events, but can handle multiple filter events only if the event data items are identical for all events. You can configure any number of Event structures to respond to the same notify event or filter event on a specific object. Before you configure events for the Event structure to handle, review the caveats and recommendations for using events in LabVIEW. If you wire a value to the Timeout terminal, you must provide a Timeout event case to avoid an error. The dynamic event terminals accept an event registration refnum or a cluster of event registration refnums for dynamic event registration. If you wire the inside right terminal, that terminal no longer carries the same data as the left terminal; you can wire the event registration refnum or cluster of event registration refnums to the inside right terminal through a Register For Events function and modify the event dynamically. Depending on the palette from which you select the Event structure, the dynamic event terminals might not appear by default. To display these terminals, right-click the Event structure and select Show Dynamic Event Terminals from the shortcut menu.