


Design of a voice control 6DoF grasping robotic arm based on ultrasonic sensor, computer vision and Alexa voice assistance

Zhiheng Wang, Department of Electrical and Electronics Engineering, London South Bank University, London, UK, wzhlove2003@
Daqing Chen, Department of Electrical and Electronics Engineering, London South Bank University, London, UK, chend@lsbu.ac.uk
Perry Xiao, Department of Electrical and Electronics Engineering, London South Bank University, London, UK, xiaop@lsbu.ac.uk

Abstract—This article presents the design of a 6-degree-of-freedom robotic arm that can pick up objects at random positions on a 2D surface, based on an Arduino microcontroller, an ultrasonic sensor and a Picamera. The robotic arm recognises objects using a computer vision shape-detection algorithm. The ultrasonic sensor measures the distance between the objects and the robotic arm, and the position of each object in the real world is refined from its mass centre in the image to improve the accuracy of the pick-up movement. The Arduino microcontroller calculates the rotation angles for the joints of the robotic arm using inverse kinematics. The movement of the robotic arm can also be controlled through an Amazon Alexa voice assistance device. An experiment in which an artificial neural network controls the pick-up movement has also been carried out: after being trained on values calculated from the inverse kinematics equations, the network can position the robotic arm to pick up objects. A Raspberry Pi processes the computer vision data and the voice commands delivered by the cloud-based Alexa Voice Service.

Keywords—robotic arm, computer vision, object detection, artificial neural network, Alexa voice assistance

Introduction

The concept of robots was first introduced in science fiction [1]. With the development of mechanical, electronic, computer, sensor and communication technologies, robots have been applied extensively in industrial manufacturing and even in daily life. Robotic arms were among the earliest programmable automated machines used in industrial production, helping humans to complete work faster, more easily and more accurately. A robotic arm can move objects, tools and heavy loads, pick and place parts automatically, and carry out hazardous work that humans cannot do. Robotic arms are used for a wide variety of industrial applications, such as welding, gripping, lifting and automotive assembly. Thanks to advances in sensors, cameras, microcontrollers and computers, robotic arms can now 'feel' the outside world through sensors, 'think' using microcontrollers and computers, and then decide what to do; in other words, they have become intelligent.
In particular, once cameras and image processing are applied to robotic arms, they can capture images, extract useful information from them, and then issue commands and take actions. Furthermore, the artificial neural network is one of the most modern technologies in robot control. It offers many advantages in robot performance, especially for machine learning and deep learning: robots can learn tasks such as movement and image recognition through neural networks. In this paper, a robotic arm is designed to pick up, move and place objects whose locations are initially unknown. The robotic arm detects the desired objects and finds their locations using a Picamera and an ultrasonic sensor, and an inverse kinematics algorithm decides the pose at which to grasp them. An experiment in which an artificial neural network controls the pick-up movement after training has also been carried out. The movement of the robotic arm can additionally be controlled by voice commands through an Amazon Alexa voice assistance device.

System Description

The system consists of four sections: the robotic arm, which carries the Picamera and the ultrasonic sensor; the Raspberry Pi; the Arduino and motor driver shield; and the Amazon Echo Dot. The schematic representation of the robotic arm control system is shown in Fig. 1.

Fig. 1. The diagram of the robotic arm and its control system

Robotic Arm

The robotic arm is built from six servo motors and has five degrees of freedom (DOF). It consists of the base, shoulder, elbow, wrist and wrist-rotation joints, connected by its links. An ultrasonic sensor is installed on the base of the robotic arm to measure the distance between the target objects and the arm, and a Picamera is attached to the gripper to capture images of the objects. The rotation range of each servo motor is 0° to 180°; however, because of the structure and balance of the robotic arm, some motors cannot rotate over the full range. The joint names and rotation ranges are given in TABLE I.

TABLE I. The motors and rotation ranges of the robotic arm

Motor | Joint name  | Rotation range
M1    | Base        | 0° ~ 180°
M2    | Shoulder    | 15° ~ 165°
M3    | Elbow       | 0° ~ 180°
M4    | Wrist pitch | 0° ~ 180°
M5    | Wrist roll  | 0° ~ 180°
M6    | Gripper     | 10° ~ 73° (10°: gripper open, 73°: gripper closed)

Raspberry Pi and Computer

The Raspberry Pi is a small single-board computer. A Raspberry Pi 3 Model B is used in this system because of its fast processing speed and on-board Wi-Fi, which allow it to handle complex computer vision algorithms and access public web services for voice control. A desktop computer is used only to run Matlab and simulate the artificial neural network.

Arduino and Motor Driver

The Arduino is a microcontroller that drives the servo motors through its Pulse Width Modulation (PWM) pins and senses the outside world through sensors connected to its analogue pins. The motor driver connects an external power supply and provides enough current to drive the servo motors.

Amazon Echo Dot

The Amazon Echo Dot is a smart speaker developed by Amazon. It has an array of seven microphones and a Wi-Fi chip to capture voice and access the internet. The Echo Dot gathers the voice commands and streams the audio to the cloud-based Alexa Voice Service, which recognises and interprets natural speech and responds to the request.
It also provides self-service APIs, tools, documentation and code samples that allow developers to create personalised skills.

Fig. 2. The flowchart of the pick-up movement

The pick-up movement, which is used to pick and place objects at random positions, combines the ultrasonic distance sensor, the inverse kinematics of the robotic arm, Arduino control and computer vision processing. The flowchart of the pick-up movement is shown in Fig. 2. Once the pick-up function is called, the ultrasonic sensor measures the distance from the echo of the ultrasonic wave. If no object is within the range the robotic arm can reach, the base of the robotic arm rotates by 1° and the scan continues. When an object is within the pick-up range, the Arduino sends '1' to the Raspberry Pi through serial communication. After the Raspberry Pi reads '1' from the serial port, it starts to process the video captured by the camera: the computer vision function detects the object and calculates its centroid. If the object is not at the centre of the video frame, the end-effector is not pointing at the centre of the object, so the Raspberry Pi sends '0' to the Arduino and the Arduino rotates the base by another 1°. When the object is at the centre of the video frame, the Raspberry Pi sends '1' to the Arduino, meaning that the end-effector is pointing at the centre of the object. The robotic arm then stops rotating, and the ultrasonic sensor measures the distance between the object and the robotic arm. This distance is used to calculate the rotation angles of the shoulder, elbow and wrist motors. After obtaining the angle values, the Arduino drives the motors, moves the end-effector to the object and picks it up. With this procedure, the robotic arm can pick up an object at any reachable location.
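As an illustration of this handshake, the following Python sketch shows what the Raspberry Pi side of the serial exchange might look like. The serial port name, baud rate, frame width and the ±15-pixel alignment band are assumptions rather than values given in the paper, and the crude whole-frame centroid stands in for the shape-detection routine described in the computer vision section below.

```python
import cv2
import serial  # pyserial

FRAME_WIDTH = 640        # assumed camera resolution (pixels)
CENTRE_TOLERANCE = 15    # assumed "object is centred" band (pixels)

def object_centre_x(frame):
    """Rough stand-in for the shape detection described later:
    x coordinate of the image-moment centroid of all dark pixels."""
    grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(grey, 60, 255, cv2.THRESH_BINARY_INV)
    m = cv2.moments(mask, binaryImage=True)
    return None if m["m00"] == 0 else m["m10"] / m["m00"]

arduino = serial.Serial('/dev/ttyACM0', 9600, timeout=1)  # assumed port and baud rate
camera = cv2.VideoCapture(0)

while True:
    if arduino.read(1) != b'1':      # wait until the Arduino reports an object in range
        continue
    ok, frame = camera.read()
    if not ok:
        continue
    cx = object_centre_x(frame)
    if cx is not None and abs(cx - FRAME_WIDTH / 2) <= CENTRE_TOLERANCE:
        arduino.write(b'1')          # end-effector points at the object centre: stop rotating
    else:
        arduino.write(b'0')          # not centred yet: rotate the base by another 1 degree
```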
Kinematic Model

The Denavit-Hartenberg (D-H) convention [2] is a general way of attaching reference frames to the links of a spatial kinematic chain and describing the transformations between them, and it is widely used for robotic mechanical systems. The D-H method uses four parameters: the rotation about the z-axis, the joint offset, the link length and the twist angle [3]. The advantages of this method are its storage efficiency when dealing with the kinematics of robot chains and its computational robustness [4]. The coordinate frames of the robotic arm and the D-H parameters are shown in Fig. 3 and TABLE II.

Fig. 3. The coordinate frames of the robotic arm

TABLE II. D-H parameters of the robotic arm

Number | θ        | d  | a  | α
0-1    | θ1       | d  | 0  | 90°
1-2    | θ2       | 0  | l1 | 0
2-3    | θ3 - 90° | 0  | l2 | 0
3-4    | θ4       | 0  | 0  | 90°
4-5    | θ5       | l3 | 0  | 0

θ represents the rotation about the local z-axis; in other words, θ is the joint angle. The joint offset d is the distance along the z-axis between two successive common normals. The length of each link is a, and the twist angle α is the angle between two successive z-axes. Note that the rotation about z2 is θ3 - 90°: when θ3 is 0, there is a -90° angle between link 1 and link 2.

The transformation A_{n+1} between two successive frames is expressed by:

A_{n+1} =
\begin{bmatrix}
C\theta_{n+1} & -S\theta_{n+1} C\alpha_{n+1} & S\theta_{n+1} S\alpha_{n+1} & a_{n+1} C\theta_{n+1} \\
S\theta_{n+1} & C\theta_{n+1} C\alpha_{n+1} & -C\theta_{n+1} S\alpha_{n+1} & a_{n+1} S\theta_{n+1} \\
0 & S\alpha_{n+1} & C\alpha_{n+1} & d_{n+1} \\
0 & 0 & 0 & 1
\end{bmatrix}   (1)

where Cθ_{n+1} denotes cos θ_{n+1} and Sθ_{n+1} denotes sin θ_{n+1}.
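For reference, a small Python/numpy sketch of equation (1) is given below; it simply builds the homogeneous transform for one joint from its four D-H parameters (angles given in degrees, consistent with TABLE II).

```python
import numpy as np

def dh_transform(theta_deg, d, a, alpha_deg):
    """Homogeneous transform between two successive D-H frames, as in equation (1)."""
    th, al = np.radians(theta_deg), np.radians(alpha_deg)
    ct, st = np.cos(th), np.sin(th)
    ca, sa = np.cos(al), np.sin(al)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])
```

Chaining five such transforms, one per row of TABLE II, gives the total transformation of equation (7), e.g. T = A1 @ A2 @ A3 @ A4 @ A5 with numpy's matrix product.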
According to equation (1) and TABLE II, the transformations between successive joints are:

A_1 = \begin{bmatrix} c_1 & 0 & s_1 & 0 \\ s_1 & 0 & -c_1 & 0 \\ 0 & 1 & 0 & d \\ 0 & 0 & 0 & 1 \end{bmatrix}   (2)

A_2 = \begin{bmatrix} c_2 & -s_2 & 0 & l_1 c_2 \\ s_2 & c_2 & 0 & l_1 s_2 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}   (3)

A_3 = \begin{bmatrix} c_3 & -s_3 & 0 & l_2 c_3 \\ s_3 & c_3 & 0 & l_2 s_3 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}   (4)

A_4 = \begin{bmatrix} c_4 & 0 & s_4 & 0 \\ s_4 & 0 & -c_4 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}   (5)

A_5 = \begin{bmatrix} c_5 & -s_5 & 0 & 0 \\ s_5 & c_5 & 0 & 0 \\ 0 & 0 & 1 & l_3 \\ 0 & 0 & 0 & 1 \end{bmatrix}   (6)

The total transformation between the reference frame and the end-effector is therefore:

T = A_1 A_2 A_3 A_4 A_5   (7)

The aim of this study is for the robotic arm to pick up, move and place objects that lie at the same horizontal level as its base. The position of the objects and their distance from the robotic arm are detected by the ultrasonic sensor and the camera, and the microcontroller uses this information to calculate the rotation angle of each motor so that the gripper moves to the desired location and grasps the object. The inverse kinematics of the robotic arm therefore also involves path planning, which tells the robotic arm in which configuration to reach the objects.

Three paths have been designed for grasping an object at ground level. The first path is used to reach objects in the far region, so the end-effector must be horizontal. The second path reaches objects in the middle range, with the orientation of the gripper kept at -45°. In the third path the wrist angle is fixed so that the end-effector can reach positions close to the robotic arm. The three grasping configurations for the three regions are illustrated in Fig. 4.

The distance between the desired object and the robotic arm is measured by the ultrasonic sensor. By combining the kinematic model of the robotic arm with the path planning, the inverse kinematics equations can be derived, and the Arduino then calculates the rotation angles for the shoulder, elbow and wrist joints.

To pick and place objects at random positions within the robotic arm's operating range, the base of the robotic arm turns from 0° to 180° while the ultrasonic sensor scans the area. When the sensor detects an object, it passes the measured distance to the microcontroller. The Arduino converts this distance into rotation angles for each motor, sends the commands to the motor driver, and the motors turn until the robotic arm reaches the right position and picks up the object. Finally, the robotic arm carries the object to the desired place.

Fig. 4. The three pick-up configurations of the robotic arm
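The paper does not reproduce its closed-form inverse kinematics, so the sketch below is only a standard planar two-link solution with a prescribed gripper orientation, which is the same kind of calculation the Arduino performs for each path. The target coordinates, the link lengths l1, l2, l3 and the orientation angle phi (0°, -45° or roughly -90° depending on the path) are all illustrative assumptions.

```python
import math

def planar_ik(x, y, phi_deg, l1, l2, l3):
    """Shoulder, elbow and wrist angles (degrees) that place the gripper tip
    at (x, y) in the arm's vertical plane with the gripper held at phi_deg."""
    phi = math.radians(phi_deg)
    # Step back from the gripper tip to the wrist joint.
    wx = x - l3 * math.cos(phi)
    wy = y - l3 * math.sin(phi)
    d2 = wx * wx + wy * wy
    cos_elbow = (d2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    if abs(cos_elbow) > 1.0:
        raise ValueError("target outside the reachable workspace")
    elbow = math.acos(cos_elbow)                       # one of the two elbow solutions
    shoulder = math.atan2(wy, wx) - math.atan2(l2 * math.sin(elbow),
                                               l1 + l2 * math.cos(elbow))
    wrist = phi - shoulder - elbow                     # keeps the gripper at phi
    return tuple(math.degrees(a) for a in (shoulder, elbow, wrist))
```

For example, planar_ik(20.0, 0.0, -45.0, l1=12.0, l2=12.0, l3=8.0) returns one shoulder/elbow/wrist triple for an object 20 cm away on the ground plane using the middle-range (-45°) path, under the assumed link lengths.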
Computer Vision Processing

Because the ultrasonic sensor emits its wave in a cone, it cannot precisely locate an object or find the object's centre. The Picamera therefore compensates for this limitation and provides the precise location of the object: the camera detects the object and tells the robotic arm when it is facing the object straight on, with the gripper pointing at the object's centre, which improves the accuracy of the pick-up movement. Computer vision techniques are used to detect and track objects with the camera, and the Raspberry Pi single-board computer handles the real-time processing. The Python programming language and OpenCV (the open-source computer vision library) are used for the computer vision programming in this study.

OpenCV is a library of programming functions for image processing and real-time computer vision [5]. Its functionality includes video/image input and output, processing, display, facial recognition and object identification, and it also includes a machine learning library containing artificial neural networks, decision tree learning and deep neural networks. When the camera captures real-time video, the robotic arm must recognise the objects to be picked up and distinguish them from the background. Because the robotic arm only handles cubic objects in this project, the objects appear as squares or rectangles in the 2D video, so a shape-detection technique is used for object detection. The robotic arm must face the object straight on, and the gripper must point at the centre of the object when picking it up. Since the camera is installed in the middle of the gripper, the centre of the object should lie at the middle of the x-coordinate of the video frame. The centre of each object is calculated using image moments. Fig. 5 shows the results of object detection and the located centres of the objects.

Fig. 5. Object detection and the centres of the objects in the video
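A minimal OpenCV sketch of this shape-detection and image-moment step is shown below. The threshold level, the minimum contour area and the assumption that the cubes appear as dark four-sided contours on a lighter background are illustrative choices rather than the exact parameters used in the project (the OpenCV 4 findContours signature is assumed).

```python
import cv2

def find_object_centroid(frame):
    """Return (cx, cy) of the first roughly rectangular contour, or None."""
    grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(grey, (5, 5), 0)
    _, mask = cv2.threshold(blurred, 60, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for contour in contours:
        # A square or rectangular face approximates to a 4-vertex polygon.
        approx = cv2.approxPolyDP(contour, 0.04 * cv2.arcLength(contour, True), True)
        if len(approx) == 4 and cv2.contourArea(contour) > 500:
            m = cv2.moments(contour)          # image moments give the centroid
            if m["m00"] != 0:
                return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
    return None
```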
Voice Control

In this project, the robotic arm can also be controlled by voice commands. The commands are captured by the Amazon Echo and then recognised and analysed by the Alexa Voice Service. After speech recognition, the voice commands are converted into plain-text commands and posted to a public web service, so the Raspberry Pi does not need to be on the same local network as the Amazon Echo in order to access the web service and retrieve the text commands. The Raspberry Pi then passes the commands to the Arduino by changing the status of its pins, and once the Arduino reads the pin status it makes the robotic arm perform the corresponding movement. The schematic of the voice control of the robotic arm is shown in Fig. 6.

Fig. 6. The schematic of the voice control of the robotic arm

The public web service is created with ngrok, which opens a secure tunnel from a public endpoint to a locally running web service and thus allows the Raspberry Pi to collect the text commands from the Alexa Voice Service through any network. Three voice commands have been designed for controlling the movements of the robotic arm; TABLE III lists the movement corresponding to each voice command, and a sketch of the local web service that receives these commands is given after the table.

TABLE III. The movements of the robotic arm controlled by voice commands

Voice command                        | Movement
"Robotic arm to scan"                | The base of the robotic arm rotates from 0° to 180°.
"Robotic arm to pick up an object"   | The robotic arm finds an object, picks it up and places it at a specific place.
"To stop"                            | The robotic arm stops the present movement and returns to the start position.
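The paper does not name the web framework behind the ngrok tunnel, so the following sketch assumes a small Flask application running on the Raspberry Pi; the URL path, the JSON payload shape and the BCM pin numbers used to signal the Arduino are likewise assumptions.

```python
from flask import Flask, request
import RPi.GPIO as GPIO

COMMAND_PINS = {"scan": 17, "pickup": 27, "stop": 22}   # assumed BCM pin numbers

app = Flask(__name__)
GPIO.setmode(GPIO.BCM)
for pin in COMMAND_PINS.values():
    GPIO.setup(pin, GPIO.OUT, initial=GPIO.LOW)

@app.route("/command", methods=["POST"])
def command():
    """Receive a plain-text command, e.g. a JSON body {"command": "scan"}, from the skill."""
    name = (request.get_json(silent=True) or {}).get("command", "")
    if name not in COMMAND_PINS:
        return "unknown command", 400
    # Raise exactly one pin; the Arduino polls these pins and runs the movement.
    for cmd, pin in COMMAND_PINS.items():
        GPIO.output(pin, GPIO.HIGH if cmd == name else GPIO.LOW)
    return "ok", 200

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)   # exposed publicly with: ngrok http 5000
```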
Artificial Neural Network

With the rapid development of artificial intelligence, robot control has benefited greatly from it [6]. It offers many advantages in robot performance, such as machine learning, shorter computing time and speech recognition. An artificial neural network is a software simulation of a large number of interconnected neurons inside a computer, similar to a biological brain [7]. The interconnected neurons process information collectively to solve a specific problem, and the most important feature of a neural network is that it can learn by itself from examples, much like a human brain [8].

In this project, a supervised learning strategy is used to train the neural network, because the correct joint angles of the robotic arm can be calculated from the inverse kinematics equations whenever the distance between the object and the robotic arm is known. Sets of distances together with the corresponding shoulder, elbow and wrist angles are used for training: the distance between the object and the robotic arm is the single input of the network, and the shoulder, elbow and wrist rotation angles are its three outputs. The Neural Network Toolbox in Matlab is used to simulate the artificial neural network; it provides algorithms and pretrained models to create and train networks. In this project the hidden layer of the network contains 20 neurons.

As mentioned above, there are three distance regions that the end-effector of the robotic arm can reach, and the robotic arm uses path 1, path 2 or path 3, each with its own inverse kinematics equations, to pick up objects in these regions. The neural network is therefore trained with three different sets of values: 100 distance values are generated randomly in each region, and the corresponding shoulder, elbow and wrist angles are calculated from them. The distance value is the input of the artificial neural network and the three angle values are the outputs, giving 300 example sets for training. After four training runs the performance of the network becomes more accurate. Eleven random distance values are then fed into the input layer to test its performance; Fig. 7 shows the test results.

Fig. 7. The results obtained from the neural network using 11 test data points

In the figure, SA, EA and WA are the shoulder, elbow and wrist angles respectively. According to Fig. 7, most of the angle values calculated by the neural network fall on the original data, meaning the predicted angles are almost identical to the test data. The training performance is shown in Fig. 8: the neural network reaches a Mean Square Error (MSE) of 0.028362 at 50 epochs.

Fig. 8. Training performance results

At the beginning of the training, the error between the correct answers and the outputs of the neural network is large because of the random initial weights. During training, however, the network adjusts its weights by itself and the squared error decreases sharply. After 50 epochs the network becomes stable and holds its mean square error level, which indicates that the neural network is reliable. The error histogram shows that most of the error falls in the range -0.07617 mm to 0.08428 mm, i.e. the error of the neural network is less than ±0.1 mm, which is very accurate.

Fig. 9. The regression results of training, test and validation

Fig. 9 shows the regression data of the training, test and validation sets. An R value close to 1 means that the output values of the neural network approach the target values of the actual data; in the figure, the R values for both training and test are very close to 1.
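The training itself was carried out in Matlab's Neural Network Toolbox; purely for illustration, the sketch below reproduces the same setup (one distance input, 20 hidden neurons, three angle outputs, 100 random distances per region) in Python with scikit-learn. It reuses the hypothetical planar_ik helper and link lengths from the inverse kinematics sketch above, and the region boundaries are assumed values in centimetres.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
distances, angles = [], []
# (near bound, far bound, gripper orientation) for the three assumed pick-up regions
for lo, hi, phi in [(6.0, 14.0, -90.0), (14.0, 22.0, -45.0), (22.0, 30.0, 0.0)]:
    for d in rng.uniform(lo, hi, 100):
        distances.append([d])                                            # single network input
        angles.append(planar_ik(d, 0.0, phi, l1=12.0, l2=12.0, l3=8.0))  # three angle targets

net = MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000, random_state=0)
net.fit(np.array(distances), np.array(angles))                           # 300 training examples

print(net.predict([[20.0]]))   # predicted shoulder, elbow and wrist angles for a 20 cm target
```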
According to the training and testing performance, the artificial neural network is highly accurate and can produce the correct joint rotation angles for the robotic arm when it is fed the distance between the object and the robotic arm.

Comparison with Other Methods

A research team from Princeton University and the Massachusetts Institute of Technology has presented a system capable of grasping and recognising both known and novel objects using deep convolutional neural networks (ConvNets) [9].
Four pick-up behaviours were developed, and the system uses ConvNets to predict affordances for a scene. Candidate objects are recognised by cross-domain image matching, and the grasping behaviours adapt to novel objects without additional retraining. The system achieves a 75% grasping success rate and 100% recognition accuracy. However, it is more complex and requires advanced computing hardware. The method presented in this paper is not suited to novel objects: object recognition is based on shape detection and the pose of an object is estimated from its centroid, so the method is not suitable for objects of unknown shape, and the grasping success rate decreases if the weight of the object is unevenly distributed. Another method localises objects by placing two 2D cameras attached to the ground [10, 11]. This method establishes the 3D position and orientation of the objects, and the average error in the distance to the target object is about 2 mm. It provides precise results, but the system is more complex and requires the robotic arm and the two cameras to be at specific locations. The system presented in this paper is less complex: the camera is located at the end-effector, so the robotic arm does not have to be at a fixed location and can still recognise and pick up objects even when the arm is mobile.

Conclusion

In this paper, a robotic arm picks and places objects at random positions based on an Arduino microcontroller, simple sensors and a camera. The ultrasonic sensor detects the objects and provides the distance value used in the inverse kinematics equations to calculate the rotation angle of each servo motor. The Arduino controls the servo motors, and the robotic arm moves its gripper to the desired location. The camera captures video of the objects, and the Raspberry Pi processes the video, detects the objects and finds their centroids.
The information from the computer vision processing tells the robotic arm whether an object is at the centre of its gripper, which improves the pick-up accuracy. The Raspberry Pi communicates with the Arduino microcontroller to move the robotic arm to the right position, and the pick-up movement is successful and accurate. The voice commands are captured by the Amazon Echo, which streams the speech to the Alexa Voice Service for analysis. The service produces a plain-text command, and the Raspberry Pi retrieves the command from the cloud service and tells the Arduino which movement to execute. In addition, the artificial neural network can also control the movement of the robotic arm: after training, the network outputs the angle value for every joint of the robotic arm when the distance between the object and the robotic arm is given as the input. The performance of the neural network is accurate and reliable enough to control the robotic arm.

Acknowledgment

We thank London South Bank University for the financial support of this project.

References

[1] J. Leitner, "From Vision to Actions: Towards Adaptive and Autonomous Robots," PhD thesis, Faculty of Informatics, Universita della Svizzera italiana, 2013.
[2] R. S. Hartenberg and J. Denavit, "A kinematic notation for lower pair mechanisms based on matrices," Journal of Applied Mechanics, vol. 77, no. 2, pp. 215-221, 1955.
[3] S. B. Niku, Introduction to Robotics: Analysis, Control, Applications, 2nd ed. Hoboken, NJ: Wiley, 2011.
[4] A. Roshanianfard and N. Noguchi, "Development of a 5DOF robotic arm (RAVebots-1) applied to heavy products harvesting," IFAC-PapersOnLine, vol. 49, no. 16, pp. 155-160, 2016.
[5] K. Pulli, A. Baksheev, K. Kornyakov, and V. Eruhimov, "Realtime Computer Vision with OpenCV," Queue, vol. 10, no. 4, pp. 40-56, 2012.
[6] A. R. J. Almusawi, L. C. Dülger, and S. Kapucu, "A New Artificial Neural Network Approach in Solving Inverse Kinematics of Robotic Arm (Denso VP6242)," Computational Intelligence & Neuroscience, pp. 1-10, 2016.
[7] C. Woodford, "Neural networks," 2017.
[8] C. Stergiou and D. Siganos, "Neural networks."
[9] A. Zeng, S. Song, K. Yu, E. Donlon, F. R. Hogan, M. Bauza, D. Ma, O. Taylor, M. Liu, E. Romo, N. Fazeli, F. Alet, N. C. Dafle, R. Holladay, I. Morena, P. Q. Nair, D. Green, I. Taylor, W. Liu, T. Funkhouser, and A. Rodriguez, "Robotic Pick-and-Place of Novel Objects in Clutter with Multi-Affordance Grasping and Cross-Domain Image Matching," in 2018 IEEE International Conference on Robotics and Automation (ICRA), 2018, pp. 1-8.
[10] S. Kang, K. Kim, J. Lee, and J. Kim, "Robotic vision system for random bin picking with dual-arm robots," MATEC Web of Conferences, vol. 75, 07003, 2016.
[11] C. S. Teodorescu, S. Vandenplas, B. Depraetere, J. Anthonis, A. Steinhauser, and J. Swevers, "A fast pick-and-place prototype robot: design and control," 2016.