School of Informatics | The University of Edinburgh



SDP Group X Milestone 2 Report, 9/2/11

Introduction
There were three motivating factors driving our goals: the use of holonomic wheels, the need to make the architecture more scalable, and the need to meet the milestone two requirements. The first factor led us to reconfigure not only the robot, from a simple two-wheeled to a holonomic three-wheeled design, but also the code controlling its movement. The second factor ensured that our solution would be scalable, modular, and usable by an agent architecture. The milestone two requirements made us focus on vision: the need to detect objects reliably and to close the vision-control feedback loop.

Team Organisation and Software Engineering Practices

Goal: Restructure solution to reflect architecture
The software controlling the behaviour for Milestone 1 was hardcoded into one project, NxtControl. We have since split the design into five components: the Main App, the Core, the Master Block, the Slave Block and Vision. Details of the solution architecture can be seen in figure [11].
Status: Achieved

Goal: Set up SVN and a consistent development environment
The old project, NxtControl, has been deprecated. The new projects now live in the SVN trunk, which team members access via TortoiseSVN and PuTTY. The vision project has been migrated from plain text editors to NetBeans. Coding standards for Java and Python have also been created to ensure consistent coding practices.
Status: Achieved

Goal: Create agent solution
We have the agent pseudo-code solution and the framework to support it, but not the actual implementation, which has been deferred until the next milestone due to time constraints. [8]
Status: Not Yet Achieved

Goal: Python/Java integration
A working, robust server has been created for communication between the two components.
Status: Achieved

Goal: Scrum/Agile process
The whole team meets formally once a week to outline objectives and plan the following week's work, called a sprint. At the end of each week, every member demonstrates what they have done. XXX has taken responsibility for planning the meetings and booking the rooms.
Status: Achieved

Goal: Testing
Each algorithm has been manually tested, and each component has been tested both independently and in conjunction with the others. Manual testing has been greatly preferred over automated testing due to the complex nature of the task. An automated testing framework, not yet merged, exists as a contingency should manual testing prove insufficient.
Status: Achieved

Goal: Team logo and name
The team has changed its name to Hat Trick to better reflect its "tri-wheel" and "sporting" nature. [7]
Status: Postponed

Build, Communication and Control

Goal: Rebuild robot
The holonomic wheels arrived, and the team debated whether to pursue a three- or four-wheel design. The four-wheel model offered simpler control, but would require additional motors. The three-wheel design would be harder to control and have a lower top speed, but would spin better and use fewer motors. It was also the more innovative option, and ultimately our desire for innovation won out. The design has proved feasible and should need few significant structural modifications. [9][10]
Status: Achieved

Goal: Resolve stability issues
Two competing designs were created. In the first, the robot had a high centre of mass, which caused it to fall over when it stopped. The second design angled the motors, thus lowering the centre of mass. For additional stability, we may add a ball-bearing caster below the centre. The second design has proved more stable, and it is unlikely the base can be significantly improved without exceeding the space constraints. [10]
Status: Achieved

Goal: Three-wheel control
Since the wheels are positioned at 120 degrees relative to each other, control for the design was not obvious.
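One standard way to drive wheels mounted at 120-degree intervals is to project the desired body velocity onto each wheel's drive direction. The following is a minimal sketch of that mapping; the wheel angles and robot radius are illustrative assumptions, not measurements from our robot, and this is not our actual control code.

```python
import math

# Assumed geometry: three wheels at 120-degree intervals, drive
# direction tangential to the robot body (illustrative values only).
WHEEL_ANGLES = [math.radians(a) for a in (0, 120, 240)]
ROBOT_RADIUS = 0.09  # metres from centre to wheel contact point (assumed)

def wheel_speeds(vx, vy, omega):
    """Map a desired body velocity (vx, vy) and spin rate omega
    to the three individual wheel speeds."""
    return [-math.sin(a) * vx + math.cos(a) * vy + ROBOT_RADIUS * omega
            for a in WHEEL_ANGLES]
```

Under this mapping a pure spin (vx = vy = 0) drives all three wheels at the same speed, which is one reason the three-wheel layout spins so well.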
Fortunately, research revealed methods appropriate to moving an omni-robot. [2][3] A misunderstanding of power versus speed initially led to unachievable requests of the motors, and as the battery drained, differences in wheel power affected the linear motion. These issues have been resolved.
Status: Achieved

Goal: Establish communication between the RCX/NXT bricks
Controlling an additional motor beyond the three wheel motors meant we needed either a multiplexer or an RCX brick. We chose the RCX solution because the multiplexer was more costly. In hierarchical terms, the RCX will be slaved to the NXT, which is in turn slaved to the PC. Communication between the bricks is critical.
Status: Not Yet Achieved

Goal: Establish communication between the PC/NXT
Our team wants to make the bricks partially independent to increase reaction time. For the PC/NXT communication channel, this has been achieved. However, we hoped to standardise communication across both bricks and the PC, and this has not yet been achieved.
Status: Partially Achieved

Goal: Add sensors
We considered using sensors for low-level reactive behaviour. Given the nature of the milestone, time constraints, and the optional nature of the feature, implementing sensors has been deferred until the next milestone.
Status: Postponed

Goal: Re-implement kicker
The three-wheel robot requires three motors, one for each wheel. The NXT only supports three motors, so any additional motor must be controlled through the RCX brick. Since there are still issues with controlling the RCX brick, the kicker has been delayed pending communication fixes. Additionally, the triangular body would not support the old kicker, which was designed for a rectangular body. [10]
Status: Not Yet Achieved

Vision

Goal: Correct distortions
The raw feed has many issues which must be corrected, among them barrel distortion and frame interlacing. The team used a picture of a chessboard inside the raw feed to determine the exact distortion.
However, the code from last year was available, and the method was the same. These issues have been resolved. [6]
Status: Achieved

Goal: Tweak the image feed
Originally, values for the robots and ball had been hardcoded as RGB values. The team switched to hue, saturation and value (HSV) encoding to more reliably capture the location and orientation of the objects on the pitch. The field is currently cropped with hardcoded values, but we hope to build a robust boundary detection system. [4][5]
Status: Achieved

Goal: Find the objects
The main technique for detecting objects and orientation is thresholding: capturing any pixels within a certain range of values for red, blue, yellow and white. To speed up detection, we search only in the area around the robot for the orientation reference point, a white marker on the front right of the robot. [4]
Status: Achieved

Goal: Fix OpenCV issues
The OpenCV library for Python is very buggy. The main issue is getting the live feed from the camera: it sometimes returns corrupted images because it queries too quickly. Other contingency libraries have been investigated should OpenCV prove inadequate, but for the time being the team will keep OpenCV.
Status: Postponed

Goal: Use classifiers to boost accuracy
OpenCV has a built-in classifier, but it appears designed for face detection. We have investigated other techniques for quickly classifying objects, but have not implemented them. [1]
Status: Postponed

Future Goals and Conclusion
There are many goals we need to achieve by the next milestone. First, we want to make our entire software design more robust, e.g. able to handle vision system crashes, communication failures on both the NXT/RCX and NXT/PC channels, and object collisions. We also want to make the robot itself more robust by hiding the wheels to avoid snags and adding sensors to handle collisions. Furthermore, we want to develop a simulator for generating large amounts of test data and for simulating behaviours without the need for the vision system.
The control will also be tweaked to ensure the spinning behaviour is implemented correctly. Finally, we hope to investigate multiple object recognition techniques, including classifiers, to improve the vision system as a whole. Our team has met most of the goals we set out to achieve, and all of the goals we needed to achieve for this milestone. We have an ambitious timetable for the next milestone and, with proper planning, aim to meet it.

Works Cited
[1] Aldavert, D., Ramisa, A., Mantaras, R. L., & Toledo, R. (2010). Fast and Robust Object Segmentation with the Integral Linear Classifier. Barcelona: Spanish Ministry of Education.
[2] Ashmore, M., & Barnes, N. (2010). Omni-drive robot motion on curved paths: The fastest path between two points is not a straight line. Melbourne: The University of Melbourne.
[3] Lee, G. D., Lee, K. S., Park, H. G., & Lee, M. H. (2010). Optimal path planning with holonomic mobile robot using localization vision sensors. International Conference on Control, Automation and Systems 2010. Gyeonggi-do, Korea.
[4] Zhou, Q., Ma, L., Chelberg, D., & Parrott, D. "Robust Color Choice for Small-size League RoboCup Competition". Systemics, Cybernetics and Informatics, pp. 62-69.
[5] Asada, M., & Kitano, H. (1999). "The RoboCup challenge". Robotics and Autonomous Systems, vol. 29, no. 1, pp. 3-12.
[6] Hoffmann, G. (2006). "Interpolations for Image Warping", pp. 1-23.
[7] Team Hat Trick logo
[8] Agent control loop
[9] Four-wheel holonomic robot
[10] Three-wheel design
[11] Solution architecture diagram