Lawrence Technological University
H2Bot
IGVC 2006
Autonomous Vehicle
[pic]
[pic][pic]
Team Members
David Bruder, Eden Cheng, Johnny Liu
Mark Salamango, John Girard, Matthew V. Coburn, MaryGrace-Soleil B. Janas, Nathaniel Johnson, Tim Helsper, Jacob Paul Bushon, Danielle E. Johnson, Bill Gale, Dave Daraskavich, Matthew W. Ericson and Maurice Tedder
Faculty Advisor Statement
I, Dr. Chan-Jin (CJ) Chung of the Department of Math and Computer Science at Lawrence Technological University, certify that the design and development of H2Bot has been significant and that each team member has earned credit hours for their work.
Signed,
__________________________________ _________________________
Dr. Chan-Jin (CJ) Chung (chung@ltu.edu) Date
1. Introduction
Intelligent systems have become increasingly prominent in our daily lives. Innovations in autonomous systems such as robotic lawnmowers, vacuums, space exploration vehicles, and military vehicles play an important role in advancing technology.
The IGVC competition fosters original ideas in intelligent systems. This year LTU presents H2Bot, a unique entry in the 2006 IGVC competition. Featuring hydrogen PEM fuel cell power, H2Bot pushes the envelope for innovation. The robot uses the Java programming language as its core intelligence building block and was built from scratch to fulfill the requirements of the competition. This paper describes the H2Bot design, design considerations, improvements over previous submissions, and core functionality.
2. Design Process
2.1 Project Planning Process
The official IGVC competition rules provided the baseline requirements document needed to begin the design planning process. One of the key inputs to H2Bot's design, besides the rules, was the set of notes taken during the 2005 competition. The notes on lessons learned, ideas for improvement, and observations formed a solid base for developing this year's autonomous vehicle. Another change this year was to involve students from the Engineering School in H2Bot's hardware design.
A design process was selected after careful review of the requirements document and team capabilities.
The selection of a design process for this project was guided by the traditional three constraints of time, budget, and quality. Based on these constraints and the desire to deliver a functional system as soon as possible, an agile design philosophy was selected.
Our agile cycle consists of 1. speculation, 2. collaboration, 3. learning, and 4. eventual release of a system increment.
The agile philosophy is characterized by small self-organizing, self-managing teams, rapid delivery of incremental work products, an iterative development cycle, and working products as the primary measure of success. Agile methods stress product delivery over analysis and design, with design and construction occurring concurrently.
2.2 Project Development Process
While software will still be designed using agile methods, new methodologies were used in designing the hardware: benchmarking and set-based engineering. The team relied heavily on benchmarking from the 2005 competition to capture not only observed best practices from other teams, but also innovation ideas generated through the experience of participating in the competition. The inclusion of students from the Engineering School allowed us to pursue multiple design options for key subsystems (set-based concurrent engineering; Ward et al., 1995). Set-based engineering favors carrying multiple design options in the early stages of development so that the chances of putting together a complete, successful system are higher. The team used mass, cost, and energy consumption models to evaluate various components, and component mockups were constructed to evaluate the packaging of different options. The team investigated gas-electric hybrid, battery, and fuel cell power supplies, settling on a dual power strategy of interchangeable pure-battery and fuel cell modules. Since the drive system needed significant improvement, more than 30 options were considered; the final decision was based on cost, power consumption, torque, speed limiting, and failsafe braking. Keeping the options open allowed for many feedback cycles and iterations toward a more optimal design. Figure 1 illustrates how the initial options gradually narrowed over time, though not necessarily frozen at the same point in time.
[pic]
Figure 1. Set based concurrent engineering.
In order to allow maximum time for software development, the 2005 Think-Tank robot was used as a development mule while the new robot was being designed and constructed. The agile design methodology was used to create a stream of fully functioning versions of the control software. The object-oriented nature of the Java language facilitates reuse and sharing of code among the software developers.
Development of the H2Bot autonomous vehicle consisted of iterative development of software and hardware prototype increments. We built a fully functional vehicle prototype to use for software development and vehicle design evaluation. From this prototype we discovered drive train motor mounting and alignment problems. These issues were reviewed, and corrective action was taken by mounting the motors vertically and moving them back four inches to improve handling. These changes were implemented on the prototype vehicle and incorporated into the final competition vehicle.
2.3 Project tracking and Quality Metrics
Both hardware and software metrics were tracked to improve quality. Software quality metrics were further broken down into product, process, and project metrics. Process and product metrics focused on improving the defect removal rate and the response time for fixing software defects. Project metrics (number of developers, skill levels, schedule, etc.) were more difficult to track. A high-level project plan was developed in Fall 2005 as follows:
|10 / 2005 |Concept Integration I |
|11 / 2005 |Concept Integration II |
|12 / 2005 |Proposal and Parts Order |
|01 / 2006 |Integration Build/Test I |
|02 / 2006 |Integration Build/Test II |
|03 / 2006 |Integration Build/Test III & Release |
|05 / 2006 |Submit Design Report |
|06 / 2006 |Competition |
Testing at the 2005 mule and 2006 prototype levels was used to confirm the performance of the software and hardware against the initial targets and to verify conformance to requirements.
2.4 Collaborative Team Organization
The 2006 H2Bot team organization can be found below in Figure 2. Each functional block represents a self-organizing and self-managing unit responsible for the three competition events. Thus far, approximately 1,595 total work hours have been contributed to the project by all members of the team.
[pic][pic]
Figure 2. Team Organization.
2.5 Vehicle Conceptual Design
H2Bot is designed to fulfill the dimension, safety, and vehicle performance requirements specified in the 2006 IGVC rules. Additional input came from reviews of previous LTU IGVC vehicle hardware designs from the 2003 and 2004 competitions, which revealed recurring problems: poor traction, inaccurate control, lack of all-weather performance, and excessive workload on the camera sensor. The reviews also revealed successful features that the new vehicle should inherit, such as the use of lightweight, low-cost materials and a simple mechanical design that is easily constructed using basic fabrication processes such as drilling, cutting, and nut-and-bolt fasteners. H2Bot represents a collaborative effort between the Computer Science and Engineering programs at LTU, a first for us. New hardware features in this year's design are an aluminum frame, a fuel cell power source, and a completely redesigned drive system. The new frame is designed to improve structural rigidity and component accessibility. The fuel cell power source is a stretch innovation the team decided to take on and represents a good opportunity to learn about this technology. The redesigned drivetrain exhibits significantly reduced power consumption, improved torque capacity, speed limiting inherent in the design, and failsafe braking. The vehicle frame structure is shown in Figure 3.
H2Bot's conceptual design is based on extensive use of CAD models built in SolidWorks. Two essential design decisions were made before modeling the vehicle in CAD:
• A semi-monocoque vehicle frame utilizing only flat panels to form the vehicle shell
• A two piece vehicle shell with a lower drive train chassis platform and a removable command and control upper platform to house all of the control equipment and components
The CAD model allowed us to analyze the dimensions and placement of motors, wheels, laptop computer, and camera sensor and to check for interference between components while maintaining the dimension constraints imposed by the IGVC rules. After several iterations of the CAD model, a satisfactory design was obtained and is shown in Figure 3. Blueprints were created from the solid model to begin construction of the prototype vehicle.
[pic]
Figure 3. CAD Models of Vehicle Structure.
3. Hardware Design
3.1 Robot Structure
H2Bot is a three-wheel vehicle with front tank-style differential drive steering and a rear caster wheel, as in the 2005 design. This has proven to be an effective configuration that provides zero-turn-radius maneuverability and simple vehicle control dynamics. The frame is constructed of extruded aluminum, welded at critical load points, mostly in the drive train, and clamped together at all other connection points.
3.2 Drive Train
Significant upgrades to the drivetrain debut in the 2006 design, with objectives of lower power consumption, improved speed control granularity, hardware speed limiting, and increased torque capacity. The motors are 3.5"-diameter brushed DC servo motors with windings selected to achieve the required torque, stay within competition speed limits, and minimize power consumption for a given torque output. Torque capacity is increased by moving to larger motors than those on the 2005 vehicle. Speed resolution is improved by mounting the encoders on the motor shaft rather than on the gear reduction output. Speed limiting is achieved by selecting motor windings with the appropriate Ke (V/rpm), combined with a 16:1 planetary reduction and a 24 V system drive voltage, to limit speed to 4.2 mph. Planetary gear heads with additional lash allowance were specified instead of more costly precision gear heads to reduce cost. A torque target of 389 in-lb per front drive wheel was calculated from the use case of climbing over a 2.5" curb; the projected performance is 429 in-lb at a maximum power consumption of 685 watts per motor, with typical power consumption closer to 250 W. Also new in this year's design are failsafe electromagnetic brakes: if power is lost or the e-stop is engaged, the brakes engage on both front drive wheels, quickly stopping the vehicle.
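The speed-limit choice can be sanity-checked with a quick calculation. The Java sketch below, assuming the 14-inch drive wheels listed in the cost summary and an illustrative Ke value (the actual winding constant is not quoted in this report), estimates the no-load top speed from the bus voltage, gear ratio, and wheel diameter.

/**
 * Back-of-envelope check of the drivetrain speed limit described in Section 3.2.
 * Wheel diameter (14 in) is taken from the cost summary; the motor constant Ke
 * is an ILLUSTRATIVE assumption, not the actual winding specification.
 */
public class DriveSpeedCheck {

    public static void main(String[] args) {
        double busVoltage    = 24.0;   // V, system drive voltage
        double keVoltsPerRpm = 0.015;  // V/rpm -- assumed value for illustration only
        double gearRatio     = 16.0;   // 16:1 planetary reduction
        double wheelDiaIn    = 14.0;   // in, 14" tire & wheel assembly

        // Ideal (no-load) motor speed is the bus voltage divided by the back-EMF constant.
        double motorRpm = busVoltage / keVoltsPerRpm;
        double wheelRpm = motorRpm / gearRatio;

        // Convert wheel rpm to vehicle speed in mph.
        double inchesPerMinute = wheelRpm * Math.PI * wheelDiaIn;
        double mph = inchesPerMinute * 60.0 / (12.0 * 5280.0);

        System.out.printf("No-load motor speed: %.0f rpm%n", motorRpm);
        System.out.printf("Top vehicle speed:   %.2f mph (target limit 4.2 mph)%n", mph);
    }
}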
Design reviews of the 2003 and 2004 LTU IGVC vehicles revealed that our ability to win the Autonomous and Navigation Challenges was severely limited by a lack of traction and precise motion control. Both problems were addressed on the 2005 vehicle by using two high-performance 12-volt DC, 40-amp electric motors with a 21:1 worm gear ratio producing 0.25 horsepower at 150 RPM.
The addition of 400 CPR incremental optical encoders to the output shaft of each motor of the 2005 LTU vehicle allowed precise feedback speed and position control not available in earlier vehicles. The encoders solved the traction problem encountered in previous vehicles by allowing the vehicle to maintain the same speed regardless of the terrain or the grade of an incline, and eliminated the loss of control traveling down inclines that plagued the 2004 vehicle. The pose of the robot can be precisely controlled to navigate through and around obstacles.
3.3 Actuators
Motor control is facilitated by a Roboteq AX3500 60 A/channel, two-channel motor controller with a serial port interface and an included Java control class. The AX3500 provides velocity feedback control through optical encoders mounted on the motor shafts. The e-stop is wired to the controller's main drive power lines via a mechanical relay, which can be triggered by the top-mounted kill switch or by RC control according to IGVC rules. The e-stop also cuts power to the failsafe brakes integrated into the H2Bot drivetrain. The motion control interface of speed and yaw rate is carried over from the 2005 system, providing a modular interface for the 2006 team to build on.
The motor controller, first used on the 2005 vehicle, adds the capability to precisely control speed or position using feedback motion control with the optical encoders. It also has key safety features such as independent power inputs for the controller electronics and the motor outputs, which allow power to the motors to be cut for an emergency stop.
4. Electrical System
4.1 Power System
H2Bot has two modular power systems, each capable of independently supplying all of the robot's electrical needs. Each module can easily be removed from the robot and replaced with the other.
The primary module consists of two 32 Ah AGM batteries and will be used for development testing. An onboard charger with a 110 VAC interface is used to restore battery power when required. A large capacitor may be used to support transient current draw during high drive motor loads.
The second module employs a Ballard Nexa 1.2 kW fuel cell module, shown in Figure 4. The fuel cell system houses all components required for correct operation, including the hydrogen tank, leak detector, fuel lines, 24 V buffer battery pack, and 24 V start-up batteries. The fuel cell consumes 99.9% pure hydrogen gas and can run both indoors and outdoors, as it produces only water vapor as exhaust. The fuel cell can operate on a single tank of hydrogen at 100% output for 40 minutes; however, the output of the fuel cell will be actively modulated via a serial data link to match system power consumption demands. The hydrogen tank can be refilled without removal from the vehicle or exchanged with a full tank. This module will be used to power the vehicle during the competition.
[pic]
Figure 4. Nexa Fuel Cell.
The installed power module feeds directly to a power distribution box. In the current design both power module types can be installed into and uninstalled from the H2Bot with a minimum of time and difficulty. The exchange requires no tools. This also allows the H2Bot to easily accept new power sources that may be developed in the future.
The power distribution box supplies the appropriate voltage to each of the electrical components and performs the crucial task of buffering the power to protect H2Bot's electronics. The batteries help buffer the voltage levels and current flow so the electronics can perform at peak efficiency, and H2Bot maintains steady voltage by adjusting the fuel cell power output via the serial data interface. In the case of very sensitive electronics, such as the LIDAR, a DC/DC converter ensures a controlled input voltage. Each electric motor circuit is protected by a 30-amp circuit breaker that resets itself after current levels return to normal. Batteries are changed when a low-battery warning is displayed in the main control software, based on voltage information monitored by the AX3500 motor controller. The power and communications control system schematic for the H2Bot vehicle is shown in Figure 5.
[pic][pic]
Figure 5. Power and Communications Control System Schematic.
4.2 Computer Hardware
H2Bot's computer hardware consists of an MPC laptop, reused from the 2005 vehicle and accessed exclusively over 802.11g Wi-Fi; wireless access makes development more convenient and was one of the ideas that came out of the 2005 lessons learned. The laptop has a 1.6 GHz processor, 512 MB of memory, and the Windows XP operating system, and it provides all the data ports necessary to interface with H2Bot's other components.
4.3 Sensors
H2Bot has increased sensor capability and carries over the low-level software architecture that promotes a "plug-and-play" sensor paradigm. Sensors include incremental optical wheel encoders, a sub-meter-accuracy NovaTel ProPack-LB DGPS unit, a laser range finder (Ladar), and a digital compass/inclinometer. Table 1 summarizes the sensors used on the H2Bot vehicle.
|Sensor Component |Function |
|Incremental optical wheel encoders |Motor shaft position feedback for servo control (speed within .1 mph, position within 2") |
|NovaTel ProPack-LB DGPS unit |Global positioning with sub-meter (.8 m) accuracy using OmniSTAR |
|Digital Compass/Inclinometer |Provides heading, roll, and pitch information |
|High Dynamic Range DV Camera |Captures the field image stream; detects course lines and potholes in a 92-degree field of view |
|Pixelink PL-A544 Video Converter |Compresses the raw 640x480 video stream to 160x120 |
|Sick Laser Scanner |Provides a 180-degree polar ranging array of obstacles and safe corridors |
Table 1. Vehicle Sensor Summary.
4.4 E-stop
As outlined in the IGVC safety rules, H2Bot is equipped with both a manual and a wireless (RF) remote emergency stop (E-Stop) capability. The wireless transmitter and receiver selected are the FM transmitter and receiver shown below in Figures 6 and 7, respectively. The transmitter-to-receiver range is one-half mile. The WSS 1 transmitter is designed to be mounted to a wall or in an enclosure and is triggered by supplying 12 VDC to the terminals on its enclosure; applying 12 VDC activates the transmitter, which sends a coded set of instructions to the receiver.
[pic] [pic]
Figure 6. FM Transmitter (Part #01247) Figure 7. Receiver (Part #01246)
5. Software Design
5.1 Software Strategy
The H2Bot software team applied a layered design process that builds on the hardware interface layer developed in 2005, with a continued focus on code reuse, scalability, and portability. The team continues to use the freely available Sun Java platform and the Eclipse IDE; Java supports portability and code reuse as the hardware evolves. Sensor signal acquisition and processing are performed in independent, concurrent threads of execution. A redesign of the high-level architecture for H2Bot improves modularity and reuse between the Autonomous and Navigation Challenges; both are organized around the paradigm of a virtual world map. The new architecture is shown below in Figure 8.
[pic]
Figure 8. H2Bot software architecture.
All robot code has been written in Java for maximum portability. As is the case in the "real world," hardware often becomes obsolete and developers are left with laborious, expensive integration efforts to port code from one platform to another. By using Java, H2Bot is better suited to adapting as hardware and customer requirements change.
To satisfy our goal of highly reusable code, the architecture was structured so that modules such as the median filter, Hough transform, black-and-white filter, and Sobel transform could be plugged in or pulled out quickly and easily. Java interfaces were exploited to make code reusability and systems integration easy to implement.
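A minimal sketch of this plug-in idea is shown below; the interface, class, and method names are illustrative, not the team's actual API, and a simple grayscale stage stands in for the filters named above.

import java.awt.image.BufferedImage;

/**
 * Illustrative sketch: every image-processing stage implements a common interface
 * so modules such as a median filter or Sobel transform can be swapped in and out.
 */
interface ImageFilter {
    BufferedImage apply(BufferedImage input);
}

class GrayscaleFilter implements ImageFilter {
    @Override
    public BufferedImage apply(BufferedImage input) {
        BufferedImage out = new BufferedImage(
                input.getWidth(), input.getHeight(), BufferedImage.TYPE_BYTE_GRAY);
        out.getGraphics().drawImage(input, 0, 0, null);
        return out;
    }
}

class FilterPipeline implements ImageFilter {
    private final java.util.List<ImageFilter> stages = new java.util.ArrayList<>();

    FilterPipeline add(ImageFilter stage) {
        stages.add(stage);
        return this;
    }

    @Override
    public BufferedImage apply(BufferedImage input) {
        BufferedImage img = input;
        for (ImageFilter f : stages) {
            img = f.apply(img);   // each stage can be replaced independently
        }
        return img;
    }
}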
Multi-threaded code can often lead to improved performance and scalability. This fact persuaded the software team to make each input to the system run in its own thread. The threads are joined later and can then be used for decision making. The thread model allows the robot to scale up to accept and process input from many more sources simultaneously.
5.2 Layered Architecture
As shown in Figure 8, the software is organized into layers in order to improve reuse. Using a common Domain Layer for both the Navigation and Autonomous Challenges provides improved focus and a clear division of development tasks among team members.
5.2.1 Hardware Interface Layer
It is very important that data coming in from multiple sources is synchronized so that the robot makes decisions based on data that represents a fixed moment in time. For that reason, H2Bot creates and tracks individual threads that are spawned in the Hardware Interface Layer shown in Figure 8. Those threads can run in background mode or in foreground Graphical User Interface (GUI) mode; when they are not updating the screen, they are collecting and processing the incoming data. The background/foreground modes offer an important performance optimization.
The same model is used to move filtered data into the data aggregator and then to the "Brain" of the system.
Data from the Ladar, video, GPS, and digital compass are all presented to data consumers, which then process the data. For example, the Ladar Capture Processor looks for corridors and obstacles as it acquires ranging data from the Ladar, and the Video Capture Processor grabs camera frames, manipulates them through different filtering modules, and presents the data for processing.
Once the data is processed, it is passed to the Virtual Environment which registers interest in all the data inputs. Once there is new data from each of the inputs of interest, the Virtual Environment gets notified and aggregates each piece of information.
Finally, when all information is entered into the Virtual Environment for a given point in time, the “Brain” module can then make a decision on the best path for the robot to follow.
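The Java sketch below illustrates this capture-and-aggregate pattern under simplifying assumptions; the class names and the latch-based synchronization are illustrative rather than the team's actual implementation.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;

/**
 * Illustrative sketch (not the team's actual classes): each sensor processor runs
 * in its own thread and posts results to a shared "virtual environment", which
 * releases the Brain once every input of interest has reported for the time step.
 */
class VirtualEnvironment {
    private final Map<String, Object> inputs = new ConcurrentHashMap<>();
    private final CountDownLatch ready;

    VirtualEnvironment(int expectedInputs) {
        this.ready = new CountDownLatch(expectedInputs);
    }

    void post(String source, Object data) {
        inputs.put(source, data);
        ready.countDown();            // one more input of interest has arrived
    }

    Map<String, Object> awaitSnapshot() throws InterruptedException {
        ready.await();                // block until every registered input reports
        return inputs;
    }
}

public class CaptureDemo {
    public static void main(String[] args) throws InterruptedException {
        VirtualEnvironment env = new VirtualEnvironment(2);

        // Each capture processor would normally loop; one iteration is shown here.
        Thread ladarThread = new Thread(() -> env.post("ladar", "corridor at 2.3 m"));
        Thread videoThread = new Thread(() -> env.post("video", "line 15 deg left"));
        ladarThread.start();
        videoThread.start();

        // The "Brain" makes its decision only on a complete snapshot.
        System.out.println("Aggregated inputs: " + env.awaitSnapshot());
    }
}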
5.2.2 Interpretation Layer
Video is used to detect the white lines. To accomplish this, the original video image goes through the following steps:
5.2.2.1 Filtering
Filtering is accomplished using a scalable, modified color median filter. The median filter was chosen for its ability to preserve edges in the image while reducing salt-and-pepper video noise. The filter is set up with a 3x3 convolution window but can be set to an arbitrary size.
5.2.2.2 Thresholding
Thresholding is used to prepare a binary image for the Hough Line Transform. The image must be reduced to just 1 or 2% of its original pixels since the Hough transform does many calculations on each pixel.
5.2.2.3 Hough Line Transform
The Hough line transform [Hough 1959] is a method for finding dominant lines in noisy images. Every point in the binary image is transformed to Hough-space which represents the polar coordinates of potential lines through each point in the image. All of the potential lines through a given point in the image show up as sinusoidal curves in the Hough-space. The intersections of the curves represent collinear points in the image. By locating the peak intersection in the Hough-space, the coordinates of the dominant line is extracted.
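As a rough illustration of the voting scheme described above (not the vehicle's actual implementation), the sketch below accumulates votes for each (rho, theta) pair over a binary edge image and returns the strongest line; the angular resolution is an assumed parameter.

/**
 * Minimal Hough line transform sketch: each foreground pixel votes for all
 * (rho, theta) lines through it; the accumulator peak gives the dominant line.
 */
public class HoughLines {

    /** Returns {rhoIndex, thetaIndex} of the strongest line in a boolean edge image. */
    static int[] dominantLine(boolean[][] edges, int thetaSteps) {
        int h = edges.length, w = edges[0].length;
        int maxRho = (int) Math.ceil(Math.hypot(w, h));
        // rho can be negative, so offset the accumulator by maxRho.
        int[][] acc = new int[2 * maxRho + 1][thetaSteps];

        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                if (!edges[y][x]) continue;
                for (int t = 0; t < thetaSteps; t++) {
                    double theta = Math.PI * t / thetaSteps;
                    int rho = (int) Math.round(x * Math.cos(theta) + y * Math.sin(theta));
                    acc[rho + maxRho][t]++;     // sinusoidal vote in Hough space
                }
            }
        }

        int bestRho = 0, bestTheta = 0, bestVotes = -1;
        for (int r = 0; r < acc.length; r++) {
            for (int t = 0; t < thetaSteps; t++) {
                if (acc[r][t] > bestVotes) {
                    bestVotes = acc[r][t];
                    bestRho = r - maxRho;
                    bestTheta = t;
                }
            }
        }
        return new int[] { bestRho, bestTheta };
    }
}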
5.2.2.4 Hough Circle Transform
The Hough Circle Transform uses similar techniques as described above to find circles of a specified radius in a noisy image. This is helpful in determining the location of potholes on the course.
5.2.2.5 Perspective Correction
In order to make the line information extracted from the image more useful for navigation, the image is transformed to look like a projection onto level ground. This is accomplished by entering five physical dimensions from the camera position on the robot and performing the trigonometric manipulation based on these. The parameters and correction diagram can be found in Figure 14.
[pic]
Figure 14. Perspective Correction
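A hedged sketch of this kind of flat-ground projection is shown below; the camera height, pitch, and focal length values are placeholders, not the five measured dimensions used on the robot.

/**
 * Sketch of a flat-ground perspective correction: a pixel is projected onto the
 * ground plane using the camera height, downward pitch, and focal lengths.
 * The numeric camera parameters below are placeholders only.
 */
public class GroundProjection {

    // Placeholder camera parameters (assumed for illustration).
    static final double CAM_HEIGHT_M = 1.0;                  // camera height above ground
    static final double PITCH_RAD    = Math.toRadians(25);   // downward tilt
    static final double FX = 500, FY = 500;                  // focal lengths in pixels
    static final double CX = 320, CY = 240;                  // principal point (640x480 image)

    /** Returns {lateralMeters, forwardMeters} of the ground point seen at pixel (u, v). */
    static double[] pixelToGround(double u, double v) {
        // Ray through the pixel in camera coordinates (x right, y down, z forward).
        double xc = (u - CX) / FX;
        double yc = (v - CY) / FY;

        double sin = Math.sin(PITCH_RAD), cos = Math.cos(PITCH_RAD);
        // Rotate the ray into world coordinates (X right, Y forward, Z up).
        double dirY = cos - yc * sin;          // forward component
        double dirZ = -sin - yc * cos;         // vertical component (negative = downward)

        if (dirZ >= 0) {
            return null;                       // ray never reaches the ground
        }
        double t = CAM_HEIGHT_M / -dirZ;       // scale so the ray drops CAM_HEIGHT_M
        return new double[] { t * xc, t * dirY };
    }

    public static void main(String[] args) {
        double[] p = pixelToGround(400, 300);
        System.out.printf("lateral %.2f m, forward %.2f m%n", p[0], p[1]);
    }
}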
5.2.2.6 Map to Virtual Environment
The last step in the process is mapping to a specific field size in front of the robot which represents a common virtual environment.
The Interpretation Layer is where the heavy scene analysis is performed based on LADAR and video inputs. These data must be interpreted to populate the virtual world map with navigation data. Compass and DGPS data are processed to determine vehicle pose for the virtual world map and as augmentation to video and LADAR scene analysis subsystems.
5.2.3 Domain Layer
The Domain Layer receives input from the Interpretation Layer to construct a virtual world map for the robot. This map is the basis for all motion planning. Information in the lower layers is hidden to ensure clean interfaces.
The line following controller is a single Java class which combines several other classes to accomplish the following tasks:
1. Get heading error signal
2. Get centering error signal
3. PID controller
4. Motor control
5. AX3500 motor driver
All of the tunable parameters are in one block in the controller class, and there is no cross-coupling or dependency between the classes, in keeping with the object-oriented design approach.
5.2.4 Application Layer
The Application Layer contains the path planning and motion control for the Autonomous and Navigation Challenges. Information from the Domain Layer is used to decide the best path for the vehicle. The path is executed using high-level commands in engineering units, which are translated into primitive commands by the Hardware Interface Layer.
Once the heading and centering errors are determined, they are combined into a single error signal for the PID controller. Filtered video data provides input to the PID closed-loop controller, which controls the vehicle's yaw rate.
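A minimal sketch of such a yaw-rate PID loop is given below; the error weighting and gains are illustrative assumptions rather than the tuned values kept in the controller class.

/**
 * Sketch of the yaw-rate PID loop described above. Gains and the heading/centering
 * weighting are illustrative assumptions.
 */
public class YawRatePid {
    private final double kp, ki, kd;
    private double integral, lastError;

    public YawRatePid(double kp, double ki, double kd) {
        this.kp = kp; this.ki = ki; this.kd = kd;
    }

    /** Combine heading and centering errors, then run one PID step (dt in seconds). */
    public double update(double headingErrorRad, double centeringErrorM, double dt) {
        double error = headingErrorRad + 0.5 * centeringErrorM;  // assumed weighting
        integral += error * dt;
        double derivative = (error - lastError) / dt;
        lastError = error;
        return kp * error + ki * integral + kd * derivative;     // commanded yaw rate
    }
}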
5.3 Video Scene Analysis
The IGVC vision system’s primary purpose is to identify obstacles that threaten the vehicle’s progress. Two general types of obstacles exist: those with definite shapes (such as potholes, construction barricades, and other impediments), and those with variable shapes (such as the field lines). This preliminary design presents a system that will be able to detect both types of objects reliably.
5.3.1 Preprocessor (Illumination Correction)
The preprocessor’s task is to remove as many unimportant factors from the image as possible. One of the most important factors in image analysis is illumination, especially when color is utilized as a feature in later processing steps. A change in illumination (especially a change in illumination intensity, such as darkening or lightening of the scene) could change the contents of the image enough to prevent the system from correctly identifying objects. To reduce the effect of illumination upon the later processing steps, a color card is used to reference the input scene’s general illumination to ideal illumination conditions.
A color reference card is an object within the camera's field of view, upon which one or more known colors are printed. Figure 9 is an example color reference card, optimized for the hue-saturation-intensity color space.
Figure 9. An example color reference card.
Ideally, this color reference card will be placed in an orientation so that, under as many situations as possible, the reference card receives the same general illumination as the rest of the scene. From the information known about the card and from the data returned about it by the vision sensor, the illumination of the scene as a whole can be calculated, and the image can be corrected to match an ideal illumination value.
This solution, if not implemented carefully, could leave openings for problems. For example, drastic changes in local illumination (such as those caused by shadows) could cause issues. If a small shadow crosses the color reference card, the preprocessor would assume the entire scene is darkened, and the scene would be artificially and incorrectly brightened. Such immediate changes could be avoided by using the last several illumination calculations as reference (so that the reference card must be in a shadow for an extended period of time before it becomes a significant factor). In addition, the total brightness of the scene could be used as a validity check to confirm that the illumination of the color reference card correctly represents the illumination of the scene as a whole.
In addition to illumination correction, some quantization of the colors in the input scene may be handled by the preprocessor. Depending on the method employed for complex feature detection, the colors in the image may be quantized into a discrete set of colors (such as 16 or 32 colors), or the colors may be left alone.
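To make the correction concrete, the sketch below scales a frame so that a known white patch on the reference card returns to an ideal brightness; the patch location, ideal value, and simple RGB scaling are assumptions for illustration, not the exact correction used on the vehicle.

import java.awt.image.BufferedImage;

/**
 * Illustrative illumination correction against a color reference card: the observed
 * brightness of a known patch is compared with its ideal value and the whole frame
 * is scaled accordingly.
 */
public class IlluminationCorrector {

    /** Average brightness (0-255) of a rectangular patch. */
    static double patchBrightness(BufferedImage img, int x0, int y0, int w, int h) {
        long sum = 0;
        for (int y = y0; y < y0 + h; y++) {
            for (int x = x0; x < x0 + w; x++) {
                int rgb = img.getRGB(x, y);
                int r = (rgb >> 16) & 0xFF, g = (rgb >> 8) & 0xFF, b = rgb & 0xFF;
                sum += (r + g + b) / 3;
            }
        }
        return sum / (double) (w * h);
    }

    /** Scale every pixel so the reference patch matches its ideal brightness. */
    static void correct(BufferedImage img, int x0, int y0, int w, int h, double ideal) {
        double gain = ideal / patchBrightness(img, x0, y0, w, h);
        for (int y = 0; y < img.getHeight(); y++) {
            for (int x = 0; x < img.getWidth(); x++) {
                int rgb = img.getRGB(x, y);
                int r = clamp(((rgb >> 16) & 0xFF) * gain);
                int g = clamp(((rgb >> 8) & 0xFF) * gain);
                int b = clamp((rgb & 0xFF) * gain);
                img.setRGB(x, y, (r << 16) | (g << 8) | b);
            }
        }
    }

    private static int clamp(double v) {
        return (int) Math.max(0, Math.min(255, v));
    }
}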
5.3.2 Complex Feature Profiler and Comparator
After the scene is preprocessed, complex features in the scene are detected, labeled, and tracked. Construction barrels, construction barricades, pails, and sand pits are all defined as complex features.
[pic]
Figure 10. Input image for profiling a construction barrel (image and mask)
Note that translation, rotation, and scale are all permitted to vary freely; the detector must be capable of detecting specific objects at any location in the image, rotated about any angle, and at any scale (near or far).
[pic]
Figure 11. Hue and saturation histogram profile (32 x 32) of the barrel in Figure 10
To detect a specific feature, the detection system must be aware of the characteristics of that feature. Several different types of profile-generating algorithms are available. The introduction of color histograms as a search method originated in [Swain and Ballard 1991], and such a mechanism would be one of the simpler profiling methods. For example, in common terms, the construction barrel (Figure 10) consists of about 15% white and 85% orange. This information can be converted into a histogram with spikes at orange and white. In Figure 11, the large vertical grouping of points at the left side of the two-dimensional histogram represents the red-orange of most of the barrel, while the clustering near the bottom right represents the white stripes of the barrel. The feature profiler is just a single component in the system, one that can be changed modularly if necessary.
The feature profiler generates a profile for a desired feature. Many of these profiles will exist in a feature database, accessible to the detection system. In addition, segments of a scene can be profiled in exactly the same way as desired features, and the comparison of two profiles (a desired profile versus a candidate profile) can be used to correctly identify objects. The ‘comparator’ portion of this module compares two profiles and returns a value indicating how alike the two are.
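The sketch below illustrates this profiling and comparison step; the 32-bin resolution matches Figure 11, while the histogram-intersection comparator is a reasonable choice rather than a confirmed detail of the team's code.

/**
 * Sketch of hue/saturation histogram profiling and comparison, after Swain and
 * Ballard's color-indexing approach. Bin count and comparator are illustrative.
 */
public class ColorProfile {
    static final int BINS = 32;

    /** Build a normalized 32x32 hue/saturation histogram from HSB pixel values. */
    static double[][] profile(float[][] hsbPixels) {   // each row: {hue, saturation, brightness}
        double[][] hist = new double[BINS][BINS];
        for (float[] p : hsbPixels) {
            int h = Math.min(BINS - 1, (int) (p[0] * BINS));
            int s = Math.min(BINS - 1, (int) (p[1] * BINS));
            hist[h][s]++;
        }
        for (int i = 0; i < BINS; i++)
            for (int j = 0; j < BINS; j++)
                hist[i][j] /= hsbPixels.length;
        return hist;
    }

    /** Histogram intersection: 1.0 means identical color distributions, 0.0 no overlap. */
    static double compare(double[][] a, double[][] b) {
        double score = 0;
        for (int i = 0; i < BINS; i++)
            for (int j = 0; j < BINS; j++)
                score += Math.min(a[i][j], b[i][j]);
        return score;
    }
}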
5.3.3 Complex Feature Tracking
The next part of the complex feature system is the feature tracking system. This module handles the challenge of tracking a known profile through an entire search space. It will use a limited genetic algorithm to search a range in and around the original feature bounds.
[pic]
Figure 12. Example of the 'exclusion zone' algorithm.
If we assume the system correctly detects the construction barrels, their outlines can then be removed from later analysis. One important part of this module is that once a known feature is definitively re-located in the next scene frame, its search rectangle is removed from the scene. Essentially, since the system now is relatively sure that the search rectangle contains a known feature (and little or nothing else), then it does not have to waste computational power on analyzing that section further. More than one feature can be tracked at the same time; in fact, the more complex features that can be pulled out of the image, the better chance later steps have at picking out more ambiguous features (such as field lines). Although, in the case of round features such as potholes, meaningful pixels could be removed, this should not be a big concern.
5.3.4 Complex Feature Detection
At this point, the detection system has the tools it needs to detect complex features in an input scene. First, any known objects are located and parsed out of the image. If no features are known, then the entire image is searched for complex features. A simple genetic algorithm is executed upon every input scene. This algorithm uses four variables: X1, Y1, X2, and Y2. These variables define the points (X1, Y1) and (X2, Y2), the corners of a rectangular search window within the scene. A profile for the search window is generated (as discussed in Section 5.3.2) and compared to one or more saved feature profiles (such as the profile of a construction barrel).
Anywhere from dozens to hundreds of search rectangles are inserted into the genetic pool. Crossover and mutation is used to propel the genetic algorithm until the genetic algorithm completes (determined by some end-condition, such as total generations executed). If the best feature match in the scene is over some threshold (such as 80% confidence that the feature in the best search window is the compared object), then that feature is officially recognized, and its details are passed to the complex feature tracker. Once the input scene leaves the complex object detection system, all detected complex objects will have their pixels removed from the image, making the image simpler for analysis of more ambiguous features.
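A compact sketch of this kind of windowed genetic search appears below; the population size, crossover, and mutation details are illustrative choices, and the fitness function stands in for the profile comparator described above.

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Random;

/**
 * Sketch of a genetic search over rectangular windows. A chromosome is
 * (x1, y1, x2, y2); fitness is the profile-comparison score of the window
 * against a stored feature profile.
 */
public class WindowSearchGa {
    interface Fitness { double score(int x1, int y1, int x2, int y2); }

    static int[] search(int imgW, int imgH, Fitness fitness, int generations) {
        Random rnd = new Random();
        List<int[]> pop = new ArrayList<>();
        for (int i = 0; i < 100; i++) pop.add(randomWindow(imgW, imgH, rnd));

        for (int g = 0; g < generations; g++) {
            pop.sort(Comparator.comparingDouble(
                    (int[] w) -> fitness.score(w[0], w[1], w[2], w[3])).reversed());
            List<int[]> next = new ArrayList<>(pop.subList(0, 20));  // elitism
            while (next.size() < 100) {
                int[] a = pop.get(rnd.nextInt(20)), b = pop.get(rnd.nextInt(20));
                int[] child = { a[0], a[1], b[2], b[3] };            // one-point crossover
                if (rnd.nextDouble() < 0.2) {                        // mutation
                    child[rnd.nextInt(4)] += rnd.nextInt(21) - 10;
                }
                next.add(normalize(child, imgW, imgH));
            }
            pop = next;
        }
        return pop.get(0);  // best individual from the last sorted generation
    }

    static int[] randomWindow(int w, int h, Random rnd) {
        return normalize(new int[] { rnd.nextInt(w), rnd.nextInt(h),
                                     rnd.nextInt(w), rnd.nextInt(h) }, w, h);
    }

    /** Clamp to the image and ensure (x1, y1) is the top-left corner. */
    static int[] normalize(int[] c, int w, int h) {
        int x1 = Math.max(0, Math.min(w - 1, Math.min(c[0], c[2])));
        int x2 = Math.max(0, Math.min(w - 1, Math.max(c[0], c[2])));
        int y1 = Math.max(0, Math.min(h - 1, Math.min(c[1], c[3])));
        int y2 = Math.max(0, Math.min(h - 1, Math.max(c[1], c[3])));
        return new int[] { x1, y1, x2, y2 };
    }
}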
5.3.5 Simple Feature Detection
[pic]
Figure 13. Implementation of the multi-niche crowding algorithm on a test image
The final interpretation of the image is simple feature detection. In this case, the simple features are line segments (representative of the painted field lines). The ability to detect multiple lines in the image leads to several advantages. In this manner, curves can be detected (via the linking of two segments), and both boundary field lines can be interpreted in the same search space. Therefore, a multimodal (multi-solution) genetic algorithm is a good choice to detect simple line segments in the simplified image.
The best candidate so far researched is the implementation of a genetic algorithm with multi-niche crowding [Cedeno et al 1994]. Figure 13 illustrates an advanced solution from a multi-niche crowding implementation. Such an implementation should be adept at detecting field lines.
5.2.5 Motor Control
The motor control module is a smart actuator that receives speed and yaw-rate commands in engineering units and provides retrieval functions for the left and right motor command values. These can be transmitted directly to the motor driver software.
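A minimal sketch of this speed and yaw-rate mixing is shown below, assuming a nominal track width and command scaling; the real module hands its results to the AX3500 driver class.

/**
 * Sketch of speed / yaw-rate to wheel-command mixing. Track width and scaling
 * are assumed values for illustration.
 */
public class MotorMixer {
    static final double TRACK_WIDTH_M = 0.6;          // assumed distance between drive wheels
    static final double MAX_WHEEL_SPEED_M_S = 1.9;    // ~4.2 mph expressed in m/s

    /** Returns {leftCommand, rightCommand} in the range -1..1. */
    static double[] mix(double speedMps, double yawRateRadPerSec) {
        // Differential drive kinematics: each wheel speed is the forward speed
        // plus/minus half the track width times the yaw rate.
        double left  = speedMps - yawRateRadPerSec * TRACK_WIDTH_M / 2.0;
        double right = speedMps + yawRateRadPerSec * TRACK_WIDTH_M / 2.0;
        return new double[] {
            clamp(left / MAX_WHEEL_SPEED_M_S),
            clamp(right / MAX_WHEEL_SPEED_M_S)
        };
    }

    private static double clamp(double v) {
        return Math.max(-1.0, Math.min(1.0, v));
    }
}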
5.3.6 Ladar Scene Analysis
The Ladar (Sick LMS 291) is used to detect and confirm obstacles for placement in the virtual world map so they can be circumvented by the path planning module. The Ladar Scene Analysis subsystem uses a combination of Ladar ranging and vehicle pose information to correctly locate objects on the virtual world map.
5.4 Virtual World Map
The virtual world map is implemented as a 2-d array representing the vehicle’s environment in terms of traversable space and obstacles. This will be the basis for path planning in both the Navigation and Autonomous Challenges. In the Navigation Challenge, the GPS way points will become part of the map at the beginning. In the Autonomous Challenge, the system will populate the map as the vehicle progresses based on sensor inputs.
5.5 Path Planning
Paths are analyzed as arcs based on the vehicle's current heading, where it has come from, and the known obstacles and boundaries. The path segment with the best cost is executed.
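The sketch below combines the virtual world map and arc evaluation under simplifying assumptions (grid resolution, arc length, and the cost rule are illustrative); it is meant only to show the shape of the computation, not the team's planner.

/**
 * Sketch combining the 2-D virtual world map (Section 5.4) and arc-based path
 * evaluation (Section 5.5). All numeric parameters are illustrative.
 */
public class ArcPlanner {
    static final double CELL_M = 0.25;                            // assumed grid resolution
    static final boolean[][] occupied = new boolean[200][200];    // true = obstacle cell

    /** Cost of a constant-curvature arc from (x, y, heading); lower is better. */
    static double arcCost(double x, double y, double heading, double curvature) {
        double cost = 0, step = 0.1, length = 4.0;    // evaluate 4 m ahead in 10 cm steps
        for (double s = 0; s < length; s += step) {
            heading += curvature * step;
            x += Math.cos(heading) * step;
            y += Math.sin(heading) * step;
            int col = (int) (x / CELL_M), row = (int) (y / CELL_M);
            if (row < 0 || col < 0 || row >= occupied.length || col >= occupied[0].length
                    || occupied[row][col]) {
                return Double.POSITIVE_INFINITY;      // arc leaves the map or hits an obstacle
            }
            cost += Math.abs(curvature) * step;       // prefer gentle arcs
        }
        return cost;
    }

    /** Pick the best of several candidate curvatures. */
    static double bestCurvature(double x, double y, double heading) {
        double best = 0, bestCost = Double.POSITIVE_INFINITY;
        for (double k = -0.5; k <= 0.5; k += 0.1) {
            double c = arcCost(x, y, heading, k);
            if (c < bestCost) { bestCost = c; best = k; }
        }
        return best;
    }
}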
5.6 Autonomous Challenge
The Autonomous Challenge requires extensive use of both the video processing and Ladar subsystems of the robot. They deliver data to the system for processing, aggregation, and then decision making.
5.7 Navigation Challenge
Java Ladar processing code classifies objects as obstacles or corridors and feeds that information to the decision making module, or “Brain.” The Obstacle and Corridor Detection algorithm is outlined in the steps below.
1. Start iterating through all the points of the Ladar sweep.
2. Search: If you haven't found an obstacle yet, keep searching. When you find one, check whether the current point is within the "Yellow Zone." If it is, go to the obstacle-scanning mode. Otherwise, check whether there is a possible corridor.
3. If isObstacle: Keep track of the maximum and minimum radii as well as the angle. Search to the left of the obstacle to see if there is a point that is continuous with the last point. If there is, "expand" the angle and continue tracking the maximum and minimum angles. Once there is a gap between two points, go back to step 2 (Search).
4. If Scanning a Corridor: Check all the points collected to see if they are greater than or equal to the tunable parameter CORRIDOR_THRESHOLD which sets the optimal value for a “Safe” corridor. If you find an object, switch back to isObstacle mode in Step 3.
Because the system picks out corridors up to a range of 8 meters, the robot can avoid cul-de-sacs that are less than 8 meters deep.
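A simplified sketch of this classification pass over one Ladar sweep follows; the threshold and zone values are illustrative, and the real code also records the angles and radii of each obstacle as described in the steps above.

/**
 * Simplified classification of a 180-degree Ladar sweep into obstacles, corridors,
 * and clear space. Threshold and zone values are illustrative only.
 */
public class LadarScan {
    static final double CORRIDOR_THRESHOLD_M = 8.0;  // range considered a "safe" corridor
    static final double YELLOW_ZONE_M = 3.0;         // assumed near-obstacle zone

    /** Classify each beam of the sweep as 'O' (obstacle), 'C' (corridor), or '.' (clear). */
    static char[] classify(double[] rangesMeters) {
        char[] label = new char[rangesMeters.length];
        for (int i = 0; i < rangesMeters.length; i++) {
            double r = rangesMeters[i];
            if (r < YELLOW_ZONE_M) {
                label[i] = 'O';                       // close return: treat as obstacle
            } else if (r >= CORRIDOR_THRESHOLD_M) {
                label[i] = 'C';                       // long return: candidate corridor
            } else {
                label[i] = '.';
            }
        }
        return label;
    }
}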
Figure 15 below depicts the categorization of corridors and obstacles. The key at the bottom describes the threat level of each zone the Ladar sees.
[pic]
Figure 15. Categorization of Corridors and Obstacles
Previous LTU vehicles did not have a compass or a sufficiently accurate GPS sensor to determine the orientation of the vehicle. The compass and sub-meter-accurate DGPS on this vehicle make the robot's position information more accurate, allowing it to determine the direction to the goal and move there precisely. The robot uses the controller and Ladar code described in Sections 5.2.1 and 5.3.6 to accomplish the tasks of the Navigation Challenge.
5.7.1 Navigation Challenge Algorithm
Our method of reaching a GPS waypoint is to first head in the direction of the goal, determined using GPS and compass data. When an obstacle is encountered, the robot uses Ladar data to avoid it and then recalculates the path to the waypoint. We calculate the angle θ between the robot's heading and the goal direction, as shown in Figure 16. Our navigation algorithm reduces the angle error between the robot and the goal to an angle θ close to zero, which means the robot is headed straight toward the goal.
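The sketch below shows one way to compute this angle error from DGPS and compass data, using a standard flat-earth approximation for converting latitude and longitude differences to meters; it is an illustration, not the team's exact navigation code.

/**
 * Sketch of the angle-error calculation used to steer toward a GPS waypoint.
 * A local flat-earth approximation converts lat/lon differences to meters.
 */
public class WaypointBearing {

    /** Angle error (radians, -pi..pi) between the robot heading and the goal direction. */
    static double headingError(double latDeg, double lonDeg,
                               double goalLatDeg, double goalLonDeg,
                               double compassHeadingRad) {
        double dLatM = (goalLatDeg - latDeg) * 111320.0;  // meters per degree latitude (approx.)
        double dLonM = (goalLonDeg - lonDeg) * 111320.0 * Math.cos(Math.toRadians(latDeg));

        // Bearing measured clockwise from north, matching a compass heading.
        double bearingToGoal = Math.atan2(dLonM, dLatM);
        double error = bearingToGoal - compassHeadingRad;

        // Wrap into -pi..pi so the controller turns the short way around.
        while (error > Math.PI)  error -= 2 * Math.PI;
        while (error < -Math.PI) error += 2 * Math.PI;
        return error;
    }
}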
5.7.2 Radar Graphical User Interface
A graphical interface in the form of a radar display, shown in Figure 17, presents compass and GPS data in a graphical tool used for testing the robot. The N represents north, the blue point is the robot, and the red line depicts the direction to the goal.
The radar graphical interface presents easy to understand graphics of the navigation. A red line on the radar display links the robot position to the waypoint indicating the path the robot should follow to reach the waypoint. When the robot reaches the goal, the red line disappears from our radar and a new link to the next waypoint is displayed.
6. Performance Analysis
Vehicle performance was estimated based on the design parameters, and the same attributes were measured after integration. A summary of predicted and measured data is listed in Table 2. The discrepancies between predicted and actual results are mainly due to "real world" effects of varying terrain and sensor noise that were not included in the prediction calculations.
|Performance Measure |Performance Prediction |Performance Results |
|Maximum Speed |4.2 mph | |
|Curb Climbing Ability |2.5 inch curb | |
|Ramp Climbing Ability (mu = .35) |60% |20% |
|Nominal Power Consumption |X watts | |
|Battery Operating Time |x minutes | |
|Hydrogen Operating Time |x minutes | |
|Brake Holding Capacity |>30% grade | |
|Reaction Time |1 – 2 Hz |1.5 Hz |
|Object Detection Range |8 meters |8 meters |
|Dead-end and Trap Detection |8 meters deep |8 meters deep |
|Waypoint Accuracy (with OmniSTAR) |.1 to .8 meters |.6 to .8 meters |
Table 2. Performance Analysis.
6.1 Safety
The issue of safety was considered early in the initial speculation phase of our agile design process, in which requirements for the vehicle were gathered. Electrical safety was addressed by installing circuit breakers inline with the motor power circuit and clearly labeling electrical wiring. The E-stops can be engaged in emergency situations when the robot has to be disabled immediately; after an E-stop, the vehicle can only be restarted by resetting it manually. H2Bot safety features include manual and wireless e-stops, failsafe brakes, and hydrogen leak detection shutdown.
6.2 Reliability
The reliability of H2Bot should be improved with respect to drive motor function and power availability, both of which hampered the 2005 design. These items were improved by a complete drive system redesign and by the development of dual modular power sources with increased capacity. Battery redundancy increases reliability in the event one module fails, and multiple sources of obstacle avoidance input from the Ladar and camera offer increased reliability should one sensor fail.
6.3 Durability
Durability improvements include the extruded aluminum frame and the redesigned drive system. The new frame is significantly stronger and more rigid than last year's fiberboard construction, and the drive system has design margins that exceed competition requirements; last year's model was not capable of climbing over the front lip of the ramps in the Autonomous Challenge.
6.4 Testing
Hardware and software tests were conducted under a three phase test plan which included: unit testing of each component by the developer as the components were created, integration testing as each component was integrated into the entire system to check that the components functioned properly as a system, and regression tests to verify that previously tested components didn’t introduce bugs after integration with other newer components. Practice courses were set up to test the functionality of the robot for both the Autonomous and Navigation Challenges.
6.5 Systems Integration
The systems integration model was a direct benefit from the object-oriented programming model and was aided by the use of Java interfaces and a custom written threading model. Hardware integration was facilitated by the modular design of the chassis and the use of electrical buses to deliver power to each subsystem. Hardware and software systems integration was performed during each increment of the development.
6.6 Vehicle Cost Summary
Table 3 summarizes the total material cost for the H2Bot vehicle. This is the most expensive and sophisticated vehicle developed by LTU teams so far.
|Component |Total Cost |Team Cost |
|(1) Sick LMS 291-S05 Ladar* |$7,000 |$0 |
|Nexa 1.2 KW Fuel Cell |$7,000 |$7,000 |
|(1) NovaTel ProPack-LB DGPS & GPS-600-LB Antenna* |$2,700 |$0 |
|(1) MPC Laptop* |$2,215 |$0 |
|(2) Brush DC planetary gearmotors with e-brake |$2,000 |$2,000 |
|(1) JVC TK-C1480U Color Video Camera* |$1,000 |$0 |
|(2) Hydrogen tanks + fuel |$900 |$900 |
|(1) PixelLink PL-A544 Video Converter box* |$500 |$0 |
|Electrical Hardware |$500 |$500 |
|(1) Digital Compass/inclinometer* |$400 |$0 |
|(1) Roboteq AX3500 Dual Channel Motor Controller |$395 |$395 |
|(4) Main Battery 12 Volt 32 Ah AGM |$350 |$350 |
|Chassis Materials |$300 |$0 |
|Misc Hardware (nuts, bolts, etc…) |$200 |$200 |
|(2) Hollow Shaft Optical Encoder Kit |$122 |$122 |
|(2) 14" Tire & Wheel Assembly |$58 |$58 |
|Total |$25,640 |$11,525 |
|*reused from 2005 vehicle | | |
Items with a team cost of $0 are components that were loaned to us for the competition.
Table 3. Vehicle Cost Summary.
7. Conclusion
H2Bot is an autonomous vehicle designed and constructed by LTU students that continues the LTU tradition of innovation and continuous improvement. It incorporates a modular, extensible software architecture in which a host of sensor inputs are integrated using a virtual world map. Its fuel cell power source, redesigned drive system, and new software approach should make it an interesting and competitive entry in the 2006 IGVC.
-----------------------
Figure 17. Radar GUI
Figure 1 Agile Design Cycle
Figure 16. The angle θ between the robot heading and the goal