


WiFi Test Bed

Experimentation Plan

Version 2.12

28 May 2004

Prepared under Subcontract SC03-034-191 with L-3 ComCept, Contract Data Requirements List (CDRL) item A002, WiFi Deployment and Checkout Plan

Prepared by:

Timothy X. Brown, University of Colorado at Boulder

303-492-1630

timxb@colorado.edu

Kenneth Davey, L-3 ComCept

972-772-7501

ken.davey@L-


TABLE OF CONTENTS

1.0 INTRODUCTION
1.1 Purpose
1.2 Background
1.3 Objective
1.4 Approach
2.0 Test Bed Overview
2.1 Test Site
2.2 Network Overview
2.3 Network Architecture
2.4 Monitoring
2.5 Security
3.0 Modes & Configurations
3.1 Scenario 1: Ground-UAV-Ground
3.2 Scenario 2: Multiple UAVs
4.0 Measures of Performance & Effectiveness
4.1 Measures of Performance
4.2 Measures of Effectiveness
5.0 Demonstration & Test Experiments
5.1 Fixed Ground-to-Fixed Ground
5.1.1 Baseline Ground Network
5.1.2 Ad Hoc Ground Network Changes
5.2 Mobile Ground-to-Fixed Ground
5.2.1 Mobile Node at the Edge of the Network
5.2.2 Mobile Nodes within the Network
5.3 Ground-to-UAV
5.3.1 Fixed Ground-to-Fixed Ground UAV Effects
5.3.2 Mobile Ground-to-Fixed Ground UAV Effects
5.3.3 UAV Connecting Disconnected Ground Troops
5.4 UAV-to-UAV
5.4.1 UAV Range Ground-to-Air
5.4.2 UAV Pair
5.4.3 Three UAVs
5.4.4 Ground-UAV-UAV
6.0 Methods & Procedures
6.1 Experimental Measures
6.2 Derived Measures of Performance
6.2.1 Data Throughput
6.2.2 Latency
6.2.3 Jitter
6.2.4 Packet Loss, Congestion
6.2.5 Packet Loss, Radio
6.2.6 Communication Availability
6.2.7 Remote Connectivity
6.2.8 Hardware Reliability
6.2.9 Range
6.2.10 Network Self-Forming
6.2.11 Node Failure Recovery
6.2.12 Mobility Impact
6.2.13 Data, Voice, Video, Web Page Communication
6.2.14 Deployment & Transportability
6.2.15 Ease of Operation
6.3 Procedure Checklist

LIST OF FIGURES

Figure 1: Project Phases
Figure 2: Views of the Table Mountain National Radio Quiet Zone
Figure 3: Map of Table Mountain
Figure 4: Network Overview
Figure 5: Network Architecture
Figure 6: Ground Vehicle Node Equipment
Figure 7: Handheld Personal Communicator - Sharp Zaurus SL-5600
Figure 8: CAD Drawing of the UAV Design
Figure 9: Mesh Network Radio Equipment
Figure 10: Fixed Site 1 Architecture
Figure 11: Monitoring Display Example
Figure 12: Test Scenarios
Figure 13: Measurement Relationships

LIST OF TABLES

Table 1: Ground-UAV-Ground Test Configurations
Table 2: Multiple UAV Test Configurations
Table 3: Measures of Performance
Table 4: Measures of Effectiveness
Table 5: Number of Tests per Category for Phase 2 & Phase 3
Table 6: Phase 2 Procedure vs. Experiment Checklist

APPENDICES

APPENDIX A - RELATED DOCUMENTS
APPENDIX B - GLOSSARY

1.0 INTRODUCTION

1.1 Purpose

This document provides experimentation plan details associated with a WiFi-based (802.11b) Wireless Local Area Network (WLAN) test bed made up of terrestrial and airborne nodes as well as broadband connectivity back to a Network Operations Center (NOC). Following an overview of the test bed in Section 2, modes and configurations are outlined in Section 3, measures of performance and effectiveness are listed in Section 4, demonstration and test experiments are detailed in Section 5, and methods & procedures are provided in Section 6.

This WiFi Test Bed Experimentation Plan is the result of “Phase 2” activities associated with experimentation planning, deployment, and initial testing. It represents the full test plan for Phase 2 as well as Phase 3 of the project. As specified within, 16 of 50 tests are to be completed in Phase 2, with the balance to be completed in Phase 3. This document will be updated to reflect changes and additions as they arise.

1.2 Background

Communication networks between and through aerial vehicles are a mainstay of current battlefield communications. Present systems use specialized high-cost radios in designated military radio bands. Current aerial vehicles are also high-cost manned or unmanned systems.

L-3 ComCept Inc. has contracted with the Air Force Materiel Command (AFMC), Aeronautical Systems Center (ASC), Special Projects (ASC/RAB) to establish and manage a Wireless Communications Test Bed project for the purpose of assessing a WLAN made up of terrestrial and airborne nodes operating with WiFi-based (802.11b) communications. The University of Colorado has been subcontracted to design, install and operate the test bed made up of Commercial Off-The-Shelf (COTS) equipment, and to integrate and operate Unmanned Aerial Vehicles (UAVs) which will interact with it. The network shall support rapidly deployed mobile troops that may be isolated from each other, allow for ad hoc connectivity, and require broadband connection to a Network Operations Center. Experiments are to be performed to measure and report on the performance and effectiveness of the test bed communications capabilities.

The Wireless Communications Test Bed project is being executed in phases. The objectives and dates associated with each phase are outlined in Figure 1 below.

[pic]

Figure 1: Project Phases

1.3 Objective

The objective of the wireless communications test-bed effort is to deploy and test a COTS-based communications network made up of terrestrial and aerial nodes that employ state-of-the-art mobile wireless and Internet Protocol (IP) technology. The solution shall support rapid deployment of mobile troops that may be isolated from each other, and require broadband connectivity to a Network Operations Center. Experiments are to demonstrate the potential for rapid deployment of an IP-centric, wireless broadband network that will support both airborne and terrestrial military operations anywhere, anytime.

1.4 Approach

A platform supporting the IEEE 802.11b ("WiFi") industry standard for Wireless Local Area Networks has been chosen as the basis for the test bed due to its support of broadband mobile wireless communications, its dynamic ad hoc mesh network operation, and its commercial availability at low cost. A common 802.11 platform will be utilized for all ground-based and UAV-based nodes. Special software (routing protocols) developed by the University of Colorado to efficiently manage ad hoc mobile mesh network functionality will be applied to the ad hoc network nodes.

The terrestrial and airborne communication devices will form an IP-centric network on an ad hoc basis. Broadband links will be established to a remote NOC location. Remote monitoring capabilities will allow for remote users to access data obtained, and to monitor the test site and activity on a real time basis. Packet data traffic in low, medium, and high-load regimes will be utilized for measuring performance and service support abilities. Typical multimedia applications (messaging, web page download, video, and VoIP) will be evaluated.

A location has been chosen that allows for uninterrupted testing of multiple deployment scenarios. Baseline performance will be established on a ground-to-ground connected configuration. Mobile node impacts will be tested. UAV deployment will allow for its introduction to the theater to be characterized. UAV effectiveness for connecting isolated troops will be evaluated, along with UAV abilities to extend the range of communication.


2.0 Test Bed Overview

An overview of the test site and test bed design is provided in the sections below. More detailed test bed design information is included in sections to follow.

2.1 Test Site

The Table Mountain National Radio Quiet Zone (NRQZ) is owned by the Department of Commerce, operated by the Institute for Telecommunication Sciences (ITS), and located approximately 10 miles north of Boulder, Colorado. The site is 2.5 miles by 1.5 miles on a raised mesa with no permanent radio transmitters in the vicinity. An aerial photo of the site is shown in Figure 2a and a view at ground level on the top of the mesa is shown in Figure 2b. A map of the site is shown in Figure 3, with fixed sites, nomadic personnel positions, isolated troop positions, and mobile unit paths identified.

[pic] [pic]

(a) (b)

Figure 2: Views of the Table Mountain National Radio Quiet Zone.

Aerial View (a) and Ground Level View (b).

[pic]

Figure 3: Map of Table Mountain.

FS - Fixed Site (powered locations)

HP - Handheld Position (location of ground-based personnel)

IP - Isolated Handheld Position

Public Road (green dashed line)

Ground Vehicle Path

In Figure 3, the grid lines are at 1000 ft (300 m) spacing. FS1 and FS2 are powered fixed-site locations and are connected via fiber optic cable. Broadband connectivity to the Internet and the NOC is through FS1 and FS2, over the fiber optic connection. The green, large-dashed line highlights the public road circuit around the base of the mesa.

Table Mountain has several features that make it ideal for the wireless test bed needs. First, it is a large (2.5 sq mi) zone where radio transmission is controlled. The top is flat and unobstructed. The mesa itself is a terrain obstacle suitable for blocking communication between users on opposite flanks of the mountain, as in Scenario 1 of Figure 12. It is circled by public roads so that communication to or from the mountain can easily be set up from any direction. The site has buildings that can house equipment and provide AC power. Fiber optic cable runs exist between buildings. Finally, it has several areas suitable for UAV flight operations (one is labeled in Figure 3).

2.2 Network Overview

An overview of the wireless communications network to be established by the test bed is provided in Figure 4. As shown, a WLAN comprised of fixed, mobile ground, handheld, and aerial units is connected to a remote NOC location through a Local Command Center (LCC). Local area connectivity is made available with units supporting 802.11b wireless transmission and mesh network routing. Remote monitoring and display capabilities are possible through an internet connection. It is anticipated that LCC connectivity to the NOC and internet through an Iridium satellite link will be tested in Phase 3 of the program.

Figure 4: Network Overview

802.11b nodes will be connected together using standard IEEE 802.11b WiFi cards; the cards being used are Orinoco Gold cards. The cards can be operated in "infrastructure" mode, which allows them to communicate only through 802.11b access points. For the mesh networking that is part of the test bed operation, however, the cards will be operated in "ad hoc" mode, which allows any of the mesh network radios to talk directly with each other. All network elements will communicate using IP version 4 (IPv4). More details of the 802.11 standard can be found in ANSI/IEEE Std 802.11, Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications, 1999 Edition, IEEE, March 18, 1999.

The 802.11b standard allows one node to talk to another; multi-hop routing capability is provided by the Dynamic Source Routing (DSR) protocol.[1] DSR defines route discovery, forwarding, and maintenance protocols that enable mesh network operation. University of Colorado personnel have implemented this protocol so that it can be modified for testing, optimization, and monitoring. The DSR protocol is implemented with the Click Modular Router language[2], which is very flexible and provides a high degree of control over network operation.
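
To make the source-routing idea concrete, the following minimal Python sketch (our own illustration; the node names and Packet structure are hypothetical, not the Click-based implementation used on the test bed) shows how an intermediate node forwards a packet using only the hop list carried in the packet header:

    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class Packet:
        """Data packet carrying an explicit DSR source route in its header."""
        source: str
        destination: str
        route: List[str]     # full hop list, e.g. ["n1", "n2", ..., "n6"]
        hop_index: int = 0   # position of the node currently holding the packet

    def forward(packet: Packet, my_id: str) -> Optional[str]:
        """Return the next hop, or None when the packet reaches its destination.

        Intermediate nodes need no routing tables: the route travels in the
        packet header, which is what makes DSR a source-routing protocol.
        """
        assert packet.route[packet.hop_index] == my_id, "packet misrouted"
        if my_id == packet.destination:
            return None  # deliver to the local application
        packet.hop_index += 1
        return packet.route[packet.hop_index]

    # Example: the five-hop route of the baseline ground network (Section 5.1.1).
    pkt = Packet("n1", "n6", ["n1", "n2", "n3", "n4", "n5", "n6"])
    node = "n1"
    while node is not None:
        node = forward(pkt, node)
        if node is not None:
            print("relaying to", node)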

2.3 Network Architecture

The network architecture to be used for experimentation and monitoring is shown in Figure 5. Information on each node and the interfaces involved are provided in paragraphs to follow.

Figure 5: Network Architecture

Ad Hoc Radio Network

The ad hoc radio network is made up of ground vehicle units, handheld personnel communicators, fixed site, and unmanned aerial communication points. A common radio and 802.11b WLAN interface platform is used between the ground vehicle, fixed site, and aerial nodes. The software can also be run on other platforms such as laptop computers. Additional information on each node is provided in the following paragraphs.

Ground Vehicle Node

The ground vehicle node is a mobile vehicle equipped with an 802.11b mesh network radio, a GPS receiver, a power supply, and end-user equipment used for test and application demonstration purposes. Application demonstration equipment to be made available includes laptop computers, VoIP phones, and video monitoring equipment. The end-user application equipment is connected to the mesh network radio via an Ethernet switch. Figure 6 depicts the equipment configuration within the mobile vehicles. The mesh network radio equipment used for the ground vehicle nodes, including the power supply and GPS receiver, is supplied by Fidelity Comtech and identified by model number FCI-2601.

Figure 6: Ground Vehicle Node Equipment

Handheld Node

The handheld personnel communicator is a commercial Personal Digital Assistant (PDA) with 802.11b wireless communication abilities and special software (routing protocols) applied to efficiently manage ad hoc mobile mesh network functionality. Sharp Zaurus SL-5600 Linux PDAs are utilized. In addition, standard laptop computers running the Linux operating system are also capable of running the routing protocols in the same role as the handheld nodes.

[pic]

Figure 7: Handheld Personal Communicator - Sharp Zaurus SL-5600

Aerial Vehicle Node

The aerial vehicle is a UAV and will be a modified version of existing designs developed at the University of Colorado. A CAD drawing of the airframe being developed for this project is shown in Figure 8. The payload bay is the shaded area, and its dimensions (19.5 x 6.5 x 6.5) are shown in inches. These dimensions are the maximum; available space is reduced by airframe ribs and tapering towards the tail. The designed performance includes a payload mass of 10 lb, flight time of 90 min, and cruise speed of 60 mph. Control of the UAV for Phase 2 operations is manual via a 72 MHz remote control link. Automatic waypoint control is planned for Phase 3 operations. Emergency recovery is through pre-programmed descent. UAV position data will be supplied to the network along with communications data.

The UAV will be equipped with 802.11b mesh network radio equipment that is common to that used in the ground vehicles and fixed sites. Figure 9 depicts the equipment configuration to be within the UAVs. The mesh network radio equipment used for the UAVs, including the power supply and GPS receiver, is supplied by Fidelity Comtech, and identified by model number FCI-2701.

[pic]

Figure 8: CAD drawing of the UAV design

[pic]

Figure 9: Mesh Network Radio Equipment.

The center shows the core radio, the left shows it mounted in the environmental enclosure (Fidelity Comtech FCI-2601), and the right shows the UAV version (Fidelity Comtech FCI-2701) mounted in the UAV.

Fixed Sites

The fixed sites within the test bed will form part of the active 802.11 communications network and will be used for backhaul purposes. Backhaul traffic will be carried over the fiber ring located at the Table Mountain test site and then transported to the Monitor Server over the internet. The Monitor Server is connected to the internet over a standard Ethernet connection. Much like the ground vehicle node, the fixed site node will be equipped with an 802.11b mesh network radio, a GPS receiver, a power supply, and end-user equipment used for test and application demonstration purposes. Application demonstration equipment to be made available includes laptop computers, VoIP phones, and video monitoring equipment. The end-user application equipment is connected to the mesh network radio via an Ethernet switch. Figure 10 depicts the equipment configuration within the fixed sites. The mesh network radio equipment used for the fixed sites, including the power supply and GPS receiver, is supplied by Fidelity Comtech and identified by model number FCI-2601.

[pic]

Figure 10: Fixed Site 1 Architecture

Network Operations Center (NOC)

For the test bed, all that is placed in the NOC location is a Monitor Server. The Monitor Server monitors and collects data from the UAV and ground nodes through Fixed Site 1 and Fixed Site 2. It also provides an interface for remote monitoring, with real-time display and play-back modes. The Remote Monitor connection will be over the internet. Connection to the Table Mountain test site will also be over the internet.

2.4 Monitoring

The test range monitoring will consist of additional software loaded onto each ad hoc node. This will allow for the collection of performance statistics with time and location stamps from the GPS. These data are periodically sent to the Monitor Server through Fixed Sites 1 and 2. The total experimental data traffic is expected to be small and should not significantly impact network throughput. Though small, for experimental control we would like to minimize the monitoring traffic carried over the ad hoc network.

Each network node collects the following data; a sketch of the per-interval record follows the list. The data is then sent to the Monitor Server once every 15 seconds via Fixed Sites 1 and 2.

• GPS position and time

• Number of data packets sent on each route

• Number of data packets received on each route

• Number of data packets lost due to congestion on each route

• Number of control packets received

• Number of control packets transmitted
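
A minimal sketch of the per-interval record, assuming a JSON encoding and illustrative field names (the actual on-the-wire format of the monitoring software is not specified in this plan):

    import json
    import time
    from dataclasses import dataclass, asdict
    from typing import Dict

    @dataclass
    class RouteStats:
        """Per-route counters kept over one measurement interval (Section 6.1)."""
        ntx: int = 0   # data packets transmitted on this route
        nrx: int = 0   # data packets received on this route
        nlc: int = 0   # data packets lost to a full send buffer (congestion)

    @dataclass
    class MonitorReport:
        """One periodic report from a node to the Monitor Server.

        Field names are illustrative; only the collected quantities are
        specified by the plan.
        """
        node_id: str
        gps_lat: float
        gps_lon: float
        timestamp: float
        ctrl_rx: int                    # control packets received
        ctrl_tx: int                    # control packets transmitted
        routes: Dict[str, RouteStats]   # keyed by the explicit source route

    def serialize(report: MonitorReport) -> bytes:
        """Encode a report for transmission via Fixed Sites 1 and 2."""
        return json.dumps(asdict(report)).encode()

    report = MonitorReport(node_id="n3", gps_lat=40.125, gps_lon=-105.23,
                           timestamp=time.time(), ctrl_rx=42, ctrl_tx=40,
                           routes={"n1>n2>n3": RouteStats(ntx=120, nrx=118, nlc=1)})
    print(serialize(report))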

Remote monitoring capabilities will be built into the test bed so that remote observers will be able to monitor test bed performance, display results, and playback test scenarios. The remote monitoring and display capabilities are being designed via a Java interface. The following capabilities are anticipated. See Figure 11 for a monitor display example.

• Situation Map

• Network Status Messages

• Performance Graphs

• Drill-Down on Nodes

[pic]

Figure 11: Monitoring Display Example

2.5 Security

For the test bed, both physical and communication security are considered. The Table Mountain facility is fenced and includes storage and work buildings that can be locked. Equipment such as pole-mounted antennas and other outdoor equipment can be left set up over several days. Portable radios, laptops, and UAV equipment will be stored in on-site buildings or carried to and from the site.

To limit access to the wireless communication network, MAC filtering algorithms will be used. The hardware MAC address of every node on the test bed (approximately 20 in total) will be stored on the network devices and only packets that match one of these addresses will be processed. This will prevent casual users from gaining access to the network.
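
Conceptually, the filter is a set-membership check on a frame's source address. The sketch below is illustrative only: on the test bed the check is enforced in the radio's driver or firmware, and the addresses shown are made up, not actual test bed hardware addresses.

    # Allow list holding the hardware MAC address of every test bed node.
    ALLOWED_MACS = {
        "00:02:2d:4a:10:01",
        "00:02:2d:4a:10:02",
        # ... one entry per node, roughly 20 in total
    }

    def accept_frame(src_mac: str) -> bool:
        """Process a frame only if its source MAC is on the allow list."""
        return src_mac.lower() in ALLOWED_MACS

    print(accept_frame("00:02:2D:4A:10:01"))  # True: known test bed node
    print(accept_frame("de:ad:be:ef:00:00"))  # False: casual user, dropped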

The monitoring server will require a password in order to have access to the remote monitoring facilities.

3.0 Modes & Configurations

In order for the wireless network solution to be adequately tested for performance and effectiveness against deployment types, the test bed will be configured in multiple ways. Two broad scenarios, shown in Figure 12, will be used for testing the unique 802.11-based network solution.

[pic]

Figure 12: Test Scenarios

Multiple configuration types will be involved with each scenario, and are detailed in the following sections. Experimentation techniques for each configuration type and characterization approaches for ease of deployment and operation are detailed in Section 5.

3.1 Scenario 1: Ground-UAV-Ground

In Scenario 1, radios on the ground are mounted in vehicles, carried by personnel, or placed at fixed sites. The radios implement a wireless ad hoc (a.k.a. mesh) network whereby, if a traffic source and destination are not in direct communication range, intermediate nodes automatically relay the traffic from the source to the destination.

This generally provides good connectivity between ground nodes. When nodes become separated by distance or geography, however, the network is disconnected. In these situations, the UAV serves as a communication relay between disconnected nodes on the ground: ground nodes that are isolated from other ground users can reach each other through the UAV.

This scenario will demonstrate that ad hoc networks working with COTS WLAN radios can provide connectivity to widespread units. It will further demonstrate that low-cost UAVs can extend this connectivity over wider ranges and geography than is possible solely among ground units. It will demonstrate typical performance measures such as network throughput, latency, and availability that would be possible with these networks.

The following table lists the test configurations involved with this scenario. Experiments to measure performance and effectiveness are provided in Section 5.0.

|Test Configurations |
|Ground—Ground |
|Ground—Mobile—Ground |
|Ground—UAV—Ground |

Table 1: Ground-UAV-Ground Test Configurations

The purpose of testing ground-to-ground and ground-mobile-ground communication performance and effectiveness is to provide a baseline against which UAV deployment can be measured.

3.2 Scenario 2: Multiple UAVs

In Scenario 2, we focus on an ad hoc network of UAVs. A UAV is on a long-distance mission. Communication range is limited because of power, weight, and volume constraints on the low-cost, light-weight vehicle. Communication range is extended by using intermediate UAVs to relay back to the control center.

The scenario will demonstrate that ad hoc connectivity between UAVs can greatly extend the low-cost, light-weight UAV mission profile. As in the first scenario, the second scenario will demonstrate typical performance measures such as network throughput, latency, and availability that would be possible with these networks.

The following table lists the test configurations involved with this scenario. Experiments to measure performance and effectiveness are provided in Section 5.0.

|Test Configurations |
|UAV—UAV |
|Ground—UAV—UAV |

Table 2: Multiple UAV Test Configurations

Various UAV-Ground configurations will be tested to identify any issues associated with joint coverage and other deployment scenarios.

4.0 Measures of Performance & Effectiveness

The performance and effectiveness measures to be tested or characterized during the experiments are listed below. Methods and test procedures for each item are provided in Section 6.0.

4.1 Measures of Performance

For a detailed description of how each of the following measures of performance is to be tested, see the corresponding Methods and Procedures in Section 6.0.

|Measures of Performance |
|Data Throughput |
|Latency (communication delay) |
|Jitter (delay variation) |
|Packet Loss, Radio |
|Packet Loss, Congestion |
|Communication Availability |
|Remote Connectivity |
|Hardware Reliability |
|Range |

Table 3: Measures of Performance

4.2 Measures of Effectiveness

For a detailed description of how each of the following measures of effectiveness is to be characterized, see the corresponding Methods and Procedures in Section 6.0.

|Measures of Effectiveness |
|Network Self-Forming |
|Node-Failure Recovery |
|Mobility Impact |
|Ease of Deployment/Transportability |
|Ease of Operation |
|Data, Voice, Video, Web Page Communication |

Table 4: Measures of Effectiveness


5.0 Demonstration & Test Experiments

The following paragraphs detail demonstration and test experiments to be run against each deployment scenario and configuration type. Experimentation techniques are detailed and expected results provided. Applicable measures of performance and effectiveness, listed in Section 4, will be tested for.

For Phase 2 there are 16 of 50 tests to be completed. The balance of tests is to be completed in Phase 3. Table 5 below depicts the quantity of tests per category to be performed in Phase 2 and Phase 3.

|Test Category |Phase 2 Experiments |Phase 3 Experiments |
|Baseline Ground Network |2 |3 |
|Ad Hoc Ground Network Changes |0 |6 |
|Mobile Node at Edge |1 |1 |
|Mobile Nodes within Network |2 |9 |
|UAV Effects upon Fixed Ground |2 |9 |
|UAV Effects upon Fixed+Mobile Ground |3 |10 |
|UAV Connecting Disconnected Troops |4 |6 |
|UAV Range Ground-to-Air |1 |1 |
|UAV Pair |1 |1 |
|Three UAV Multi-Hopping |0 |3 |
|Multi-UAV Range |0 |1 |
|TOTAL |16 |50 |

Table 5: Number of Tests per Category for Ph.2 & Ph. 3

5.1 Fixed Ground-to-Fixed Ground

The fixed ground-to-ground test configuration allows for a baseline of performance to be established for the equipment used, and test bed as a whole. In addition, the ability of the network to address the appearance and disappearance of nodes/devices in an ad hoc manner will be demonstrated.

5.1.1 Baseline Ground Network

Six nodes, numbered 1, 2, …, 6, will be arranged so that they form a five-hop network (1 to 2 to … to 6) and are placed near the surface of the ground (< 2 m). Mountings include low poles or the tops of vehicles. The spacing of the nodes is such that nearest neighbors are near the maximum range but still able to communicate. Nodes will be powered up with no scenario- or location-specific configuration in their communication software. Communication tests will consist of several scenarios that will be controlled by application scripts.

1. Performance vs. number of hops will be measured. Data will be transferred between nodes 1 and 2, then between 1 and 3, and so on to the five-hop transfer between 1 and 6. Throughput, latency, etc., will be measured as a function of the number of hops.

2. Dynamic traffic patterns will be generated. Each node will randomly and repeatedly choose a destination, send traffic for a random period of time, then choose a new random destination (see the sketch following this list). The traffic rate will be chosen so that if all nodes choose the same destination, the destination will not be overloaded. The period will be chosen so that many changes occur over the duration of the experiment and any pair of nodes communicates in multiple different periods. The fraction of time that any pair of nodes is able to communicate will be measured.

3. Subjective tests will be made of connection quality. Users at different nodes will download web pages, communicate via e-mail, watch video, or communicate via VoIP connections.
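
A minimal sketch of the dynamic traffic pattern in experiment 2, assuming illustrative rate and period values (the plan constrains only their qualitative properties):

    import random

    NODES = ["n1", "n2", "n3", "n4", "n5", "n6"]

    # Illustrative values: the plan requires only that the per-node rate not
    # overload a destination even if all nodes pick it, and that periods be
    # short relative to the experiment so many node pairs are exercised.
    RATE_KBPS = 200 / len(NODES)   # aggregate stays below a ~200 kbps ceiling
    PERIOD_RANGE_S = (10, 60)      # random dwell time per destination

    def traffic_schedule(my_id, duration_s, seed=0):
        """Yield (destination, dwell seconds, rate kbps) tuples for one node."""
        rng = random.Random(seed)
        elapsed = 0.0
        while elapsed < duration_s:
            dest = rng.choice([n for n in NODES if n != my_id])
            dwell = rng.uniform(*PERIOD_RANGE_S)
            yield dest, dwell, RATE_KBPS
            elapsed += dwell

    for dest, dwell, rate in traffic_schedule("n1", duration_s=300):
        print(f"send to {dest} at {rate:.0f} kbps for {dwell:.0f} s")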

In all experiments, delay and packet losses will be measured on a packet-by-packet basis, and statistics will be collected over 10-15 second intervals along with a timestamp and the node's GPS coordinates. From these will be inferred the jitter, the throughput (i.e., the rate at which congestion losses first appear), availability (the fraction of time when packets are successfully delivered between a source-destination pair), range (by analyzing radio losses as a function of node separation), and remote connectivity (availability to destinations outside the network). A log of how often the radios themselves (and later the UAVs) failed to operate will be kept to track hardware reliability. Though all these factors will be measured, individual factors may be singled out in some of the experiments below to emphasize the purpose of the experiment. More details on the measurement process are provided in Section 6.0.

Expected Results:

The first experiment will establish maximum throughputs and typical latencies as a function of the number of hops in the network. Throughput will decrease and latency will increase with more hops. The second experiment will show the effect of dynamic traffic patterns on routing and performance. It will establish the typical availability of communication between nodes. By design, the nodes will be placed so as to form a connected network, so availability should be at or very near 100%. The last experiment should demonstrate a user experience similar to that of a DSL line of a few hundred kbps. It will also measure the ability of the nodes to connect through a specific gateway to the internet. Implicitly, these experiments will show that the network is self-forming.

5.1.2 Ad Hoc Ground Network Changes

Communication test experiments 1, 2, and 3 listed in Section 5.1.1 will be repeated with several disturbances to show the performance in a stressed network.

1. Simulated failures will be generated. Nodes will be turned on and off at random intervals. The number of nodes off at any time will be less than half. The intervals are long enough that communication losses would be observed by end users (tens of seconds).

2. Network congestion will be introduced. An additional set of nodes near the center of the experiment will generate a high rate of traffic between themselves.

Expected Results:

With the first disturbance, the network will find new routes when possible. This will demonstrate the network node failure recovery. Occasionally the pattern of node failures will result in the network being disconnected so that availability will decrease.

With the second disturbance, the network should route around the congestion when possible. This may not always be possible and so availability will decrease.

5.2 Mobile Ground-to-Fixed Ground

In this test configuration, an 802.11 node is operated on a moving vehicle within the test bed, interacting with other fixed units. The ability of the network to accommodate mobile nodes, and the effect upon performance, will be tested.

5.2.1 Mobile Node at the Edge of the Network

A mobile vehicle-mounted node will be added to the network of Section 5.1. It will drive around the network on the roads surrounding Table Mountain and generate traffic to a node on the mountain. Availability will be measured as it circles the mountain.

Expected Results:

The node will route traffic dynamically through nodes on Table Mountain as they come into and out of range. Because of the flat top and irregular steep sides of Table Mountain, some portions of the road will not have connectivity to any node, and availability for this node will be lower than for typical nodes on the mountain. This will show the limits of ad hoc networks in connecting to nodes moving at the fringe of the network's collective coverage.

5.2.2 Mobile Nodes within the Network

The experiments in Section 5.1 will be repeated except that two of the nodes will traverse roads on the mountain top. One will travel on the main N-S road and the other on the main E-W road.

Expected Results:

More dynamics will be observed in the network as nodes move into and out of range. Jitter will be greater as the network pauses to find new routes when old routes are no longer valid. The hop count for a node will change over time. Throughput and availability will decrease as more communication time is devoted to control packets and route error recovery. Some end-user applications may have perceptible degradations. This will show how well the network performs when the topology is dynamic.

5.3 Ground-to-UAV

All experiments in Sections 5.1 and 5.2 will be repeated with a single UAV circling above the center of Table Mountain. We will experiment with protocols that vary when the UAV can be used as a relay, e.g., only when no ground-based route of fewer than n hops is available. In addition, the performance and effectiveness of a UAV bridging isolated ground troops will be studied. The UAV will not explicitly be a destination for any packets in these experiments.

5.3.1 Fixed Ground-to-Fixed Ground UAV Effects

The UAV will act as an additional node to the experiments in Section 5.1. The manner in which the UAV augments fixed ground communications will be characterized.

Expected Results:

The UAV will be used occasionally when routes have too many hops or other routes are not available. The UAV scenarios will have improved availability and reduced latency. Throughput may decrease as the UAV blankets the test bed with its signal and interferes with other nodes' communication.

5.3.2 Mobile Ground-to-Fixed Ground UAV Effects

The UAV will act as an additional node to the experiments in Section 5.2. The communication test experiments listed in Section 5.1 will be repeated. The manner in which the UAV augments fixed/mobile ground communications will be characterized.

Expected Results:

The longer-range and more stable UAV-to-ground links will be used more often to maintain connectivity. The availability of nodes should approach the baseline availability of the Section 5.1.1 experiments.

5.3.3 UAV Connecting Disconnected Ground Troops

In this experiment, ground nodes will be divided into subgroups that, because of range and intervening terrain, will not be able to communicate directly (this placement replaces the five-hop linear placement in Section 5.1.1; otherwise the procedure is the same). The Dynamic Traffic Patterns and Subjective experiments in Section 5.1.1 will be performed. Deployment of the UAV will enable communications between the disconnected subgroups. Performance and effectiveness will be characterized, with a focus upon the availability of communication between the nodes with and without a UAV flying overhead. Availability and throughput as a function of plane orientation will also be measured.

Expected Results:

The UAV will bridge all traffic flowing between the disconnected subgroups. The UAV should impact intra-subgroup communication performance only to the extent that there is additional inter-group communication.

5.4 UAV-to-UAV

The previous experiments use the UAV as a relay to support ground communication. The experiments in this section use the UAV as a traffic source or destination. The UAVs will fly at 400-500 feet AGL throughout the testing to meet AMA flight rules.[3]

5.4.1 UAV Range Ground-to-Air

This experiment will measure throughput, latency, and availability for a single link between a UAV and the ground. The purpose is to show over what ranges reliable communication is possible in these environments, which will clearly show the maximum ground-to-UAV bridging range possible. Performance as a function of distance and plane orientation will be measured.

Expected Results:

Performance will degrade as required link budget parameters approach their limit. Throughput and availability will decrease with increasing separation. Error rates and packet loss will increase with range. Antenna characteristics and plane orientation (banking, etc.) are determining factors. Maximum range for levels of performance required for various service types will be characterized.

5.4.2 UAV Pair

A pair of UAVs will maneuver in figure-eight patterns at increasing separation. Data will be sent between the UAV pair at each separation. Availability and throughput as a function of distance and plane (and thus antenna) orientation will be measured.

Expected Results:

The planes should be able to communicate at distances of many miles. We may lower transmit power to keep flight operations localized; correlation to maximum ranges at higher powers can then be calculated. Throughput and availability will decrease with increasing separation. Normal variations in plane orientation should not interfere with communication at shorter ranges. At longer ranges, when signals are marginal, availability may depend on plane orientation.

5.4.3 Three UAVs

Multi-hopping among UAVs will be tested. The experiments of Section 5.1.1 will be repeated among the three UAVs.

Expected Results:

The performance will be similar to three ground nodes, except the ranges will be longer.

5.4.4 Ground-UAV-UAV

The main purpose of this experiment is to test the range at which a ground node can communicate through multiple hops with a distant UAV. The ground node, middle UAV, and distant UAV will be deployed in a line. The middle UAV will loiter around a fixed position while the distant UAV travels away from the other nodes. Traffic will be transmitted at increasing and decreasing rates in a sawtooth pattern. Availability and throughput as a function of distance and plane orientation will be measured. The experiment will be repeated for different ground-to-middle-UAV separations.

Expected Results:

The throughput and availability will decrease with increasing separations.

6.0 Methods & Procedures

The following sections detail methods and procedures to be used for characterizing the measures of performance and effectiveness called out in Section 4.0 of this document. Section 6.1 describes the measurements made during experiments. Section 6.2 describes how each of the performance and effectiveness measures is then derived, along with expected results. The methods and procedures are common to the varying experimental modes and configurations except where noted.

6.1 Experimental Measures

The measures of performance and effectiveness are derived from data collected from the test bed. This section describes the data collected at each node and how this is used to compute results. Figure 13 shows the relationships between different parameters. The measures can be divided into direct user observations and node statistics.

For the direct user observations, a diary will be kept by the experimenters. This will document hardware anomalies, steps taken to complete experiments, and effort required. For subjective tests of performance when running different applications, users will record observations on performance at the test bed relative to performance through an 802.11b access point in an office setting.

The node statistics will be collected at each experiment node. The node will record statistics on each route. Each route can be identified since the ad hoc routing protocol uses source routing that explicitly lists the route in data packet headers. Data will be collected over a measurement interval of 10 seconds. During this interval, it will record the following data for each route:

NTX = number of packets transmitted on this route

NRX = number of packets received on this route

NLC = number of packets lost due to a full send buffer (i.e. congestion)

If the node is the destination, it will also record the delay of each packet on this route. For each measurement interval, the node location from the GPS and a timestamp is added to the per route data and sent to the monitor server.

The monitor server collects the per node data and derives additional information. The timestamp is used to collate measurements from different nodes. The GPS data from different nodes is compared to compute node separations. The number of packets sent by a node compared to the number of packets received by the next node in a route determines:

NLR = number of packets lost on the radio link

Per-packet delays can be used to compute delay statistics.
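
A sketch of these two derivations, assuming the counter and GPS fields described above (function names are ours):

    import math

    def radio_loss(ntx_sender: int, nrx_next_hop: int) -> int:
        """NLR for one link: packets the sender transmitted on a route that
        the next hop on that route never received."""
        return max(ntx_sender - nrx_next_hop, 0)

    def node_separation_m(lat1, lon1, lat2, lon2) -> float:
        """Great-circle distance between two GPS fixes (haversine formula)."""
        r = 6371000.0  # mean Earth radius, meters
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
        a = (math.sin(dp / 2) ** 2
             + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
        return 2 * r * math.asin(math.sqrt(a))

    # Example: n2 reported NTX=120 on route n2>n3; n3 reported NRX=118.
    print(radio_loss(120, 118))  # NLR = 2
    print(round(node_separation_m(40.125, -105.230, 40.128, -105.226), 1))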

Figure 13: Measurement Relationships

6.2 Derived Measures of Performance

The following subsections describe how measures of performance are derived from experimental data collected.

6.2.1 Data Throughput

Data Throughput is the maximum rate at which data can be sent on a connection. In practice, a link would be operated at 10-20% below the data throughput, since a link running at its maximum rate is unstable to traffic variations.

Method/Procedure:

Traffic is sent at increasing rates and the delay and loss rates are observed. When the sending rate exceeds the data throughput, delay and loss rates rise sharply. This point defines the data throughput.
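
A minimal sketch of this rate-stepping procedure (the loss threshold and rate ladder are illustrative choices, not values fixed by the plan):

    def find_throughput(measure, rates_kbps, loss_threshold=0.05):
        """Step the offered load upward and report the last rate before loss
        rises sharply. `measure(rate)` must return the observed loss fraction
        at that offered rate.
        """
        last_good = None
        for rate in rates_kbps:
            if measure(rate) > loss_threshold:
                break
            last_good = rate
        return last_good

    # Toy stand-in for a real measurement: pretend the link saturates at 400 kbps.
    def fake_measure(rate_kbps):
        return 0.0 if rate_kbps <= 400 else 0.5

    print(find_throughput(fake_measure, range(100, 1000, 50)))  # -> 400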

Expected Results:

The throughput will depend on several factors. Mixes of low-rate and high-rate 802.11b users can lead to anomalous behavior that we wish to avoid here; therefore all users will be fixed at a 2 Mbps nominal transmission rate. Even ideal 802.11b links have significant overhead: the data throughput of an ideal link is approximately 1.7 Mbps. Smaller packets (which have relatively more overhead), multiple link hops, TCP overhead, poor link quality, etc. can all reduce the data throughput. We expect throughputs on the order of a few hundred kbps on multi-hop links.

6.2.2 Latency

Latency is the time it takes a packet to be sent from an application at a source and received by an application at the destination.

Method/Procedure:

Packets will be sent between source and destination pairs. The packets will be time stamped when sent. The time stamp will be compared with the time at the receiver when received. The GPS will keep timers synchronized. At least 100 packets will be sent. The minimum, maximum, and average latency will be computed. The latency standard deviation will also be computed.
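
A sketch of the computation, assuming GPS-synchronized send and receive timestamps in seconds; the same standard deviation serves as the jitter measure of Section 6.2.3:

    import statistics

    def latency_stats(send_times, recv_times):
        """Per-packet latency statistics from GPS-synchronized clocks.

        send_times[i] is stamped at the source, recv_times[i] at the
        destination; at least 100 packets are sent per the procedure above.
        """
        lat_ms = [(r - s) * 1000.0 for s, r in zip(send_times, recv_times)]
        return {
            "min_ms": min(lat_ms),
            "max_ms": max(lat_ms),
            "avg_ms": statistics.mean(lat_ms),
            "jitter_ms": statistics.stdev(lat_ms),  # jitter per Section 6.2.3
        }

    # Toy example with four packets (times in seconds).
    sends = [0.000, 0.100, 0.200, 0.300]
    recvs = [0.012, 0.115, 0.211, 0.330]
    print(latency_stats(sends, recvs))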

Expected Results:

The 802.11b MAC layer introduces a minimum of about 1 msec of delay per hop. Additional processing by the sending and receiving nodes can add tens of msec. This delay is cumulative per hop.

6.2.3 Jitter

Jitter is the latency standard deviation.

Method/Procedure:

Jitter is computed with the latency.

Expected Results:

Jitter will increase as link quality decreases, since the 802.11b interface retransmits lost packets: the latency varies, and jitter increases, as the number of retransmissions varies between packets.

6.2.4 Packet Loss, Congestion

A radio must send packets that it sources plus packets arriving from other radio nodes. When the total source and arrival rate exceeds the radio sending rate, packets will queue in a buffer. If a new packet arrives when the buffer is full it is discarded.

Method/Procedure:

The number of packets discarded due to buffer overflow will be recorded at each node on a route-by-route basis.

Expected Results:

Congestion losses will only occur when the arrival rate exceeds the data throughput.

6.2.5 Packet Loss, Radio

Some packets will be lost because of errors during transmission. These errors will be due to collisions between different transmitters, noise (802.11b uses an unlicensed band), or weak signals due to extreme range.

Method/Procedure:

The sender will record the number of packets sent on each route; the receiver will record the number of packets received on each route. Differences between these numbers determine the number of packets lost on the radio interface.

Expected Results:

Occasional losses (less than 1%) will be the norm due to random bit transmission errors. The 802.11b interface should avoid collisions between nodes. No noise sources are expected at the Table Mountain National Radio Quiet Zone. The error rate will increase with range.

6.2.6 Communication Availability

Communication Availability measures the fraction of time that a source and destination can reliably send packets between them. Congestion losses are not counted, since otherwise a source sending too much traffic could make a destination never appear available.

Method/Procedure:

In any measurement interval when packets are sent between a source and destination, packets are counted as follows. Let N = NRX + NLC + NLR be the total number of packets sent, and let NE = N - NLC be the number of packets eligible for counting (congestion losses are excluded). The availability during this period is then NRX / NE.
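
As a check on the counting rule, the following minimal sketch (function and argument names are ours) computes availability from the per-interval counters; note that NE = N - NLC reduces to NRX + NLR:

    def availability(nrx: int, nlc: int, nlr: int) -> float:
        """Availability over one measurement interval.

        N  = NRX + NLC + NLR  (total packets sent)
        NE = N - NLC          (eligible packets, congestion losses excluded)
        availability = NRX / NE = NRX / (NRX + NLR)
        """
        eligible = nrx + nlr
        return nrx / eligible if eligible else 1.0

    # Example: 990 received, 25 congestion drops (ignored), 10 radio losses.
    print(availability(nrx=990, nlc=25, nlr=10))  # -> 0.99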

Expected Results:

Availability will be generally high. When links are strong, few packets will be lost due to link errors and availability will be more than 99%. When only weaker links are available, the availability will go down.

6.2.7 Remote Connectivity

Remote Connectivity is simply availability to destinations outside of the network such as the remote server at CU or an Internet website. This will measure how well the ad hoc network can be used to keep users online.

Method/Procedure:

Availability will be measured to any off-testbed address.

Expected Results:

The backhaul will be reliable, so this is a measure of the ability to reach the gateway node.

6.2.8 Hardware Reliability

Hardware Reliability measures how often the different test bed hardware equipment functions correctly.

Method/Procedure:

Whenever a piece of hardware fails, an entry will be made in a log, and a summary of these problems will be created. In addition, UAV flight operations will be videotaped for post-experiment analysis.

Expected Results:

The hardware is expected to be reliable, but the log and video will help detect specific failure modes that might be corrected.

6.2.9 Range

Range is the distance over which two nodes can reliably communicate. As range increases, radio packet loss rates increase smoothly. The range is then defined as the distance within which packet losses are below a threshold.

Method/Procedure:

Packet loss rates as a function of distance will be measured. The range threshold will be 5% packet loss rate. It will be measured separately for ground-to-ground, ground-to-air, and air-to-air.
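
A sketch of the threshold computation, assuming collated (separation, lost, sent) samples from the monitor data; the 100 m bin size is an illustrative choice:

    from collections import defaultdict

    def estimate_range_m(samples, threshold=0.05, bin_m=100):
        """Largest distance bin whose measured packet loss stays below the
        5% threshold. `samples` is a list of (separation_m, lost, sent)
        tuples derived from the collated monitor data.
        """
        bins = defaultdict(lambda: [0, 0])          # bin -> [lost, sent]
        for sep, lost, sent in samples:
            b = int(sep // bin_m)
            bins[b][0] += lost
            bins[b][1] += sent
        ok = [b for b, (lost, sent) in bins.items()
              if sent and lost / sent < threshold]
        return (max(ok) + 1) * bin_m if ok else 0

    # Toy data: loss rises past 5% somewhere beyond ~600 m.
    data = [(150, 1, 500), (350, 3, 500), (550, 20, 500), (750, 80, 500)]
    print(estimate_range_m(data))  # -> 600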

Expected Results:

Experience has shown that range is difficult to define precisely, since the packet losses depend on many factors. We expect to measure this value to the nearest factor of two.

6.2.10 Network Self-Forming

Network Self-Forming is a measure of whether nodes can turn on with no prior knowledge of the environment and be able to establish communication with each other.

Method/Procedure:

Whenever a route between two nodes is unavailable even though range and location information indicate that a route ought to be available, network self-forming will have failed. A log of such incidents will be recorded.

Expected Results:

The ad hoc routing protocols are reliable and should always form a network.

6.2.11 Node Failure Recovery

Node Failure Recovery measures the robustness of the network to failures. When a node fails (as will be induced in some experiments), packets will be delayed until they can be sent on a new route; if a new route is not possible, packets will be lost. When a node fails, the network should find a new route around the failed node when one is available, and the time for this recovery should be minimal. The fraction of failures after which a new route is found, and the average time to find it, will be measured.

Method/Procedure:

After a node failure, the routes that were actively using the failed node will be determined. Recovery from the failure can be determined by observing packet losses between the source-destination pair. The time to recover can be determined by observing the worst-case packet latency around the time of the recovery.

Expected Results:

The ad hoc routing protocol is designed for network variability. Node failure recovery time is expected to be on the order of a second.

6.2.12 Mobility Impact

Mobility will cause more network dynamics. These, in turn, will cause more node and link failures.

Method/Procedure:

Experiments will be repeated with and without mobile nodes and the other measures will be compared.

Expected Results:

Mobility will decrease availability and increase latency, jitter, and packet losses.

6.2.13 Data, Voice, Video, Web Page Communication

The other metrics are objective measures. They may not answer whether typical network applications will work well as judged by end users.

Method/Procedure:

An end-user connected to the test bed network will try typical applications such as web browsing, telnet, ftp, voice over IP (VoIP), and watching streaming video. These represent major types of traffic as identified by the IETF, namely: elastic-interactive, elastic-interactive-bulk-transfer, real-time-interactive, and real-time-streaming. The VoIP will use VoIP phones from our lab. The streaming video will use video downloads such as CNN Live, RealPlayer, or something similar. The user will report on differences in performance between the test bed network performance and a typical high-speed connection (e.g. a wireline DSL connection).

Expected Results:

The performance will be similar, but occasional communication gaps will occur.

6.2.14 Deployment & Transportability

Deployment and Transportability measures the difficulty of moving the test bed, setting up the network, and launching the UAVs.

Method/Procedure:

The total volume of test bed equipment when packaged for travel will be measured. The time to unpack, set up, and start for each equipment type will be measured.

Expected Results:

Most of the equipment is designed to be mobile and should be compact. The UAV can be partially disassembled (remove wing, landing gear, and prop) for compact storage and transport. The communication equipment is designed to be simply turned on with minimal configuration. With a suitable airfield, a UAV can be assembled, prepared, and launched in about an hour.

6.2.15 Ease of Operation

Ease of Operation measures what personnel are needed for full operation. In addition, it provides an input on the degree of complexity associated with operating the system.

Method/Procedure:

A log of personnel at each experiment will be kept. Surveys of personnel will be taken to obtain a measure of difficulty. Observations will also be recorded.

Expected Results:

We are planning on 6-8 personnel for initial experiments. Some experiments require drivers for the mobile nodes; these drivers are not necessary for the communication functionality itself. Given the attributes of 802.11-based systems, the degree of complexity should be small.

6.3 Procedure Checklist

Table 6 below provides a listing of all procedure scripts to be run against the experiments detailed in Section 5.

[pic]

[pic]

Table 6: Phase 2 Procedure vs. Experiment Checklist

APPENDIX A

RELATED DOCUMENTS

The documents listed below have been generated in support of the Wireless Communications Test Bed project.

1. Wireless Communications Test Bed: Design and Deployment/Test Plan, Version 2.2, Dated December 4th, 2003. This document provides a high-level overview of the Test Bed design and deployment/test plan.

2. Wireless Communications Test Bed: Design & Interface Specification, Version 1.2, Dated March 8th, 2004. This document represents detailed design specifications for the test bed on a network and sub-nodal basis.


APPENDIX B

GLOSSARY

802.11 Standard for wireless LAN MAC and PHY developed by IEEE

802.11b Extended 802.11 specification for enhanced DSSS operation

A

AC Alternating Current

AFMC Air Force Materiel Command

AGL Above Ground Level

AMA Academy of Model Aeronautics (US model aircraft hobbyist organization)

ANSI American National Standards Institute

AP Access Point

ASC Aeronautical Systems Center (Air Force Materiel Command)

B

-

C

CAD Computer Aided Design

CDRL Contract Data Requirements List

CNN Cable News Network

CPU Central Processing Unit

COTS Commercial Off-The-Shelf

CRADA Cooperative Research and Development Agreement

CU University of Colorado

D

dB Decibel

DC Direct Current

DDR Double Data Rate

DoD Department of Defense

DSL Digital Subscriber Line

DSR Dynamic Source Routing

DSSS Direct Sequence Spread Spectrum

DVD Digital Video Disc

E

ECC Error Correcting/Correction Code

EIA Electronic Industries Alliance

EPP Enhanced Parallel Port

F

FCC Federal Communications Commission

FDDI Fiber Distributed Data Interface

FS Fixed Site

FTP File Transfer Protocol

G

GB Gigabyte

GHz Gigahertz

GPS Global Positioning System

H

HP Handheld Position

I

IEEE Institute of Electrical and Electronics Engineers (Inc.)

IETF Internet Engineering Task Force

I/O Input/Output

IP Internet Protocol

IPv4 Internet Protocol version 4

IR Infrared

ISM Industrial, Scientific and Medical (radio spectrum)

ISP Internet Service Provider

ITS Institute for Telecommunication Sciences

J

-

K

-

L

LAN Local Area Network

LCC Local Command Center

LCD Liquid Crystal Display

LED Light Emitting Diode

LLC Logical Link Control

M

mA milliamp

MAC Media Access Control

MB or Mb Megabyte (MB) or Megabit (Mb)

MII Media Independent Interface

mW milliWatt

N

NLC Number of packets lost due to a full send buffer (i.e. congestion)

NLR Number of packets lost on the radio link

NTX Number of packets transmitted on this route

NRX Number of packets received on this route

NRQZ National Radio Quiet Zone

NOC Network Operations Center

O

OS Operating System

P

PC Personal Computer

PDA Personal Digital Assistant

PER Packet Error Rate

PHY Physical Layer

PSTN Public Switched Telephone Network

Q

QoS Quality of Service

R

RC Radio Control

RF Radio Frequency

S

SCSI Small Computer System Interface

SIP Session Initiation Protocol

SOHO Small Office Home Office

SSID Service Set Identifier

T

T1 N. American transmission carrier consisting of 24 digitized voice channels

TCP/IP Transmission Control Protocol/Internet Protocol

U

UAV Unmanned Aerial Vehicle

UCB University of Colorado at Boulder

USB Universal Serial Bus

V

V Volts

VDC Volts Direct Current

VoIP Voice over Internet Protocol

W

WiFi Wireless Fidelity (also called wireless LAN)

WLAN Wireless Local Area Network

WWW World Wide Web

X

-

Y

-

Z

-

-----------------------

[1] The Dynamic Source Routing Protocol for Mobile Ad Hoc Networks (DSR), David B. Johnson, David A. Maltz, Yih-Chun Hu, INTERNET-DRAFT: draft-ietf-manet-dsr-09.txt, 15 April 2003

[2] E. Kohler, R. Morris, B. Chen, J. Jannotti, and M. F. Kaashoek, "The Click Modular Router," ACM Transactions on Computer Systems, 18(3):263-297, August 2000.

[3] The official rules state "I will not fly my model higher than approximately 400 feet within 3 miles of an airport without notifying the airport operator." Further, the pilot should maintain "unenhanced visual contact with the aircraft throughout the entire flight operation." This does not preclude flights higher than 400 ft, but keeping the plane in sight, and the proximity of the Vance Brand airport in Longmont (3.5 miles), suggest keeping close to this limit.

-----------------------

[Figure and text-box content recovered from the embedded drawings:]

Figure 1 (Project Phases): Phase 1, Test Bed Design & Test Plan Generation, Aug '03 to Feb '04 (test bed design; deployment & test planning; approvals to operate; initial design and planning documentation). Phase 2, Build, Integration, Deployment, Eng. Test, Oct '03 to Jun '04 (procure & test equipment; software/protocol development; integration & deployment; rehearsal experimentation). Phase 3, Test Completion, Reporting, Apr '04 to TBD '04 (test bed finalization; full experimentation & test execution; final reports).

Figure 3 (map labels): FS1 (Bldg. B9), FS2 (Bldg. T3), HP1, HP2, IP1, IP2, UAV landing strip, vehicle paths, public road circuit at mesa base, backhaul connection to Internet and NOC; grid spacing 1000' (300 m).

Figures 6 and 9 (equipment labels): 802.11 WLAN card, single board computer, 1 Watt amplifier, GPS, bulkhead mount monopole, environmental enclosure (21 cm), UAV mounting, 72 MHz RC link.

Figure 12 (Test Scenarios): Scenario 1, where ad hoc networking with the UAV increases ground node connectivity; Scenario 2, where ad hoc networking between UAVs increases mission range.

Figure 13 (Measurement Relationships): experimental data, collected per node (GPS, time stamp, user input) and per route per measurement interval (NTX, NRX, NLC, per-packet delay), feeds derived data (node separation, NLR, delay statistics), which in turn feeds the performance and effectiveness measures (throughput, latency, jitter, congestion loss rate, radio loss rate, availability, remote connectivity, range, self-forming, failure recovery, mobility impact, data/voice/video/WWW, hardware reliability, deployment and transportability, ease of operation).
