Wireless Sensors and Controls for Environmental Systems



Proposal Outline and approximate page budget (total = 15 pages)

Introduction (1 page)

Applications

Energy (2 pages)

Disaster (2 pages)

SIS design

Sensor-net level design (1 page)

Service level design (1 page)

Data Handling (1 page)

Human-Centered Computing (UCB) (.5 page)

Visualization (UCD) (.5 page)

Foundations:

Reliability (2-3 pages)

Availability (2-3 pages)

Security and Policy (1 page)

Outreach (.25 pages)

1. Introduction (Target: 1 page)

Information technology (IT) is transforming society at an accelerating pace, from business systems and social and political infrastructure to our personal lives. But the current IT development path will at best severely underutilize its potential and at worst yield a fragile IT infrastructure unable to meet many of society’s most vital needs, such as emergency preparedness and response and energy usage monitoring and control. We propose to establish the Center for Information Technology Research in the Interest of Society (CITRIS) to sponsor collaborative, IT-focused research to find solutions to grand-challenge social and commercial problems affecting the quality of life of individuals and organizations. CITRIS will be a multicampus center, including UC Berkeley (UCB), UC Davis (UCD) and UC Merced (UCM). This proposal is to support the key underpinning long-term, high-risk scientific and technological research endeavors within CITRIS. The NSF-ITR award has the potential for high leverage from other CITRIS activities funded by private and industrial donations (Appendix A).

CITRIS’s driving applications include (1) boosting the efficiency of energy production and consumption, and (2) saving lives and property and establishing emergency-response IT infrastructure in the wake of disasters, among others [1a]. The solutions to these applications share a common feature: they depend on highly distributed, reliable, and secure (briefly: high-confidence) information systems that can evolve and adapt to radical changes in their environment, delivering networked information services and up-to-date sensor network data stores over ad-hoc, flexible, and fault-tolerant networks that adapt to the people and organizations that need them. We call such systems Societal-Scale Information Systems (SISs). An SIS must easily integrate devices, ranging from distributed ad-hoc sensors and actuators, to hand-held information appliances (such as PDAs), workstations, and room-sized cluster supercomputers at Network Operation Centers. Such devices must be connected by ad-hoc sensor nets, extranets, and short-range wireless networks as well as by very high-bandwidth, long-haul optical backbones. Distributed data and services must be secure, reliable, and high-performance, even if part of the system is down, disconnected, under repair, or under (information) attack. The SIS must configure, install, diagnose, maintain, and improve its own quality-of-service features; this applies especially to the vast numbers of sensors that will be cheap, widely dispersed, and even disposable. Finally, the SIS must allow vast quantities of data to be easily and reliably accessed, manipulated, interactively explored, disseminated, and used in a customized fashion by users, from expert to novice.

The web, the telephone network, and some military and intelligence systems are limited, albeit highly successful, SISs. Yet none satisfies the needs of the societal problems described above. Only by attacking more than one driving application will we learn to design a system that can meet the needs of other applications that we can now only imagine.

CITRIS will have 3 tiers of activity: Driving Applications (for which we deliver an Energy Management SIS (§2.1) and a Disaster Response SIS (§2.2)), SIS Design (Sensor level (§3.1), Service level (§3.2), Data Handling (§3.3), Human-Centered Computing (§3.4)), and Foundations (Reliability (§4.1), Availability (§4.2), and Security (§4.3)). Each activity has a leader and affiliates shown below.

For the two driving applications, we will use and leverage ubiquitous wireless sensor devices, called SmartMotes, designed by us [1b], which are about one cubic inch in size and include an embedded 8-bit microcontroller, a radio, a sensor board with microsensors for measuring acceleration, strain, temperature, light, and relative humidity, and a battery. We have had around 500 (and expect to have at least 1000 more) of these made for us by Crossbow, Inc. (not priced on this grant). Running at a 1% duty cycle on two AA batteries, each will last about a year. The next generation of these MEMS devices, called SmartDust (a 2mm cube), will have this functionality integrated into a single chip with power harvesting from the environment and an ultra-efficient radio (the size of the sensor will be dominated by the antenna size), and will provide the distributed, adaptive, self-organizing ubiquitous sensing and computational fabric. All are our design [1b].
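
As a sanity check on the lifetime claim, the duty-cycle arithmetic can be sketched as follows; the current draws and cell capacity are illustrative assumptions, not measured SmartMote figures:

    # Back-of-the-envelope lifetime from the 1% duty cycle quoted above.
    # Current draws and cell capacity are illustrative assumptions, not
    # measured SmartMote values; series AA cells add voltage, not capacity.
    CAPACITY_MAH = 1800        # one alkaline AA cell at moderate drain (assumed)
    ACTIVE_MA = 20.0           # MCU + radio active current (assumed)
    SLEEP_MA = 0.01            # sleep current (assumed)
    DUTY_CYCLE = 0.01          # 1% duty cycle, as in the text

    avg_ma = DUTY_CYCLE * ACTIVE_MA + (1 - DUTY_CYCLE) * SLEEP_MA
    years = CAPACITY_MAH / avg_ma / 24 / 365
    print(f"average draw {avg_ma:.3f} mA -> about {years:.1f} year(s)")

Under these assumptions the average draw is about 0.21 mA, which is consistent with the roughly one-year figure quoted above.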

2. Driving Applications

2.1 Energy Management (Target: 2 pages) (Faculty: Rabaey; Pister, Arens, Sastry)

The deregulation of electricity and the increasing cost of natural gas have made energy front-page news. A recent CITRIS working group of the above faculty plus key researchers in the energy and resources groups at Lawrence Berkeley National Laboratory has established that SISs can address and resolve major issues hampering the effective generation, distribution, and utilization of energy, potentially saving $55B/year and 35M metric tons/year of carbon emissions nationwide [2.1a] just from better control of HVAC in large buildings. We will demonstrate such an SIS on the Berkeley campus.

It is possible to save energy on the demand side and the supply side; we first consider demand.

Demand Side. Two-thirds of primary energy use is in the form of electricity, and about two-thirds of all electricity generated nationally is used in buildings [2.1b]. We will use SISs consisting of high-density wireless sensor/actuator networks to enable energy-conscious control of buildings, reducing both total energy and peak power demand. (Wires can be up to 90% of the total cost of such a system, so the networks must be wireless.)

• High-density sensor networks will allow existing environmental control technologies to operate in more sophisticated and energy-efficient ways, and the redundancy of sensors will improve the reliability of control by detecting faulty signals.

• High-density sensor networks will also allow new energy-efficient environmental control technologies to become feasible for the first time.

Imagine the following scenario: all significant energy-consuming devices in buildings are equipped with multifunctional metering, communications, and control devices. These provide real-time information to building owners and occupants on the rate of energy use (e.g., kW), the cost associated with that rate ($ per hour), and cumulative energy usage and associated costs for the past 24 hours, month, and year. By itself, this information would help energy users make rational decisions such as how much and when to use certain devices, and when to take inefficient ones out of service.

In addition to reducing total energy use, it is important to limit peak demand through mechanisms like real-time pricing. Real-time pricing will require more sophisticated electricity meters than are currently in common use. However, devices that are moderate to heavy electricity users should also be equipped with controls that permit a rational response to real-time price signals. With the right combination of sensing and processing, smart appliances could use electricity mostly at off-peak periods.
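
A minimal sketch of such a rational response, assuming a hypothetical day-ahead hourly price feed and a deferrable appliance; all names and numbers are illustrative:

    # Sketch: a deferrable appliance shifting its run to the cheapest hours.
    # The price series, peak window, and appliance are illustrative assumptions.
    hourly_price = [0.30 if 14 <= h <= 19 else 0.08 for h in range(24)]  # $/kWh

    def schedule_deferrable_load(prices, run_hours, deadline_hour):
        """Pick the run_hours cheapest whole-hour slots before deadline_hour."""
        candidates = sorted(range(deadline_hour), key=lambda h: prices[h])
        return sorted(candidates[:run_hours])

    # A dishwasher needing two hours of operation by 10 pm avoids the
    # 2-7 pm price spike entirely:
    print(schedule_deferrable_load(hourly_price, run_hours=2, deadline_hour=22))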

Thus, by making end-users of the energy-supply chain part of an integrated network of monitoring, information-processing, controlling, and actuating devices, we enable a wide range of techniques that will both spread the consumption of energy over time, reducing peak demand, and reduce average demand through efficiency increases. While the process of designing, constructing, starting up, controlling, and maintaining building systems is very complex, and changing the building and appliance industry overnight is not possible, we believe that a gradual roll-out plan can show impact in the very near future. We envision a three-phase research program:

Phase 1: Passive Monitoring. We will monitor the energy usage of buildings and the health of individual appliances. It is estimated that “broken energy systems” in commercial buildings cost 30% of their operational budget ($45B/year nationwide) [2.1a]. This information feedback plays a dual role: (1) primary feedback to the user on energy-consumption statistics, and (2) monitoring the health of the equipment and the environment, detecting problems at the source. We will start by fully instrumenting several buildings on the UCB campus with light and temperature sensors in each room, and make the data available on a website. Then we will build a wireless power monitor with a standard 3-prong feed-through receptacle to monitor the power consumption of electronic devices over time, providing roughly one thousand such devices for rotating use around the campus to educate users, chart usage, verify compliance, and display consumption in a given room or lab in real time. The impact of these simple devices could be tremendous. A similar device would be passively coupled to high-power wiring to monitor total power consumption through breaker boxes. This would give us a much finer granularity of power-consumption details, and let us look at clusters of rooms, floors, etc.

We will also instrument the campus steam network.

Phase 2: Developing Mechanisms for Monitoring and Control. Combining the monitoring information with feedback on the cost of usage (augmented by an hourly pricing system reflecting wholesale market prices) helps close the feedback loop between end-user and supplier. A key challenge is to develop, and hierarchically verify, a pricing scheme that does not itself cause instabilities in the bidding and consumption process.

Phase 3: Active Energy Management through Feedback and Control (Smart Buildings and Intelligent Appliances). The addition of instantaneous and distributed control functionality to the sensory and monitoring functions (measuring the operation of systems such as climate conditioning and lighting) increases the energy efficiency of these functions dramatically, while at the same time improving the comfort of the users. We will experiment with control of power at UCB, enforcing compliance with load reduction, and charging/rewarding departments according to their use during peak times.

Supply Side. The deployment of SISs can substantially increase the efficiency and improve the control of the electricity-supply infrastructure as well. Through a combination of sensing, communication, computation, and control, the network can achieve major improvements in (1) the management and utilization of distributed generation resources, (2) the efficiency of the distribution network, and (3) the handling of overload conditions and emergencies. In particular, the fine granularity of the information, combined with its timeliness, makes it possible to introduce management and control techniques that would otherwise be impossible or useless. We list a few examples below; their ultimate implementation will require collaboration with State agencies.

Demand response. As identified earlier, exposing the true cost of energy to the end-user through, say, hourly pricing encourages users to move their usage from expensive to inexpensive periods. This demand-response approach deals with a key deficiency in California’s market design: the disconnection between wholesale and retail markets. Closing the feedback loop in real time is essential in the long term, but in the short term merely making costs visible to the end-user has proven effective: Georgia Power Company operates the largest real-time pricing program in the nation, with more than 1,600 commercial and industrial customers accounting for up to 5,000 MW of demand. Georgia Power estimates that it achieved load reductions ranging from 400 to 750 MW on moderate to very high-price days in 1999.

Increase in grid transmission capacity. Constraints on line flows limit the use of the transmission grid to transfer power from the least expensive sources. Such line-flow limits are proxy measures to prevent overheating of transmission lines or other transmission equipment, and sag of the lines that could bring them into contact with the ground or trees, causing shorts and catastrophic failures. Without direct measurements, the flow limits are set conservatively, unnecessarily limiting the utilization of the transmission grid. Massive deployment of sensors that measure and transmit data on temperature and line sag, coupled with computation that assimilates such data, could significantly increase grid utilization and enhance the efficiency of electricity supply. While power companies currently use global environmental data to determine the load a transmission line can carry at a given time, dynamic, real-time distributed measurements of weather conditions may increase the peak load of a wire by as much as 30%.
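
The flavor of such a dynamic rating computation is sketched below under a deliberately crude thermal balance; production ratings follow standards such as IEEE 738, and every constant here is a placeholder:

    # Toy dynamic line rating: allowable current rises with wind cooling and
    # falls with ambient temperature. A crude thermal balance with placeholder
    # constants; real ratings follow standards such as IEEE 738.
    import math

    def ampacity(t_ambient_c, wind_m_s, t_conductor_max_c=75.0,
                 r_ohm_per_m=1e-4, k0=10.0, kw=4.0):
        cooling = k0 + kw * math.sqrt(wind_m_s)       # W per m per deg C (crude)
        heat_budget = (t_conductor_max_c - t_ambient_c) * cooling
        return math.sqrt(max(heat_budget, 0.0) / r_ohm_per_m)  # I^2 R = cooling

    static_rating = ampacity(40.0, 0.5)   # conservative worst-case assumption
    measured = ampacity(25.0, 3.0)        # with distributed weather sensing
    print(f"capacity headroom: {measured / static_rating - 1:.0%}")

With these placeholder numbers, direct measurement of mild, breezy conditions yields several tens of percent of headroom over the conservative static rating, the effect the 30% figure above refers to.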

Emergency Management. In emergencies (natural disasters, an overburdened distribution network, or a shortfall in available energy), the utilities must now shut off complete blocks of the power grid (e.g., rolling blackouts). These blackouts have an enormous impact on the economy and may cause life-threatening situations. The increased control granularity made available through widely dispersed SISs would make it possible to selectively manage power-consuming components and systems, and avoid blunt load-shedding. In the case of rolling blackouts, for instance, it would be possible to keep critical businesses and functions such as traffic lights on line. With even finer granularity, one could turn off non-essential devices, such as air-conditioning units, individually. For example, devices could routinely be equipped with a “standby” setting in addition to direct on/off control.

2.2 Disaster Risk Reduction and Emergency Response (Target: 2 pages) (Faculty: Fenves, Glaser, Kanafani, Demmel)

Each year large natural disasters cost the U.S. hundreds of lives, many critical structures, and billions of dollars in economic disruption. Earthquakes present a substantial risk to large cities, with a probability of about 60% that a major earthquake will strike California in the next 30 years. Casualty estimates number in the thousands, direct damage losses are on the order of $100 to $200 billion, and indirect losses due to economic disruption could be several times greater. Seismic hazard is not confined to California, with equally significant risks from the New Madrid, Boston, and Charleston earthquake zones.

A recent NRC report [2.2a] states that improved information on disasters is the key to reducing losses and speeding recovery. Effective decisions by owners, operators, and occupants of buildings are hampered by lack of information about their safety. We contend that SISs can help protect lives and speed the economic recovery of a city after a large earthquake. The same technologies will be equally effective for tornadoes, hurricanes, fires, and floods. We describe SISs for three applications: (1) structural health prognosis of individual buildings and bridges, which pose the most hazard to the public in an earthquake, (2) real-time evaluation of inventories of buildings and lifeline networks, and (3) adaptive coordination of emergency response and recovery. We concentrate our efforts here on the first SIS, but note that it is an integral part of the latter two. All these SISs can share much of their infrastructure with the SISs for energy and transportation (the latter is a part of CITRIS not described here), making their marginal costs low.

The NRC report also emphasizes the need to tailor information to consumers in a disaster, who comprise three main groups: (1) system designers and developers, (2) official emergency response staff, (3) the public. Designing for these groups, especially the last two, requires careful user needs analysis. We will apply our extensive experience [2.2b, 2.2c] in needs analysis to these communities for SIS.

A current example of a regional sensor network is the Tri-Net system in California [2.2a, 2.2d, 2.2e] which contains approximately 750 ground accelerometers, communicating by digital telemetry to a central server. The ground motion data collected during an earthquake is used to develop “shakemaps” showing the distribution of ground motion. Currently, the shakemap is used to estimate losses [2.2f, 2.2g] although there is no direct measurement or assessment of loss. With the proposed SIS, information gathering can be scaled to much denser coverage of not only the ground motion but more importantly direct sensing of the effects of the ground motion on individual structures in an urban region.

2.2.2 Structural Health Diagnosis and Prognosis for Buildings and Bridges. With thousands of sensors monitoring a large structure, there is not enough power, bandwidth, or space to collect and process all data in a central server (§3.1). Instead we must process data hierarchically, only occasionally collecting compressed traces [2.2h]. We propose to integrate modeling, data acquisition, and sensing processes in a way that will allow civil engineers to approach design and structural health prognosis from a new on-line nonlinear viewpoint: since damage begins locally, we will distribute “dense-packs” of sensors (e.g., 3D accelerometers) at key structural points, such as each beam-column connection.

The most common approach to structural damage prognosis has been global modal analysis [2.2i], although recent full-scale experiments show that it is far too insensitive to be useful in practice [2.2j]. A prime example is the modal analysis work on an abandoned bridge in Albuquerque, NM: only after the main longitudinal plate girder was cut more than 2/3 through was even a small change seen in the modal parameters [2.2k]. Global modal analysis is inadequate for several reasons. Because evolving damage is local, a complex structure will redistribute internal forces to stiffer members as particular beams, columns, etc. are weakened. Only when damage is sufficient to affect the performance of the entire structure will it be visible through global modal analysis, well after the safety of the structure is compromised.

Evaluation of damage in structural terms (diagnosis of cracking, yielding, buckling, etc.) is not sufficient for making decisions about the safety of a building. A prognosis must be based on forward simulation of the effects of the damage with the current loading and expected aftershocks, and requires integration of measuring and modeling, constantly updating both the model and information sensed. Each building can have an online model of itself, constantly updated with parameters estimated from the damage detection network. As a major change in state is detected, the updated model will determine the safety of the structure in the short term, prioritize the inspection and repair in the longer term, and reprogram the sensor agents and constitutive model as needed. Information on prognosis may be condensed into an automatic notification system for occupants, including safe egress routes.

2.2.3 Approach to Structural Data Interpretation. Development of analytical tools for determining system response in terms of damage initiation and damage propagation, i.e., understanding the interaction between the structural system and its components, is essential for performance-based design. The system identification (SI) approach is a powerful statistical tool to quantify and assess system damage parameters, and has been applied by many structural researchers [2.2l, 2.2m, 2.2n, 2.2o, 2.2p, 2.2q, 2.2r, 2.2s].

System identification requires a model, whether black-box (e.g., a linear filter) or white-box (a physical model). Identification can be made through the extended Kalman filter [2.2t, 2.2u], which has successfully identified various physical systems. Physical parameters like elastic moduli, damping coefficients, and effects of soil-structure interaction can be identified [2.2v, 2.2w, 2.2p, 2.2x, 2.2s]. Updating parameterized constitutive models with measured global response data has been attempted [2.2y, 2.2z]. Integration of finite element modeling with SI of boundary conditions has been done successfully at UCB [2.2q]. The most promising parameterization of an evolving system is a unified methodology based on Bayesian/state-space identification and adaptive estimation [2.2aa, 2.2v].
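
To illustrate the extended Kalman filter idea on the smallest possible example, the sketch below jointly estimates the state and stiffness of a single-degree-of-freedom oscillator from noisy displacement data. It is a toy, not our structural formulation; the mass, damping, noise levels, and time step are all assumed:

    # EKF sketch: jointly estimate displacement, velocity, and stiffness k of a
    # single-degree-of-freedom oscillator from noisy displacement measurements.
    import numpy as np

    m, c, dt = 1.0, 0.2, 0.001           # mass, damping, time step (assumed)
    k_true = 50.0                        # "true" stiffness to be recovered

    def f(s):                            # Euler-discretized dynamics; k constant
        x, v, k = s
        return np.array([x + dt * v,
                         v + dt * (-(k / m) * x - (c / m) * v),
                         k])

    def F(s):                            # Jacobian of f at s
        x, v, k = s
        return np.array([[1.0, dt, 0.0],
                         [-dt * k / m, 1.0 - dt * c / m, -dt * x / m],
                         [0.0, 0.0, 1.0]])

    H = np.array([[1.0, 0.0, 0.0]])      # we observe displacement only
    Q = np.diag([1e-10, 1e-10, 1e-4])    # small drift on k models evolving damage
    R = np.array([[1e-4]])               # measurement noise variance (std 0.01)

    rng = np.random.default_rng(0)
    s_true = np.array([1.0, 0.0, k_true])    # released from unit displacement
    s_est = np.array([1.0, 0.0, 30.0])       # poor initial stiffness guess
    P = np.diag([0.01, 0.01, 100.0])

    for _ in range(5000):                # 5 seconds of free vibration
        s_true = f(s_true)
        z = s_true[0] + rng.normal(0.0, 0.01)
        Fk = F(s_est)                    # predict
        s_est, P = f(s_est), Fk @ P @ Fk.T + Q
        S = H @ P @ H.T + R              # update
        K = P @ H.T @ np.linalg.inv(S)
        s_est = s_est + (K @ (z - H @ s_est)).ravel()
        P = (np.eye(3) - K @ H) @ P

    print(f"estimated stiffness: {s_est[2]:.1f}  (true: {k_true})")

The same joint state/parameter formulation, scaled up and driven by dense-pack accelerometer data, is what the SI references above pursue.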

With updated models, developed locally, forward simulations can be used to prognosticate the effects of damage. This is particularly critical when evaluating the safety of a building after a major earthquake and estimating the probability of collapse in an aftershock. For forward simulation, parameterized models can be updated and assembled in an object-oriented framework [2.2bb, 2.2cc]. The models will be updated locally and assembled over the network in a dynamic process depending on the processing, communication, and power available. Simulations may also be centralized or distributed. There can be hierarchies of simulation models: reduced parameter sets for rapid estimates, and more detailed models as processing power becomes available or sensitivity analysis shows that more refined models will reduce uncertainty in the prognosis.

Milestones. In year 1 we will develop model update procedures using sensor data and evaluate them with small-scale laboratory tests. In year 2 we will implement sensing and diagnosis/prognosis on structural specimens of building frames that will be tested on the Pacific Earthquake Engineering Research Center’s earthquake simulator (“shaking table”). This will provide a controlled laboratory setting to serve as the first-level testbed for the sensors, networking, and algorithms. In year 3 we will deploy in the field on one of the new buildings planned for construction at UCB during the scope of the project. We anticipate collecting data during construction, with forced-vibration tests to verify the system.

2.2.4 SIS for Inventories of Buildings and Lifeline Networks. Scaling up from individual buildings, owners of multiple buildings (a corporate or university campus) are concerned with a disaster’s effect on the operation of their enterprise. Similarly, utility (electricity, gas, water) and transportation networks (highways, railways, ports, harbors, airports) must synthesize damage information to determine how to restore service. Owners of multiple facilities require damage assessment, repair estimates, prioritization of repairs, and the ability to acquire and deploy resources on short time scales, rather than the weeks or months repairs now take.

2.2.5 SIS for Regional Emergency Response and Recovery. Emergency response can be chaotic for lack of detailed information on damage location and severity, search and rescue needs, communication interruptions, and transportation interruptions for emergency personnel and equipment. The potential for casualties can be reduced substantially by using SISs to monitor buildings and precincts, and to prioritize and route emergency services. This information, coupled with SIS information on the capacity of transportation systems, hospitals, fire-fighting equipment, and heavy construction equipment, will allow rapid and effective matching of needs with capabilities.

Emergency response staff must be able to reach key decision-makers and coordinate with other response teams. This requires new techniques for representing user expertise and current activity. We will develop tools to build expertise profiles of decision-makers, track their post-disaster actions, and provide a “knowledge network” so staff can (i) find an available decision-maker with relevant expertise, (ii) see whether that decision-maker has reviewed particular information about the situation, and (iii) visualize disaster-response teams, including their members, the information they are reviewing, and the issues they are dealing with. Such distributed networks can accelerate decision-making in time-critical situations, and are tolerant of communication failures. A prototype will be built for the UCB campus.

3. SIS Design

The key open problems in designing an SIS to support the driving applications are given below.

Sensor Level Architecture (§3.1): What is the architecture of a massive distributed sensor system? How should it be programmed, synchronized, and maintained in the face of real-time and low-power constraints, intermittent and permanent failures of individual sensors, the need to download new software periodically, and physical inaccessibility preventing local maintenance?

Service Architecture for Distributed Systems (§3.2): A sensor network will be just one component in an SIS of multiple data repositories, computational services, and system or user interfaces; some commercial, government, or academic; some reliable and some unreliable; some trustworthy or untrustworthy; created dynamically under no central authority. How will these services be created, peered and interfaced in real time?

Adaptive Data Management and Query Processing (§3.3): The data collected by sensor networks will be massive, real-time, intermittent, and noisy. How will it be collected, summarized, filtered and indexed to provide diverse users the data they want reliably and in real-time?

Human-Centered-Computing (§3.4): How do we determine and support the needs of diverse users who need (some of) the data? How should the data be presented to help them make decisions?

We will demonstrate such SISs at the room, building, and campus scales.

3.1 Sensor Network Level Architecture (Target: 1 page) (Faculty: Culler, Pister)

3.1.1 Experimental Networked Sensor System Architecture. We will build on the SmartMotes and SmartDust described in §1, with which we have extensive experience: we recently dropped a number of styrofoam-encased SmartMotes from an airplane. They landed, formed an ad-hoc wireless network, tracked the movement of nearby traffic, and sent the results to a remote server. But many problems remain.

The sensor network architecture has 3 tiers. The microsensor tier consists of the sensor devices themselves. Device locations and their communication structure will change over time due to environmental factors or mobility. The nodes will create a self-configured, multihop distributed network with extensive in situ data filtering capability. They must operate unattended for long periods.

The sensor-base tier has far fewer but more powerful base stations that serve as gateways to longer-range networks and provide analysis and storage services. These devices are embedded PCs with sizable power storage, network connectivity bridging the low-power wireless network and conventional LAN or WAN interfaces, and a rich sensor suite. This tier provides additional sensor modes with a wider range, to be focused on areas of interest as determined by localized sensor data, as well as communication, programming, storage, and landmark resources for the microsensor tier.

The analysis and control tier provides storage and data analysis resources, as well as facilities to interactively program the lower tiers. Typically, it consists of powerful arrays of servers and services.

3.1.2 Nodal Operating Environment. The nodal operating system must be energy efficient, especially in low duty-cycle vigilance mode, and be facile under a burst of events. It must meet hard real-time constraints, such as sampling the radio signal within bit windows, while handling asynchronous sensor events and supporting localized data processing algorithms. It must be robust and field-reprogrammable.

Our starting point is our recent TinyOS event-driven system [3.1a]. We began this work because no other OS handled extensive concurrency, bit- or byte-wise processing, and dynamic events in just a few kilobytes of space. We use very fine-grained, lightweight multithreading. But the programming problem differs from parallel programming or distributed systems, because sensor networks operate as aggregates, where information moves from regions of the network to other regions according to some higher-level transformation, often with real-time requirements, rather than as point-to-point streams and file transfers between named devices. Our proposed nodal communication model provides 5 primitives: (1) local multicast transmission, (2) event-driven reception, (3) retransmission pruning, (4) aggregation, and (5) buffer management. These primitives serve as compilation targets for higher-level descriptions and as a basis for algorithmic analysis. Networking concepts such as directed diffusion are built upon them.
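
Rendered as a nodal API, the five primitives might look roughly as follows; the names and signatures are our illustrative sketch, not the actual TinyOS interfaces:

    # Illustrative rendering of the five nodal primitives as a Python API.
    # Names and signatures are hypothetical, not the real TinyOS interfaces.
    from typing import Callable

    class NodeComm:
        def multicast(self, msg: bytes) -> None:
            """(1) Local multicast transmission to all nodes in radio range."""

        def on_receive(self, handler: Callable[[bytes], None]) -> None:
            """(2) Event-driven reception: register a handler; no polling loop."""

        def should_retransmit(self, msg: bytes) -> bool:
            """(3) Retransmission pruning: suppress a forward when it is
            redundant, e.g. the message was already sent by a neighbor."""

        def aggregate(self, local: float, incoming: list[float]) -> float:
            """(4) Aggregation: fold incoming values and the local reading
            into one outgoing value (e.g. the region's maximum strain)."""

        def reserve_buffers(self, n: int) -> None:
            """(5) Buffer management: bound memory for queued messages,
            evicting by application-defined priority when full."""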

Traditional Internet layers and protocols are too heavy-weight, and SIS communication patterns are very different. Node identity is often unimportant relative to its role in the physical topology and the ability to precisely name data the node collects [3.1b]. Since low-power radio distance is limited, any node may have to serve as a router and the collection must self-organize into a dynamic, ad-hoc multi-hop network connecting to external tiers. While ad hoc routing algorithms can be application independent, application specific aspects must be included to extract the most value from each precious message [3.1c].

3.1.3 Aggregate Programming for Collaborative Processing. Ideally, application scientists should be able to program sensor nets with aggregate query and processing expressions that would be translated into specific sensor acquisition on the nodes, triggers, interrupts, and communication flow. The problem is related to database query processing, if one views the sensor data as a fine-grained distributed database with data identified by key rather than address. Rather than regular tables and records, we have a pool of intermittent, unstructured, noisy data streams. One can also view the sensor network as an extremely fine-grained tuple-space, as in Linda or JINI. Many operations are naturally data parallel but are likely to be statistical rather than deterministic. We can borrow techniques from online query processing [3.1d] to report results statistically and with tolerance for adversarial inputs from the nodes. These incremental techniques provide a means of implementing triggers to indicate unusual events. In general, the aggregation operations must handle outliers within the network to be robust to processing and networking errors. We will investigate the right set of data-parallel operations for programming sensor networks.
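
As one example of the outlier-tolerant aggregates needed, a region head might trim extreme readings before combining them, along these lines (a minimal sketch with made-up values, not generated query code):

    # Sketch: outlier-tolerant in-network aggregation. A region head trims the
    # extreme tails before reporting upward, so a few faulty or adversarial
    # nodes cannot skew the regional summary. Values are illustrative.
    def trimmed_mean(readings, trim_fraction=0.15):
        ordered = sorted(readings)
        k = int(len(ordered) * trim_fraction)
        kept = ordered[k:len(ordered) - k] or ordered
        return sum(kept) / len(kept)

    region = [21.0, 21.4, 20.9, 21.2, 98.6, 21.1, -40.0, 21.3]   # two bad nodes
    print(f"naive mean: {sum(region) / len(region):.1f} C,"
          f" trimmed mean: {trimmed_mean(region):.1f} C")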

3.2 Service Architecture for Distributed Systems (Target: 1 page) (Faculty: Katz; Joseph)

Our overarching goal is to develop a deeper understanding of service-level peering (SLP): creating end-to-end services with desirable and predictable properties, such as performance and reliability, when provisioned from multiple independent service providers. Services such as distribution and caching of content (and its dual, sensor-based data collection and aggregation), mediation and adaptation for diverse end devices and access networks, billing and accounting subsystems, and network mechanisms for indexing, service location, redirection, and naming are the building blocks from which complex distributed applications are constructed. Service platforms like .NET [3.2a] and ONE [3.2b] are monolithic, with limited cross-platform interoperability and no performance guarantees. SLP is essential for rapid SIS construction, reducing the time to build services from independently deployed implementations; e.g., if electricity operators select alternative stand-alone systems for collecting real-time demand information, SLP allows these to interoperate under end-to-end constraints, such as sufficient bandwidth for data collection with limited latency to feed the demand-management algorithms.

Our service architecture must support the dynamic confederation of collaborating and competing service providers. Most prior work has focused on a single provider [3.2c, 3.2d, 3.2e]. Our Clearinghouse [3.2f] offers a starting point: a scalable network resource manager, based on reservations, admission, and policing, that works across service providers, primarily for performance-sensitive packet voice and video applications. While focused on network bandwidth, it can be extended to manage processing and storage in the wide area [3.2g, 3.2h, 3.2i], and its applications can be generalized to wide-area content distribution and assembly [3.2j]. It combines hierarchical monitoring and allocation with service-provider peer-to-peer negotiation to achieve a combination of local control and scalability.

The primary challenge is to achieve SLP and resource sharing in an environment of limited trust and cooperation. The elements include: (i) an open service and resource allocation model, (ii) description of service provider resources, capabilities, and current status [3.2k], (iii) resource allocation mechanisms based on economic methods, such as electronic auctions, coupled with real-time accounting/billing/settlement systems [3.2l] for the resources used, (iv) mechanisms for managing trust relationships among clients and service providers, and between service providers, based on trusted third party monitors, and (v) general services for forming dynamic confederations, such as discovering potential confederates and managing trust relationships.
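
As a small illustration of the economic mechanisms in item (iii), a sealed-bid second-price auction for a unit of bandwidth could be as simple as the following; this is purely illustrative, not the Clearinghouse design:

    # Sketch of one economic allocation primitive: a sealed-bid second-price
    # auction for a unit of bandwidth. Illustrative only; provider names
    # and bids are made up.
    def second_price_auction(bids):
        """bids: {provider: offer in $}. The winner pays the second-highest
        bid, which makes truthful bidding the dominant strategy."""
        ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
        winner = ranked[0][0]
        price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
        return winner, price

    print(second_price_auction({"ISP-A": 12.0, "ISP-B": 9.5, "Campus": 7.0}))
    # -> ('ISP-A', 9.5)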

We will construct testbeds at room, building, campus, and regional scales to investigate how the proposed architecture generalizes. At room scale, the architecture coordinates bandwidth and limited processing among collaborating sensors and access points. At building scale, the testbed is extended to investigate resource coordination for bandwidth, processing, and storage across independent service providers in different regions of the building. The campus-scale testbed extends the environment toward greater diversity of network technologies, with overlay service providers intermixed with departmental service providers within buildings. Through some of our industrial collaborators (Appendix A), we plan to obtain measurements, experiment with new services and their resource demands, and generally gain experience of the opportunities and constraints in the wide area through a regional-scale testbed.

3.3 Adaptive Data Management and Query Processing (Target: 1 page) (Faculty: Franklin and Hellerstein)

The data management components of an SIS must quickly evolve and adapt to radical changes in data availability, systems, and network characteristics, and scale to large, highly-distributed collections of information. They must also meet the demands of a diverse user base, and actively assist users in exploring vast quantities of data in a near-real time manner. This includes managing information requests from a number of information visualization applications, as discussed below in §3.4.

Traditional database systems cannot meet these challenges for several reasons. (1) They assume a relatively static collection of information. In a dynamic emergency-response environment, in which data arrive in real-time streams, this approach fails because there are no reliable statistics about the data and because the arrival rates, order, and behavior of the data streams are too unpredictable [3.3a, 3.3b]. (2) Existing approaches cannot cope with failures that arise while processing a large query. (3) Existing approaches are optimized to deliver a complete answer, without intermediate results. Since users will interact with an SIS in a fine-grained fashion, such approaches are unacceptable: processed data must be passed on to the user as soon as they are available. Furthermore, because an SIS is interactive, users may choose to modify their queries based on previously returned information or other factors. Thus, the system must gracefully adjust to changes in user needs [3.3c].

The research plan for data management in SIS addresses these three issues:

Adaptive Data Flow Processing. The Telegraph project is developing an adaptive dataflow processing engine. Telegraph uses a novel approach to query execution based on “eddies”, which are dataflow control structures that route data to query operators on an item-by-item basis [3.3d]. Telegraph does not use a traditional query plan but rather allows the “plan” to develop and adapt during execution. For queries over continuous data streams, the system continually adapts to changes in data arrival rates, data characteristics, and the availability of processing, storage, and communication resources. An initial prototype of Telegraph has been built, but much work remains. Challenges include: 1) developing cluster-based and wide-area implementations of the processing engine, 2) supporting efficient continuous queries over streaming data from sensors and web-based sources, and 3) designing fault-tolerance mechanisms for continuous queries. These will be designed to support the development of appropriate user interfaces for manipulating and querying data flows, as described in §3.4 below.
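
In spirit, an eddy routes each tuple through the eligible operators in an order chosen at run time from observed behavior rather than from a precompiled plan. The following cartoon routes cheapest-first by measured operator cost; it is our illustration of the idea in [3.3d], not Telegraph code:

    # Cartoon of eddy-style routing: each tuple visits operators in an order
    # chosen at run time from observed costs, not from a precompiled plan.
    import time

    def op_severity(t):                        # cheap predicate
        return t["severity"] >= 3

    def op_zone(t):                            # simulated expensive lookup
        time.sleep(0.0001)
        return t["zone"] == "north"

    costs = {op_severity: 0.0, op_zone: 0.0}   # running average of observed cost

    def eddy(tuples):
        for t in tuples:
            alive = True
            for op in sorted(costs, key=costs.get):   # cheapest-first routing
                start = time.perf_counter()
                alive = op(t)
                costs[op] = 0.9 * costs[op] + 0.1 * (time.perf_counter() - start)
                if not alive:
                    break
            if alive:
                yield t

    data = [{"severity": s, "zone": z} for s in (1, 4) for z in ("north", "south")]
    print(list(eddy(data)))    # only the severe, northern tuple survives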

Sensor Query Processing. Much of the data to be processed in an SIS will stream in from low-power sensors. Techniques for querying and manipulating these data streams will be crucial. These techniques must not only be efficient, but must also tolerate the power limitations and error characteristics of the sensors. We plan to extend the dataflow query processing architecture with two techniques for sensors: 1) the “Fjords” operator architecture, and 2) power-sensitive “sensor proxy” operators. The Fjords architecture provides the functionality and interfaces to integrate erratic, streaming dataflows into query plans. It allows streaming data to be pushed through operators that pull from traditional data sources, efficiently merging streams and local data as samples flow past. Fjords also allow processing from multiple queries to share the same data stream, providing huge scalability improvements. Sensor proxies are specialized query operators that mediate between sensors and query plans, using sensors to aid query processing while adapting to their power, processor, and communications limitations.
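
The push/pull distinction at the heart of Fjords can be sketched as follows, with a queue fed asynchronously by sensors merged against a stored table that is pulled on demand; the code is a hypothetical illustration, not the Fjords implementation:

    # Schematic of a Fjords-style operator: push-driven sensor samples are
    # merged against a pull-driven stored table as data streams past.
    from collections import deque

    sensor_queue = deque()                  # filled asynchronously by the radio
    calibration = {7: 0.98, 8: 1.02}        # stored table, pulled on demand

    def on_sample(node_id, value):          # push side: never blocks the network
        sensor_queue.append((node_id, value))

    def poll_join():                        # pull side: drains whatever arrived
        while sensor_queue:
            node_id, value = sensor_queue.popleft()
            scale = calibration.get(node_id)     # pull from the stored source
            if scale is not None:
                yield node_id, value * scale

    on_sample(7, 21.5)
    on_sample(9, 19.0)
    on_sample(8, 22.0)
    print(list(poll_join()))    # node 9 has no calibration entry and is dropped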

Context-Aware Data and Event Dissemination. Due to the huge volume of data managed by an SIS, the system must provide special support for the targeted and timely delivery of relevant data and notifications to users based on their interests, roles, and context at a particular time. Such dissemination must be driven by user profiles, which contain information about user requirements, priorities, and information needs [3.3e, 3.3f]. We envision a user profile language that allows the specification of three types of information: 1) Domain specification: the kinds of data that are of interest to the user. This description must be declarative in nature, so that it can encompass newly created data in addition to existing data, and flexible enough to express predicates over different types of data and media. 2) Utility specification: because of limitations on bandwidth, device-local storage, and human attention, only a small portion of the available information can be sent to a user. Thus, the profile must also express the user’s preferences in terms of priorities among data items, desired resolutions of multi-resolution items, consistency requirements, and other properties. 3) Context specification: user context can be dynamically incorporated into the dissemination process by parameterizing the user profile with user context information, for example as used by the CrossWeaver project described in §3.4. The challenges here involve language development, profile processing, and delivery scheduling. We plan to draw on our earlier work on large-scale XML document filtering for this purpose [3.3g].
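
A profile with the three parts above might be encoded roughly as follows; the field names and matching logic are hypothetical, for illustration only:

    # Hypothetical encoding of a user profile with the three parts above.
    profile = {
        # 1) Domain: declarative predicate over data items, existing or new
        "domain": lambda i: i["type"] == "structural_alert" and i["severity"] >= 3,
        # 2) Utility: priority ordering applied under a delivery budget
        "utility": lambda i: (i["severity"], i["timestamp"]),
        # 3) Context: parameterizes delivery, e.g. by the user's location
        "context": {"location": "campus_north", "device": "pda"},
    }

    def disseminate(items, profile, budget=2):
        matches = [i for i in items if profile["domain"](i)]
        local = [i for i in matches if i["zone"] == profile["context"]["location"]]
        return sorted(local, key=profile["utility"], reverse=True)[:budget]

    items = [
        {"type": "structural_alert", "severity": 4, "timestamp": 10, "zone": "campus_north"},
        {"type": "structural_alert", "severity": 2, "timestamp": 11, "zone": "campus_north"},
        {"type": "power_alert", "severity": 5, "timestamp": 12, "zone": "campus_north"},
    ]
    print(disseminate(items, profile))   # only the severe structural alert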


3.4 Human-Centered Computing (Target: .5 pages) (Faculty: Canny, Hearst, Landay, Saxenian)

Information visualization tools for disaster response should help diverse groups of users make sense of the situation as it unfolds in real time. Converting a massive, linked, and distributed multi-sensor environment into human-understandable organizations will require the development of fundamentally new human-computer interaction methods. Visualization will be critical both for helping system developers understand the placement and use of real-time sensor data, and for emergency response staff responding to a disaster, but the interfaces for these two groups are likely to differ radically. The UCD visualization group has extensive experience with the problems of scalable, hierarchically organized information visualization (§3.4.1), and others at UCB have been examining how to convert non-spatial information into human-understandable knowledge [3.4a, 3.4b, 3.4c, 3.4d, 3.4g, 3.4h].

Innovative interfaces will be developed for the emergency response staff in the field and for the public at large, both for monitoring the situation and for understanding what they specifically should do in response. The information must be accessible from a large number of different devices with different kinds of network connectivity, and it cannot be assumed that devices available to the public will be of the same quality as those available to the emergency response staff. User interface techniques must therefore be sensitive to the size and bandwidth of the access device, and so must work across wall-sized displays, the web, and PDAs.

We have had some early experience building wall-sized interfaces to support the management of firefighters at the scene of a fire [3.4i]. We will continue this work and adapt it to other situations, such as earthquakes. Interfaces must also work seamlessly in different modalities; for example, audio input and output will be necessary for the visually impaired or for emergency vehicle drivers. We are developing a design tool called CrossWeaver to help design and build these types of cross-platform, multi-modal user interfaces [3.4f, 3.4j]. Interfaces should also respond to their context of use; for example, where the user is located and what level of stress or danger they are under should be determined automatically and used to modify what is displayed. Researchers at UCB are developing context-aware interface systems for distributed devices [3.4g]. We will use CrossWeaver along with this context-aware infrastructure to develop a variety of interfaces for the emergency response staff as well as the public.

3.4.1 Interactive Visualization and Exploration of Multi-source Data Streams in Collaborative Environments Using Parallel and Distributed Computing (Target: .5 pages) (Faculty: Hamann; Max, Joy, Ma)

Visualization and exploration technology for the massive data streams generated by up to millions of sensors, e.g., those monitoring energy status and consumption in a large building complex, does not yet exist. How do we visualize vast amounts of data over long distances where collaborative and/or time-critical data exploration is essential? How can we combine data streams generated by sensor arrays and other sources and visualize them in a superimposed fashion? What type of networking technology and architecture can help facilitate data collection and interpretation? Can topological approaches used in scientific data analysis help "simplify" massive sensor data?

The data sets generated by these applications must be reduced and compressed, and "summary" views provided. To achieve the needed compression, we will investigate hierarchical networking architectures in support of hierarchical data representations and formats. We will design hierarchical compression schemes (multi-resolution/wavelet methods) to support progressive data transmission. Our research foci will be: real-time multi-resolution compression and decompression schemes; collaborative data exploration over high-speed networks; real-time feature extraction of topologically/qualitatively relevant subsets; rendering and immersive interaction technology for large-scale display environments (power walls); and parallel and distributed computing technology for compression, analysis, and visualization. We will also need to merge multiple views of an event, such as image-based rendering with maps superimposed with damage data.
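
As a flavor of the multi-resolution methods intended, one level of a Haar wavelet transform already separates a half-resolution "summary" view from detail coefficients that can be transmitted progressively or dropped; this is a minimal sketch, not our planned compression scheme:

    # One level of a Haar wavelet transform on a sensor trace: the averages
    # form a half-resolution "summary" view; detail coefficients can be sent
    # later (or dropped) for progressive transmission. Trace is illustrative.
    def haar_level(signal):
        avgs = [(a + b) / 2 for a, b in zip(signal[::2], signal[1::2])]
        details = [(a - b) / 2 for a, b in zip(signal[::2], signal[1::2])]
        return avgs, details

    def reconstruct(avgs, details):
        out = []
        for s, d in zip(avgs, details):
            out += [s + d, s - d]
        return out

    trace = [21.0, 21.2, 21.1, 21.3, 24.8, 25.0, 21.2, 21.0]
    summary, detail = haar_level(trace)
    print("summary view:", summary)          # half the data, coarse trend kept
    rebuilt = reconstruct(summary, detail)
    assert all(abs(x - y) < 1e-9 for x, y in zip(rebuilt, trace))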

The UC Davis group will build on its experience developing innovative techniques for geometric modeling, hierarchical data approximation and visualization, and multi-resolution rendering in immersive environments. The group has published extensively during the past five years; emphasis areas related directly to this ITR proposal include hierarchical representation of massive data sets [3.5a, 3.5b, Bertram00c, 3.5c, 3.5d, 3.5e, 3.5f, 3.5g, 3.5h, 3.5i]; reconstruction of geometric information and models [3.5j, 3.5k, 3.5l, Heckel99a]; rendering of massive data sets [3.5m, 3.5n, 3.5o]; and topological, qualitative analysis of flow data [3.5p].

4. Foundations

4.1 System Reliability (Target: 2-3 pages) (Faculty: Henzinger, Aiken, Necula, Pister, Sastry, Wagner)

Concerns over IT system reliability increase with society's reliance on these systems. Historically, this corresponds to the maturing of new technologies, when the search for additional features is replaced by the need for high-confidence systems. While functionality, speed, and availability dominate the headlines about new IT systems, the success of SISs will ultimately depend on reliability, including safety, predictability, fault tolerance, and the ability to meet hard real-time constraints. Hand in hand with these demands goes the need to understand the vulnerabilities of SISs to information attack. We expect SISs to be subjected to both malicious and accidental attacks, and cannot afford to have societal-scale systems be compromised. We believe that three central issues must be addressed in order to achieve acceptable reliability in SISs:

Modeling, prediction, and design of complex systems.


The first "weak link" in current design practice stems from the intrinsic *complexity* of SISs. While "complexity" in science usually refers to the understanding of complex systems that occur in nature (such as weather prediction), we submit that a different kind of complexity arises in systems that are designed by humans, and that if properly understood, this complexity can be controlled. The complexity of SISs arises from the large number of distributed but interacting components, the heterogeneous nature of components (digital computers as well as analog devices), the many levels of abstraction used in design (such as physical, system, protocol, and application layers), the many different aspects of system performance(such as functionality, timing, fault tolerance), and the many, often unpredictable ways in which the environment (sensors, users, failures, attackers) can influence the behavior of the system. Since current design practices are hitting their limits in the design of more homogeneous complex systems, such as microprocessors and software (see below), they cannot achieve acceptable reliability in SISs. We propose research in two key directions towards managing complexity by design: Modeling and analysis of networked hybrid and embedded systems, and Multi-aspect interfaces for component-based design.

Software quality.

The second weak link in current design practice concerns the lack of quality control in the development of large, concurrent software systems. As every SIS has significant software components that interact with each other as well as with sensors and actuators in many complex ways (see above), software quality is an area of particular vulnerability for SISs. We propose research on combining static and dynamic software analyses to improve software quality.

Information Assurance for SISs.

While we have all been sensitized to the damage that can be wrought by denial-of-service, viral, and other attacks on networks, we feel that SISs need to be even more resistant to attack. For sensor networks this translates into the need to understand the vulnerability of information gathering to the introduction of rogue nodes, attempts at deliberate misinformation, and other modes of attack. We propose a program of research on information assurance for SISs in §4.1.4.

4.1.1 Modeling and analysis of networked hybrid and embedded systems

The first "weak link" in current design practice stems from the intrinsic *complexity* of SISs. The complexity of SISs arises from the large number of distributed but interacting components, the heterogeneous nature of components (digital computers as well as analog devices), the many levels of abstraction used in design (such as physical, system, protocol, and application layers), the many different aspects of system performance(such as functionality, timing, fault tolerance), and the many, often unpredictable ways in which the environment (sensors, users, failures, attackers) can influence the behavior of the system. Both target applications of CITRIS involve complex digital systems (computers and networks) interacting with the physical world through distributed sensors and actuators. Such a hybrid systems hierarchically layer the characteristics of discrete and continuous models of computation. While theories and tools for modeling and analyzing hybrid systems have emerged over the past few years in control applications, this research has focused mostly on developing algorithms (and impossibility results) for solving simple mixed discrete-continuous control problems. SISs are hybrid systems of a complexity that is certainly not open to precise algorithmic analysis. The focus must therefore shift to hybrid modeling techniques for capturing composition, hierarchy, and heterogeneity in a mixed discrete-continuous setting, and to approximate and stochastic techniques for system analysis and simulation (see for example [4.1aHenzinger-Sastry98] and [4.1h, 4.1n,4.1bTLS00]).

While in classical hybrid systems theory, a single or small number of plants is modeled using differential equations, and a single or small number of controllers is modeled using state machines, such "hybrid automata" models [4.1d, 4.1v] are inadequate for networked embedded systems with a large number of sensor, processor, and actuator nodes. First, the appropriate model for a large number of nodes is continuous, not a discrete composition of individual components, in the same way in which fluid dynamics and population dynamics are best studied abstractly as continuous processes [4.1y], not as collections of individual molecules or creatures. Second, for large networks, unpredictable and faulty behaviors of individual nodes need to be modeled using global stochastic assumptions. This is a paradigm shift from concrete discrete behavior, which is too complex to be analyzed, to abstract continuous behavior, for which we will develop simulation and analysis techniques. At the same time, new discrete phenomena emerge on the abstract level, in the form of mode switches or phase transitions. For example, a sensor network may tolerate a certain number of faults without compromising its performance, but degrade quickly if that number is exceeded. This new view of modeling hybrid systems is accompanied by a paradigm shift from model construction (given the individual components and topology) to model extraction from simulation data, together with a paradigm shift from model verification (given a requirements specification) to model prediction (forecasting). In particular, active run-time monitoring and forecasting can be used for dynamically reconfiguring and optimizing the network.
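To make this paradigm concrete, the following Python sketch (all rates and the threshold are hypothetical) tracks the fraction of live nodes as a single continuous stochastic state, rather than composing thousands of discrete node models, and forecasts the emergent phase transition where coverage collapses:

import random

def simulate_live_fraction(n_nodes=1000, fail_rate=0.008, repair_rate=0.01,
                           steps=500, seed=1):
    """Abstract the network to one continuous state: the live fraction x.
    Per step, each live node fails with probability fail_rate and each dead
    node is repaired with probability repair_rate (global stochastic
    assumptions, not per-node models)."""
    random.seed(seed)
    x, history = 1.0, []
    for _ in range(steps):
        failures = sum(random.random() < fail_rate for _ in range(int(x * n_nodes)))
        repairs = sum(random.random() < repair_rate for _ in range(int((1 - x) * n_nodes)))
        x += (repairs - failures) / n_nodes
        history.append(x)
    return history

THRESHOLD = 0.6  # assumed coverage level below which performance degrades sharply

for t, x in enumerate(simulate_live_fraction()):
    if x < THRESHOLD:  # the emergent discrete mode switch
        print(f"phase transition forecast at step {t}: live fraction {x:.2f}")
        break

Run against live telemetry instead of a simulator, the same loop becomes the active run-time monitor that triggers reconfiguration before the transition occurs.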

Another area of special interest concerns the development of micro-protocols for the kind of real-time networks of embedded systems involved in SISs, for distribution services like synchronization, replication, consensus, and leader election. The classical solutions from distributed algorithms, as well as best-effort networking protocols, are no longer appropriate in a setting where individual misbehavior, such as an individual sensor failure or packet delay, is a common occurrence of low cost, but the cost of system-wide misbehavior, such as global breakdown or global reaction delay, is high. This suggests the design of group communication protocols that may tolerate erratic micro-behavior in order to optimize macro-behavior and avoid undesirable global phase transitions.
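As a minimal sketch of this design style (the topology, loss rate, and readings are assumed for illustration), the randomized gossip protocol below tolerates frequent individual message loss while still driving the group toward a stable global aggregate:

import random

def gossip_average(values, rounds=2000, drop_prob=0.3, seed=7):
    """Randomized pairwise averaging: any single exchange may fail cheaply
    (erratic micro-behavior), yet the global estimate still converges and
    no system-wide phase transition occurs."""
    random.seed(seed)
    vals = list(values)
    n = len(vals)
    for _ in range(rounds):
        i, j = random.randrange(n), random.randrange(n)
        if i == j or random.random() < drop_prob:
            continue  # a lost or self-addressed message: low-cost, tolerated
        vals[i] = vals[j] = (vals[i] + vals[j]) / 2
    return vals

random.seed(42)
readings = [20 + random.uniform(-1, 1) for _ in range(50)]
result = gossip_average(readings)
print(f"spread after gossip: {max(result) - min(result):.4f}")  # near zero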

4.1.2 Multi-aspect interfaces for component-based design

Existing formal design methodologies are either optimistic or pessimistic. The optimistic approach advocates strictly top-down, stepwise-refinement design, with the design team in full control of the complete design. It does not allow for some parts or aspects of the design to be unknown, unpredictable, or evolving. The pessimistic approach is strictly bottom-up, component-based design, where some components may be pre-existing, and the design of each component must consider an environment that behaves in an arbitrary, possibly adversarial way.

The centerpiece of our approach [4.1c] is the development of component interfaces that are much more expressive than traditional interfaces used in software or hardware designs. First, the interfaces we envision must not only specify, on some abstract level, what a component does, but also what the component expects the environment to do. Such "assume-guarantee interfaces" allow the designer of a component to adopt an optimistic view about some aspects of the other components, as if those aspects were under the designer's control, and at the same time adopt a pessimistic view about other aspects of the other components and the environment, which may be unknown at design time or unpredictable. Second, the interfaces we envision not only specify aspects that are traditionally specified in interfaces, such as the number and types of the arguments of a procedure, but permit the specification of a wide variety of different aspects [4.1l]. There has been considerable work on functional interface languages, little on timing and security, and virtually none on other system aspects such as resource management, performance, and reliability. This lack of multi-aspect interface formalisms has forced designers to address timing, security, performance, and reliability issues at all levels of the implementation in order to attain the desired properties for the global system.

We propose to develop for SISs a theory of composition for multi-aspect interfaces [4.1g] which expose resource properties, such as real-time assumptions and guarantees. Based on such a theory, we will develop algorithms and tools for checking the consistency and compatibility of multi-aspect interfaces [4.1o]. Multi-aspect interfaces also benefit the validation, portability, and evolvability of a system. The validation task can be decomposed into two largely independent phases [4.1e]: the validation that each component satisfies its interface, and the validation that the overall system requirements are met, given the component interfaces.

For the latter phase, we can use the interfaces to construct simulation models that exhibit worst-case component behavior, and validate the system with respect to such simulation models.
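As a minimal sketch of what such a formalism might look like (the property names are hypothetical and stand in for arbitrary aspects), an assume-guarantee interface can be checked for compatibility without inspecting any implementation:

from dataclasses import dataclass, field

@dataclass
class Interface:
    name: str
    assumes: set = field(default_factory=set)     # required of the environment
    guarantees: set = field(default_factory=set)  # provided by the component

def compatible(a: Interface, b: Interface) -> bool:
    """Each side's assumptions must be discharged by the other's guarantees;
    a full theory would instead export unmatched assumptions to the
    environment, which this sketch omits."""
    return a.assumes <= b.guarantees and b.assumes <= a.guarantees

sensor = Interface("sensor", assumes={"power"}, guarantees={"reading_10hz"})
logger = Interface("logger", assumes={"reading_10hz"}, guarantees={"power"})
print(compatible(sensor, logger))  # True: the composition is well-formed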

Interfaces that Expose Real-time Behavior. Traditional API component specifications are usually based on an informal, English-based description of the functionality, the only formal information being input and output type information for function calls.

This situation is radically different for real-time systems, where the quantitative differences of the underlying architectures (in terms of speed, memory, and response times) play an essential role in determining the viability of the whole design, and are not masked by programming abstractions. The quantitative timing aspects of component behavior, however, are captured only minimally by the current informal approach to component specification. The real-time aspects of behavior are either specified generically in separate documents, or must be inferred from a detailed analysis of the architecture. By contrast, multi-aspect interfaces will permit the specification of timing assumptions and guarantees in an abstract, platform-independent way [4.1f]. An assume-guarantee interface specifies that the component guarantees certain behavior, under the assumption that the environment behaves correctly. To support the construction of systems derived from composing multi-aspect interfaces, we will develop techniques for assume-guarantee compositional reasoning about aspects that include timing, resource, and performance properties [4.1m].

Interfaces that Expose Probabilistic Behavior. Multi-aspect interfaces will be able to capture not only deterministic guarantees, but also probabilistic properties connected to performance and reliability, such as the probability of meeting deadlines [4.1k]. In fact, many timeliness properties of real-time systems can be studied only in a probabilistic setting. We will develop assume-guarantee techniques for reasoning compositionally about probabilistic real-time interfaces. In addition, we will develop algorithms for checking the compatibility of such interfaces. The focus on the interface level, without regard to implementation details, will ensure compositionality and a degree of abstraction that makes the application of formal tools feasible [4.1x, 4.1z].
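A minimal sketch (assuming independent components and hypothetical numbers) of how the timing and probabilistic aspects just described might be exposed and composed at the interface level:

from dataclasses import dataclass

@dataclass
class TimedInterface:
    latency_ms: float  # guaranteed worst-case response time
    p_meet: float      # probability that the guarantee holds

def compose_serial(a: TimedInterface, b: TimedInterface) -> TimedInterface:
    """Serial composition: latencies add; assuming independent failures,
    the probabilities of meeting the guarantees multiply."""
    return TimedInterface(a.latency_ms + b.latency_ms, a.p_meet * b.p_meet)

sense = TimedInterface(latency_ms=5.0, p_meet=0.999)
route = TimedInterface(latency_ms=20.0, p_meet=0.99)
pipeline = compose_serial(sense, route)
# Check a system-level deadline property purely at the interface level:
print(pipeline.latency_ms <= 30.0 and pipeline.p_meet >= 0.98)  # True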

4.1.3 Software Quality: Combining static and dynamic software analysis

Our goal is to invent a new class of techniques and tools for finding and removing defects in complex software systems. With very few exceptions, the current tools available to software developers can be classified as either exclusively static (the analysis is done by reasoning about the program text without running the program [4.1s, 4.1t, 4.1u]) or exclusively dynamic (the analysis is done by executing the program). We believe there is a class of useful tools that combine static and dynamic approaches and that research in this direction is likely to be productive and to require new, basic techniques.

For example, type checking (a static analysis) excels at reliably detecting certain classes of bugs without the need to execute the program. Software testing (executing a program on sample inputs, a dynamic analysis) is the mainstay of quality assurance departments in industry. Both approaches have inherent limitations. By their nature, static analysis tools can prove useful but limited properties of all program executions. Dynamic analysis tools can prove very precise properties, but only of single executions. We envision a better world, made possible by a tight coupling of static and dynamic analysis techniques. In this world, static analysis techniques allow the construction of "active" testing infrastructures that use information derived from static analysis of the program to automatically produce customized test harnesses and tests targeted at particular semantic properties. In addition, the infrastructure provides a framework for the addition of other static or dynamic analyses, such as residual analysis. Such mixed dynamic/static analysis tools are effective because the two approaches are naturally complementary, and in fact mutually reinforcing in well-designed applications.

Statically Assisted Testing. This is an example of a dynamic-then-static analysis. Using static analysis techniques to infer needed information from the program source, it should be possible to automate much of the process of constructing and maintaining a test infrastructure. Given an initial test suite with some coverage, how much can coverage be improved by purely static techniques? By a combination of instrumented tests and static analysis, it should be possible to perturb given tests to improve coverage. Dually, given a set of tests, how orthogonal are they, and can we find a minimal equivalent subset without a brute-force solution that examines all possible subsets? Finally, given a program, a test suite, and a change to the program, we can use dependency analysis (a static analysis) to eliminate tests that do not need to be rerun because their results could not be affected by the change.
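As an illustration of the last idea (the call graph and names are hypothetical; a real tool would extract them by static analysis), dependency-based test elimination reruns only the tests whose transitive dependencies reach the changed function:

# Assumed output of a static analysis: direct dependencies of each test
# and of each program function.
TEST_DEPS = {
    "test_alarm":  {"check_threshold", "read_sensor"},
    "test_report": {"format_report"},
}
STATIC_DEPS = {"check_threshold": {"read_sensor"}, "format_report": set()}

def reaches(deps, changed, graph):
    """Transitive closure over the call graph from a test's direct deps."""
    seen, stack = set(), list(deps)
    while stack:
        f = stack.pop()
        if f in seen:
            continue
        seen.add(f)
        stack.extend(graph.get(f, ()))
    return changed in seen

changed = "read_sensor"
to_rerun = [t for t, d in TEST_DEPS.items() if reaches(d, changed, STATIC_DEPS)]
print(to_rerun)  # only test_alarm can be affected by the change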

Residual Analysis. This is an example of a static-then-dynamic analysis. Static analysis is useful for proving conservative bounds on a program property. The idea of emitting residual code to cover, at run time, the cases that cannot be dealt with statically is well understood from the fields of partial evaluation and soft (or dynamic) typing [4.1p, 4.1q, 4.1r]. The next step is to use the residual code to drive testing: the inserted checks show where testing effort should be focused, and the information computed by the static analysis suggests a range of test cases to try. By replacing memory safety in this discussion with any other desired safety property, we see that residual analysis is one general recipe for a class of static-then-dynamic analyses.
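A minimal sketch of the recipe, using array-bounds safety as the property and an assumed interval analysis for the index:

def access(buf, idx, idx_range):
    """idx_range = (lo, hi) is the interval that a static analysis is
    assumed to have inferred for idx at this program point."""
    lo, hi = idx_range
    if 0 <= lo and hi < len(buf):
        return buf[idx]          # statically proven safe: no residual check
    # Residual check: safety could not be proven here, so a run-time test
    # remains, and this site is a suggested target for test generation.
    if not (0 <= idx < len(buf)):
        raise IndexError(f"residual check failed: index {idx}")
    return buf[idx]

data = [1, 2, 3, 4]
print(access(data, 2, (0, 3)))   # proven branch, check elided
print(access(data, 3, (0, 9)))   # unproven branch, residual check runs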

4.1.4 Information Assurance for SISs

Information assurance for SISs includes both lightweight authentication and scalable key distribution techniques, which will be discussed in § 4.3. Here we discuss information processing for possibly compromised SISs. We will study the use of traditional fault-tolerant networking techniques such as replication and partitioning of network services, redundancy of network resources, and survivable network overlays (including replica creation and management). In the area of self-healing of networks, we will bring to bear techniques from statistical signal processing in two different ways. When a rogue node starts broadcasting information that deviates widely from hypotheses determined to be true either on the basis of past data or from a number of other (possibly uncorrupted) nodes [4.1z], we will use decision networks to treat the rogue node's information as an outlier, discarding it while propagating the supported hypothesis up the hierarchy of the SIS. In the event of a large-scale attack on a group of nodes, we will explore the use of game-theoretic methods (against an intelligent adversary) to isolate the group of nodes under attack and propagate alarms up the SIS hierarchy about the need for intervention.
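As a minimal sketch of the outlier treatment (the fault model and rejection threshold are assumed), a robust median-based filter discards a rogue node's deviant readings and propagates only the surviving consensus value:

import statistics

def filter_rogues(readings, k=3.0):
    """Discard readings deviating from the group hypothesis by more than
    k median-absolute-deviations; return the consensus and the suspects."""
    med = statistics.median(readings.values())
    mad = statistics.median(abs(v - med) for v in readings.values()) or 1e-9
    kept = {n: v for n, v in readings.items() if abs(v - med) <= k * mad}
    rogues = set(readings) - set(kept)
    return statistics.mean(kept.values()), rogues

readings = {"n1": 21.0, "n2": 20.7, "n3": 21.3, "n4": 95.0}  # n4 compromised
consensus, rogues = filter_rogues(readings)
print(f"propagate {consensus:.1f} up the hierarchy; flag rogue nodes {rogues}")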

4.2 System Availability (Target: 2-3 pages) (Faculty: Patterson, Yelick)

High availability of all levels of an SIS is a prerequisite to wide-scale acceptance, but the techniques used to achieve high availability differ across the system. Within sensor networks, the hardware components are unreliable, and there are few extra resources available for replication – availability comes from inherent redundancy in the collected data and the algorithms that use it. Within a cluster, the individual hardware components are more reliable, but at very large scales component failures are still an issue and particular data values may be essential, so explicit measures are needed to ensure availability.

The two major components of availability are the failure rate and the repair time. Availability may be improved either by reducing the failure rate or by improving repair time. Failures may be due to hardware, software, or human errors. The better design and software technology described in the previous section will reduce software errors, but will not eliminate them. Conventional techniques of fault tolerance address the failure rate of systems, primarily through the use of redundant hardware and data. While we plan to employ these techniques within an SIS cluster, we propose to investigate three new areas in high-availability systems: Repair-Centric Design, Availability Modeling, and Performance Fault Adaptation.

4.2.1 Repair-Centric Design

An important factor in the availability of real systems is human-induced failure. Data from the late 1970s reveals that operator error accounted for 50-70% of failures in electronic systems, 20-53% of missile system failures, and 60-70% of aircraft failures [4.2e]. Data from Tandem [4.2g], VAX [4.2l], the telephone switching system [4.2j], and Oracle [4.2k] all place the fraction of system failures due to humans between 40% and 50% of all failures.

Our approach to this problem is called repair-centric design, which assumes that hardware, software, and human failures are certain, and provides rapid and effective mechanisms for detecting and recovering from them. These mechanisms should be designed to make as few assumptions about failure characteristics as possible, and they should provide means to recover from unanticipated catastrophic failures that make it past any standard fault-tolerance lines of defense. Furthermore, a repair-centric system design has to go beyond simply providing recovery mechanisms. To truly have a repair-centric design, recovery mechanisms have to be treated as first-class parts of the system, and integrated into a framework that manages their use and guarantees their effectiveness. Finally, rather than treating the human operator as only the last line of defense in a system whose other recovery mechanisms have failed, a repair-centric system will design in features to aid the human operator.

A full discussion of repair-centric system design is outside the scope of this proposal, but we identify a few important techniques here. First, a repair-centric system inherently requires redundancy of hardware and data in the form of clustered hardware with replicated state. The system design must be partitionable to support fault containment and to provide the means of safely exercising repair mechanisms; again, physically partitioned designs such as clustered organizations seem appropriate. To achieve the goal of quickly detecting failures, repair-centric systems should be built to incorporate extensive self-testing and checking at the component and system-wide levels. As part of this detection, a repair-centric system should strive to expose and repair latent errors in the system before they are activated; the kinds of “normal accidents” analyzed by Perrow [4.2a] often occur only when many latent errors have accumulated in the system and are all activated simultaneously in a chain-reaction cascade of failures.


To ensure that the repair mechanisms are trustworthy, recovery code will be periodically tested in situ as part of normal system operation, allowing automated recovery mechanisms to be exercised and verified in the production environment. When repair requires human intervention, those mechanisms will be exercised as well: the human operators should be subjected to realistic “fire-drill” simulations of failures and repair, allowing them to become familiarized with the system’s failure modes, maintenance interfaces, and recovery procedures, all in the realistic context of the production environment. Such realistic, on-the-job operator training helps human operators calibrate their mental models of the system and allows them to make mistakes and learn from them, an essential part of gaining familiarity and confidence with system repair tasks. The last key guarantee for a repair-centric system is that it must tolerate further errors and failures during recovery, repair, and maintenance. In the large-scale systems being built today, the statistical probability of double failures is becoming non-negligible. Furthermore, with human operators involved in recovery and repair procedures, human-induced failures during these procedures are inevitable. A common error in recovering from a disk failure in a RAID system, for example, is replacing the wrong disk. We propose to provide a set of undo primitives, ranging from a single-point undo to handle human errors in replacing system components, to a global mechanism that allows the rollback and selective replay of a set of software upgrades and system configuration changes.
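A minimal sketch of such undo primitives, against a hypothetical configuration store (selective replay is omitted for brevity):

class UndoableConfig:
    def __init__(self):
        self.state, self.journal = {}, []

    def set(self, key, value):
        self.journal.append((key, self.state.get(key)))  # record the inverse
        self.state[key] = value

    def undo_last(self):                       # single-point undo
        key, old = self.journal.pop()
        if old is None:
            self.state.pop(key, None)
        else:
            self.state[key] = old

    def checkpoint(self):
        return len(self.journal)

    def rollback(self, mark):                  # global rollback to a checkpoint
        while len(self.journal) > mark:
            self.undo_last()

cfg = UndoableConfig()
cfg.set("raid.active_disks", 8)
mark = cfg.checkpoint()                 # taken before a risky upgrade
cfg.set("raid.active_disks", 7)         # operator pulls the wrong disk
cfg.undo_last()                         # single-point undo of the human error
cfg.set("version", "2.0")
cfg.rollback(mark)                      # roll back the entire upgrade
print(cfg.state)                        # {'raid.active_disks': 8}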

4.2.2 Availability Modeling

There are several problems with the current models used to measure availability when applied to SISs. First, many of the models use a binary value for availability, in which a system is either up or down. In practice, an SIS may never be entirely up or entirely down, so a more continuous model is needed, one that takes into account both the observed performance of the system and the quality of its results. Just as a search engine may be either slow or incomplete, so too may an SIS make decisions of varying quality and cost, depending on which components of the system are available. We propose research on several aspects of availability models for SISs.
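As a trivial illustration of the continuous model (the metric and its inputs are hypothetical), availability can be scored by observed performance and result quality rather than by a binary up/down state:

def availability_score(perf_fraction, quality_fraction):
    """Continuous availability: 1.0 is fully up; degradation in either
    observed performance or result quality lowers the score."""
    return perf_fraction * quality_fraction

print(availability_score(1.0, 1.0))   # 1.0: fully up
print(availability_score(0.7, 0.9))  # 0.63: degraded, but far from "down"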

To achieve success we need to be able to measure it: the factor-of-10,000 improvement in hardware performance over the last two decades is due in part to widely accepted benchmarks used to evaluate results.

Availability benchmarks involve the use of fault injection to create failures and bring the system into states where maintenance is required. Accurate fault injection requires two things: a fault model capable of inducing a set of failures representative of what is seen in the real world, and a harness to inject those faults. In our previous work, we have defined fault models for storage systems based on disk failures [4.2c], but we will need a much broader fault model to capture the behavior of clusters within SISs. We will use commercial failure data from existing cluster-based servers when it is available, combined with synthetic workloads to study particular fault patterns as well as randomized behavior.
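A minimal harness sketch (the fault model and rates are synthetic placeholders) conveys the intended structure of such a benchmark:

import random

FAULT_MODEL = ["disk_fail", "disk_slow", "net_drop"]  # assumed representative set

def run_benchmark(workload_ops=1000, fault_prob=0.01, seed=3):
    """Inject faults from the model into a synthetic workload and record
    quality of service, not just a binary up/down outcome."""
    random.seed(seed)
    ok, degraded, injected = 0, 0, []
    for _ in range(workload_ops):
        if random.random() < fault_prob:
            injected.append(random.choice(FAULT_MODEL))
            degraded += 1   # op still served via redundancy, at lower quality
        else:
            ok += 1
    return ok / workload_ops, degraded / workload_ops, injected

full, degr, faults = run_benchmark()
print(f"full-quality ops: {full:.1%}, degraded ops: {degr:.1%}, faults: {len(faults)}")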

Benchmarking availability will be much harder than benchmarking performance, especially for those aspects that involve measuring the behavior of human subjects. There are many research areas in academia, including human-centered systems, that are based on measuring human subjects. We will collaborate with experts in those areas to measure people and systems as maintenance events are inserted.

The benchmarks will tell us what behavior occurs, but not necessarily why. High-availability systems should provide assistance in diagnosing the root cause of problems once they have been detected. Because of the complexity of SISs, the interdependencies between components may be unknown, and will therefore be inferred by injecting realistic test inputs and checking the resulting outputs for both correctness and performance. To aid in diagnosis, repair-centric systems should automatically track the health of all components, and use techniques such as dependency analysis to automatically pinpoint the root cause of detected problems [4.2b].

4.2.3 Performance Fault Adaptation

Nearly all extant reliable or fault-tolerant system designs attempt to mask faults to the greatest extent possible; the goal is to provide the illusion of perfectly functioning hardware or software to higher layers of the system. Were it possible to completely mask faults, this would be a justified approach, but in practice, while many faults can be masked from a functionality point of view, they often result in a kind of performance fault. For example, a system that mirrors persistent state on two disks, but puts the secondary copy on a disk that holds the primary copy of another data set, will suffer a performance fault if a disk fails, since one disk must then serve twice as much data [4.2d]. While fault masking is one important source of performance faults, they may also arise from many other factors, such as network congestion, load spikes, or disk fragmentation. Performance faults are surprisingly common even on dedicated, homogeneous clusters, especially as the systems scale to large numbers of components. Performance faults are easily handled in clusters that are processing many small independent tasks in parallel, such as web servers or transaction processing systems. However, for decision support problems, such as predicting the effects of an earthquake on a particular building, the entire system may run at the speed of the slowest component. We propose to use dynamic load balancing techniques to address performance faults. We will investigate two techniques for detecting performance faults: the first is explicit notification of faults to higher-level software from a lower-level fault-masking system; the second is implicit inference of performance faults based on measurements of the progress of operations as they execute.
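A minimal sketch of the implicit approach (progress reports are assumed to be available at a common wall-clock instant):

def find_stragglers(progress, slack=0.5):
    """progress maps each parallel task to its fraction of work completed;
    a task far behind the median is flagged as a suspected performance
    fault so its remaining work can be rebalanced."""
    typical = sorted(progress.values())[len(progress) // 2]  # median progress
    return [t for t, p in progress.items() if p < slack * typical]

progress = {"shard0": 0.82, "shard1": 0.78, "shard2": 0.31, "shard3": 0.80}
print(find_stragglers(progress))  # ['shard2']: rebalance its remaining work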

In summary, the goals of our availability work are to 1) Develop principles of repair-centric design, where errors occur due to hardware, software, and people; 2) Characterize failures by hardware, software, and people to develop a failure workload; 3) Develop availability benchmarks to measure systems; 4) Demonstrate the benchmarking principles by running availability benchmarks on an SIS designed according to those principles, and contrast its behavior to traditional systems; 5) Develop models to identify the cause of an observed failure; and 6) Develop algorithms to detect and adapt to load imbalance due to performance faults.

4.3 Security and Policy (Target: 1 page) (Faculty: Tygar, Wagner, Samuelson)

A central theme throughout this proposal is the use of widespread sensors to collect and deliver information and of distributed systems to respond and act upon the information presented. This presents a number of important security challenges:

(1) We need to be able to authenticate and secure messages sent from, to, and among the sensors. It is unacceptable if a hostile party is able to trigger a major alert for the disaster response application, for example. While cryptography can solve many of these problems, sensors pose a particularly difficult technical challenge because they will typically have minimal computational abilities. We have developed a family of lightweight broadcast/multicast stream authentication and signature algorithms, TESLA and EMSS [4.3a, 4.3b], which we have implemented on Berkeley's experimental SmartDust [4.3c] platform. This provides for security on an 8-bit device with only 8 KBytes of ROM and 512 bytes of RAM. We are also exploring access control through the ELK key distribution protocol [4.3d]. In this work, we will adapt and build security protocols for sensor-class devices. In particular, we will work on the problem of continual re-authentication of devices so as to keep them from being subverted. (A minimal sketch of the one-way key chain idea underlying this approach appears after item (4) below.)

(2) The use of sensors raises significant privacy questions. How can we collect data and provide maximum assurance that it will not be misused? In many cases, only aggregate information can be provided. We will explore both technical and policy aspects of this question [4.3e, 4.3f, 4.3g, 4.3n, 4.3o, 4.3p].

(3) Large-scale monitoring systems will need to aggregate data and distribute computation across platforms controlled by participants with diverse interests. Current approaches to security, based on perimeter security, are brittle and scale poorly: since any intruder who bypasses the firewall gains full access to the system, a single intrusion can defeat the entire system. We will seek lightweight techniques to ensure that SISs degrade gracefully even in the presence of a small number of malicious parties or intrusions. We will study two approaches to improving robustness. First, we will study algorithms and distributed data structures that are resilient even in the presence of a small fraction of maliciously chosen inputs or untrustworthy protocol participants, and we will examine techniques such as random sampling, averaging, and replication for this purpose. Second, we will study programming disciplines for reducing the risk of vulnerability in security-critical source code, and we will develop static and runtime tools for identifying violations of these "good security hygiene" guidelines [4.3h, 4.3i, 4.3j, 4.3k, 4.3l, 4.3m]. Finally, information processing techniques for handling “outlier quality” information are discussed in § 4.1.4 above.

(4) Recent legislation and policies in the area of cyberlaw, such as the Digital Millennium Copyright Act, have profound implications for the ability of outside organizations to legally test the security and responsiveness of security devices, including distributed sensors [4.3q, 4.3r]. We will examine the consequences of these policies and highlight potential issues. When appropriate, we will also examine potential policy recommendations.
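As promised under item (1), here is a minimal sketch of the one-way key chain underlying TESLA-style stream authentication (time synchronization, MAC buffering, and packet formats are omitted):

import hashlib, hmac

H = lambda b: hashlib.sha256(b).digest()

def make_chain(seed: bytes, n: int):
    """Generate keys by repeated hashing; they are used in reverse order,
    so chain[0] serves as the public commitment K0."""
    chain = [seed]
    for _ in range(n):
        chain.append(H(chain[-1]))
    return chain[::-1]

chain = make_chain(b"sender secret", 100)
commitment = chain[0]

def verify_key(key: bytes, i: int) -> bool:
    """Hash a disclosed interval-i key forward to the commitment."""
    for _ in range(i):
        key = H(key)
    return key == commitment

k5 = chain[5]                                    # key disclosed in interval 5
print(verify_key(k5, 5))                         # True: key is authentic
tag = hmac.new(k5, b"sensor reading", hashlib.sha256).digest()  # per-packet MAC

Because each disclosed key is verified against a previously authenticated commitment using hash operations alone, the receiver-side cost fits sensor-class devices.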

4.4 Algorithms (Faculty: Papadimitriou, Demmel)

The CITRIS effort will require a new generation of algorithmic ideas and techniques, in three rough categories: (1) algorithms related to the applications, including visualization, supported by the proposed SISs; (2) general algorithmic problems involved in the design of the SIS architecture; and (3) algorithmic problems involved in the operation of the specific SISs proposed. A particular strength of the investigator team is its deep interest in basic algorithmic methods for the analysis of complex systems. In this section we briefly expand on algorithms for items (2) and (3); algorithms for all three items will also be developed integrally as part of the proposed research in §3 and §4.1-4.3.

In the area of heterogeneous networking, we have embarked on a foundational algorithmic study of the optimization problems related to congestion control and network design in the context of the Internet [see REF1]. The ab initio design of the SISs in CITRIS will be a valuable test bed for the ideas that have emerged from that work, and an opportunity for adapting them to novel environments. The SISs to be built by CITRIS will resemble the Internet in rough scale, but will differ dramatically from it in their socio-economic nature (built and operated by entities of varying degrees of cooperation and trust, but not predominantly competitive) and in their data and traffic profiles (steady streams, very infrequent bursts, mostly non-time-critical data, deep priority hierarchies). The protocols developed for these SISs, as well as their interconnection technologies and topologies, will probably not resemble Internet protocols or technologies except at the very highest levels. Of specific interest is the development of new lightweight protocols for SISs and their optimized design. The design of the SIS network architecture will deeply influence (and will be based on) network operation characteristics. There will be data compression (both lossless and lossy) at every level of the network: communication protocols at the sensor level will exploit data correlations; fundamental tradeoffs between computing and communicating, measured in nJ per bit transmitted or processed locally and balancing total energy against peak power consumption, will need to be made; tradeoffs of latency versus energy and power consumption will need to be made online, depending on the urgency of the situation being monitored and communicated; application-dependent data combining and data mining will occur at the server level, with further combining, mining, and prioritizing at higher network levels; and all of this will run alongside background traffic of raw data for archiving, whose levels of priority reflect potential criticality. These performance issues will complement concerns of trust, security, privacy, and dependability in a genre of novel algorithmic problems of unprecedented scope and scale, combining for the first time three basic aspects: communication, analysis/processing, and storage/protection of data.
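As a minimal sketch of one such online tradeoff (all energy figures are hypothetical), a node can decide whether to compress locally before transmitting, with an urgency flag trading latency against energy:

E_TX_NJ_PER_BIT = 150.0   # assumed radio cost per bit transmitted
E_CPU_NJ_PER_BIT = 5.0    # assumed cost to compress one input bit locally
COMPRESSION_RATIO = 0.4   # assumed output/input size ratio

def plan_transmission(bits, urgent=False):
    """Compress only when the computation energy is less than the
    transmission energy it saves; urgent traffic skips the latency."""
    raw = bits * E_TX_NJ_PER_BIT
    compressed = bits * E_CPU_NJ_PER_BIT + bits * COMPRESSION_RATIO * E_TX_NJ_PER_BIT
    if urgent:
        return "send raw", raw        # alarm traffic: minimize latency
    if compressed < raw:
        return "compress then send", compressed
    return "send raw", raw

print(plan_transmission(8_000))               # routine reading: compress
print(plan_transmission(8_000, urgent=True))  # alarm: send immediately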

5. Education and Outreach (Target: .25 pages)

In addition to an ambitious program of curriculum development at UC Berkeley and UC Davis, our main outreach partner is UC Merced (see the attached letter from Provost Designate David Ashley in Appendix B). UC Merced will be the first new American research university built in the twenty-first century, located in California’s great agricultural heartland. While the new campus will draw students from all over California, it has a special mission to expand participation of underrepresented students, particularly Hispanic, first-generation college-going, low-income, and rural students from the San Joaquin Valley. These students are not well represented in technology-oriented science and engineering. UC Merced plans to open for instruction in Fall 2004 with 100 faculty, half in science and engineering fields. Our partnership will help contribute to UC Merced’s ability to recruit outstanding faculty working at the cutting edge of engineering and information technology. Some recruitment targets will be faculty who can contribute directly to the work on SISs and who would hold periodic residences at UC Berkeley and UC Davis. These faculty in turn will be able to recruit outstanding graduate students, on whom UC Merced’s excellence in graduate education will be based. Because faculty recruitment will be phased over the next three years, UC Merced will be able to contribute to research beginning in Fall 2002. The area of energy efficiency has special importance to UC Merced as a new university. As part of a region characterized by extremes of 100°+ heat and near-freezing temperatures, UC Merced has the opportunity to be a model of environmentally effective design in all aspects of its building program and campus operations. Thus, beginning with the Spring 2002 groundbreaking, UC Merced will serve as a test bed for research applications in SISs for energy efficiency. Making the building of the campus a target of research will be UC Merced’s unique contribution to this proposal.

Another critical element of the partnership is the development of undergraduate education at UC Merced. UC Merced faculty will collaborate with UC Berkeley and UC Davis on developing lower-division preparation courses in a technology-based format for remote delivery. In addition, this spring, UC Merced will begin its outreach to Valley Community College students through publication of curricular pathways that will prepare students to transfer to UC Merced’s science and engineering programs. Ultimately, these courses will support the undergraduate education of all UC Merced students, whether they begin their education on campus or prepare for UC Merced through a Community College articulation program.

References

[1a] CITRIS – Center for Information Technology Research in the Interest of Society, 2001.

[1b] BSAC – Berkeley Sensor and Actuator Center, 2001.

[2.1a] J. Rabaey, E. Arens, C. Federspiel, A. Gadgil, D. Messerschmidt, W. Nazaroff, K. Pister, S. Oren, P. Varaiya, “Smart Energy Distribution and Consumption ─ Information Technology as an Enabling Force”, 2001.

[2.1b] Interlaboratory Working Group 2000, “Scenarios for a Clean Energy Future”, Oak Ridge NL (Technical Report ORNL/CON-476) and Lawrence Berkeley NL (Technical Report LBNL-44029), November 2000, Chapter 4

[2.2a] Board on Natural Disasters, “Reducing Disaster Losses Through Better Information”, National Research Council, 1999, 72 pages.

[2.2b] Newman, M.W. and J.A. Landay. Sitemaps, Storyboards, and Specifications: A Sketch of Web Site Design Practice as Manifested Through Artifacts. In Proceedings of ACM Conference on Designing Interactive Systems. New York City. pp. 263-274, August 2000.

[2.2c] Elliott, A. and Hearst, M. How Large Should a Digital Desk Be? Qualitative Results of a Comparative Study in the Proceedings of CHI'00, Conference Companion, The Hague, Netherlands, 2000.

[2.2d] Heaton, T. (1996). “The TriNet Project,” Proceedings, Eleventh World Conference on Earthquake Engineering, Elsevier Science Ltd., Paper No. 2136.

[2.2e] TriNet (2001). “TriNet: Seismic System for Southern California.”

[2.2f] Buika, J. et al. (1998). “Advances in Scientific and Engineering Post-Earthquake Operational Response and Disaster Intelligence,” Proceedings, Sixth U.S. National Conference on Earthquake Engineering.

[2.2g] Scrivner, C., Worden, C.B. Wald, D.J. (2000). “Use of TriNet ShakeMap to Manage Earthquake risk,” Proceedings, Sixth International Conference on Seismic Zonation: Managing Earthquake Risk in the 21st Century, EERI.

[2.2h] S.Sandeep Pradhan and Kannan Ramchandran, Distributed source coding using syndromes (DISCUS): Design and construction, Proc. IEEE Data Compression Conference (DCC), 1999.

[2.2i] McConnell, K.G. (1995) Vibration Testing: theory and practice, Wiley, p. 606.

[2.2j] Farrar, C.R., Doebling, S.W., Nix, D.A., (2001) Vibration-Based Structural Damage Identification, Philosophical Transactions of the Royal Society: Mathematical, Physical & Engineering Sciences, 359(1778), pp. 131 – 149

[2.2k] Farrar, C.R. and Doebling, S.W., (1997) Lessons Learned from Applications of Vibration-Based Damage Identification Methods to Large Bridge Structures, Proc. of the International Workshop on Structural Health Monitoring, Stanford, CA, Sept 1997, pp. 351-370.

[2.2l] Beck, J.L. (1978). Determining Models of Structures from Earthquake Records. Earthquake Engineering Research Laboratory 78-01. California Institute of Technology.

[2.2m] Safak, E. (1997). Propagation of Seismic Waves in Tall Buildings. Tall Buildings for the 21st Century (Proceedings of the 4th Conference on Tall Buildings in Seismic Regions), 129-154.

[2.2n] Udwadia, F. E. (1985). Some Uniqueness Results Related to Soil and Building Structural Identification, SIAM J. Applied Math., 45(4), p. 674.

[2.2o] Werner, S.D., Crouse, C.B., Katafygiotis, L.S., and Beck, J.L. (1994) Use of strong motion records for model evaluation and seismic analysis of a bridge structure, Proceedings of the Fifth U.S. National Conference on Earthquake Engineering, 1, 511-520.

[2.2p] Stewart, J.P, Fenves, G.L. (1998) “System Identification for Evaluating Soil-Structure Interaction Effects in Buildings from Strong Motion Recordings”, Earthquake Engineering and Structural Dynamics, 27(8), pp. 869-885.

[2.2q] Arici, Y. and K. M. Mosalam, (2000) System Identification and Modeling of Bridge Systems for Assessing Current Design Procedures, Proceedings of SMIP2000 Seminar, Sept. 14, Sacramento, CA

[2.2r] Baise, L.G., and Glaser, S.D. (2000), Repeatability of Site Response Estimates Made Using System Identification, Bulletin of the Seismological Society of America, 90(4), pp. 993-1009.

[2.2s] Glaser, S.D., and Baise, L.G. (2000), System Identification Estimation of Damping and Modal Frequencies at the Lotung Site, Soil Dynamics and Earthquake Engineering, 19(6), pp. 521-531.

[2.2t] Lin, J.-S. and Zhang, Y. (1994). Nonlinear structural identification using extended Kalman filter. Computers and Structures, 52(4), p.757.

[2.2u] Koh, C. G.; See, L. M, "Identification and uncertainty estimation of structural parameters," Journal of Engineering Mechanics, 120, 6, June 1994, pp. 1219-1236

[2.2v] Beck, J.L., and Katafygiotis, L.S. (1998). Updating Models and Their Uncertainties. I: Baysian Statistical Framework, Journal of Engineering Mechanics, 124(4), p. 455.

[2.2w] Smyth, A.W., Masri, S.F., Chassiakos, A.G., and Caughey, T.K. (1999). On-Line Parameter Identification of MDOF Nonlinear Hysteretic Systems, Journal of Engineering Mechanics, 125(2), p. 133.

[2.2x] Lus, H.; Betti, R.; Longman, R. W. "Identification of linear structural systems using earthquake-induced vibration data," Earthquake Engineering & Structural Dynamics, 28, 11, Nov. 1999, pp. 1449-1467.

[2.2y] Hjelmstad, K. D.; Banan, M. R.; Banan, M. R. “On Building Finite Element Models of Structures from Modal Response,” Earthquake Engineering & Structural Dynamics, 24(1), pp. 53-67.

[2.2z] DesRoches, R., Fenves, G.L. (1997). “Evaluation of Recorded Earthquake Response of a Curved Highway Bridge”, Earthquake Spectra, 13(3), pp. 363-386.

[2.2aa] Sohn, H., Law, K.H. (1997). “Bayesian Probabilistic Approach for Structure Damage Detection,” Earthquake Engineering & Structural Dynamics, 26(12), pp. 1259-1281.

[2.2bb] Archer, G.C., Fenves, G.L., Thewalt, C. (1999).“A New Object-Oriented Finite Element Analysis Program Architecture”, Computers & Structures, 70(1), pp. 63-75.

[2.2cc] McKenna, F., Fenves, G.L. (2000). “An Object-Oriented Software Design for Parallel Structural Analysis”, Advanced Technology in Structural Engineering, Structures Congress 2000, ASCE.

[3.1a] J. Hill, R. Szewczyk, A. Woo, D. Culler, S. Hollar, K. Pister, System Architecture Directions for Networked Sensors, 9th International Conference on Architectural Support for Programming Languages and Operating Systems, Nov. 2000, pp. 93-104

[3.1b] Chalermek Intanagonwiwat, Ramesh Govindan and Deborah Estrin. Directed Diffusion: A Scalable and Robust Communication Paradigm for Sensor Networks, ACM MobiCom 2000, August 00, Boston, MA.

[3.1c] Embedding the Internet, Special Issue of CACM, May 2000

[3.1d] Hellerstein et al. Interactive Data Analysis with CONTROL. IEEE Computer, August 1999

[3.2a] Microsoft Corporation, .Net Home Page.

[3.2b] Sun Corporation, Java Home Page.

[3.2c] R. Braden, L. Zhang, S. Berson, S. Herzog and S. Jamin, “ReSerVation Protocol (RSVP) Version 1 Functional Specification,” Internet RFC 2205, IETF Network Working Group, September 1997.

[3.2d] N. Duffeld, P. Goyal, A. Greenberg, P. Mishra, K. K. Ramakrishnan, J. E. Van der Merwe, “A Flexible Model for Resource Management in Virtual Private Networks," Proc. of ACM Sigcomm, pp. 95-108, (September 1999).

[3.2e] D. Verma, Supporting Service Level Agreements in IP Networks, Macmillan Publishing Company, New York, 1999.

[3.2f] C. Chuah, L. Subramanian, A. D. Joseph, R. H. Katz, “QoS Provisioning Using A Clearing House Architecture,” 8th International Workshop on Quality of Service (IWQOS 2000), Pittsburgh, PA, (June 2000).

[3.2g] S. D. Gribble, M. Welsh, R. von Behren, E. A. Brewer, D. Culler, N. Borisov, S. Czerwinski, R. Gummadi, J. Hill, A. Joseph, R. H. Katz, Z. M. Mao, S. Ross, B. Zhao, “The Ninja Architecture for Robust Internet-Scale Systems and Services,” Journal of Computer Networks, Special Issue on Pervasive Computing, to appear.

[3.2h] Z. Mao, W. So, R. H. Katz, “Network Support for Mobile Multimedia Using a Self-Adaptive Distributed Proxy,” ACM NOSSDAV 2001, New York, (June 2001).

[3.2i] J. Kubiatowicz, D. Bindel, Y. Chen, S. Czerwinski, P. Eaton, D. Geels, R. Gummadi, S. Rhea, H. Weatherspoon, W. Weimer, C. Wells, B. Zhao, “OceanStore: An Architecture for Global-Scale Persistent Storage,” Proceedings of the Ninth International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS 2000), November 2000.

[3.2j] S. Zhuang, B. Zhao, A. Joseph, R. H. Katz, J. Kubiatowicz, “Bayeux: An Architecture for Wide-Area, Fault-Tolerant Data Dissemination Protocol,” ACM NOSSDAV 2001, New York, (June 2001).

[3.2k] 3GPP specification.

[3.2l] B. Stiller, T. Braun, M. Gunter and B. Plattner, “The CATI project: charging and accounting technology for the Internet,” Proc. of 4th European Conference: Multimedia Applications, Services and Techniques, pp. 281-96, May 1999.

[3.3a] Tolga Urhan, Michael J. Franklin, Laurent Amsaleg. Cost Based Query Scrambling for Initial Delays, Proceedings of the ACM SIGMOD Conference, Seattle, WA, June 1998, pp. 130-141.

[3.3b] Joseph M. Hellerstein, Michael J. Franklin, Sirish Chandrasekaran, Amol Deshpande, Kris Hildrum, Sam Madden, Vijayshankar Raman, Mehul Shah. Adaptive Query Processing: Technology in Evolution, IEEE Data Engineering Bulletin, June 2000, pp 7-18.

[3.3c] Joseph M. Hellerstein Ron Avnur, Andy Chou, Chris Olston, Vijayshankar Raman, Tali Roth, Christian Hidber, Peter J.Haas. Interactive Data Analysis with CONTROL, IEEE Computer 32(8), August, 1999, pp. 51-59.

[3.3d] Ron Avnur, Joseph M. Hellerstein. Eddies: Continuously Adaptive Query Processing, Proceedings of the ACM SIGMOD Conference, Philadelphia, PA, June 2000.

[3.3e] Ugur Cetintemel, Michael J. Franklin, and C. Lee Giles. Self-Adaptive User Profiles for Large-Scale Data Delivery, Proceedings of the International Conference on Data Engineering, San Diego, CA, February, 2000, pp 622-633

[3.3f] Mitch Cherniack, Michael J. Franklin, Stan Zdonik. Expressing User Profiles for Data Recharging, accepted for publication, IEEE Personal Communications, April 2001 (to appear).

[3.3g] Mehmet Altinel, Michael J. Franklin. Efficient Filtering of XML Documents for Selective Dissemination of Information, Proceedings of the International Conference on Very Large Data Bases, Cairo, September 2000.

[3.4a] Olston, C., Woodruff, A., Aiken, A., Chu, M., Ercegovac, V., Lin, M., Spalding, Stonebraker, M. DataSplash, SIGMOD 1998, Seattle, Washington, June 1998.

[3.4b] Avnur, R., Hellerstein, J., Lo, B., Olston, C., Raman, B., Raman, V., Roth, T., Wylie, K. CONTROL: Continuous Output and Navigation Technology with Refinement On-Line, SIGMOD 1998, Seattle, Washington, June 1998.

[3.4c] Hearst, M., User Interfaces and Visualization, in Modern Information Retrieval, Baeza-Yates and Ribeiro-Neto (Eds), Addison-Wesley Longman, 1999.

[3.4d] Glaser, D. and Hearst, M. Space Series: Simultaneous display of spatial and temporal data, IEEE Symposium on Information Visualization, October 1999.

[3.4d] Newman, M.W. and J.A. Landay. Sitemaps, Storyboards, and Specifications: A Sketch of Web Site Design Practice as Manifested Through Artifacts. In Proceedings of ACM Conference on Designing Interactive Systems. New York City. pp. 263-274, August 2000.

[3.4e] Elliott, A. and Hearst, M. How Large Should a Digital Desk Be? Qualitative Results of a Comparative Study in the Proceedings of CHI'00, Conference Companion, The Hague, Netherlands, 2000.

[3.4f] Hong, J.I. and J.A. Landay, An Infrastructure Approach to Context-Aware Computing. Human-Computer Interaction, 2001. 16(2).

[3.4g] “Application of Information Visualization Techniques to Networked Sensors,” Philip Buonadonna and Jason Hill, 2000.

[3.4h] “CLOUDS: On-line database visualization”, C. Olston, T. Roth, A. Chou, J. Hellerstein, 2000.

[3.4i] “The Firewall Project”, A. Bhargava, K. Stewart, J. Archuleta, J. Cho, S. Schuman, J. Landay, 2000.

[3.4j] “Crossweaver”, J. Landay, A. Sinha, J. Lin, 2001.

[3.5a] Bertram, M., Duchaineau, M.A., Hamann, B. and Joy, K.I. (2000), Bicubic subdivision-surface wavelets for large-scale isosurface representation and visualization, in: Ertl, T., Hamann, B. and Varshney, A., eds., Visualization 2000, IEEE Computer Society Press, Los Alamitos, California, pp. 389-396.

[3.5b] Bertram, M., Duchaineau, M.A., Hamann, B. and Joy, K.I. (2000), Wavelets on planar tesselations, in: Arabnia, H.R., Coudoux, F.-X., Mun, Y., Power, G.P., Sarfraz, M. and Zhu, Q., eds., Proc. The 2000 International Conference on Imaging Science, Systems, and Technology (CISST 2000), Computer Science Research, Education, and Applications Press (CSREA), Athens, Georgia, pp. 619-625.

[3.5c] Bremer, P.-T., Hamann, B., Kreylos, O. and Wolter, F.-E. (2001), Simplification of closed triangulated surfaces using simulated annealing, in: Lyche, T. and Schumaker, L.L., eds., Mathematical Methods in CAGD: Oslo 2000, Vanderbilt University Press, Nashville, Tennessee, pp. 45-54.

[3.5d] Gieng, T.S., Hamann, B., Joy, K.I., Schussman, G.L. and Trotts, I.J. (1998), Constructing hierarchies for triangle meshes, IEEE Transactions on Visualization and Computer Graphics 4(2), pp. 145-161.

[3.5e] Hamann, B., Jordan, B.W. and Wiley, D.F. (1999), On a construction of a hierarchy of best linear spline approximations using repeated bisection, IEEE Transactions on Visualization and Computer Graphics 5(1/2), pp. 30-46, p. 190 (errata).

[3.5f] Heckel, B., Uva, A.E. and Hamann, B. (1999), Cluster-based generation of hierarchical surface models, in: Hagen, H., Nielson, G.M. and Post, F., eds., Proc. Scientific Visualization - Dagstuhl '97, second printing, IEEE Computer Society Press, Los Alamitos, California, pp. 113-122.

[3.5g] Kreylos, O. and Hamann, B. (1999), On simulated annealing and the construction of linear spline approximations for scattered data, in: Groeller, E., Loeffelmann, H. and Ribarsky, W., eds., Data Visualization '99 (Proc. "VisSym '99"), Springer-Verlag, Vienna, Austria, pp. 189-198.

[3.5h] Schussman, S.E., Bertram, M., Hamann, B. and Joy, K.I. (2000), Hierarchical data representations based on planar Voronoi diagrams, in: de Leeuw, W.C., van Liere, R., eds., Data Visualization 2000 (Proc. "VisSym '00"), Springer-Verlag, Vienna, Austria, pp. 63-72.

[3.5i] Trotts, I.J., Hamann, B. and Joy, K.I. (1999), Simplification of tetrahedral meshes with error bounds, IEEE Transactions on Visualization and Computer Graphics 5(3), pp. 224-237.

[3.5j] Bonnell, K.S., Schikore, D.R., Joy, K.I., Duchaineau, M.A. and Hamann, B. (2000), Constructing material interfaces from data sets with volume-fraction information, in: Ertl, T., Hamann, B. and Varshney, A., eds., Visualization 2000, IEEE Computer Society Press, Los Alamitos, California, pp. 367-372.

[3.5k] Gregorski, B.F., Hamann, B. and Joy, K.I. (2000), Reconstruction of B-spline surfaces from scattered data points, in: Magnenat-Thalmann, N. and Thalmann, D., eds., Proceedings of Computer Graphics International 2000, pp. 163-170.

[3.5l] Hamann, B., Kreylos, O., Monno, G. and Uva, A.E. (1999), Optimal linear spline approximation of digitized models, in: Banissi, E., Khosrowshahi, F., Sarfraz, M., Tatham, E. and Ursyn, A., eds., Proc. "1999 IEEE International Conference on Information Visualization (IV '99) - Computer Aided Geometric Design Symposium," IEEE Computer Society Press, Los Alamitos, California, pp. 244-249.

[3.5m] Kreylos, O., Ma, K.-L. and Hamann, B. (2000), A multi-resolution interactive previewer for volumetric data on arbitrary meshes, in: Outhyoung, M. and Shih, Z.-C., eds., Proc. 2000 International Computer Symposium – Workshop on Computer Graphics and Virtual Reality, Department of Computer Science and Information Engineering, National Chung Cheng University, Chiayi, Taiwan, R.O.C., pp. 74-81.

[3.5n] Kuester, F., Duchaineau, M.A., Hamann, B., Joy, K.I. and Ma, K. L. (2000), The DesignersWorkbench: towards real-time immersive modeling, in: Merritt, J.O., Benton, S.A., Woods, A.J. and Bolas, M.T., eds., Stereoscopic Displays and Virtual Reality Systems VII, Proc. SPIE Vol. 3957, SPIE – The International Society for Optical Engineering, Bellingham, Washington, pp. 464-472.

[3.5o] LaMar, E.C., Hamann, B. and Joy, K.I. (1999), Multiresolution techniques for interactive texture-based volume visualization, in: Ebert, D.S., Gross, M. and Hamann, B., eds., Visualization '99, IEEE Computer Society Press, Los Alamitos, California, pp. 355-361.

[3.5p] Scheuermann, G., Hamann, B., Joy, K.I. and Kollmann, W. (2000), Visualizing local vector field topology, Journal of Electronic Imaging 9(4), special section on visualization and data analysis, SPIE - The International Society for Optical Engineering, pp. 356-367.

[4.1a] T. A. Henzinger and S. S. Sastry, “Hybrid Systems: Computation and Control”, Springer Verlag Lecture Notes in CS, Vol. LNCS 1386, 1998.

[4.1b] C. J. Tomlin, J. Lygeros and S. Sastry, “A Game Theoretic Approach to Controller Design for Hybrid Systems”, Proceedings of the IEEE, Special Issue on Hybrid Systems, Vol. 88, July 2000, pp. 949-970.

[4.1c] T. Henzinger, S. Qadeer and S. Rajamani, “Decomposing refinement proofs using assume guarantee reasoning, “ Proceedings of the IEEE/ACM Conference on Computer Aided Design, ICCAD 2000, pp. 245-252.

[4.1d] T.A. Henzinger. “The theory of hybrid automata,” Proceedings of the 11th Annual IEEE Symposium on Logic in Computer Science, 1996, pp. 278-292.

[4.1e] R. Alur, L. de Alfaro, T.A. Henzinger, and F.Y.C. Mang. “Automating modular verification,”Proceedings of the Tenth International Conference on Concurrency Theory, Lecture Notes in Computer Science 1664, Springer-Verlag, 1999, pp. 82-97.

[4.1f] T.A. Henzinger. “Masaccio: a formal model for embedded components”, Proceedings of the First IFIP International Conference on Theoretical Computer Science, Lecture Notes in Computer Science 1872, Springer-Verlag, 2000, pp. 549-563.

[4.1g] R. Alur and D.L. Dill. “A theory of timed automata”, Theoretical Computer Science, 126:183--235, 1994.

[4.1h] R. Alur and T.A. Henzinger. “Modularity for timed and hybrid systems,” Proceedings of the Eighth International Conference on Concurrency Theory, Lecture Notes in Computer Science 1243, Springer-Verlag, 1997, pp. 74-88.

[4.1i] M. Abadi and L. Lamport. “Conjoining specifications,” ACM Transactions on Programming Languages and Systems, 17:507--534, 1995.

[4.1j] N.A. Lynch. Distributed Algorithms. Morgan-Kaufmann, 1996.

[4.1k] R. Segala. Modeling and Verification of Randomized Distributed Real-Time Systems. PhD thesis, MIT, 1995.

[4.1l] E.A. Lee. Embedded Software: An Agenda for Research. Technical Report UCB/ERL No. M99/63, University of California, Berkeley, 1999.

[4.1m] T.A. Henzinger, M. Minea, and V. Prabhu, “Assume-guarantee reasoning for hierarchical hybrid systems,” Proceedings of the Fourth International Workshop on Hybrid Systems: Computation and Control, Lecture Notes in Computer Science 2034, Springer-Verlag, 2001, pp. 275-290.

[4.1n] N.A. Lynch, R. Segala, F. Vaandrager, H.B. Weinberg. “Hybrid I/O Automata. Hybrid Systems III”, Lecture Notes in Computer Science 1066, Springer-Verlag, 1996, pp. 496-510.

[4.1o] E.M. Clarke, O. Grumberg, and D.A. Peled. Model Checking. MIT Press, 1999.

[4.1p] A. Aiken, E. Wimmers, and T.K. Lakshman. Soft typing with conditional types. Proceedings of the Twenty-First Annual ACM Symposium on Principles of Programming Languages, 1994, pp. 163--173.

[4.1q] D. Bjorner, A.P. Ershov, and N.D. Jones, editors. Partial Evaluation and Mixed Computation. North-Holland, 1988.

[4.1r] R. Cartwright and M. Fagan. Soft typing. Proceedings of the ACM SIGPLAN Conference on Programming Language Design and Implementation, 1991, pp. 278-292.

[4.1s] B. Hetzel. The Complete Guide to Software Testing. John Wiley, 1984.

[4.1t] E. Kit. Software Testing in the Real World. Addison-Wesley, 1995.

[4.1u] G. Myers. The Art of Software Testing. John Wiley, 1979.

[4.1v] G. Lafferriere, G. J. Pappas, and S. Sastry. “O-minimal Hybrid Systems,” in Mathematics of Control, Signals, and Systems, March 2000, Vol. 13, No. 1: 1-21.

[4.1w] Johansson, K.H.; Egerstedt, M.; Lygeros, J.; Sastry, S. “On the Regularization of Zeno Hybrid Automata,” Systems & Control Letters, 26 October 1999, Vol. 38, No. 3: 141-50.

[4.1x] Lygeros, J; Tomlin, C; Sastry, S. “Controllers for Reachability Specifications for Hybrid Systems, “ AUTOMATICA, March 1999, Vol. 35 No. 3: 349-370.

[4.1y] G. Pappas and S. Sastry, "Straightening Rectangular Inclusions," Systems and Control Letters, 1998, Vol. 35, No. 2: 79-85.

[4.1z] J. Hu, J. Lygeros and S. Sastry, “Towards a theory of stochastic hybrid systems,” in Proceedings of the Third Conference on Hybrid Systems: Computation and Control, edited N. Lynch and B. Krogh, Springer LNCS, 2000.

[4.2a] C. Perrow. Normal Accidents: Living with High-Risk Technologies. Princeton, NJ: Princeton Press, 1999.

[4.2b] A. Brown, G. Kar, and A. Keller. An Active Approach to Characterizing Dynamic Dependencies for Problem Determination in a Distributed Environment. To appear in Proceedings of the Seventh IFIP/IEEE International Symposium on Integrated Network Management (IM VII), Seattle, WA, May 2001.

[4.2c] A. Brown and D. A. Patterson. Towards Availability Benchmarks: A Case Study of Software RAID Systems. Proceedings of the 2000 USENIX Annual Technical Conference, San Diego, CA, June 2000.

[4.2d] Arpaci-Dusseau, R.; Anderson, E.; Treuhaft, N.; Culler, D.; Hellerstein, J.; Patterson, D.; Yelick, K. Cluster I/O with River: Making the Fast Case Common, Proc. IOPADS '99: Sixth Workshop on I/O in Parallel and Distributed Systems, Atlanta, Georgia, May 5, 1999.

[4.2e] J. M. Christensen and J. M. Howard. Field Experience in Maintenance. Human Detection and Diagnosis of System Failures: Proceedings of the NATO Symposium on Human Detection and Diagnosis of System Failures, J. Rasmussen and W. Rouse (Eds.). New York: Plenum Press, 1981, 111–133.

[4.2f] S. Fisher. E-business redefines infrastructure needs. InfoWorld, 7 January 2000.

[4.2g] J. Gray. Why Do Computers Stop and What Can Be Done About It? Symposium on Reliability in Distributed Software and Database Systems, 3–12, 1986.

[4.2h] J. Hamilton. Fault Avoidance vs. Fault Tolerance: Testing Doesn’t Scale. High Performance Transaction Systems (HPTS) Workshop, Asilomar, CA, 1999.

[4.2i] B. H. Kantowitz and R. D. Sorkin. Human Factors: Understanding People-System Relationships. New York: Wiley, 1983.

[4.2j] D. R. Kuhn. Sources of Failure in the Public Switched Telephone Network. IEEE Computer 30(4), April 1997.

[4.2k] J. Menn. Prevention of Online Crashes is No Easy Fix. Los Angeles Times, 2 December 1999, C-1.

[4.2l] B. Murphy and T. Gent. Measuring System and Software Reliability using an Automated Data Collection Process. Quality and Reliability Engineering International, 11:341–353, 1995.

[4.3a] A. Perrig, R. Canetti, J. D. Tygar, D. Song. "Efficient Authentication and Signing of Multicast Streams over Lossy Channels". In Proceedings of the 2000 IEEE Symposium on Security and Privacy, May 2000, pages 56 - 73.

[4.3b] A. Perrig, R. Canetti, D. Song, J. D. Tygar. "Efficient and Secure Source Authentication for Multicast". In Proceedings of the Internet Society Network and Distributed System Security Symposium, February 2001.

[4.3c] J. M. Kahn, R. H. Katz and K. S. J. Pister. "Mobile Networking for Smart Dust". ACM/IEEE Intl. Conf. on Mobile Computing and Networking (MobiCom 99), Seattle, WA, August 17-19, 1999.

[4.3d] A. Perrig, D. Song, J. D. Tygar. "ELK: A New Protocol for Efficient Large-Group Key Distribution". To appear in Proceedings of the 2001 IEEE Symposium on Security and Privacy, May 2001.

[4.3e] National Research Council Committee on Information Systems Trustworthiness. Trust in Cyberspace. National Academy Press, 1999.

[4.3f] J. D. Tygar, A. Whitten. "Why Isn't the Internet Secure Yet?" In ASLIB Proceedings, Volume 52, Number 3, March 2000, pages 93-97.

[4.3g] L. J. Camp, J. D. Tygar. "Protecting privacy while preserving access to data". In The Information Society, Volume 10, Number 1, January 1994, pages 59-71.

[4.3h] R. Bisbey II, D. Hollingsworth. "Protection analysis project final report". Tech. report ISI/RR-78-13, USC/Information Sciences Institute, May 1978.

[4.3i] A.K. Ghosh, T. O'Connor, G. McGraw, "An automated approach for identifying potential vulnerabilities in software". Proc. IEEE Symp. on Security and Privacy, May 1998.

[4.3j] U. Shankar, K. Talwar, J.S. Foster, D. Wagner. "Detecting format string vulnerabilities with type qualifiers." To appear at USENIX Security Symp., Aug 2001.

[4.3k] J. Viega, J.T. Block, T. Kohno, G. McGraw. "ITS4: A static vulnerability scanner for C and C++ code." 16th Ann. Comp. Security Applications Conf., Dec 2000.

[4.3l] D. Wagner, D. Dean. "Intrusion Detection via Static Analysis". To appear at IEEE Symp. on Security and Privacy, May 2001.

[4.3m] D. Wagner, J.S. Foster, E.A. Brewer, A. Aiken. "A first step toward automated detection of buffer overrun vulnerabilities." Proc. Network and Dist. System Security Symp., Feb 2000.

[4.3n] P. Samuelson. "Privacy as Intellectual Property?" Stanford Law Review, Vol. 52, No. 5, May 2000, pages 1125-1173.

[4.3o] P. Agre, M. Rotenberg, editors. Technology and Privacy: The New Landscape. MIT Press, 1997.

[4.3p] J. Kang. "Information Privacy in Cyberspace Transactions." Stanford Law Review, Vol. 50, No. 4, April 1998, pages 1193-1294.

[4.3q] P. Samuelson. "Intellectual Property and the Digital Economy: Why the Anti-Circumvention Regulations Need To Be Revised." Berkeley Technology Law Journal, Vol. 14, No. 2, Spring 1999, pages 519-566.

[4.3r] P. Samuelson. "Towards More Sensible Anti-Circumvention Regulations." Proceedings of Financial Cryptography 2000, forthcoming 2001.

References submitted by section authors

[3GPP] Third Generation Partnership Project (3GPP) home page.

[Braden 97] R. Braden, L. Zhang, S. Berson, S. Herzog and S. Jamin, “ReSerVation Protocol (RSVP) Version 1 Functional Specification,” Internet RFC 2205, IETF Network Working Group, September 1997.

[Chuah 00] C. Chuah, L. Subramanian, A. D. Joseph, R. H. Katz, “QoS Provisioning Using A Clearing House Architecture,” 8th International Workshop on Quality of Service (IWQOS 2000), Pittsburgh, PA, (June 2000).

[Duffeld99] N. Duffield, P. Goyal, A. Greenberg, P. Mishra, K. K. Ramakrishnan, J. E. Van der Merwe, “A Flexible Model for Resource Management in Virtual Private Networks,” Proc. of ACM SIGCOMM, pp. 95-108, (September 1999).

[Gribble01] S. D. Gribble, M. Welsh, R. von Behren, E. A. Brewer, D. Culler, N. Borisov, S. Czerwinski, R. Gummadi, J. Hill, A. Joseph, R. H. Katz, Z. M. Mao, S. Ross, B. Zhao, “The Ninja Architecture for Robust Internet-Scale Systems and Services,” Computer Networks, Special Issue on Pervasive Computing, to appear.

[Kubi00] J. Kubiatowicz, D. Bindel, Y. Chen, S. Czerwinski, P. Eaton, D. Geels, R. Gummadi, S. Rhea, H. Weatherspoon, W. Weimer, C. Wells, B. Zhao, “OceanStore: An Architecture for Global-Scale Persistent Storage,” Proceedings of the Ninth International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS 2000), November 2000.

[Mao 01] Z. Mao, W. So, R. H. Katz, “Network Support for Mobile Multimedia Using a Self-Adaptive Distributed Proxy,” ACM NOSSDAV 2001, New York, (June 2001).

[Microsoft 01] Microsoft Corporation, .NET Home Page.

[Raman 00] B. Raman, R. H. Katz, A. Joseph, “Providing Extensible Personal Mobility and Service Mobility in an Integrated Communication Network,” 3rd IEEE Workshop on Mobile Computing Systems and Applications (WMCSA 2000), Monterey, CA, (December 2000).

[Stiller 99] B. Stiller, T. Braun, M. Gunter and B. Plattner, “The CATI project: charging and accounting technology for the Internet,” Proc. of 4th European Conference: Multimedia Applications, Services and Techniques, pp. 281-96, May 1999.

[Sun 01] Sun Microsystems, Java Home Page.

[Verma 99] D. Verma, Supporting Service Level Agreements in IP Networks, Macmillan Publishing Company, New York, 1999.

[Zhuang 01] S. Zhuang, B. Zhao, A. Joseph, R. H. Katz, J. Kubiatowicz, “Bayeux: An Architecture for Wide-Area, Fault-Tolerant Data Dissemination,” ACM NOSSDAV 2001, New York, (June 2001).

[AF00] Mehmet Altinel, Michael J. Franklin. Efficient Filtering of XML Documents for Selective Dissemination of Information, Proceedings of the International Conference on Very Large Data Bases, Cairo, September 2000.

[AH00] Ron Avnur, Joseph M. Hellerstein. Eddies: Continuously Adaptive Query Processing, Proceedings of the ACM SIGMOD Conference, Philadelphia, PA, June 2000.

[CFG00] Ugur Cetintemel, Michael J. Franklin, and C. Lee Giles. Self-Adaptive User Profiles for Large-Scale Data Delivery, Proceedings of the International Conference on Data Engineering, San Diego, CA, February 2000, pp. 622-633.

[CFZ01] Mitch Cherniack, Michael J. Franklin, Stan Zdonik. Expressing User Profiles for Data Recharging, IEEE Personal Communications, April 2001 (to appear).

[HACO+99] Joseph M. Hellerstein, Ron Avnur, Andy Chou, Chris Olston, Vijayshankar Raman, Tali Roth, Christian Hidber, Peter J. Haas. Interactive Data Analysis with CONTROL, IEEE Computer 32(8), August 1999, pp. 51-59.

[HFCD+00] Joseph M. Hellerstein, Michael J. Franklin, Sirish Chandrasekaran, Amol Deshpande, Kris Hildrum, Sam Madden, Vijayshankar Raman, Mehul Shah. Adaptive Query Processing: Technology in Evolution, IEEE Data Engineering Bulletin, June 2000, pp. 7-18.

[UFA98] Tolga Urhan, Michael J. Franklin, Laurent Amsaleg. Cost Based Query Scrambling for Initial Delays, Proceedings of the ACM SIGMOD Conference, Seattle, WA, June 1998, pp. 130-141.

Archer, G.C., Fenves, G.L., Thewalt, C. (1999). “A New Object-Oriented Finite Element Analysis Program Architecture,” Computers & Structures, 70(1), pp. 63-75.

Arici, Y. and K. M. Mosalam (2000). System Identification and Modeling of Bridge Systems for Assessing Current Design Procedures, Proceedings of SMIP2000 Seminar, Sept. 14, Sacramento, CA.

Baise, L.G., and Glaser, S.D. (2000), Repeatability of Site Response Estimates Made Using System Identification, Bulletin of the Seismological Society of America, 90(4), pp. 993-1009.

Beck, J.L. (1978). Determining Models of Structures from Earthquake Records. Earthquake Engineering Research Laboratory 78-01. California Institute of Technology.

Beck, J.L., and Katafygiotis, L.S. (1998). Updating Models and Their Uncertainties. I: Bayesian Statistical Framework, Journal of Engineering Mechanics, 124(4), p. 455.

Buika, J. et al. (1998). “Advances in Scientific and Engineering Post-Earthquake Operational Response and Disaster Intelligence,” Proceedings, Sixth U.S. National Conference on Earthquake Engineering.

Farrar, C.R. and Doebling, S.W., (1997) Lessons Learned from Applications of Vibration-Based Damage Identification Methods to Large Bridge Structures, Proc. of the International Workshop on Structural Health Monitoring, Stanford, CA, Sept 1997, pp. 351-370.

Farrar, C.R., Doebling, S.W., Nix, D.A. (2001) Vibration-Based Structural Damage Identification, Philosophical Transactions of the Royal Society: Mathematical, Physical & Engineering Sciences, 359(1778), pp. 131-149.

DesRoches, R., Fenves, G.L. (1997). “Evaluation of Recorded Earthquake Response of a Curved Highway Bridge”, Earthquake Spectra, 13(3), pp. 363-386.

Glaser, S.D., and Baise, L.G. (2000), System Identification Estimation of Damping and Modal Frequencies at the Lotung Site, Soil Dynamics and Earthquake Engineering, 19(6), pp. 521-531.

Heaton, T. (1996). “The TriNet Project,” Proceedings, Eleventh World Conference on Earthquake Engineering, Elsevier Science Ltd., Paper No. 2136.

Hjelmstad, K. D., Banan, M. R., and Banan, M. R. (1995). “On Building Finite Element Models of Structures from Modal Response,” Earthquake Engineering & Structural Dynamics, 24(1), pp. 53-67.

Lin, J.-S. and Zhang, Y. (1994). Nonlinear structural identification using extended Kalman filter. Computers and Structures, 52(4), p. 757.

McConnell, K.G. (1995) Vibration Testing: theory and practice, Wiley, p. 606.

McKenna, F., Fenves, G.L. (2000). “An Object-Oriented Software Design for Parallel Structural Analysis,” Advanced Technology in Structural Engineering, Structures Congress 2000, ASCE.

NRC (1999). Reducing Disaster Losses Through Better Information, Board on Natural Disasters, National Research Council.

Safak, E. (1997). Propagation of Seismic Waves in Tall Buildings. Tall Buildings for the 21st Century (Proceedings of the 4th Conference on Tall Buildings in Seismic Regions), 129-154.

Scrivner, C., Worden, C.B., Wald, D.J. (2000). “Use of TriNet ShakeMap to Manage Earthquake Risk,” Proceedings, Sixth International Conference on Seismic Zonation: Managing Earthquake Risk in the 21st Century, EERI.

Sohn, H., Law, K.H. (1997). “Bayesian Probabilistic Approach for Structure Damage Detection,” Earthquake Engineering & Structural Dynamics, 26(12), pp. 1259-1281.

Stewart, J.P., Fenves, G.L. (1998). “System Identification for Evaluating Soil-Structure Interaction Effects in Buildings from Strong Motion Recordings,” Earthquake Engineering and Structural Dynamics, 27(8), pp. 869-885.

Smyth, A.W., Masri, S.F., Chassiakos, A.G., and Caughey, T.K. (1999). On-Line Parameter Identification of MDOF Nonlinear Hysteretic Systems, Journal of Engineering Mechanics, 125(2), p. 133.

TriNet (2001). “TriNet: Seismic System for Southern California.”

Udwadia, F. E. (1985). Some Uniqueness Results Related to Soil and Building Structural Identification, SIAM J. Applied Math., 45(4), p. 674.

Werner, S.D., Crouse, C.B., Katafygiotis, L.S., and Beck, J.L. (1994) Use of strong motion records for model evaluation and seismic analysis of a bridge structure, Proceedings of the Fifth U.S. National Conference on Earthquake Engineering, 1, 511-520.

[Bertram00a] Bertram, M., Barnes, J.C., Hamann, B., Joy, K.I., Pottmann, H. and Wushour, D. (2000), Piecewise optimal triangulation for the approximation of scattered data in the plane, Computer Aided Geometric Design 17(8), ELSEVIER, pp. 767-787.

[Bertram00b] Bertram, M., Duchaineau, M.A., Hamann, B. and Joy, K.I. (2000), Bicubic subdivision-surface wavelets for large-scale isosurface representation and visualization, in: Ertl, T., Hamann, B. and Varshney, A., eds., Visualization 2000, IEEE Computer Society Press, Los Alamitos, California, pp. 389-396.

[Bertram00c] Bertram, M., Duchaineau, M.A., Hamann, B. and Joy, K.I. (2000), Wavelets on planar tessellations, in: Arabnia, H.R., Coudoux, F.-X., Mun, Y., Power, G.P., Sarfraz, M. and Zhu, Q., eds., Proc. The 2000 International Conference on Imaging Science, Systems, and Technology (CISST 2000), Computer Science Research, Education, and Applications Press (CSREA), Athens, Georgia, pp. 619-625.

[Bonnell00] Bonnell, K.S., Schikore, D.R., Joy, K.I., Duchaineau, M.A. and Hamann, B. (2000), Constructing material interfaces from data sets with volume-fraction information, in: Ertl, T., Hamann, B. and Varshney, A., eds., Visualization 2000, IEEE Computer Society Press, Los Alamitos, California, pp. 367-372.

[Bremer01] Bremer, P.-T., Hamann, B., Kreylos, O. and Wolter, F.-E. (2001), Simplification of closed triangulated surfaces using simulated annealing, in: Lyche, T. and Schumaker, L.L., eds., Mathematical Methods in CAGD: Oslo 2000, Vanderbilt University Press, Nashville, Tennessee, pp. 45-54.

[Gieng98] Gieng, T.S., Hamann, B., Joy, K.I., Schussman, G.L. and Trotts, I.J. (1998), Constructing hierarchies for triangle meshes, IEEE Transactions on Visualization and Computer Graphics 4(2), pp. 145-161.

[Gregorski00] Gregorski, B.F., Hamann, B. and Joy, K.I. (2000), Reconstruction of B-spline surfaces from scattered data points, in: Magnenat-Thalmann, N. and Thalmann, D., eds., Proceedings of Computer Graphics International 2000, pp. 163-170.

[Hamann99a] Hamann, B., Jordan, B.W. and Wiley, D.F. (1999), On a construction of a hierarchy of best linear spline approximations using repeated bisection, IEEE Transactions on Visualization and Computer Graphics 5(1/2), pp. 30-46, p. 190 (errata).

[Hamann99b] Hamann, B., Kreylos, O., Monno, G. and Uva, A.E. (1999), Optimal linear spline approximation of digitized models, in: Banissi, E., Khosrowshahi, F., Sarfraz, M., Tatham, E. and Ursyn, A., eds., Proc. "1999 IEEE International Conference on Information Visualization (IV '99) - Computer Aided Geometric Design Symposium," IEEE Computer Society Press, Los Alamitos, California, pp. 244-249.

[Heckel99a] Heckel, B., Uva, A.E. and Hamann, B. (1999), Cluster-based generation of hierarchical surface models, in: Hagen, H., Nielson, G.M. and Post, F., eds., Proc. Scientific Visualization - Dagstuhl '97, second printing, IEEE Computer Society Press, Los Alamitos, California, pp. 113-122.

[Heckel99b] Heckel, B., Weber, G.H., Hamann, B. and Joy, K.I. (1999), Construction of vector field hierarchies, in: Ebert, D.S., Gross, M. and Hamann, B., eds., Visualization '99, IEEE Computer Society Press, Los Alamitos, California, pp. 19-25.

[Kreylos99] Kreylos, O. and Hamann, B. (1999), On simulated annealing and the construction of linear spline approximations for scattered data, in: Groeller, E., Loeffelmann, H. and Ribarsky, W., eds., Data Visualization '99 (Proc. "VisSym '99"), Springer-Verlag, Vienna, Austria, pp. 189-198.

[Kreylos00] Kreylos, O., Ma, K.-L. and Hamann, B. (2000), A multi-resolution interactive previewer for volumetric data on arbitrary meshes, in: Ouhyoung, M. and Shih, Z.-C., eds., Proc. 2000 International Computer Symposium – Workshop on Computer Graphics and Virtual Reality, Department of Computer Science and Information Engineering, National Chung Cheng University, Chiayi, Taiwan, R.O.C., pp. 74-81.

[Kuester00] Kuester, F., Duchaineau, M.A., Hamann, B., Joy, K.I. and Ma, K. L. (2000), The DesignersWorkbench: towards real-time immersive modeling, in: Merritt, J.O., Benton, S.A., Woods, A.J. and Bolas, M.T., eds., Stereoscopic Displays and Virtual Reality Systems VII, Proc. SPIE Vol. 3957, SPIE – The International Society for Optical Engineering, Bellingham, Washington, pp. 464-472.

[LaMar99] LaMar, E.C., Hamann, B. and Joy, K.I. (1999), Multiresolution techniques for interactive texture-based volume visualization, in: Ebert, D.S., Gross, M. and Hamann, B., eds., Visualization '99, IEEE Computer Society Press, Los Alamitos, California, pp. 355-361.

[Scheuermann00] Scheuermann, G., Hamann, B., Joy, K.I. and Kollmann, W. (2000), Visualizing local vector field topology, Journal of Electronic Imaging 9(4), special section on visualization and data analysis, SPIE - The International Society for Optical Engineering, pp. 356-367.

[Schussman00] Schussman, S.E., Bertram, M., Hamann, B. and Joy, K.I. (2000), Hierarchical data representations based on planar Voronoi diagrams, in: de Leeuw, W.C., van Liere, R., eds., Data Visualization 2000 (Proc. "VisSym '00"), Springer-Verlag, Vienna, Austria, pp. 63-72.

[Trotts99] Trotts, I.J., Hamann, B. and Joy, K.I. (1999), Simplification of tetrahedral meshes with error bounds, IEEE Transactions on Visualization and Computer Graphics 5(3), pp. 224-237.

[Henzinger-Sastry98] T. A. Henzinger and S. S. Sastry, eds., Hybrid Systems: Computation and Control, Lecture Notes in Computer Science, Vol. 1386, Springer-Verlag, 1998.

[TLS00] C. J. Tomlin, J. Lygeros and S. Sastry, “A Game Theoretic Approach to Controller Design for Hybrid Systems”, Proceedings of the IEEE, Special Issue on Hybrid Systems, Vol. 88, July 2000, pp. 949-970.

[HQR00] T. Henzinger, S. Qadeer and S. Rajamani, “Decomposing refinement proofs using assume-guarantee reasoning,” Proceedings of the IEEE/ACM International Conference on Computer-Aided Design (ICCAD 2000), pp. 245-252.

[Newman 00] Newman, M.W. and J.A. Landay. Sitemaps, Storyboards, and Specifications: A Sketch of Web Site Design Practice as Manifested Through Artifacts. In Proceedings of ACM Conference on Designing Interactive Systems. New York City. pp. 263-274, August 2000.

[Elliott 00] Elliott, A. and Hearst, M. How Large Should a Digital Desk Be? Qualitative Results of a Comparative Study, in Proceedings of CHI '00, Conference Companion, The Hague, Netherlands, 2000.

[Hong 01] Hong, J.I. and J.A. Landay, An Infrastructure Approach to Context-Aware Computing. Human-Computer Interaction, 2001. 16(2).

[Hearst 99] Hearst, M., User Interfaces and Visualization, in Modern Information Retrieval, Baeza-Yates and Ribeiro-Neto (Eds), Addison-Wesley Longman, 1999.

[Glaser 99] Glaser, D. and Hearst, M. Space Series: Simultaneous display of spatial and temporal data, IEEE Symposium on Information Visualization, October 1999.

[Yee 01] Yee, P., Dhamija, R., Fischer, D., Hearst, M., Animated Exploration of Graphs with Radial Layout, Submitted for publication, 2001.

[Olston 98] Olston, C., Woodruff, A., Aiken, A., Chu, M., Ercegovac, V., Lin, M., Spalding, M., Stonebraker, M. DataSplash, SIGMOD 1998, Seattle, Washington, June 1998.

[Avnur 98] Avnur, R., Hellerstein, J., Lo, B., Olston, C., Raman, B., Raman, V., Roth, T., Wylie, K. CONTROL: Continuous Output and Navigation Technology with Refinement On-Line, SIGMOD 1998, Seattle, Washington, June 1998.
