Project Final Report Template
Reporting Years: October 1, 2003 – August 1, 2010
GENERAL INFORMATION
This form contains four sections:
• Project & Personnel Information
• Executive Summary and Research Information
• Educational Information, and
• Outreach information.
Each section has multiple questions that will help us generate an integrated report for both the RESCUE and Responsphere Annual and Final Reports. Please answer them as succinctly as possible. However, the content should contain enough details for a scientifically-interested reader to understand the scope of your work and importance of the achievements. As this form covers both an annual and final report, the form asks you to provide input on the past year’s progress as well as overall progress for the entire 7-year program.
DEADLINE
The RESCUE and Responsphere reports are due to NSF by June 30, 2010.
Completed forms MUST be submitted by May 15th, 2010. (Obviously, publications can be submitted through the website (itr-) as you get papers accepted.) It is crucial that you have this finished by this date, as the Ex-Com will be meeting (some are flying in) to finalize the report.
SUBMISSION INSTRUCTIONS
The completed forms must be submitted via email to:
• Chris Davison – cbdaviso@uci.edu
Publications need to be submitted to our website in order for us to upload to the NSF:
Auxiliary Material
To help you complete this form, you should refer to both the RESCUE Strategic Plan, which identifies the overall goals of the program (you will need this information to explain how your research helps to achieve the goals of the RESCUE program), and the RESCUE annual reports for Years 1 through 6. You can find these documents on the RESCUE project website Intranet:
SECTION A: Project & Personnel Information
Project Title: PISA
Names of Team Members:
(Include Faculty/Senior Investigators, Graduate/Undergraduate Students, Researchers; which institution they’re from; and their function [grad student, researcher, etc])
Marianne Winslett (UIUC), investigator
Adam Lee (UIUC), graduate student
Mike Rosulek (UIUC), graduate student
Lars Olson (UIUC), graduate student
Jintae Lee (UIUC), graduate student
Ragib Hasan (UIUC), graduate student
Charles Zhang (UIUC), graduate student
Kent Seamons (BYU), investigator
Tim van der Horst (BYU), graduate student
Phillip Hellewell (BYU), graduate student
Andrew Harding (BYU), graduate student
Jason Holt (BYU), graduate student
Reed Abbott (BYU), graduate student
Robert Bradshaw (BYU), undergraduate
Ryan Segeberg (BYU), graduate student
Chen Li (UCI), investigator
Alexander Behm (UCI), graduate student
Shengyue Ji (UCI), graduate student
Jiaheng Lu (UCI), graduate student
Kathleen Tierney (UC), investigator
Jeannette Sutton (UC), postdoctoral researcher
Christine Bevc (UC), graduate student
List of Collaborators on Project:
(List all collaborators [industrial, government, academic] their affiliation, title, role in the project [e.g., member of Community Advisory Board, Industry Affiliate, testbed partner, etc.], and briefly discuss their participation in your project)
• Government Partners:
(Please list)
The City of Champaign (testbed partner)
The City of Champaign provided us with the opportunity to explore challenges in crisis response and study the efficacy of IT disaster research and solutions in a smaller-city setting. Steve Carter, City Manager; Fred Halenar, IT Director; and Stephen Clarkson, Deputy Fire Chief, were particularly helpful.
Champaign Central High School, Unit 4 School District, METCAD (911), Champaign County Regional Planning Commission (testbed partners)
These organizations helped us create the derailment & chemical spill scenario.
• Academic Partners:
(Please list)
L3S
Winslett and Seamons cooperated with Wolfgang Nejdl and Daniel Olmedilla of L3S on trust management research.
National Center for Supercomputing Applications
Winslett and Seamons cooperated with Jim Basney and Von Welch of NCSA in developing a trust negotiation prototype for deployment on computational grids.
USC/ISI
Clifford Neuman and Tatyana Ryutov cooperated with Seamons to allow trust negotiation facilities to be used with GAA-API.
• Industry Partners:
(Please list)
ZoneLabs
Provided graduate student funding at BYU for trust negotiation research
Champaign Red Cross, Arrow Ambulance (testbed partners)
Helped with construction of derailment & chemical spill scenario
SECTION B: Executive Summary and Research-Related Information (2 pages per project/area – e.g., SAMI, PISA, networks, dissemination, privacy, metasim, social science contributions, artifacts, testbeds)
(This summary needs to cover the entire 7-year period of the grant. However, information on recent research progress must also be provided. Please discuss the progress of your research within the context of the following questions. Where possible, please include graphics or tables to help answer these questions.)
Executive Summary
Executive Summary: Describe major research activities, major achievements, goals, and new problems identified over the entire seven-year period:
(This will be the MAJOR section of your report. The rest of this template will provide more detailed information for the subsections of the final report).
The section should answer the following questions:
1) What was the major challenge that your project was addressing and what were your goals?
Example: Instantly creating on-site networks and bi-directional data communication that can meet the needs of data transmission both from first responders to the incident commanders and from incident commanders to the first responders.
2) What major technological/social science research questions were identified and what approach did you identify to solve the research question?
Example: The research question in the above challenge could be (a) reliability of communication in mesh environments and in multi-carrier networks, and (b) building capacity by exploiting multiple networks.
An example approach could be exploiting multiple carriers and building mechanisms for prioritization of messaging to meet application quality requirements.
3) What were your achievements in meeting the goals and addressing the research questions which you would like to highlight?
Example: Theoretical analysis of network capacities in such networks. One can quote the main result in such a theoretical analysis. Engineering such multinetworks, coming up with mechanisms for data collection in such networks, etc.
Products and Contributions: (Artifacts, 1st Responder adopted technologies, impact, and outreach).
This section should answer the following questions:
1) What products/systems did you develop?
2) How were these products/ideas tested?
3) What were the lessons learned?
Project Achievements: (This is where you get to tout the success of your project as well as new problems identified):
Please address following questions:
a) How did your work change the state-of-the-art in the area of your project? That is, what new scientific achievements can we attribute to your work?
b) How did the achievement lead to impact on first responders if any? Clear examples of such impact would be very useful.
SECTION C: Research Activities (this section will provide us information for the detailed appendix that will be included along with the executive summary)
(Please summarize major research activities over the past 7 years using the following points as a guide)
Project Name PISA
Project Summary --- summarize again the major objectives of the project.
This is more or less a cut and paste from Section B that goes to executive summary. Feel free to elaborate a bit more about the project and its scope and in addition address the following questions.
Describe how your research supports the RESCUE vision
(Please provide a concise statement of how your research helps to meet RESCUE’s objectives and overarching and specific strategies – for reference, please refer to the Strategic Plan).
The PISA objective was to understand data sharing and privacy policies of organizations and individuals involved in a disaster, and to devise scalable IT solutions to represent and enforce such policies to enable seamless information sharing during disaster response.
To understand the requirements for information sharing during crises in smaller cities, we partnered with the City of Champaign and local first responders to devise and study a hypothetical crisis scenario: a derailment with chemical spill, fire, and threat of explosion in Champaign. We used this scenario as the basis for three focus groups of first responders, facilitated by RESCUE sociologists, who used them as the basis for their subsequent research. The focus groups met in Champaign in July/August 2006, with each session approximately three hours in length. The focus groups explored how the community’s public safety and emergency management organizations would interact and communicate using technology. Discussions sought to determine which organizations would be collaborating, how they would work to overcome potential challenges and barriers to more effective collaboration, and the types of technology and communication tools they would (or could) use. In all, 28 individuals participated in these focus groups. They included representatives from the cities of Champaign and Urbana and the University of Illinois at Urbana-Champaign, reflecting a diversity of disciplinary areas including fire, police, public works, schools (public and private), public media, and various emergency and medical services.
The discussions surrounding the derailment scenario pointed out several unmet IT needs for information sharing during crises, which we addressed in our subsequent research. The first set of new needs is support for internet sites/portals for reunification of families and friends, while simultaneously meeting the privacy needs of individuals. To address these needs, we built a portal for family and friends reunification that is robust across differences in the way people refer to a particular individual. We also devised very lightweight authentication and authorization techniques that are suitable for use in reunification of families and friends, and integrated the resulting technology into the Disaster Portal.
The second set of new needs is for quick integration of new first responders into the Emergency Operations Center’s information sharing environment, without the need for setting up and managing accounts and passwords for all possible responding organizations and their key employees. To meet this need, we developed ways for people to authenticate to a role (e.g., Red Cross manager, school superintendent) by virtue of (digital versions of) the credentials they possess through their employment. The resulting trust negotiation approaches were embodied in a robust prototype that has been widely disseminated in the security research community, and is slated for a field trial over the next five years in an EU FP7 project targeting the management of health care information and job search information: “The TAS³ Integrated Project (Trusted Architecture for Securely Shared Services) aims to have a European-wide impact on services based upon personal information, which is typically generated over a human lifetime and therefore is collected & stored at distributed locations and used in a multitude of business processes.”
How did you specifically engage the end-user community in your research?
First responders created the disaster scenario that drove our sociological and IT research. Further, we used actual web postings from individuals during Hurricane Katrina as the test data for the Friends and Family Reunification Portal. The resulting technology was integrated into the Disaster Portal for the City of Ontario.
How did your research address the social, organizational, and cultural contexts associated with technological solutions to crisis response?
The focus groups for the derailment scenario specifically addressed information sharing practices in Champaign, as representative of smaller US cities.
Research Findings
(Summarize major research findings over the past 7 years.)
Describe major findings highlighting what you consider to be groundbreaking scientific findings of your research.
(Especially emphasize research results that you consider to be translational, i.e., changing a major perspective of research in your area).
Discussions with the City of Champaign showed that traditional authorization and authentication approaches, such as accounts and passwords, will not work well for crisis response. First responders, victims, and their friends and families need approaches that allow them to come together in real time and start sharing information in a controlled manner, without account management headaches. During the course of the RESCUE project, we developed a number of novel approaches to authentication and authorization that are suitable for use in disaster response.
For example, in response to confidentiality concerns identified in the derailment scenario for family and friends reunification, we worked to develop lightweight approaches for establishing trust across security domains. Victims need a way to ensure that messages they post are read only by the intended family members and friends, and vice versa. Many crisis response organizations have limited information technology resources and training, especially in small to mid-size cities, so a full public key infrastructure (PKI) and other management-heavy authentication solutions, such as per-site logins and passwords, are not practical in this context. Simple Authentication for the Web (SAW) is our user-friendly alternative that eliminates passwords and their associated management headaches by leveraging popular messaging services, including email, text messages, pagers, and instant messaging. SAW (i) removes the setup and management costs of passwords at sites that use email-based password reset; (ii) provides single sign-on without a specialized identity provider; (iii) thwarts passive attacks and raises the bar for active attacks; (iv) enables easy, secure sharing and collaboration without passwords; (v) provides intuitive delegation and revocation of authority; and (vi) facilitates client-side auditing of interactions. SAW can potentially be used to simplify web logins at all web sites that currently use email to reset passwords. Additional server-side support can be used to integrate SAW with web technology (blogs, wikis, web servers) and browser toolbars for Firefox and Internet Explorer. We have also shown how a user can demonstrate ownership of an email address without allowing another party (such as a phishing web site) to learn the user’s password, even by mounting a dictionary attack.
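The split-token idea behind SAW-style email authentication can be sketched as follows. This is an illustrative toy, not the published SAW protocol: the class name and delivery details are invented for the example, and a real deployment would also bind shares to a session, expire them, and send the second share over an actual messaging channel.

```python
import hashlib
import hmac
import secrets

class SawLikeServer:
    """Toy model of split-token, email-based authentication (illustrative only).

    The server stores no password. For each login attempt it creates a random
    token, splits it into two XOR shares, returns one share over the HTTP
    channel, and would deliver the other via email. Only someone controlling
    both channels can reassemble the token and complete the login.
    """

    def __init__(self):
        self.pending = {}  # email -> (hmac key, expected MAC of the full token)

    def begin_login(self, email):
        token = secrets.token_bytes(16)
        share_http = secrets.token_bytes(16)
        # XOR split: share_http ^ share_email == token
        share_email = bytes(a ^ b for a, b in zip(token, share_http))
        key = secrets.token_bytes(16)
        self.pending[email] = (key, hmac.new(key, token, hashlib.sha256).digest())
        return share_http, share_email  # second share goes out via the email channel

    def complete_login(self, email, share_http, share_email):
        key, expected = self.pending.pop(email)
        token = bytes(a ^ b for a, b in zip(share_http, share_email))
        return hmac.compare_digest(
            hmac.new(key, token, hashlib.sha256).digest(), expected)
```

A client that controls both channels simply submits the two shares; an attacker who intercepts only one channel learns nothing about the token.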
With SAW, the identities of those authorized to gain access must be known in advance. In some situations, only the attributes of those authorized to gain access to a resource are known in advance – e.g., fire chief, police chief, city manager. In such a situation, we can avoid the management headaches and insecurity associated with accounts and passwords by adopting trust negotiation, a novel approach to authorization in open distributed systems. Under trust negotiation, every resource in the open system is protected by a policy describing the attributes of those authorized for access. At run time, users present digital credentials to prove that they possess the required attributes.
To help make trust negotiation practical for use in situations such as disaster response, we designed, built, evaluated, and released the Clouseau policy compliance checker, which uses a novel approach to determine whether a set of credentials satisfies an authorization policy. That is, given some authorization policy p and a set C of credentials, determine all unique minimal subsets of C that can be used to satisfy p. Finding all such satisfying sets of credentials is important, as it enables the design of trust establishment strategies that can be guaranteed to be complete: that is, they will establish trust if at all possible. Previous solutions to this problem have relied on theorem provers, which are quite slow in practice. We have reformulated the policy compliance problem as a pattern-matching problem and embodied the resulting solution in Clouseau, which is roughly ten times faster than a traditional theorem prover. We have also shown that existing policy languages can be compiled into the intermediate policy language that Clouseau uses, so that Clouseau is a general solution to this important problem.
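The compliance-checking problem Clouseau solves can be stated concretely: given a policy and a set of credentials, find every minimal satisfying subset. The brute-force sketch below, with a hypothetical role-based policy, illustrates the problem definition only; Clouseau itself avoids this exponential enumeration via pattern matching.

```python
from itertools import combinations

def minimal_satisfying_sets(credentials, policy):
    """Return all minimal subsets of `credentials` satisfying `policy`.

    `policy` is a predicate over a frozenset of credential names. Subsets are
    enumerated in order of increasing size, so any satisfying set that has a
    previously found set as a proper subset is non-minimal and is skipped.
    """
    found = []
    for r in range(len(credentials) + 1):
        for combo in combinations(credentials, r):
            s = frozenset(combo)
            if policy(s) and not any(m < s for m in found):
                found.append(s)
    return found

# Hypothetical policy: access requires a city badge plus either the
# fire-chief or the police-chief attribute credential.
policy = lambda s: "city_badge" in s and ("fire_chief" in s or "police_chief" in s)
creds = ["fire_chief", "police_chief", "city_badge", "red_cross_id"]
```

Here the two minimal satisfying sets are {fire_chief, city_badge} and {police_chief, city_badge}; knowing all of them lets a negotiation strategy guarantee completeness.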
We also investigated an important gap that exists between trust negotiation theory and the use of these protocols in realistic distributed systems, such as information sharing infrastructures for crisis response. Trust negotiation systems lack the notion of a consistent global state in which the satisfaction of authorization policies should be checked. We have shown that the most intuitive notion of consistency fails to provide basic safety guarantees under certain circumstances and can, in fact, cause the permission of accesses that would be denied in any system using a centralized authorization protocol. We have proposed a hierarchy of several more refined notions of consistency that provide stronger safety guarantees, and developed provably correct algorithms that allow each of these refined notions of consistency to be attained in practice with minimal overheads.
We also created and released the highly flexible and configurable TrustBuilder2 framework for trust negotiation, to encourage researchers and practitioners to experiment with trust negotiation. TrustBuilder2 builds on our insights from using the TrustBuilder implementation of trust negotiation over several years; TrustBuilder2 is more flexible, modular, extensible, tunable, and robust against attack. Since its release, TrustBuilder2 has been downloaded over 700 times, and it is slated for use as the authorization system in the TAS³ (Trusted Architecture for Securely Shared Services) project, a five-year European Union project.
We have also identified and addressed a number of issues in existing approaches to trust negotiation. For example, we showed how to force a negotiating party to reveal large amounts of irrelevant information during a negotiation. We also developed new correctness criteria that help ensure that the result of a trust negotiation session matches the intuition of the user – even if the state of the world changes while the negotiation is being carried out.
During a disaster, friends and families need to share personal information. Matching requests and responses can be challenging, because there are many ways to identify a person, and typos and misspellings are common. Data from friends-and-family reunification web sites are extremely heterogeneous in terms of their structures, representations, file formats, and page layouts. A significant amount of effort is needed to bring the data into a structured database. Further, there are many missing values in the extracted data from these sites. These missing values make it harder to match queries to data. Due to the noisiness of the information, an integrated portal for friends-and-family web sites must support approximate query answering.
To address this problem, we crawled missing-person web sites, collected 76,000 missing-person reports, and built a search interface over these records. To support effective people search, we developed novel and efficient indexing structures and algorithms. Our techniques allow type-ahead fuzzy search, which is very useful in people search given the particular characteristics of data and queries in this domain. More precisely, the system can search on the fly as the user types in more information, and it can find records that match the user’s keywords approximately, with minor differences. This feature is especially important since there are inconsistencies in crawled records, and the user may have limited knowledge about the missing person. We released the resulting portal for friends and family reunification as part of the RESCUE Disaster Portal. Our new techniques can also be used during data cleaning in other domains, in order to deal with information from heterogeneous sources that may have errors and inconsistencies.
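At the core of type-ahead fuzzy search is measuring how close a partially typed query is to a prefix of each indexed name. A minimal dynamic-programming sketch of that primitive follows; the deployed system uses specialized indexes rather than this per-record scan, so treat it as an illustration of the matching criterion only.

```python
def prefix_edit_distance(query, candidate):
    """Minimum Levenshtein distance between `query` and any prefix of `candidate`.

    A distance of 0 means the query is an exact prefix; small values tolerate
    the typos and misspellings common in missing-person data. One DP row is
    kept per processed candidate character: prev[i] is the distance between
    the candidate prefix consumed so far and query[:i].
    """
    m = len(query)
    prev = list(range(m + 1))     # distances against the empty prefix
    best = prev[m]
    for ch in candidate:
        cur = [prev[0] + 1]
        for i in range(1, m + 1):
            cost = 0 if query[i - 1] == ch else 1
            cur.append(min(prev[i] + 1,        # delete ch
                           cur[i - 1] + 1,     # insert query[i-1]
                           prev[i - 1] + cost))  # match / substitute
        prev = cur
        best = min(best, prev[m])  # best over all prefixes so far
    return best

def typeahead_search(query, records, max_edits=1):
    """Return records where some name token is within `max_edits` of the query."""
    return [r for r in records
            if any(prefix_edit_distance(query.lower(), tok.lower()) <= max_edits
                   for tok in r.split())]
```

For example, the partial query "smiht" still matches the name "Smith Jones", since deleting one character from the query reaches the prefix "smit".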
Highlight major research findings in this final year (Year 7).
(Editor’s note: the following is carried over from the Year 6 report.)
During the past year, we concentrated our efforts on Wireless Authentication using Remote Passwords (WARP). Current single sign-on techniques, including our own SAW, require a user to directly contact a third party during authentication. These approaches are unsuitable for wireless access, since the user does not have the network access necessary to contact a third party. WARP is a new in-band protocol that allows a user to prove to a wireless access point that she knows her password, without the access point gaining access to her password or to data that can be used to launch an off-line attack on the password. WARP has the potential to be used beyond wireless access protocols, as well.
To demonstrate the potential of WARP, we created an advanced authentication prototype that allows a user to demonstrate ownership of an email address without disclosing enough information for an attacker (such as a phishing web site) to learn the user’s password, even by mounting a dictionary attack. We have developed one approach that strengthens existing client/server authentication on the web. A second approach serves as a single sign-on mechanism that allows the user to prove that she knows her password at a third party, such as her email provider, without leaking information to an attacker. This second approach works for web logins as well as wireless access.
Please discuss how the efficacy of your research was evaluated. Through testbeds? Through interactions with end-users? Was there any quantification of benefits performed to assess the value of your technology or research? Please summarize the outcome of this quantification.
Each of our projects was evaluated in a different manner. For example, the focus group studies used statistical techniques. The performance tests for trust negotiation used example access control policies provided by potential end users from Sandia National Laboratories, plus synthetic policies that allowed us to test scalability. The friends and family reunification portal used test data from missing-person web sites, including data from Hurricane Katrina.
Responsphere - Please discuss how the Responsphere facilities (servers, storage, networks, testbeds, and drill activities) assisted your research.
We used Responsphere facilities for testing the Friends and Family Reunification Portal algorithms.
Research Contributions
(The emphasis here is on broader impacts. How did your research contribute to advancing the state-of-knowledge in your research area? Please use the following questions to guide your response).
What products or artifacts have been developed as a result of your research?
Unless otherwise mentioned, each of these software packages is available at
1. TrustBuilder2 – Framework for trust negotiation, discussed above. Available from .
2. Hidden Credentials – Credential system for protecting credentials, policies, and resource requests. Hidden credentials allow a service provider to send an encrypted message to a user in such a way that the user can only access the information with the proper credentials. Similarly, users can encrypt sensitive information disclosed to a service provider in the request for service. Policy concealment is accomplished through a secret-splitting scheme that only leaks the parts of the policy that are satisfied. Hidden credentials may have relevance in crises involving ultra-sensitive resources. They may also be able to play a role in situations where organizations are extremely reluctant to open up their systems to outsiders, especially when the information can be abused before an emergency even occurs. We have observed on the UCI campus that some buildings have lock boxes that are available to emergency personnel during a crisis. The management of physical keys is a significant problem. Hidden credentials have the potential to support digital lockboxes that store critical data to be used in a crisis. The private key used to access this information may never have to be issued until a crisis actually occurs, limiting the risk of unauthorized access beforehand.
3. LogCrypt – Tamper-evident log files based on hash chaining. This system provides a service similar to TripWire, except that it is targeted at log files that are still being appended to. Often, an attacker breaks into a system and deletes the evidence of the break-in from the audit logs. The goal of LogCrypt is to make it possible to detect an unauthorized deletion or modification of a log file. Previous systems supporting this feature have incorporated symmetric encryption and an HMAC. LogCrypt also supports a public key variant that allows anyone to verify the log file, which means that the verifier does not need to be trusted. For the public key variant, if the original private key used to create the file is deleted, then it is impossible for anyone, even system administrators, to go back and modify the contents of the log file without being detected. During this past year, we completed experiments measuring the relative performance of available public key algorithms to demonstrate that the public key variant is practical. This variant has particular relevance in circumstances where the public trusts government authorities to behave correctly, and it also benefits authorities by giving them a stronger basis for defending against claims of misbehavior. This technology may allow more secure auditing during a crisis.
4. Nym – Practical pseudonymity for anonymous networks. Nym is an extremely simple way to allow pseudonymous access to Internet services via anonymizing networks like Tor, without losing the ability to limit vandalism using popular techniques such as blocking owners of offending IP or email addresses. Nym uses a very straightforward application of blind signatures to create a pseudonymity system with extremely low barriers to adoption. Clients use an entirely browser-based application to pseudonymously obtain a blinded token, which can be anonymously exchanged for an ordinary TLS client certificate. We designed and implemented a JavaScript application and the necessary patch to use client certificates in the popular web application MediaWiki, which powers the free encyclopedia Wikipedia. Thus, Nym is a complete solution, able to be deployed with a bare minimum of time and infrastructure support.
5. Thor – Credential repository. Thor is a repository for storing and managing digital credentials, trusted root keys, passwords, and policies that is suitable for mobile environments. A user can download the security information that a device needs to perform sensitive transactions. The goals are ease of use and robustness.
6. SACRED – Implementation of IETF SACRED (Securely Available Credentials) protocol
7. SAW – Simple Authentication for the Web. Discussed above.
8. Friends and Family Reunification Portal: and . At the latter URL, the reunification portal has been incorporated into the Disaster Portal for the City of Ontario.
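The hash-chaining idea underlying LogCrypt (item 3 above) can be sketched briefly. This toy keeps only the chain itself and flags any in-place tampering; the real system additionally uses evolving keys and MACs or public-key signatures so that an attacker cannot simply recompute the whole chain after modifying an entry.

```python
import hashlib

class HashChainLog:
    """Toy tamper-evident log in the spirit of LogCrypt (hash chaining only).

    Each entry stores a link hash over the previous link and its own message,
    so changing or deleting any earlier entry breaks every later link.
    """

    def __init__(self, seed=b"genesis"):
        self.entries = []  # list of (message, link_hash)
        self._seed = seed

    def append(self, message):
        prev = self.entries[-1][1] if self.entries else hashlib.sha256(self._seed).digest()
        link = hashlib.sha256(prev + message).digest()
        self.entries.append((message, link))

    def verify(self):
        prev = hashlib.sha256(self._seed).digest()
        for message, link in self.entries:
            if hashlib.sha256(prev + message).digest() != link:
                return False  # chain broken: tampering detected
            prev = link
        return True
```

A verifier who remembers only the latest link hash can detect truncation or modification of any earlier entry, which is the property the public-key variant makes checkable by untrusted parties.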
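Nym (item 4 above) rests on standard RSA blinding: the signer certifies a token it never sees in the clear. A self-contained sketch with a deliberately tiny, hard-coded key follows; the key size, token format, and protocol framing here are illustrative inventions, not Nym's actual parameters.

```python
import hashlib
from math import gcd

# Tiny fixed RSA key for illustration only -- real deployments need >= 2048 bits.
p, q = 1000003, 1000033
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))  # modular inverse (Python 3.8+)

def blind(m, r):
    """Client multiplies the token by r^e so the signer cannot read it."""
    return (m * pow(r, e, n)) % n

def sign(blinded):
    """Signer applies its private key to the blinded value only."""
    return pow(blinded, d, n)

def unblind(s_blind, r):
    """Client divides out r, recovering an ordinary signature on m."""
    return (s_blind * pow(r, -1, n)) % n

# Hypothetical token: a hash of the client's certificate request.
token = int.from_bytes(hashlib.sha256(b"pseudonym-request").digest(), "big") % n
r = 123457                      # blinding factor; must be invertible mod n
assert gcd(r, n) == 1
sig = unblind(sign(blind(token, r)), r)
assert pow(sig, e, n) == token  # valid signature, yet the signer never saw `token`
```

The unblinded signature verifies under the signer's public key, so the client can later redeem it (e.g., for a TLS client certificate) without the signer being able to link the redemption back to the signing session.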
How has your research contributed to knowledge within your discipline?
We built the TrustBuilder2 framework and the associated Clouseau compliance checker to make experimentation with trust negotiation practical. Without a user-friendly, flexible, fast framework to ease the process, the startup costs of adopting trust negotiation were a significant barrier to experimentation and trial deployments. The 700 downloads of TrustBuilder2 since its release indicate that the security community was ready to try out this new technology.
How has your research contributed to knowledge in other disciplines?
Our partnership with the City of Champaign has helped to advance the state of the art in the understanding of information-sharing practices during disaster response in smaller cities. From our interactions with first responders in Champaign, we learned that disaster response in Champaign-Urbana (population 160,000) is very different from that in the major metropolitan areas of southern California. In particular, the level of trust and willingness to share information is higher in Champaign.
What human resource development contributions did your research project result in (e.g., students graduated, Ph.D., MS, contributions in placement of students in industry, academia, etc.)
Graduated MS students: Adam Lee (UIUC, now a professor at the University of Pittsburgh), Ragib Hasan (UIUC, now a PhD student), Tim van der Horst (BYU, now where?) more?
Graduated PhD students: Adam Lee (UIUC, now a professor at the University of Pittsburgh), Tim van der Horst (BYU, now where?) more?
Contributions beyond science and engineering (e.g., to industry, current practice, to first responders, etc.)
Disaster Portal – The Disaster Portal is in use by several cities in the US. The code has been released under the GNU license and is available here:
Please update your publication list for this project by going to:
(Include journal publications, technical reports, books, or periodicals). NSF must be referenced in each publication. DO NOT LIST YOUR PUBLICATIONS HERE. PLEASE PUT THEM ON THE WEBSITE.
Remaining Research Questions or Challenges
(In order to help develop a research agenda based on RESCUE after the project ends, please list remaining research questions or challenges and why they are significant within the context of the work you have done in RESCUE. Please also explain how the research that has been performed under the current RESCUE project has been used to identify these research opportunities).
Success Stories / Major Scientific Achievements
(Use this section to highlight what your project has achieved over the last 7 years. This is your opportunity to publicize your advancements and look back over our many years together and find those nuggets that really made a difference to science, first responders, etc.)
SECTION D: Education-Related Information
Educational activities:
(RESCUE-related activities you and members of your team are involved in. Include courses, projects in your existing courses, etc. Descriptions must have [if applicable] the following: quarter/semester during which the course was taught, the course name and number, university this course was taught in, course instructor, course project name)
Training and development:
(Internships, seminars, workshops, etc., provided by your project. Seminars/workshops should include date, location, and presenter. Internships should include intern name, duration, and project topic.) What PhD students have graduated?
Workshops Organized:
Databases in Virtual Organizations. Workshop held at the SIGMOD annual conference, Paris, June 2004. Marianne Winslett, Sharad Mehrotra, and Ramesh Jain co-organized this workshop. A report of the workshop appeared in SIGMOD Record, March 2005.
Trust, Security, and Reputation on the Semantic Web. Workshop at the International Semantic Web Conference, Hiroshima, November 2004. Marianne Winslett, Wolfgang Nejdl, Piero Bonatti, and Jennifer Golbeck organized this workshop.
Short courses and invited lectures on Trust Negotiation:
• M. Winslett. An Introduction to Trust Negotiation, at Brown University (October 2004), University of Pittsburgh (March 2004), University of Illinois at Chicago (April 2004), North Carolina State University (May 2004), Purdue University (2004).
• M. Winslett, Trust Negotiation, one-week course at the University of Trento, Italy, February 2004.
• K. Seamons. TrustBuilder: Automated Trust Negotiation in Open Systems. CERIAS Security Seminar, Purdue University, February 11, 2004.
• Tutorial on Security of Shared Data in Large Systems (including a section on trust negotiation) at the SIGMOD 2004 conference, Paris, June 2004, by Marianne Winslett and Arnie Rosenthal.
• Tutorial on Security of Shared Data in Large Systems (including a section on trust negotiation) at the Very Large Databases (VLDB) conference, Toronto, Sept. 2004, by Marianne Winslett and Arnie Rosenthal.
Education Materials:
(Please list courses introduced, taught, tutorials, data sets, creation of any education material of pedagogical significance that is a direct result of the RESCUE project).
Courses:
CS 665, Advanced Computer Security, Winter Semester 2008, Brigham Young University, Instructor: Kent Seamons, Project: Access Control in Open Systems.
Internships:
(Please list)
SECTION E: Outreach Related Information
Additional outreach activities:
(RESCUE-related conference presentations, participation in community activities, workshops, products or services provided to the community, etc.)
Conferences:
(Please list)
Group Presentations:
(Please list)
Impact of products or artifacts created from this project on first responders, industry, etc.
(Are they currently being used by a first-responder group? In what capacity? Are they industry groups that are interested in licensing the technology or investing in further development?).
The activities related to the derailment scenario in Champaign had a very strong community outreach component. We worked with the first responder community in Champaign to put together the scenario, and the focus groups that we facilitated helped the community to understand its own information sharing practices. We analyzed the detailed scenario, and identified gaps between responders’ expectations of one another and what can actually be delivered. We have shared those findings with the city of Champaign. We also looked for opportunities for technology insertion, wrote up those findings, and shared them with RESCUE project participants. The city planned to use the derailment scenario as the basis for tabletop exercises. As we neared the completion of the RESCUE project, the City of Champaign asked to deploy its own copy of the Disaster Portal developed for the City of Ontario.
The RESCUE project has also given the City of Champaign three network-in-a-box nodes, which the city has used in conjunction with its new high-tech mobile networking trailer to extend networking out into the field during disaster response.
PISA Year 6 Annual Report
Project 3: Policy-Driven Information Sharing (PISA)
Project Summary
The objective of PISA is to understand data sharing and privacy policies of organizations and
individuals, and to devise scalable IT solutions to represent and enforce such policies to enable
seamless information sharing across all entities involved in a disaster. We are working to
design, develop, and evaluate a flexible, customizable, dynamic, robust, scalable, policy-driven
architecture for information sharing that ensures the right information flows to the right person at
the right time with minimal manual human intervention and automated enforcement of
information-sharing policies, all in the context of a particular disaster scenario: a derailment with
chemical spill, fire, and threat of explosion in Champaign.
Activities and Findings
During year 6, we integrated our Simple Authentication for the Web (SAW) approach with RESCUE’s
friends and family reunification portal to demonstrate a new mechanism for easily and safely
sharing personal information on the portal, so that only acquaintances are able to access certain
information. For use with the friends and family reunification portal, we also studied how to
improve query performance of fuzzy text search using list-compression techniques.
Simple Authentication for the Web (SAW) is our user-friendly alternative that eliminates
passwords and their associated management headaches by leveraging popular messaging
services, including email, text messages, pagers, and instant messaging. SAW (i) removes the
setup and management costs of passwords at sites that use email-based password reset; (ii)
provides single sign-on without a specialized identity provider; (iii) thwarts passive attacks and
raises the bar for active attacks; (iv) enables easy, secure sharing and collaboration without
passwords; (v) provides intuitive delegation and revocation of authority; and (vi) facilitates client-side auditing of interactions. SAW can potentially be used to simplify web logins at all web sites that currently use email to reset passwords. Additional server-side support can be used to
integrate SAW with web technology (blogs, wikis, web servers) and browser toolbars for Firefox
and Internet Explorer. We have also shown how a user can demonstrate ownership of an email
address without allowing another party (such as a phishing web site) to learn the user’s
password or to conduct a dictionary attack to learn the user’s password.
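As an illustration of the style of protocol involved, the following sketch shows a single-use login token split between the browser session and the user's inbox, so that completing the login proves control of the email address. This is a hypothetical Python sketch of the general idea; the names, structure, and details are ours, not the actual SAW protocol.

```python
import hmac
import secrets
import time

class SawSketch:
    """Illustrative sketch of SAW-style email authentication (not the real protocol)."""

    TTL = 300  # seconds a pending login stays valid

    def __init__(self):
        # token_id -> (browser_half, emailed_half, expiry)
        self.pending = {}

    def begin_login(self, email):
        """Start a login attempt; in a real deployment the emailed half
        would be delivered to the user's inbox, not returned here."""
        token_id = secrets.token_hex(8)
        browser_half = secrets.token_hex(16)   # held by the browser session
        emailed_half = secrets.token_hex(16)   # would be sent to `email`
        self.pending[token_id] = (browser_half, emailed_half, time.time() + self.TTL)
        return token_id, browser_half, emailed_half

    def complete_login(self, token_id, browser_half, emailed_half):
        """Succeed only if both halves match and the token is fresh."""
        entry = self.pending.pop(token_id, None)  # single use
        if entry is None:
            return False
        exp_browser, exp_email, expiry = entry
        return (time.time() < expiry
                and hmac.compare_digest(browser_half, exp_browser)
                and hmac.compare_digest(emailed_half, exp_email))
```

Because the token is single-use and short-lived, replaying an intercepted email link after the first use fails, which is the property that raises the bar for active attacks.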
With SAW, the identities of those authorized to gain access must be known in advance. In some
situations, only the attributes of those authorized to gain access to a resource are known in
advance – e.g., fire chief, police chief, city manager. In such a situation, we can avoid the
management headaches and insecurity associated with accounts and passwords by adopting
trust negotiation, a novel approach to authorization in open distributed systems. Under trust
negotiation, every resource in the open system is protected by a policy describing the attributes
of those authorized for access. At run time, users present digital credentials to prove that they
possess the required attributes.
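The iterative disclosure at the heart of trust negotiation can be illustrated with a small sketch using an eager disclosure strategy: each party releases any credential whose release policy is satisfied by what the other party has disclosed so far, until the resource's policy is met or no progress is possible. This is an illustrative Python sketch under simplified assumptions (policies as credential sets), not TrustBuilder's implementation.

```python
def eager_negotiation(resource_policy, client_policies, server_policies):
    """Eager trust-negotiation sketch.

    resource_policy: set of client credentials the server requires for access.
    client_policies / server_policies: map each party's credential name to the
    set of opposing credentials that must be disclosed before it is released.
    Returns True if trust (access) can be established.
    """
    disclosed_client, disclosed_server = set(), set()
    changed = True
    while changed:
        changed = False
        # Client releases any credential whose release policy is now satisfied.
        for cred, needs in client_policies.items():
            if cred not in disclosed_client and needs <= disclosed_server:
                disclosed_client.add(cred)
                changed = True
        # Server does the same, symmetrically.
        for cred, needs in server_policies.items():
            if cred not in disclosed_server and needs <= disclosed_client:
                disclosed_server.add(cred)
                changed = True
        if resource_policy <= disclosed_client:
            return True
    return resource_policy <= disclosed_client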
During the past year, we concentrated our efforts on Wireless Authentication using Remote
Passwords (WARP). Current single sign-on techniques, including our own SAW, require a user
to directly contact a third party during authentication. These approaches are unsuitable for
wireless access, since the user does not have the network access necessary to contact a third
party. WARP is a new in-band protocol that allows a user to prove to a wireless access point
that she knows her password, without the access point gaining access to her password or to
data that can be used to launch an off-line attack on the password. WARP has the potential to
be used beyond wireless access protocols, as well.
To demonstrate the potential of WARP, we created an advanced authentication prototype that
allows a user to demonstrate ownership of an email address without disclosing enough
information to an attacker (such as a phishing web site) for the attacker to receive the user’s
password or to conduct a dictionary attack to learn the user’s password. We have developed
one approach that strengthens existing client/server authentications on the web. A second
approach serves as a single sign-on mechanism that allows the user to prove that she knows
her password at a third party, such as her email provider, without leaking information to an
attacker. This second approach works for web logins as well as wireless access.
Products and Contributions
Unless otherwise mentioned, each of these software packages is available at
:
TrustBuilder2
A framework for trust negotiation, discussed above. Available from
Hidden Credentials
A credentialing system for protecting credentials, policies, and resource requests. Hidden
credentials allow a service provider to send an encrypted message to a user in such a way that
the user can only access the information with the proper credentials. Similarly, users can
encrypt sensitive information disclosed to a service provider in the request for service. Policy
concealment is accomplished through a secret splitting scheme that only leaks the parts of the
policy that are satisfied. Hidden credentials may have relevance in crises involving ultra-sensitive resources. They may also be able to play a role in situations where organizations are
extremely reluctant to open up their systems to outsiders, especially when the information can
be abused before an emergency even occurs. We have observed on the UCI campus that some
buildings have lock boxes that are available to emergency personnel during a crisis. The
management of physical keys is a significant problem. Hidden credentials have the potential to
support digital lockboxes that store critical data to be used in a crisis. The private key used to
access this information may never have to be issued until the crisis occurs, limiting the risk of
unauthorized access beforehand.
LogCrypt
A tamper-evident log file system based on hash chaining. This system provides a service
similar to TripWire, except that it is targeted at log files that are still being modified. Often, an
attacker breaks into a system and deletes the evidence of the break-in from the audit logs. The
goal of LogCrypt is to make it possible to detect an unauthorized deletion or modification to a
log file. Previous systems supporting this feature have incorporated symmetric encryption and
an HMAC. LogCrypt also supports a public key variant that allows anyone to verify the log file.
This means that the verifier does not need to be trusted. For the public key variant, if the original
private key used to create the file is deleted, then it is impossible for anyone, even system
administrators, to go back and modify the contents of the log file without being detected. During
this past year, we completed experiments to measure the relative performance of available
public key algorithms to demonstrate that a public key variant is practical. This variant has
particular relevance in circumstances where the public trusts government authorities to behave
correctly, and also benefits authorities by giving them a stronger basis for defending against
claims of misbehavior. This technology may allow more secure auditing during a crisis.
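The hash-chaining idea can be sketched as follows: each entry's authentication code covers both the entry and the previous entry's code, so deleting or altering any entry breaks verification of everything after it. This is an illustrative Python sketch using a symmetric HMAC chain, not the LogCrypt implementation; the public-key variant is omitted.

```python
import hashlib
import hmac

def append_entry(chain, key, message):
    """Append a log entry whose MAC covers the previous entry's MAC (hash chaining)."""
    prev_mac = chain[-1][1] if chain else b"\x00" * 32
    mac = hmac.new(key, prev_mac + message.encode(), hashlib.sha256).digest()
    chain.append((message, mac))

def verify_chain(chain, key):
    """Recompute the chain from the start; any tampering breaks the sequence."""
    prev_mac = b"\x00" * 32
    for message, mac in chain:
        expected = hmac.new(key, prev_mac + message.encode(), hashlib.sha256).digest()
        if not hmac.compare_digest(mac, expected):
            return False
        prev_mac = mac
    return True
```

In the public-key variant described above, the MAC would be replaced by a digital signature, so that any party holding the public key can verify the chain without being trusted with a secret.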
Nym
A practical pseudonymity system for anonymous networks. Nym is an extremely simple way to allow
pseudonymous access to Internet services via anonymizing networks like Tor, without losing the
ability to limit vandalism using popular techniques such as blocking owners of offending IP or
email addresses. Nym uses a very straightforward application of blind signatures to create a
pseudonymity system with extremely low barriers to adoption. Clients use an entirely browser-based application to pseudonymously obtain a blinded token which can be anonymously
exchanged for an ordinary TLS client certificate. We designed and implemented a JavaScript
application and the necessary patch to use client certificates in the popular web application
MediaWiki, which powers the popular free encyclopedia Wikipedia. Thus, Nym is a complete
solution, able to be deployed with a bare minimum of time and infrastructure support.
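The blind-signature step at the heart of Nym can be illustrated with textbook RSA: the client blinds a token with a random factor, the issuer signs it without seeing the token, and the client unblinds the result into a valid signature. The toy parameters below are for exposition only; Nym's deployed implementation differs.

```python
# Textbook RSA blind signature (toy parameters; illustration only, not secure).
p, q = 61, 53
n = p * q                       # RSA modulus
e = 17                          # public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent

def blind(m, r):
    """Client blinds token m with random r (gcd(r, n) == 1)."""
    return (m * pow(r, e, n)) % n

def sign(blinded):
    """Issuer signs the blinded value; it never learns m."""
    return pow(blinded, d, n)

def unblind(s_blinded, r):
    """Client strips the blinding factor, yielding a signature on m."""
    return (s_blinded * pow(r, -1, n)) % n

def verify(m, s):
    """Anyone can check the signature against the public key."""
    return pow(s, e, n) == m % n
```

Because the issuer only ever sees the blinded value, it cannot later link the signed token to the pseudonymous certificate it is exchanged for, which is what makes IP- or email-based rate limiting compatible with anonymity.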
Thor Credential Repository
Thor is a repository for storing and managing digital credentials, trusted root keys, passwords,
and policies that is suitable for mobile environments. A user can download the security
information that a device needs to perform sensitive transactions. The goals are ease of use
and robustness.
SACRED
An implementation of IETF SACRED (Securely Available Credentials) protocol.
SAW
Simple Authentication for the Web. Discussed above.
Friends and Family Reunification Portal:
and
At the latter URL, the reunification portal has been incorporated into the Disaster Portal
for the City of Ontario.
Clouseau Policy Compliance Checker
To help make trust negotiation practical for use in situations such as disaster response, we
designed, built, evaluated, and released the Clouseau policy compliance checker, which uses a
novel approach to determine whether a set of credentials satisfies an authorization policy. That
is, given some authorization policy p and a set C of credentials, determine all unique minimal
subsets of C that can be used to satisfy p. Finding all such satisfying sets of credentials is
important, as it enables the design of trust establishment strategies that can be guaranteed to
be complete: that is, they will establish trust if at all possible. Previous solutions to this problem
have relied on theorem provers, which are quite slow in practice. We have reformulated the
policy compliance problem as a pattern-matching problem and embodied the resulting solution
in Clouseau, which is roughly ten times faster than a traditional theorem prover. We have also
shown that existing policy languages can be compiled into the intermediate policy language that
Clouseau uses, so that Clouseau is a general solution to this important problem.
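The compliance-checking problem can be stated concretely with a brute-force sketch: enumerate candidate subsets in order of increasing size and keep those that satisfy the policy and contain no smaller satisfying set. This illustrative Python captures the problem definition only; Clouseau itself gains its speed from a compiled pattern-matching approach, not enumeration.

```python
from itertools import combinations

def minimal_satisfying_sets(credentials, policy):
    """Return all unique minimal subsets of `credentials` satisfying `policy`.

    `policy` is a predicate over a frozenset of credentials. Iterating by
    increasing subset size guarantees that every satisfying set found has no
    smaller satisfying subset, i.e., it is minimal.
    """
    found = []
    for size in range(len(credentials) + 1):
        for subset in combinations(credentials, size):
            s = frozenset(subset)
            # Skip supersets of an already-found (hence smaller) satisfying set.
            if policy(s) and not any(f <= s for f in found):
                found.append(s)
    return found
```

For instance, under a policy "fire chief, or police chief together with city ID", the checker returns exactly those two minimal credential combinations, which is what allows a negotiation strategy to explore every possible path to establishing trust.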
PISA Year 5 Annual Report
Project 3: Policy-Driven Information Sharing (PISA)
Project Summary
The objective of PISA is to understand data sharing and privacy policies of organizations and
individuals, and to devise scalable IT solutions to represent and enforce such policies to enable
seamless information sharing across all entities involved in a disaster. We are working to
design, develop, and evaluate a flexible, customizable, dynamic, robust, scalable, policy-driven
architecture for information sharing that ensures the right information flows to the right person at
the right time with minimal manual human intervention and automated enforcement of
information-sharing policies, all in the context of a particular disaster scenario: a derailment with
chemical spill, fire, and threat of explosion in Champaign.
Activities and Findings
During this year, we addressed technical and sociological problems that were identified through
the derailment crisis scenario that is the focal point for PISA. As discussed below, our main
efforts during the past year were 1) continuing analysis of the sociology focus groups in
Champaign; 2) a released version of the TrustBuilder2 software for runtime trust establishment;
3) work to provide user-friendly authorization facilities for crisis victims and their friends and
family, and 4) work to provide information integration facilities that can amalgamate friends-and-family notices taken from grass-roots and government-sponsored family reunification web sites.
We describe each of these in more detail below.
In August 2006, RESCUE sociologists facilitated three focus groups for 28 first responders and
other stakeholders in Champaign, exploring how the community’s public safety and emergency
management organizations would interact and communicate using technology in response to
the derailment with chemical spill scenario. From the data collected, several key observations
have been identified, including challenges and problems the community faces in this scenario,
and technology solutions and suggestions. Analysis of the results of the focus groups is
ongoing.
During the past year, we crawled three more autonomous sites and collected 46,000 more
records. Currently the system has 76,000 records about missing people. One unique feature of
the interface is that it can support interactive, fuzzy search. That is, the system can do search
on the fly as the user types in more keywords. The system can also find records that approximately match the user’s keywords, tolerating minor differences. This feature is especially important
since there are inconsistencies in crawled records from the Web sites, and the user may have
limited knowledge about the missing people they are looking for. We are also adding an
authorization mechanism to the system so that we can keep track of the users who submit
information about missing people. In addition, we are adding a feature that displays person
information on a map using Google Maps.
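The approximate-matching behavior can be sketched with a simple edit-distance filter over record words. This is illustrative Python only; the deployed system uses indexing and compression techniques for interactive as-you-type speed that are not shown here.

```python
def edit_distance(a, b):
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def fuzzy_search(records, keyword, max_edits=1):
    """Return records containing a word within `max_edits` of the keyword."""
    keyword = keyword.lower()
    return [r for r in records
            if any(edit_distance(w, keyword) <= max_edits
                   for w in r.lower().split())]
```

This tolerance for small spelling differences is what lets a query like "Jon" retrieve records crawled as "John", despite the inconsistencies across source web sites.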
Products and Contributions
TrustBuilder2 - We have released the TrustBuilder2 runtime system for trust negotiation
under a BSD-style open source license. Since its release, it has been downloaded over 170
times and is currently being used by several research laboratories. Last year, we designed,
built, and evaluated an efficient solution to the trust negotiation policy compliance checker
problem, called Clouseau, and incorporated it into TrustBuilder 2. Given some authorization
policy p and a set C of credentials, Clouseau quickly determines all unique minimal subsets
of C that satisfy p. In the last year, we have developed compilation procedures that
translate policies written in the RT0, RT1, and WS-SecurityPolicy languages into a format
suitable for analysis by Clouseau. We have also proven the correctness and completeness
of these compilation procedures, which shows that Clouseau can safely and efficiently
analyze policies written in existing languages.
User-friendly Authorization - In response to confidentiality concerns identified in the
derailment scenario for family and friends reunification, we worked to develop lightweight
approaches for establishing trust across security domains. Victims need a way to ensure
that messages they post are only read by the intended family members and friends, and
vice versa. Obviously logins, passwords, PKI infrastructure, and other heavyweight
authentication solutions are not practical in this context. SAW is our user-friendly alternative
that eliminates passwords and their associated management headaches by leveraging
popular messaging services, including email, text messages, pagers, and instant
messaging. Additional server-side support integrates SAW with web technology (blogs,
wikis, web servers) and browser toolbars for Firefox and Internet Explorer.
WARP - During the past year, we concentrated our efforts on Wireless Authentication using
Remote Passwords (WARP). Current single sign-on techniques, including our own Simple
Authentication for the Web, require a user to directly contact a third party during
authentication. These approaches are unsuitable for wireless access since the user does
not have the network access necessary to contact a third party. WARP is a new in-band
protocol that allows a user to prove to a wireless access point that he/she knows his/her
password without the access point gaining access to his/her password or data that can be
used to launch an off-line attack on the password. WARP has the potential to be used
beyond wireless access protocols, as well.
Data Integration Facilities - Data from friends-and-family reunification web sites are
extremely heterogeneous in terms of their structures, representations, file formats, and page
layouts. A significant amount of effort is needed in order to bring the data into a structured
database. Further, there are many missing values in the extracted data from these sites.
These missing values make it harder to match queries to data. Due to the noisiness of the
information, an integrated portal for friends-and-family web sites must support approximate
query answering. At the start of this reporting year, we already had a working demo of our
information integration technology that addresses these problems, available at
, with data from 16,000 missing person reports taken from three
web sites.
Future Research Directions
Effective crisis response requires the ability to nimbly reconfigure the way that functions are
accomplished. One aspect of this is the ability to quickly reassign powers and privileges, i.e., to
change the authorizations in a system. It can be very hard to understand the short- and long-term ramifications of such changes. During the RESCUE project, we developed techniques to
compile high-level authorization policies into a form that can be checked very efficiently at run
time. We would like to leverage this approach to create a framework that enables one to quickly
grasp the ramifications of changes in authorization policies. Such a policy analysis framework
will have utility both during and after a crisis, by allowing post-mortem analysis to quickly zero in
on actions that would not normally have been permitted. Such a framework would also be
extremely helpful in any large organization during normal daily operations, as authorization
policies are changed.
PISA Year 4 Annual Report
Project 3: Policy-Driven Information Sharing (PISA)
The objectives of PISA are to understand data sharing and privacy policies of organizations and
individuals and to devise scalable IT solutions to represent and enforce such policies, enabling
seamless information sharing across all entities involved in a disaster. We are working to design,
develop, and evaluate a flexible, customizable, dynamic, robust, scalable, policy-driven
architecture for information sharing that ensures that the right information flows to the right person
at the right time with minimal manual human intervention and automated enforcement of
information-sharing policies, all in the context of a particular disaster scenario: a derailment with
chemical spill, fire, and threat of explosion in Champaign.
PISA’s deliverables and milestones for Year 4 include:
1. Continue trust negotiation scalability and availability work.
2. Develop lightweight authentication approach.
3. Develop family-and-friends reunification portal
4. Conduct focus groups in Champaign.
Activities and Findings. During the past year, we addressed technical and sociological problems
that were identified through the derailment crisis scenario that is the focal point for PISA. As
discussed below, our main efforts during the past year were: (1) sociology focus groups in
Champaign; (2) a completed version of TrustBuilder2 trust establishment software that will be
integrated with DHS’s Disaster Management Interoperability Services (DMIS) software to
demonstrate how flexible policy-driven authorization services can be incorporated into a disaster
information broker; and new work that addresses information sharing needs in family
reunification (a need identified in the derailment scenario) by (3) providing user-friendly
authorization facilities for crisis victims and their friends and family, and (4) by providing
information integration facilities that can amalgamate friends-and-family notices taken from grassroots
and government-sponsored family reunification web sites. We describe each of these in
more detail below.
(1) In August 2006, RESCUE sociologists facilitated three focus groups for 28 first responders
and other stakeholders in Champaign, exploring how the community’s public safety and
emergency management organizations would interact and communicate using technology in
response to the derailment with chemical spill scenario. From the data collected, several key
observations have been identified, including challenges and problems the community faces in this
scenario, and technology solutions and suggestions.
(2) We decided to use DHS’s DMIS as the underlying information-sharing infrastructure for PISA,
with policy management facilities layered atop DMIS. Given that SAMI’s analysis facilities are not
particularly relevant for the derailment scenario, we will not integrate SAMI with DMIS. We
determined that DMIS has rigid authorization requirements that may limit its effectiveness in a
crisis, so we have chosen to concentrate on the addition of flexible authorization facilities to
DMIS, as described below.
We re-architected and rebuilt TrustBuilder, our runtime system for authorization in open systems.
TrustBuilder 2 is more flexible, modular, extensible, tunable, and robust against attack, and we
are now integrating it with DMIS. We also designed, built, and evaluated an efficient solution to
the trust negotiation policy compliance checker problem, and incorporated it into TrustBuilder 2.
That is, given some authorization policy p and a set C of credentials, we quickly determine all
unique minimal subsets of C that satisfy p.
(3) In response to confidentiality concerns identified in the derailment scenario for family and
friends reunification, we worked to develop lightweight approaches for establishing trust across
security domains. Victims need a way to ensure that messages they post are only read by the
intended family members and friends, and vice versa. Obviously logins, passwords, PKI
infrastructure, and other heavyweight authentication solutions are not practical in this context.
SAW is our user-friendly alternative that eliminates passwords and their associated management
headaches by leveraging popular messaging services, including email, text messages, pagers,
and instant messaging. Additional server-side support integrates SAW with web technology
(blogs, wikis, web servers) and browser toolbars for Firefox and Internet Explorer.
(4) Data from friends-and-family reunification web sites are extremely heterogeneous in terms of
their structures, representations, file formats, and page layouts. A significant amount of effort is
needed in order to bring the data into a structured database. Further, there are many missing
values in the extracted data from these sites. These missing values make it harder to match
queries to data. Due to the noisiness of the information, an integrated portal for friends-and-family web sites must support approximate query answering. We have worked on these and
related issues in the past year; the resulting demo of our information integration technology is
available at , with data from 16,000 missing person reports taken from three web
sites.
PISA’s Deliverables and Plans for Year 5 include completing TrustBuilder 2 scalability and
availability work; demonstrating and benchmarking the derailment scenario that integrates DMIS,
TrustBuilder 2, and selected other RESCUE artifacts, with acceptable scalability; showing
robustness under attack as well as under “normal” operating conditions; and demonstrating
a friends-and-family reunification portal that automatically integrates information crawled from
missing-persons sites and incorporates lightweight user-friendly techniques for authorization.
We also plan to include the Family Reunification Portal in SAMI’s Disaster Portal.
PISA Year 3 Annual Report
Project 3: Policy-Driven Information Sharing (PISA)
Our vision and long-term goal in PISA is to design a scalable, flexible, customizable, dynamic, robust system that enables seamless policy-based information sharing across all entities involved in a disaster. Policies in such a system determine what, when, and where information is collected, and with whom and how the collected information can be shared. Policies also specify what processes can be applied to the information and the obligations that result from performing these actions on the information (e.g., logging of access). We recognize that the above vision is very broad, and from the start, we planned to reduce our scope by confining PISA to the aspects of the system needed for the crisis scenarios developed in Champaign.
During the past year, we worked with over a dozen organizations in the city of Champaign to develop a detailed crisis scenario involving a derailment with chemical spill, fire, and risk of explosion in Champaign. The resulting 35-page scenario serves several purposes for RESCUE: (1) a scenario for discussion in focus groups run by RESCUE sociologists; (2) a chance to identify opportunities for RESCUE technology insertion and to integrate RESCUE artifacts in response to a single realistic crisis scenario; (3) a motivating scenario for our planned work on policy-based information sharing; and (4) a source of a previously unrecognized need for privacy research (discussed in Project 5 on Privacy). We describe progress and plans for each of these in more detail below.
Sociology focus groups. Three focus groups led by RESCUE sociologists will take place in Champaign on July 31 and August 1 of this year. The discussion of the derailment scenario in these groups will lead to valuable insights into the management of disasters in mid-size cities.
Opportunities for RESCUE technology insertion. The following RESCUE research results and artifacts can be helpful in responding to the derailment scenario: how to disseminate information to the public; next-generation 911 services; information integration and privacy for family-and-friends welfare inquiry web sites; extreme networking; MetaSIM; flexible authorization services for information sharing; event analysis; InLET loss estimation; ManPacks (based on CalMesh) and EvacPacks, including data analysis and reduction facilities. During the coming year, we will work to include these technologies in an extended demonstration focusing on the derailment scenario.
Policy-based information sharing. We have analyzed the derailment scenario and identified the real-world policies for inter-organizational information sharing in the scenario. Based on our analysis, we decided to focus PISA research along the following directions – significant progress was made along each of the following directions in Year 3.
• Run-time scalability and robustness. The goal is to ensure that the authorization server and policy engine is not the bottleneck of the system or an attractive point for attack.
• The theoretical underpinning of policy-based information sharing: i.e., an approach to represent and reason about authorization, obligations, and release policies in a system with many peers.
• Trustworthy audit trails. The goal is to ensure that insiders cannot take advantage of a crisis to access information inappropriately, then cover up their tracks.
The research described above will culminate in a PISA artifact that provides flexible, policy-based information sharing capabilities. The PISA artifact is being designed on top of the US government’s Disaster Management Information System (DMIS), which will serve as the information broker substrate for our system. The City of Champaign has been experimenting with DMIS; DMIS is cumbersome to set up, and its authorization facilities are rigid enough to limit its use in a crisis. The PISA artifact will extend DMIS to support policies for authorization, obligations (audit, notification), and release of information. It will also support enforcement of multiple kinds of authentication and authorization policies (including trust negotiation) rather than just accounts and passwords. Furthermore, the artifact will integrate our research results to achieve scalable, robust policy management that supports trustworthy audit facilities. In Year 3 we initiated the design of the basic architecture for the PISA artifact. Our focus in the next year will be to both extend the design and incorporate extensions to DMIS to realize the PISA artifact.
Information Sharing Year 2 Annual Report
INFORMATION SHARING
This thrust area addresses related research activities that facilitate seamless sharing of
information among decision-makers across organizational boundaries. Special emphasis
is placed on seamless information-sharing and collective decision-making across highly
dynamic emergent virtual organizations. The research can be classified into two parts:
social science research (S1) provides context, and S2-S4 relate to IT research. Table 2-4
presents a listing of active research projects and tasks in the area of Information
Sharing.
Table 2-4 Information Sharing - Project Areas, Tasks and Investigators
S1: Understanding Emergent Networks
  S1.1 Emergent Multi-Organizational Networks (CU/ K. Tierney, J. Sutton)
  S1.2 Interviews with City of Los Angeles (CU/ K. Tierney, J. Sutton)
  S1.3 Network Analysis of the Incident Command System (ICS) (UCI/ C. Butts)

S2: Trust Management in Crisis Networks
  S2.1 TrustBuilder (BYU/ K. Seamons & UIUC/ M. Winslett)
  S2.2 Hidden Credentials (BYU/ K. Seamons)
  S2.3 Trust Credentials (BYU/ K. Seamons)
  S2.4 Trust Negotiation Facilities for Legacy Clients and Services (UIUC/ M. Winslett)
  S2.5 Support for Friendly Third Parties during Trust Negotiation (UIUC/ M. Winslett)

S3: Security and Privacy Concerns in Information Sharing
  S3.1 Secure XML Publishing (UCI/ C. Li)
  S3.2 Authentication and Access Control (UCI/ G. Tsudik)
  S3.3 LogCrypt (BYU/ K. Seamons)
  S3.4 Phishing Warden (BYU/ K. Seamons)
  S3.5 Secure Object Store (UCI/ S. Mehrotra)

S4: Mediation, Integration, Querying, Filtering
  S4.1 Peer-Based Information Sharing (UCI/ C. Li)
  S4.2 Similarity Search on Peer-Based Sharing Systems (UCI/ C. Li)
  S4.3 GIS-Based Search (UCI/ C. Li)
Major Activities and Findings:
S1 Understanding Emergent Networks
S1.1 Emergent Multi-Organizational Networks (EMONs) (CU/ K. Tierney, J. Sutton)
Research has focused on developing and preparing a large data set regarding EMONs
in the WTC disaster for RESCUE use. UCI (Butts) has been collaborating on data-
representation issues, and will be working with UCB on subsequent analysis. UCI
(Butts) has also initiated work on a network analysis of the Incident Command System,
using the MetaMatrix framework for organizational analysis; this work also includes the
analysis of EMONs drawn from search and rescue activities associated with six crisis
events. These data and associated analyses contribute to the RESCUE project's
ongoing objective of assessing (and ultimately improving) DEVO structure within crisis
response.
S1.2 Interviews with City of Los Angeles (CU/ K. Tierney, J. Sutton)
We completed 37 interviews with personnel from six departments in the City of Los
Angeles whose general managers are representatives on the Emergency Operations
Board. These departments included the Emergency Preparedness Department,
Information Technology Agency, Los Angeles Police Department, Los Angeles Fire
Department, Recreation and Parks, and the Board of Public Works – Bureau of
Engineers. Additional interviews are being scheduled with personnel from the
Department of Transportation, Department of Water and Power, General Services,
Airport, and Harbor. UCSD (Pasco) collaborated with UCB on interviews with LAPD,
focusing on issues relating to interoperability and intra-departmental communication
systems. These interviews have led to preliminary findings regarding current
technology use within the city of Los Angeles for information/data gathering, sharing,
analysis and dissemination to the public, as well as security concerns and protocols for
data sharing between agencies.
S1.3 Network Analysis of the Incident Command System (ICS) (UCI/ Butts)
Practitioner documentation on the ICS was obtained from FEMA training manuals and
other sources during Year 2. This documentation was hand-coded to obtain a census of
standard vertex classes (positions, tasks, resources, etc.) for a typical ICS structure.
This process was complicated by considerable disagreement among the primary
sources; for this reason, we proceeded with a relatively basic list of
organizational elements for an initial analysis. Based on this list,
adjacency matrices were constructed based on practitioner accounts (e.g., task
assignment, authority/reporting relations, task/resource dependence). We accomplished
all of our research objectives, which were identification of relevant documentation,
identification of MetaMatrix vertex sets, and construction of adjacency matrices. Further
validation of these relationships by domain experts will be conducted in Year 3, prior to
analysis of the system as a whole.
S2 TRUST MANAGEMENT IN CRISIS NETWORKS
S2.1 TrustBuilder (BYU/ K. Seamons and UIUC/ M. Winslett)
These researchers have continued work on the development of TrustBuilder, a prototype
system supporting trust negotiation. Two aspects of the current work on
TrustBuilder at BYU are relevant to helpful third parties. The first is the design and
implementation of surrogate trust negotiation as one approach for someone to store her
credentials on her home computer and have a service negotiate for access to her
credentials at run-time. The second is based on an examination of existing approaches
to storing credentials in local or remote repositories. BYU identified the advantages and
disadvantages of each approach, and then proposed a hybrid model that leverages the
best of both worlds. The result is Thor, a hybrid online repository for storing and
managing credentials in a mobile environment. The system incorporates the IETF
SACRED standard. As part of this effort, BYU collaborated with NCSA to implement the
first freely available SACRED server. The long-term goal of this work is an architecture
that emergency personnel will find easy to use to securely access sensitive data during
a crisis.
S2.2 Hidden Credentials (BYU/ K. Seamons)
We introduced hidden credentials as a revolutionary approach to trust negotiation where
sensitive information is encrypted in such a way that a recipient could only access it with
the proper credentials. Hidden credentials were built using identity-based encryption
(IBE). They permitted traditional trust negotiations that required several rounds of
communication to be accomplished in a single message. A server could send a resource
to an unknown client encrypted according to a complex policy, requiring the user to have
a certain set of credentials in order to access the resource. In addition to reducing the
cost of a negotiation, hidden credentials also provide greater privacy protection because
a credential owner can now access sensitive resources without ever revealing an
ultra-sensitive credential. More recently, BYU has enhanced hidden credentials to offer
significantly improved performance when using Boneh/Franklin Identity Based
Encryption and to offer improved concealment of policies from unsatisfied recipients. We
accomplished policy concealment by using a secret-splitting scheme that only leaks
those parts of a policy that the recipient satisfies.
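The policy-concealment idea rests on standard secret splitting. The sketch below is a minimal Python illustration of the plain XOR-splitting primitive only, not BYU's full scheme, in which each share would additionally be encrypted under a credential via IBE; it shows that a recipient holding fewer than all shares for an AND policy recovers nothing:

```python
import os
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_secret(secret: bytes, n: int) -> list:
    """XOR-split a secret into n shares; all n are needed to recover it."""
    shares = [os.urandom(len(secret)) for _ in range(n - 1)]
    return shares + [reduce(xor, shares, secret)]

def recover(shares) -> bytes:
    return reduce(xor, shares)

key = b"session-key-0123"
# An AND policy over three credentials: conceptually, each share is
# wrapped so that only the holder of one credential can open it.
shares = split_secret(key, 3)
assert recover(shares) == key       # all clauses of the policy satisfied
assert recover(shares[:2]) != key   # a partial set of shares reveals nothing
```

Because each unopened share is indistinguishable from random bytes, the only policy information leaked is which clauses the recipient was able to satisfy, matching the behavior described above.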
S2.3 Trust Credentials (BYU/ K. Seamons)
Our research explored ways to incorporate more contextual knowledge in a trust
negotiation so that an agent can make better decisions about what credentials are
needed to establish trust. The goal was to bring improved privacy and performance to
trust negotiation. With this extra knowledge, phishing attacks can be detected and
thwarted. Also, trust negotiation could be streamlined if the participants can eagerly push
credentials that they anticipate will be highly likely to establish trust. We utilize ontologies
during trust negotiation as a mechanism to describe credentials that are relevant to a
certain context. The importance to emergency response is that a priori knowledge about
crises can be captured in ontologies and automatically incorporated into a system at
runtime to modify behavior during an emergency.
S2.4 Trust Negotiation Facilities for Legacy Clients and Services
(UIUC/ Winslett)
We have been experimenting with the use of trust negotiation technology in real-world
situations by cooperating with Jim Basney and Von Welch at NCSA, who are
responsible for certain aspects of the Grid Security Infrastructure. We have developed
an approach to making trust negotiation facilities available to applications on the Grid or
elsewhere, and embodied it in the Traust prototype. Traust provides clients with the
ability to acquire access tokens for networked resources dynamically at run-time. Traust
uses automated trust negotiation to support bilateral trust establishment, the discovery of
resource access control policies, and the protection of client and server privacy. The
Traust service has been designed in such a way as to support both loose integration
with existing “legacy” services and tighter integration with newer trust-aware resources.
A session between a client and a Traust server consists of five distinct stages. First, the
client generates a resource request and classifies its sensitivity level using a local
resource classifier. Next, the client carries out a trust negotiation with the Traust server
to ensure that the server can satisfy the disclosure policies the client has placed on her
sensitive resource request. This prevents inadvertent disclosure of sensitive resource
requests to untrusted servers. If this negotiation succeeds, the client discloses her
resource request. The server now uses trust negotiation to determine whether the client
is authorized to access the requested resource. The final stage of the Traust interaction
involves the server generating the access credentials needed for the client to access the
resource and disclosing these credentials to the client. The credential generation could
be as simple as a static credential lookup or as complicated as dynamic account
creation or interaction with MyProxy, Community Authorization Service, or Kerberos
servers. All communications between clients and the Traust server occur inside of a
TLS tunnel used to provide confidentiality and integrity for the session. It is important to
note that this tunnel does not provide any notion of authentication or authorization
control; the trust negotiation sessions serve this purpose.
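The five stages can be summarized as a single round-trip of checks. In the toy Python below, every name (`Party`, `negotiate`, the credential strings) is invented for illustration, and trust negotiation itself is reduced to a set-containment check rather than an actual negotiation protocol:

```python
class Party:
    """Holds the credentials a participant can disclose (illustrative only)."""
    def __init__(self, creds):
        self.creds = set(creds)

def negotiate(prover: Party, policy: set) -> bool:
    """Stand-in for trust negotiation: does the prover satisfy the policy?"""
    return policy <= prover.creds

def traust_session(client: Party, server: Party, resource: str):
    # Stage 1: the client classifies the request's sensitivity locally.
    sensitive = resource in {"medical-records"}
    # Stage 2: for sensitive requests, the client first negotiates trust in
    # the server, so the request itself is not leaked to an untrusted party.
    if sensitive and not negotiate(server, {"hospital-cert"}):
        return None
    # Stage 3: the client discloses the resource request.
    # Stage 4: the server negotiates to authorize the client.
    if not negotiate(client, {"responder-badge"}):
        return None
    # Stage 5: the server issues access credentials for the resource
    # (in Traust this ranges from a credential lookup to dynamic accounts).
    return f"token-for-{resource}"

client = Party({"responder-badge"})
server = Party({"hospital-cert"})
assert traust_session(client, server, "medical-records") == "token-for-medical-records"
```

In the real system each `negotiate` call is an iterative exchange of credentials and policies over the TLS tunnel; the sketch preserves only the ordering of the five stages.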
S2.5 Support for Friendly Third Parties during Trust Negotiation
(UIUC/ Winslett)
With current trust negotiation software, manual intervention is required to carry out
interactions with helpful third parties. During the past year, we have examined ways of
automating these interactions and of reasoning about the properties of the resulting
system. For example, an emergency analyst may want to ask, in advance, “During a
code 3 disaster, will all rescue dog handlers be able to access the same person’s
information service? During a code 3 disaster, who from outside a hospital, not
ordinarily authorized to look at medical records, may do so?”
To help automate the inclusion of third parties and provide a framework for answering
analysis questions, we developed PeerTrust2, a language based on distributed logic and
designed to solve distributed authorization problems. PeerTrust2 policies support
delegation, purpose, exposure control, proof hints and sticky policies. We worked with
European colleagues W. Nejdl and P. Bonatti to revamp and extend PeerTrust
semantics, resulting in PeerTrust2, and have shown how to use PeerTrust2 to write
simple yet expressive access control policies that involve helpful third parties.
PeerTrust2 can serve as a trust negotiation language, but our long-term intent is to use
the powerful new features in PeerTrust2 and add them to other policy languages such as
XACML so users of those languages can also take advantage of helpful third parties.
Two aspects of our current work in TrustBuilder are relevant to helpful
third parties. We have designed and implemented surrogate trust negotiation as one
approach for someone to store her credentials on her home computer and have a
service negotiate for access to her credentials at run-time. Next, we examined existing
approaches to storing credentials in local or remote repositories. We identified the
advantages and disadvantages of each approach, and then proposed a hybrid model
that leverages the best of both worlds. We designed Thor, the hybrid online repository
for storing and managing credentials in a mobile environment. Our long-term goal is an
architecture that emergency personnel will find easy to use to securely access sensitive
data during a crisis.
S3 SECURITY AND PRIVACY CONCERNS IN INFORMATION SHARING
S3.1 Secure XML Publishing (UCI/ C. Li)
This research involves protecting sensitive information when XML data is exchanged
between two business partners in order to meet precise security requirements. The
focus is on data-sharing applications where the owner specifies what information is
sensitive and should be protected. If a partial document is shared carelessly, users
at the partner organization can use common knowledge (e.g., “all patients in the same
ward have the same disease”) to infer additional data, causing leakage of sensitive
information. Our goal is to protect such information in the presence of data inference
with common knowledge. Common knowledge is represented as semantic XML
constraints. We formalized how users can infer data using three types of
well-known XML constraints. Interestingly, no matter what sequence of inferences users
follow, there is a unique, maximal document that contains all possible inferred
documents, even though different inference sequences may produce documents that
look different. We developed algorithms to find a partial document of a given XML
document without causing information leakage, while publishing as much data as
possible. We experimented with real data sets to show the effect of inference on data
security, and demonstrated that the proposed techniques can prevent such leakage from
happening.
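The role of the unique maximal document can be seen in a toy fixed-point computation, assuming a single constraint of the kind quoted above (patient names and attributes invented here): repeated inference converges to one closure regardless of the order in which facts are derived.

```python
def inference_closure(doc):
    """Apply 'patients in the same ward share a disease' until fixpoint.

    doc maps (patient, attribute) -> value; returns the maximal inferred doc.
    """
    doc = dict(doc)
    changed = True
    while changed:
        changed = False
        wards = {p: v for (p, a), v in doc.items() if a == "ward"}
        diseases = {p: v for (p, a), v in doc.items() if a == "disease"}
        for p, w in wards.items():
            for q, w2 in wards.items():
                if w == w2 and p in diseases and (q, "disease") not in doc:
                    doc[(q, "disease")] = diseases[p]
                    changed = True
    return doc

published = {("alice", "ward"): "W1", ("bob", "ward"): "W1",
             ("alice", "disease"): "flu"}
closure = inference_closure(published)
assert closure[("bob", "disease")] == "flu"   # leaked purely by inference
```

A safe publishing algorithm in the spirit of this work would withhold enough of the document (here, either Alice's disease or the ward co-membership) so that the closure contains no sensitive facts.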
S3.2 Authentication and Access Control (UCI/ Tsudik)
This research group developed novel techniques for secure and efficient processing of
authenticated query replies for queries against outsourced data. The work specifically
included the investigation of several new signature aggregation methods that greatly
speed up digital signature verification without sacrificing security.
We made several novel advancements in authentication and access-control techniques.
First, several new anonymous authentication protocols (secret handshake) were
constructed. These protocols are provably secure under well-known cryptographic
assumptions and offer efficiency superior to that of prior art. Second, techniques for
anonymous role-based communication are under development. Known also under the
name OSBE (Oblivious Signature-Based Envelopes), such techniques allow anyone to
anonymously communicate information to an entity who claims a certain role (e.g., an
FBI agent) without finding out whether the recipient is indeed in that role. Prior work is
limited to the RSA- and ID-based OSBEs. Third, we constructed (and proved security of)
two new secure group admission protocols: one based on ID-based cryptography and
one on threshold Schnorr signatures. We developed a full-blown prototype of Bouncer –
the group admission control toolkit. Bouncer allows fully distributed secure admission
based on thresholds. It includes five different cryptographic protocols that vary in their
levels of security, efficiency, and state requirements.
S3.3 LogCrypt (BYU/ Seamons)
Seamons has developed LogCrypt, which supports tamper-evident log files using hash
chaining. This system provides a service similar to TripWire, except that it targets log
files that are still being appended to. Often, an attacker breaks into a system and deletes
the evidence of the break-in from the audit log. The goal of LogCrypt is to make it
possible to detect any unauthorized deletion or modification of a log file. Previous
systems supporting
this feature have incorporated symmetric encryption and an HMAC. LogCrypt also
supports a public key variant that allows anyone to verify the log file, meaning that the
verifier does not need to be trusted. For the public key variant, if the original private key
used to create the file is deleted, then it is impossible for anyone, even system
administrators, to go back and modify the contents of the log file without being detected.
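A minimal hash-chained log in the spirit described above can be sketched as follows. This is a generic forward-secure MAC-chain construction, not LogCrypt's exact design: each entry is authenticated under the current key, and the key is then evolved by hashing and the old key discarded, so an attacker who compromises the machine later cannot recompute earlier tags.

```python
import hashlib
import hmac

def append(log, key, message):
    """Append a MACed entry and return the evolved key (old key is discarded)."""
    tag = hmac.new(key, message.encode(), hashlib.sha256).hexdigest()
    log.append((message, tag))
    return hashlib.sha256(key).digest()

def verify(log, initial_key):
    """Re-derive the key chain from the initial key and check every tag."""
    key = initial_key
    for message, tag in log:
        expect = hmac.new(key, message.encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(tag, expect):
            return False
        key = hashlib.sha256(key).digest()
    return True

log, initial = [], b"initial-secret"
key = initial
for entry in ["login root", "read /etc/passwd", "logout"]:
    key = append(log, key, entry)
assert verify(log, initial)
log[1] = ("read nothing", log[1][1])   # attacker rewrites an entry
assert not verify(log, initial)
```

Note that the verifier here must hold the initial secret, which is exactly the trusted-verifier limitation of the symmetric variant; LogCrypt's public-key variant replaces the MAC with signatures so that anyone can verify.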
S3.4 Phishing Warden (BYU/ Seamons)
Research in this group involves development of technology that employs trust
negotiation to prevent phishing attacks. One approach was to use a client-side filter to
dynamically detect when sensitive information is disclosed to a server and demand
suitable credentials from the server to authorize the disclosure. We developed a browser
extension that identifies personal information being disclosed to a stranger before any
downloaded code in a Java applet can obfuscate the data to bypass the filtering
software. Prior approaches to solving this problem have been vulnerable to this threat.
Phishing Warden is designed to establish trust in a web server before disclosing
sensitive information. A second approach was to redirect a server that wants personal
information to a broker that negotiates on behalf of the user to disclose the client’s
personal information.
S3.5 Secure Object Store (UCI/ Mehrotra)
Today’s applications are generating gargantuan amounts of information and data. Data
intensive applications such as radio telescopes and sensor applications in pervasive
environments produce constant streams of data. Even though storage costs have
dropped exponentially, the cost of managing stored data in a safe and fault-tolerant
manner has not dropped. A solution to the above challenge is storage outsourcing via a
service-oriented interface. Such a storage solution can significantly reduce operation
due to cost amortization of managing storage, including human cost. Other advantages
include mobile access to data and facilitation of data-sharing across organizational
boundaries.
There are many security challenges that need to be addressed for a service of this kind.
Since data is among the most valuable assets of organizations and individuals, data
owners want protection from both insider and outsider attacks before outsourcing their
data. Encryption offers a natural defense against such attacks. Our research focuses on
developing techniques that allow the service provider to offer query-management and
data-sharing services over encrypted data.
Secure sharing enables multiple interesting applications. One such application is
password sharing. Today, users reuse the same passwords across different sites,
including insecure sites that cannot be trusted, and phishing attacks are common as a
result. Now imagine that secure sharing is available. In this case, one can
build a service in which passwords can be stored at the server. Since the service
remembers the passwords, users can use extremely complex passwords which they do
not need to remember. We have built such a service and it has been operational for
some time now. The service allows password generation, storage on server, retrieval of
passwords, and also filling out passwords on the web.
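The architecture can be illustrated with client-side encryption, where the server stores only ciphertext and the master key never leaves the client. The sketch below is a toy: the SHA-256 XOR keystream is NOT a secure cipher and stands in for an authenticated cipher such as AES-GCM, and all names are invented here.

```python
import hashlib
import os

def keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    """Toy keystream from SHA-256 in counter mode (illustration only)."""
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(key: bytes, plaintext: bytes):
    nonce = os.urandom(16)
    ct = bytes(a ^ b for a, b in zip(plaintext, keystream(key, nonce, len(plaintext))))
    return nonce, ct

def decrypt(key: bytes, nonce: bytes, ct: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(ct, keystream(key, nonce, len(ct))))

# The server-side store sees only (nonce, ciphertext) pairs.
master = hashlib.sha256(b"user master passphrase").digest()
server_store = {}
server_store["bank.example.com"] = encrypt(master, b"G7#qzL-site-password")
assert decrypt(master, *server_store["bank.example.com"]) == b"G7#qzL-site-password"
```

Because decryption requires the client-held master key, a compromised or curious storage provider learns nothing about the stored passwords, which is the property the service described above depends on.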
S4 MEDIATION, INTEGRATION, QUERYING, FILTERING
S4.1 Peer-Based Information Sharing (UCI/ C. Li)
In a crisis-response situation, many participating organizations collaborate in the context
of a variety of tasks related to disaster mitigation (e.g., rescue and evacuation,
maintaining law and order). Seamless mechanisms for inter-organization information
sharing can revolutionize how such emergent collaborations are established, resulting in
dramatic improvements to crisis response. Such information may have been dynamically
collected and analyzed, or pre-existing in organizational knowledge and databases.
Challenges in information-sharing across different organizations arise due to frequent
structural and functional changes (e.g., expansion, extension) within organizations,
emergence of complex inter-organizational relationships, lack of centralized control, and
an element of surprise resulting in unexpected inter-organization relationships and
data needs.
To study how to support distributed data-sharing in crisis-response organizations, we
have developed a system called “RACCOON,” in which different sources can publish
and share their data. In the system, peers (data owners) are connected in an overlay
network, in which semantic mappings can be created between peers. Existing peers
can leave the system, and new peers can join the network. A user can search for
relations that are similar to a given relation. For such a search query, the system adopts
schema-mapping techniques to locate relevant sources, and creates source mappings
on the fly. The system provides a visualization tool to show the neighbors of each peer,
which makes it easier for users to browse peer contents and issue queries. The system
supports queries that request information from multiple peers. The system also allows a
query to be “expanded” by accessing other peers that have schema mappings with the
peers used in the query. In this way a user can retrieve as much information as possible
to answer a query.
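Query expansion over schema mappings amounts to a bounded traversal of the mapping graph. A small sketch, with peer names invented here:

```python
from collections import deque

# Toy overlay: each peer lists the peers it has schema mappings with.
mappings = {
    "fire-dept": ["city-gis"],
    "city-gis": ["fire-dept", "red-cross"],
    "red-cross": ["city-gis"],
}

def expand_query(start: str, hops: int) -> set:
    """Breadth-first search: peers reachable through mappings within `hops`."""
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        peer, d = frontier.popleft()
        if d == hops:
            continue
        for nbr in mappings.get(peer, ()):
            if nbr not in seen:
                seen.add(nbr)
                frontier.append((nbr, d + 1))
    return seen

assert expand_query("fire-dept", 2) == {"fire-dept", "city-gis", "red-cross"}
```

In RACCOON the edges additionally carry the mappings used to rewrite the query at each hop; the traversal above shows only how the set of contributing peers grows.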
S4.2 Similarity Search on Peer-Based Sharing Systems (UCI/ C. Li)
Data sharing in crisis situations can be supported using a peer-to-peer (P2P)
architecture, in which each data owner publishes its data and shares it with other
owners. The research studied how to support similarity queries in such a system.
Similarity queries ask for the most relevant objects in a P2P network, where relevance is
based on a predefined similarity function, and the user is interested in obtaining objects
with the highest relevance. Because retrieving all objects and computing the exact
answer over a large-scale network is impractical, we created a novel approximate
answering framework that computed an answer by visiting only a subset of network
peers. Users were presented with progressively refined answers consisting of the best
objects seen so far with continuously improving quality guarantees providing feedback
about the search’s progress. We developed statistical techniques to determine quality
guarantees in this framework and mechanisms to incorporate quality estimators into the
search process. We developed a framework to support progressive query-answering in
P2P systems and techniques to estimate answer qualities based on objects retrieved in
a search process. We conducted experiments to evaluate these techniques and
concluded that similarity queries can be supported effectively in P2P systems: by
accessing only a small number of peers, a similarity query can be approximately
answered with a quantifiable quality guarantee.
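The progressive-refinement loop can be sketched as follows; the coverage fraction below is a crude stand-in for the statistical quality estimators developed in this work, and all names are invented for illustration.

```python
import heapq

def progressive_topk(peers, score, k):
    """Yield a refined top-k answer after each peer visit, with coverage.

    peers: list of object collections, one per peer.
    score: relevance function; higher is better.
    """
    best = []  # min-heap of (score, object): the best k objects seen so far
    for i, objects in enumerate(peers, 1):
        for obj in objects:
            heapq.heappush(best, (score(obj), obj))
            if len(best) > k:
                heapq.heappop(best)   # drop the current worst
        yield sorted(best, reverse=True), i / len(peers)

peers = [["fire station 3", "clinic"], ["hospital"], ["fire dept HQ"]]
relevance = lambda s: s.count("fire")   # toy similarity function
for answer, coverage in progressive_topk(peers, relevance, k=2):
    print(answer, f"{coverage:.0%}")
```

Each yielded answer is the user-visible progressive result; the real system replaces the coverage fraction with statistical quality estimates, allowing the search to stop early once the guarantee is tight enough.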
S4.3 GIS-based Search (UCI/ C. Li)
In crisis situations there are various kinds of GIS information stored at different places,
such as map information about pipes, gas stations, hospitals, etc. Since first-responders
need fast access to important, relevant information, it becomes important to provide an
easy-to-use interface to support keyword search on GIS data. The goal of this project is
to provide such a system, which crawls and/or archives GIS data files from different
places, and builds index structures. It provides a user-friendly interface that allows a
user to type in a few keywords, and the system returns all the relevant objects stored
in the files, in ranked order. For instance, if a user types in “Irvine school,” the system
will return the schools in Irvine stored in the GIS files, along with the corresponding
files, possibly displayed on a map. This feature is similar to online services such as
Google Local, with more emphasis on information stored in GIS data files.
Our goal is to implement such a system prototype and study related research
challenges. Last November, we started the “keyword queries on GIS data” project, since
it is highly relevant to the RESCUE project. We are still at an early stage, but we have
already built a simple prototype and are now working on the underlying technical
problems.
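The core indexing-and-ranking step described above can be illustrated with a toy inverted index (the object names, coordinates, and file names are invented here):

```python
from collections import defaultdict

# Toy GIS objects: (name, longitude, latitude, source file).
objects = [
    ("Irvine High School", -117.79, 33.67, "irvine.shp"),
    ("Irvine Medical Center", -117.77, 33.68, "irvine.shp"),
    ("Santa Ana High School", -117.87, 33.75, "santa_ana.shp"),
]

# Inverted index: keyword -> ids of objects whose names contain it.
index = defaultdict(set)
for i, (name, _lon, _lat, _src) in enumerate(objects):
    for word in name.lower().split():
        index[word].add(i)

def search(query: str):
    """Rank objects by how many query keywords their names match."""
    hits = defaultdict(int)
    for word in query.lower().split():
        for i in index.get(word, ()):
            hits[i] += 1
    return [objects[i] for i, _ in sorted(hits.items(), key=lambda kv: -kv[1])]

print(search("irvine school")[0][0])   # "Irvine High School" matches both words
```

A full prototype would crawl GIS files to populate `objects`, score matches with a proper ranking function rather than a raw keyword count, and place the returned coordinates on a map.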
Information Sharing Year 1 Annual Report
INFORMATION SHARING
This thrust area addresses related research activities that facilitate seamless sharing of
information among decision-makers across organizational boundaries. Special emphasis
is placed on seamless information-sharing and collective decision-making across highlydynamic
emergent virtual organizations. The research can be classified into two parts:
social science research (S1) provides context; and S2-S4 relate to IT research. Table 2-
4 presents a listing of active research projects and tasks in the area of Information
Sharing.
Table 2-4 Information Sharing - Project Areas, Tasks and Investigators
Project Area Task No. Task Description Institution/
Investigator
S1.1 Emergent Multi-Organizational
Networks
CU/ K. Tierney, J.
Sutton
S1.2 Interviews with City of Los
Angeles
CU/ K. Tierney, J.
Sutton
S1: Understanding
Emergent
Networks
S1.3 Network Analysis of the Incident
Command System (ICS)
UCI/ C. Butts
S2.1 TrustBuilder BYU/ K. Seamons &
UIUC/ M. Winslett
S2.2 Hidden Credentials BYU/ K. Seamons
S2.3 Trust Credentials BYU/ K. Seamons
S2.4 Trust Negotiation Facilities for
Legacy Clients and Services
UIUC/ M. Winslett
S2: Trust
Management in
Crisis Networks
S2.5 Support for Friendly Third
Parties during Trust Negotiation
UIUC/ M. Winslett
S3.1 Secure XML Publishing UCI/ C. Li
S3.2 Authentication and Access
Control
UCI/ G. Tsudik
S3.3 LogCrypt BYU/ K. Seamons
S3.4 Phishing Warden BYU/ K. Seamons
S3: Security and
Privacy Concerns
in Information
Sharing
S3.5 Secure Object Store UCI/ S. Mehrotra
S4.1 Peer-Based Information Sharing UCI/ C. Li
S4.2 Similarity Search on Peer-
Based Sharing Systems
UCI/ C. Li
S4: Mediation,
Integration,
Querying, Filtering
S4.3 GIS-Based Search UCI/ C. LI
Major Activities and Findings:
S1 Understanding Emergent Networks
S1.1 Emergent Multi-Organizational Networks (EMONs) (CU/ K. Tierney, J. Sutton)
Research has focused on developing and preparing a large data set regarding EMONs
in the WTC disaster for RESCUE use. UCI (Butts) has been collaborating on data45
representation issues, and will be working with UCB on subsequent analysis. UCI
(Butts) has also initiated work on a network analysis of the Incident Command System,
using the MetaMatrix framework for organizational analysis; this work also includes the
analysis of EMONs drawn from search and rescue activities associated with six crisis
events. These data and associated analyses contribute to the RESCUE project's
ongoing objective of assessing (and ultimately improving) DEVO structure within crisis
response.
S1.2 Interviews with City of Los Angeles (CU/ K. Tierney, J. Sutton)
We completed 37 interviews with personnel from six departments in the City of Los
Angeles whose general managers are representatives on the Emergency Operations
Board. These departments included the Emergency Preparedness Department,
Information Technology Agency, Los Angeles Police Department, Los Angeles Fire
Department, Recreation and Parks, and the Board of Public Works – Bureau of
Engineers. Additional interviews are being scheduled with personnel from the
Department of Transportation, Department of Water and Power, General Services,
Airport, and Harbor. UCSD (Pasco) collaborated with UCB on interviews with LAPD,
focusing on issues relating to interoperability and intra-departmental communication
systems. These interviews have lead to preliminary findings regarding current
technology use within the city of Los Angeles for information/data gathering, sharing,
analysis and dissemination to the public, as well as security concerns and protocols for
data sharing between agencies.
S1.3 Network Analysis of the Incident Command System (ICS) (UCI/ Butts)
Practitioner documentation on the ICS was obtained from FEMA training manuals and
other sources during Year 2. This documentation was hand-coded to obtain a census of
standard vertex classes (positions, tasks, resources, etc.) for a typical ICS structure.
Some difficulty was encountered during this process due to the presence of considerable
disagreement among primary sources. For this reason, it was decided to proceed with a
relatively basic list of organizational elements for an initial analysis. Based on this list,
adjacency matrices were constructed based on practitioner accounts (e.g., task
assignment, authority/reporting relations, task/resource dependence). We accomplished
all of our research objectives, which were identification of relevant documentation,
identification of Metamatrix vertex sets, and construction of adjacency matrices. Further
validation of these relationships by domain experts will be conducted in Year 3, prior to
analysis of the system as a whole.
S2 TRUST MANAGEMENT IN CRISIS NETWORKS
S2.1 TrustBuilder (BYU/ K. Seamons and UIUC/ M. Winslett)
These researchers have continued work on the development of TrustBuilder, a prototype
system supporting trust negotiation. There are two aspects of the current work in
TrustBuilder at BYU that has relevance to helpful third parties. The first is the design and
implementation of surrogate trust negotiation as one approach for someone to store her
credentials on her home computer and have a service negotiate for access to her
credentials at run-time. The second is based on an examination of existing approaches
to storing credentials in local or remote repositories. BYU identified the advantages and
disadvantages of each approach, and then proposed a hybrid model that leverages the
best of both worlds. The result is Thor, a hybrid online repository for storing and
managing credentials in a mobile environment. The system incorporates the IETF
SACRED standard. As part of this effort, BYU collaborated with NCSA to implement the
first freely available SACRED server. The long-term goal of this work is an architecture
that emergency personnel will find easy to use to securely access sensitive data during
a crisis.
S2.2 Hidden Credentials (BYU/ K. Seamons)
We introduced hidden credentials as a revolutionary approach to trust negotiation where
sensitive information is encrypted in such a way that a recipient could only access it with
the proper credentials. Hidden credentials were built using identity-based encryption
(IBE). They permitted traditional trust negotiations that required several rounds of
communication to be accomplished in a single message. A server could send a resource
to an unknown client encrypted according to a complex policy, requiring the user to have
a certain set of credentials in order to access the resource. In addition to reducing the
cost of a negotiation, hidden credentials also provide greater privacy protection because
a credential owner can now access sensitive resources without ever revealing an ultrasensitive
credential. More recently, BYU has enhanced hidden credentials to offer
significantly improved performance when using Boneh/Franklin Identity Based
Encryption and to offer improved concealment of policies from unsatisfied recipients. We
accomplished policy concealment by using a secret-splitting scheme that only leaks
those parts of a policy that the recipient satisfies.
S2.3 Trust Credentials (BYU/ K. Seamons)
Our research explored ways to incorporate more contextual knowledge in a trust
negotiation so that an agent can make better decisions about what credentials are
needed to establish trust. The goal was to support improved privacy and performance to
trust negotiation. With this extra knowledge, phishing attacks can be detected and
thwarted. Also, trust negotiation could be streamlined if the participants can eagerly push
credentials that they anticipate will be highly likely to establish trust. We utilize ontologies
during trust negotiation as a mechanism to describe credentials that are relevant to a
certain context. The importance to emergency response is that a priori knowledge about
crises can be captured in ontologies and automatically incorporated into a system at
runtime to modify behavior during an emergency.
S2.4 Trust Negotiation Facilities for Legacy Clients and Services
(UIUC/ Winslett)
We have been experimenting with the use of trust negotiation technology in real-world
situations by cooperating with Jim Basney and Von Welch at NCSA, who are
responsible for certain aspects of the Grid Security Infrastructure. We have developed
an approach to making trust negotiation facilities available to applications on the Grid or
elsewhere, and embodied it in the Traust prototype. Traust provides clients with the
ability to acquire access tokens for networked resources dynamically at run-time. Traust
uses automated trust negotiation to support bilateral trust establishment, the discovery of
resource access control policies, and the protection of client and server privacy. The
Traust service has been designed in such a way as to support both loose integration
with existing “legacy” services and tighter integration with newer trust-aware resources.
A session between a client and a Traust server consists of five distinct stages. First, the
client generates a resource request and classifies its sensitivity level using a local
resource classifier. Next, the client carries out a trust negotiation with the Traust server
to ensure that the server can satisfy the disclosure policies the client has placed on her
sensitive resource request. This prevents inadvertent disclosure of sensitive resource
requests to untrusted servers. If this negotiation succeeds, the client discloses her
resource request. The server now uses trust negotiation to determine whether the client
is authorized to access the requested resource. The final stage of the Traust interaction
involves the server generating the access credentials needed for the client to access the
resource and disclosing these credentials to the client. The credential generation could
be as simple as a static credential lookup or as complicated as dynamic account
creation or interaction with MyProxy, Community Authorization Service, or Kerberos
servers. All communications between clients and the Traust server occur inside of a
TLS tunnel used to provide confidentiality and integrity for the session. It is important to
note that this tunnel does not provide any notion of authentication or authorization
control; the trust negotiation sessions serve this purpose.
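The five stages above can be sketched in a few lines of Python. This is a purely illustrative simulation, not the Traust implementation: the credential names, the set-containment "negotiation," and the token format are all hypothetical stand-ins, and the real system's trust negotiation and TLS tunnel are elided.

```python
# Hypothetical sketch of the five Traust session stages described above.
# Credential names and policies are illustrative; real trust negotiation
# and the TLS tunnel are elided.

def classify_request(request):
    """Stage 1: the client classifies the sensitivity of its request locally."""
    return "sensitive" if "medical" in request else "public"

def negotiate(disclosed_credentials, required_credentials):
    """Stages 2 and 4: here, a negotiation succeeds when every required
    credential has been disclosed by the other party."""
    return required_credentials <= disclosed_credentials

def traust_session(client_creds, server_creds, request):
    # Stage 1: classify the request's sensitivity.
    level = classify_request(request)
    # Stage 2: for sensitive requests, the client first verifies the server,
    # so an untrusted server never sees the request itself.
    if level == "sensitive" and not negotiate(server_creds, {"accredited_hospital"}):
        return None
    # Stage 3: the client discloses the resource request.
    # Stage 4: the server checks the client's authorization.
    if not negotiate(client_creds, {"first_responder"}):
        return None
    # Stage 5: the server generates and discloses an access credential.
    return f"token-for:{request}"

print(traust_session({"first_responder"}, {"accredited_hospital"}, "medical-records"))
```

Note how the ordering of stages 2 and 3 realizes the privacy property described above: a server that cannot satisfy the client's disclosure policy never learns what was being requested.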
S2.5 Support for Friendly Third Parties during Trust Negotiation
(UIUC/ Winslett)
With current trust negotiation software, manual intervention is required to carry out
interactions with friendly third parties. During the past year, we have examined ways of
automating these interactions and of reasoning about the properties of the resulting
system. For example, an emergency analyst may want to ask, in advance, “During a
code 3 disaster, will all rescue dog handlers be able to access the same person’s
information service?” or “During a code 3 disaster, who from outside a hospital, not
ordinarily authorized to view medical records, may do so?”
To help automate the inclusion of third parties and provide a framework for answering
analysis questions, we developed PeerTrust2, a language based on distributed logic and
designed to solve distributed authorization problems. PeerTrust2 policies support
delegation, purpose, exposure control, proof hints and sticky policies. We worked with
European colleagues W. Nejdl and P. Bonatti to revamp and extend PeerTrust
semantics, resulting in PeerTrust2, and have shown how to use PeerTrust2 to write
simple yet expressive access control policies that involve helpful third parties.
PeerTrust2 can serve as a trust negotiation language, but our long-term intent is to use
the powerful new features in PeerTrust2 and add them to other policy languages such as
XACML so users of those languages can also take advantage of helpful third parties.
Two aspects of our current work on TrustBuilder are relevant to helpful third
parties. We have designed and implemented surrogate trust negotiation as one
approach for someone to store her credentials on her home computer and have a
service negotiate for access to her credentials at run-time. Next, we examined existing
approaches to storing credentials in local or remote repositories. We identified the
advantages and disadvantages of each approach, and then proposed a hybrid model
that leverages the best of both worlds. We designed Thor, the hybrid online repository
for storing and managing credentials in a mobile environment. Our long-term goal is an
architecture that emergency personnel will find easy to use to securely access sensitive
data during a crisis.
S3 SECURITY AND PRIVACY CONCERNS IN INFORMATION SHARING
S3.1 Secure XML Publishing (UCI/ C. Li)
This research involves protecting sensitive information when XML data is exchanged
between two business partners in order to meet precise security requirements. The
focus is on data-sharing applications where the owner specifies what information is
sensitive and should be protected. If a partial document is shared carelessly, users of
the other company can use common knowledge (e.g., “all patients in the same ward
have the same disease”) to infer additional data, which can cause leakage of sensitive
information. Our goal is to protect such information in the presence of data inference
with common knowledge. Common knowledge is represented as semantic XML
constraints. We formulated how users can infer data using three types of well-known
XML constraints. Interestingly, no matter what sequence of inference steps users follow,
there is a unique, maximal document that contains all possible inferred documents, even
though different inference sequences may produce intermediate documents that
look different. We developed algorithms to find a partial document of a given XML
look different. We developed algorithms to find a partial document of a given XML
document without causing information leakage, while publishing as much data as
possible. We experimented with real data sets to show the effect of inference on data
security, and demonstrated that the proposed techniques can prevent such leakage from
happening.
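The inference process described above can be illustrated with a toy example. This sketch is not the paper's algorithm: the record fields and data are made up, and a single common-knowledge rule (the ward/disease constraint quoted above) stands in for the three classes of XML constraints.

```python
# Toy illustration of inference with common knowledge: "all patients in the
# same ward have the same disease." Fields and data are hypothetical.

def infer(published):
    """Repeatedly apply the ward/disease constraint until a fixpoint is
    reached, yielding the unique maximal inferred document."""
    records = [dict(r) for r in published]
    changed = True
    while changed:
        changed = False
        # A disease known for any patient in a ward is known for the ward.
        ward_disease = {r["ward"]: r["disease"] for r in records if r.get("disease")}
        for r in records:
            if not r.get("disease") and r["ward"] in ward_disease:
                r["disease"] = ward_disease[r["ward"]]
                changed = True
    return records

published = [
    {"name": "A", "ward": "W1", "disease": "flu"},
    {"name": "B", "ward": "W1", "disease": None},  # hidden, but inferable!
]
print(infer(published))
```

Patient B's disease was withheld from the published partial document, yet a reader armed with the constraint recovers it; this is exactly the leakage the publishing algorithms are designed to prevent.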
S3.2 Authentication and Access Control (UCI/ Tsudik)
This research group developed novel techniques for secure and efficient processing of
authenticated query replies for queries against outsourced data. The work specifically
included the investigation of several new signature aggregation methods that greatly
speed up digital signature verification without sacrificing security.
We made several novel advancements in authentication and access-control techniques.
First, several new anonymous authentication protocols (secret handshake) were
constructed. These protocols are provably secure, based on well-known cryptographic
assumptions and offer efficiency superior to that of prior art. Second, techniques for
anonymous role-based communication are under development. Also known as OSBE
(Oblivious Signature-Based Envelopes), such techniques allow anyone to
anonymously communicate information to an entity who claims a certain role (e.g., an
FBI agent) without learning whether the recipient actually holds that role. Prior work was
limited to RSA- and ID-based OSBEs. Third, we constructed (and proved the security of)
two new secure group admission protocols: one based on ID-based cryptography and
one on threshold Schnorr signatures. We developed a full-blown prototype of Bouncer –
the group admission control toolkit. Bouncer allows fully distributed secure admission
based on thresholds. It includes 5 different cryptographic protocols that vary in their
levels of security, efficiency, and state requirements.
S3.3 Logcrypt (BYU/ Seamons)
Seamons has developed LogCrypt, support for tamper-evident log files using hash
chaining. This system provides a service similar to TripWire, except that it targets log
files that are still growing. Often, an attacker breaks into a system and deletes the
evidence of the break-in from an audit log. The goal of LogCrypt is to make it possible to
detect an unauthorized deletion or modification to a log file. Previous systems supporting
this feature have incorporated symmetric encryption and an HMAC. LogCrypt also
supports a public key variant that allows anyone to verify the log file, meaning that the
verifier does not need to be trusted. For the public key variant, if the original private key
used to create the file is deleted, then it is impossible for anyone, even system
administrators, to go back and modify the contents of the log file without being detected.
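The hash-chaining idea behind the symmetric (HMAC) variant can be sketched as follows. This is a minimal illustration, not LogCrypt itself: the key-evolution scheme is simplified, and function names are hypothetical.

```python
# Minimal sketch of hash-chained, tamper-evident logging in the spirit of
# LogCrypt's symmetric variant; key evolution and details are simplified.
import hashlib
import hmac

def append_entries(initial_key, entries):
    """MAC each entry with the current key, then evolve the key by hashing
    it, so an intruder who arrives later cannot recompute earlier MACs."""
    key, log = initial_key, []
    for entry in entries:
        tag = hmac.new(key, entry.encode(), hashlib.sha256).hexdigest()
        log.append((entry, tag))
        key = hashlib.sha256(key).digest()  # forward-secure key evolution
    return log

def verify(initial_key, log):
    """A trusted verifier holding the initial key replays the chain."""
    key = initial_key
    for entry, tag in log:
        expected = hmac.new(key, entry.encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(tag, expected):
            return False
        key = hashlib.sha256(key).digest()
    return True

log = append_entries(b"seed", ["login root", "rm /var/log/auth"])
assert verify(b"seed", log)
log[1] = ("nothing to see here", log[1][1])  # attacker edits an entry
assert not verify(b"seed", log)
```

Because each key is destroyed once its successor is derived, an attacker who compromises the machine cannot retroactively forge MACs for earlier entries; the public key variant described above additionally lets an untrusted party run the verification.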
S3.4 Phishing Warden (BYU/ Seamons)
Research in this group involves development of technology that employs trust
negotiation to prevent phishing attacks. One approach was to use a client-side filter to
dynamically detect when sensitive information is disclosed to a server and demand
suitable credentials from the server to authorize the disclosure. We developed a browser
extension that identifies personal information being disclosed to a stranger before any
downloaded code in a Java applet can obfuscate the data to bypass the filtering
software. Prior approaches to solving this problem have been vulnerable to this threat.
Phishing Warden is designed to establish trust in a web server before disclosing
sensitive information. A second approach was to redirect a server that wants personal
information to a broker that negotiates on behalf of the user to disclose the client’s
personal information.
S3.5 Secure Object Store (UCI/ Mehrotra)
Today’s applications generate gargantuan amounts of data. Data-intensive
applications such as radio telescopes and sensor applications in pervasive
environments produce constant streams of data. Even though storage costs have
dropped exponentially, the cost of managing stored data in a safe and fault-tolerant
manner has not dropped. A solution to this challenge is storage outsourcing via a
service-oriented interface. Such a storage solution can significantly reduce operational
costs through amortization of the cost of managing storage, including human cost.
Other advantages
include mobile access to data and facilitation of data-sharing across organizational
boundaries.
There are many security challenges that need to be addressed for a service of this kind.
Since data is the most valuable asset for organizations/individuals, the data owners
would like to protect themselves from both insider and outsider attacks before
outsourcing their data. Encryption offers a natural defense against such attacks. Our
research focuses on developing techniques that allow the service provider to offer
query-management and data-sharing services over encrypted data.
The secure sharing enables multiple interesting applications. One such application is
password sharing. Today, users reuse the same passwords across many different sites,
including insecure sites that cannot be trusted; phishing attacks are common as a
result. Now imagine that secure sharing is available. In this case, one can
build a service in which passwords can be stored at the server. Since the service
remembers the passwords, users can use extremely complex passwords which they do
not need to remember. We have built such a service and it has been operational for
some time now. The service supports password generation, storage on the server,
retrieval of passwords, and automatic filling of passwords on the web.
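The core of such a service can be sketched as below. This is a toy model, not our implementation: the class and function names are hypothetical, and the XOR keystream is for illustration only; a real service would use an authenticated cipher such as AES-GCM. The key point is that the server stores only opaque ciphertext, so it never sees any password.

```python
# Toy sketch of the password-service idea: the server stores only ciphertext,
# and the client encrypts/decrypts with a key derived from one master secret.
# The XOR keystream is illustrative only, NOT a secure cipher.
import hashlib
import os

def derive_key(master_password, salt):
    """Derive the client-side encryption key from the one master secret."""
    return hashlib.pbkdf2_hmac("sha256", master_password.encode(), salt, 100_000)

def xor_stream(key, nonce, data):
    """Encrypt/decrypt by XOR with a hash-based keystream (toy cipher)."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        out.extend(block)
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

class PasswordServer:
    """The server sees only opaque (site, nonce, ciphertext) tuples."""
    def __init__(self):
        self.store = {}
    def put(self, site, nonce, ciphertext):
        self.store[site] = (nonce, ciphertext)
    def get(self, site):
        return self.store[site]

key = derive_key("one master passphrase", b"per-user-salt")
server = PasswordServer()

nonce = os.urandom(12)
server.put("example.com", nonce, xor_stream(key, nonce, b"x9#Long$random"))

nonce, blob = server.get("example.com")
print(xor_stream(key, nonce, blob))  # only the key holder can recover this
```

Because decryption requires the client-side key, even a fully compromised server learns nothing beyond which sites a user has entries for.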
S4 MEDIATION, INTEGRATION, QUERYING, FILTERING
S4.1 Peer-Based Information Sharing (UCI/ C. Li)
In a crisis-response situation, many participating organizations collaborate in the context
of a variety of tasks related to disaster mitigation (e.g., rescue and evacuation,
maintaining law and order). Seamless mechanisms for inter-organization information
sharing can revolutionize how such emergent collaborations are established, resulting in
dramatic improvements to crisis response. Such information may have been dynamically
collected and analyzed, or pre-existing in organizational knowledge and databases.
Challenges in information-sharing across different organizations arise due to frequent
structural and functional changes (e.g., expansion, extension) within organizations,
emergence of complex inter-organizational relationships, lack of centralized control, and
an element of surprise that results in unexpected inter-organization relationships and
data needs.
To study how to support distributed data-sharing in crisis-response organizations, we
have developed a system called “RACCOON,” in which different sources can publish
and share their data. In the system, peers (data owners) are connected in an overlay
network, in which semantic mappings can be created between peers. Existing peers
can leave the system, and new peers can join the network. A user can search for
relations that are similar to a given relation. For such a search query, the system adopts
schema-mapping techniques to locate relevant sources, and creates source mappings
on the fly. The system provides a visualization tool to show the neighbors of each peer,
which makes it easier for users to browse peer contents and issue queries. The system
supports queries that request information from multiple peers. The system also allows a
query to be “expanded” by accessing other peers that have schema mappings with the
peers used in the query. In this way a user can retrieve as much information as possible
to answer a query.
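The query-expansion step can be pictured as a bounded traversal of the mapping overlay. This sketch is a simplification, not the RACCOON implementation: the peer names and mapping table are made up, and real expansion rewrites queries through the schema mappings rather than merely reaching peers.

```python
# Hypothetical sketch of RACCOON-style query expansion: starting from the
# peers named in a query, follow schema mappings in the overlay to reach
# additional peers that can contribute answers. Peer names are illustrative.
from collections import deque

# Overlay: each peer maps to the peers it has schema mappings with.
mappings = {
    "fire_dept": ["police", "hospital"],
    "police": ["city_gis"],
    "hospital": [],
    "city_gis": [],
}

def expand(start_peers, max_hops):
    """Breadth-first expansion bounded by the number of mapping hops."""
    reached = set(start_peers)
    frontier = deque((p, 0) for p in start_peers)
    while frontier:
        peer, hops = frontier.popleft()
        if hops == max_hops:
            continue
        for neighbor in mappings.get(peer, []):
            if neighbor not in reached:
                reached.add(neighbor)
                frontier.append((neighbor, hops + 1))
    return reached

print(sorted(expand({"fire_dept"}, 1)))
print(sorted(expand({"fire_dept"}, 2)))
```

Bounding the number of hops trades completeness for responsiveness, which matters when overlay links cross organizational boundaries during a crisis.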
S4.2 Similarity Search on Peer-Based Sharing Systems (UCI/ C. Li)
Data sharing in crisis situations can be supported using a peer-to-peer (P2P)
architecture, in which each data owner publishes its data and shares it with other
owners. The research studied how to support similarity queries in such a system.
Similarity queries ask for the most relevant objects in a P2P network, where relevance is
based on a predefined similarity function, and the user is interested in obtaining objects
with the highest relevance. Because retrieving all objects and computing the exact
answer over a large-scale network is impractical, we created a novel approximate
answering framework that computed an answer by visiting only a subset of network
peers. Users were presented with progressively refined answers consisting of the best
objects seen so far with continuously improving quality guarantees providing feedback
about the search’s progress. We developed statistical techniques to determine quality
guarantees in this framework and mechanisms to incorporate quality estimators into the
search process. We developed a framework to support progressive query-answering in
P2P systems and techniques to estimate answer qualities based on objects retrieved in
a search process. We conducted experiments to evaluate these techniques and showed
that, by accessing only a small number of peers, a similarity query can be answered
approximately with a quantifiable quality guarantee.
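The progressive-answering loop can be sketched as follows. This is an illustrative simplification, not the paper's framework: the peer data is made up, and the fraction of peers visited stands in for the statistical quality estimators described above.

```python
# Illustrative sketch of progressive top-k answering: visit peers one at a
# time, keep the best objects seen so far, and report a crude quality signal
# (fraction of peers visited) in place of the paper's statistical estimators.
import heapq

peers = {  # peer -> (object, similarity score) pairs; data is made up
    "p1": [("hydrant-7", 0.91), ("school-2", 0.40)],
    "p2": [("hospital-1", 0.97)],
    "p3": [("shelter-4", 0.88), ("depot-9", 0.15)],
}

def progressive_topk(peers, k):
    best = []  # min-heap of (score, object) keeps the k best seen so far
    for visited, objects in enumerate(peers.values(), start=1):
        for obj, score in objects:
            heapq.heappush(best, (score, obj))
            if len(best) > k:
                heapq.heappop(best)  # drop the current worst
        coverage = visited / len(peers)
        yield sorted(best, reverse=True), coverage

for answer, coverage in progressive_topk(peers, 2):
    print(f"{coverage:.0%} of peers visited, current top-2: {answer}")
```

Each iteration yields a refined answer with an improving quality signal, so a first responder can act on early results without waiting for the full network to be searched.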
S4.3 GIS-based Search (UCI/ C. Li)
In crisis situations there are various kinds of GIS information stored at different places,
such as map information about pipes, gas stations, hospitals, etc. Since first-responders
need fast access to important, relevant information, it becomes important to provide an
easy-to-use interface to support keyword search on GIS data. The goal of this project is
to provide such a system, which crawls and/or archives GIS data files from different
places, and builds index structures. It provides a user-friendly interface that allows a
user to type in a few keywords, and the system returns all the relevant objects (in
ranked order) stored in the files. For instance, if a user types in “Irvine school,” the
system will return schools in Irvine stored in the GIS files, along with the corresponding
GIS files, possibly displayed on a map. This feature is similar to online services such as
Google Local, with more emphasis on information stored in GIS data files.
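The indexing and ranking idea can be sketched with a simple inverted index. The feature records, field names, and file names below are hypothetical; real GIS files would be parsed from formats such as shapefiles, and the prototype's ranking may differ.

```python
# Minimal sketch of keyword search over GIS features using an inverted
# index; the feature records and ranking are illustrative.
from collections import defaultdict

features = [
    {"id": 1, "name": "University High School", "city": "Irvine", "file": "schools.shp"},
    {"id": 2, "name": "Irvine Fire Station 4", "city": "Irvine", "file": "stations.shp"},
    {"id": 3, "name": "Mission Hospital", "city": "Laguna", "file": "hospitals.shp"},
]

def build_index(features):
    """Map each lowercase token to the IDs of features containing it."""
    index = defaultdict(set)
    for f in features:
        for token in (f["name"] + " " + f["city"]).lower().split():
            index[token].add(f["id"])
    return index

def search(index, query):
    """Rank features by how many query keywords they match."""
    scores = defaultdict(int)
    for token in query.lower().split():
        for fid in index.get(token, ()):
            scores[fid] += 1
    return sorted(scores, key=lambda fid: -scores[fid])

index = build_index(features)
print(search(index, "Irvine school"))
```

For the query “Irvine school,” the feature matching both keywords ranks above the one matching only “Irvine,” mirroring the ranked results described above.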
Our goal is to implement such a system prototype and study the related research
challenges. We started the “keyword queries on GIS data” project last November
because of its direct relevance to the RESCUE project. The work is still at an early
stage: we have built a simple prototype and are now addressing the remaining
technical problems.