Semantically Rich Application-Centric Security in Android

Smartphones are now ubiquitous. However, the security requirements of these relatively new systems and the applications they support are still being understood. As a result, the security infrastructure available in current smartphone operating systems is largely underdeveloped. In this paper, we consider the security requirements of smartphone applications and augment the existing Android operating system with a framework to meet them. We present Secure Application INTeraction (Saint), a modified infrastructure that governs install-time permission assignment and run-time permission use as dictated by application provider policy. An in-depth description of the semantics of application policy is presented. The architecture and technical detail of Saint are given, and areas for extension, optimization, and improvement are explored. We demonstrate through a concrete example and study of real-world applications that Saint provides necessary utility for applications to assert and control the security decisions on the platform.
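
The flavor of this kind of policy is easy to sketch. Below is a minimal, hypothetical illustration of a Saint-style install-time check; the rule fields, permission names, and packages are invented for the example, not Saint's actual policy syntax:

    # Minimal sketch of a Saint-style install-time policy check (illustrative).
    from dataclasses import dataclass, field
    from typing import Optional, Set

    @dataclass
    class InstallPolicy:
        permission: str                           # provider permission being guarded
        required_signer: Optional[str] = None     # only apps signed with this key
        forbidden_permissions: Set[str] = field(default_factory=set)

    @dataclass
    class App:
        package: str
        signer: str
        requested: Set[str]

    def may_grant(policy: InstallPolicy, app: App) -> bool:
        """Decide at install time whether the app may receive policy.permission."""
        if policy.permission not in app.requested:
            return True                           # policy not triggered by this app
        if policy.required_signer and app.signer != policy.required_signer:
            return False
        if policy.forbidden_permissions & app.requested:
            return False                          # dangerous permission combination
        return True

    # A provider forbids granting its query permission to apps that also hold INTERNET.
    policy = InstallPolicy("com.provider.perm.QUERY",
                           forbidden_permissions={"android.permission.INTERNET"})
    app = App("com.example.widget", "key1",
              {"com.provider.perm.QUERY", "android.permission.INTERNET"})
    print(may_grant(policy, app))                 # False: the combination could leak data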

Paper: secnet11.pdf (2012)

Scalable Web Content Attestation

The web is a primary means of information sharing for most organizations and people. Currently, a recipient of web content knows nothing about the environment in which that information was generated other than the specific server from whence it came (and even that information can be unreliable). In this paper, we develop and evaluate the Spork system that uses the Trusted Platform Module (TPM) to tie the web server integrity state to the web content delivered to browsers, thus allowing a client to verify that the origin of the content was functioning properly when the received content was generated and/or delivered. We discuss the design and implementation of the Spork service and its browser-side Firefox validation extension. In particular, we explore the challenges and solutions of scaling the delivery of mixed static and dynamic content to a large number of clients using exceptionally slow TPM hardware. We perform an in-depth empirical analysis of the Spork system within Apache web servers. This analysis shows Spork can deliver nearly 8,000 static or over 6,500 dynamic integrity-measured web objects per second. More broadly, we identify how TPM-based web content services can scale to large client loads and deliver integrity-measured content with manageable overhead.
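
The central binding can be sketched in a few lines. In the sketch below, tpm_quote() is a hypothetical stand-in for a real TPM quote over PCRs, and folding a content digest plus a freshness window into the quoted nonce only loosely mirrors how one slow quote can be amortized over many objects:

    # Sketch of Spork-style content binding (illustrative, not the actual service).
    import hashlib
    import time

    def tpm_quote(nonce: bytes) -> bytes:
        # Stand-in for TPM_Quote: a real quote returns PCR values signed by the
        # TPM's attestation identity key over the supplied nonce.
        return hashlib.sha256(b"PCR-STATE" + nonce).digest()

    def integrity_proof(content: bytes, validity_secs: int = 30) -> dict:
        # Bind the content digest and a freshness window into the quoted nonce.
        digest = hashlib.sha256(content).hexdigest()
        expiry = int(time.time()) + validity_secs
        nonce = hashlib.sha256(f"{digest}|{expiry}".encode()).digest()
        return {"digest": digest, "expiry": expiry, "quote": tpm_quote(nonce).hex()}

    proof = integrity_proof(b"<html>hello</html>")
    print(proof["digest"], proof["expiry"])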

From mobile phones to responsible devices

Mobile phones have evolved from simple voice terminals into highly-capable, general-purpose computing platforms. While people are becoming increasingly more dependent on such devices to perform sensitive operations, protect secret data, and be available for emergency use, it is clear that phone operating systems are not ready to become mission-critical systems. Through a pair of vulnerabilities and a simulated attack on a cellular network, we demonstrate that there are a myriad of unmanaged mechanisms on mobile phones, and that control of these mechanisms is vital to achieving reliable use. Through such vectors, mobile phones introduce a variety of new threats to their own applications and the telecommunications infrastructure itself. In this paper, we examine the requirements for providing effective mediation and access control for mobile phones. We then discuss the convergence of cellular networks with the Internet and its impact on effective resource management and quality of service. Based on these results, we argue for user devices that enable predictable behavior in a network - where their trusted computing bases can protect key applications and create predictable network impact.

Paper: scn10b.pdf (June 2011)

Network-Based Root of Trust for Installation

A network-based system installation method that binds a file system to its installer and disk image thwarts many known attacks against the installation process.

Paper: nroti11.pdf (Jan.-Feb. 2011)

New Security Architectures Based on Emerging Disk Functionality

Advances in hard disk technologies can help manage the complexity of operating system security and enforce security policies. The SwitchBlade architecture provides isolation for multiple OSs running on a machine by confining them into segments that users can only access using a physical token.

Secure attribute-based systems

Attributes define, classify, or annotate the datum to which they are assigned. However, traditional attribute architectures and cryptosystems are ill-equipped to provide security in the face of diverse access requirements and environments. In this paper, we introduce a novel secure information management architecture based on emerging attribute-based encryption (ABE) primitives. A policy system that meets the needs of complex policies is defined and illustrated. Based on the needs of those policies, we propose cryptographic optimizations that vastly improve enforcement efficiency. We further explore the use of such policies in two proposed applications: a HIPAA compliant distributed file system and a social network. A performance analysis and characterization of ABE primitives demonstrates the ability to reduce cryptographic costs by as much as 98% over previously proposed constructions. Through this, we demonstrate that our attribute system is an efficient solution for securely managing information in large, loosely-coupled, distributed systems.
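
Setting the cryptography aside, the policy layer reduces to evaluating a boolean formula over attributes. A small illustration (the policy tree shown is hypothetical):

    # Illustrative attribute-policy evaluation (no cryptography involved): checks
    # whether a user's attribute set satisfies a boolean policy tree of the kind
    # an ABE scheme would enforce cryptographically.

    POLICY = ("AND", ("ATTR", "physician"),
                     ("OR", ("ATTR", "cardiology"), ("ATTR", "emergency")))

    def satisfies(policy, attrs):
        op = policy[0]
        if op == "ATTR":
            return policy[1] in attrs
        results = [satisfies(p, attrs) for p in policy[1:]]
        return all(results) if op == "AND" else any(results)

    print(satisfies(POLICY, {"physician", "emergency"}))  # True
    print(satisfies(POLICY, {"nurse", "cardiology"}))     # False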

Paper: citation.cfm?id=1841962.1841966

A Logical Specification and Analysis for SELinux MLS Policy

The SELinux mandatory access control (MAC) policy has recently added a multilevel security (MLS) model which is able to express a fine granularity of control over a subject's access rights. The problem is that the richness of the SELinux MLS model makes it impractical to manually evaluate that a given policy meets certain specific properties. To address this issue, we have modeled the SELinux MLS model using a logical specification and implemented that specification in the Prolog language. Furthermore, we have developed some analyses for testing information flow properties of a given policy as well as an algorithm to determine whether one policy is compliant with another. We have implemented these analyses in Prolog and compiled our implementation into a tool for SELinux MLS policy analysis, called PALMS. Using PALMS, we verified some important properties of the SELinux MLS reference policy, namely that it satisfies the simple security condition and *-property defined by Bell and LaPadula. We also evaluated whether the policy associated with a given application is compliant with the policy of the SELinux system in which it would be deployed.
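
The Bell-LaPadula checks themselves are compact. The sketch below states them in Python for readability; PALMS encodes the SELinux MLS policy and these properties in Prolog:

    # Bell-LaPadula dominance checks (sketch). A level is a pair of a sensitivity
    # and a category set; dominance requires both components to be at least as high.

    def dominates(a, b):
        return a[0] >= b[0] and a[1] >= b[1]   # frozenset >= is superset

    def may_read(subj, obj):                   # simple security condition
        return dominates(subj, obj)

    def may_write(subj, obj):                  # *-property: no write down
        return dominates(obj, subj)

    secret_hr = (2, frozenset({"hr"}))
    confidential = (1, frozenset())
    print(may_read(secret_hr, confidential))   # True: reading down is allowed
    print(may_write(secret_hr, confidential))  # False: writing down is forbidden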

malnets: Large-Scale Malicious Networks via Compromised Wireless Access Points

Densely populated areas are increasingly filled with vulnerable wireless routers set up by unsophisticated users. In isolation, such routers appear to represent only a minor threat, but in aggregate, the threat can be much greater. We introduce the notion of malnets: networks of adversary-controlled wireless routers targeted to a physical geography. Similar to Internet worms such as Slammer and Code-Red, malnets are created by the recursive compromise of targeted devices. However, unlike their traditionally wired counterparts, malnet worms exploit only other routers that are within their transmission range. The malnet thus creates a parallel wireless infrastructure that is a) completely under control of the adversary, and b) spans a targeted physical area, creating a valuable infrastructure for a variety of virtual and physical attacks. We initially study the propagation characteristics of commercial routers and model inter-router connectivity using publicly available war-driving data. The resulting characterization is applied to well-known epidemiological models to explore the success rates and speeds of malnet creation across cities such as New York, Atlanta, and Los Angeles. Finally, we use a sampling of available exploits to demonstrate the construction of multi-vector, multi-platform worms capable of targeting wireless routers. Our analysis shows that an adversary can potentially deploy a malnet of over 24,000 routers in Manhattan in less than two hours. Through this work we show that malnets are not only feasible but can be efficiently deployed.
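
A toy version of such an epidemic, where compromise spreads only between routers within radio range, can be simulated directly. The parameters below are arbitrary placeholders, not the paper's war-driving-derived values:

    # SI-style epidemic on a random geometric graph of routers (illustrative).
    import math
    import random

    random.seed(1)
    N, RANGE, P_EXPLOIT = 500, 0.06, 0.4  # routers, radio range, per-contact success
    pts = [(random.random(), random.random()) for _ in range(N)]
    nbrs = [[j for j in range(N)
             if j != i and math.dist(pts[i], pts[j]) < RANGE] for i in range(N)]

    infected = {0}                        # a single initially compromised router
    for step in range(1, 40):
        newly = {j for i in infected for j in nbrs[i]
                 if j not in infected and random.random() < P_EXPLOIT}
        if not newly:
            break                         # reached the radio-limited extent
        infected |= newly
        print(f"step {step}: {len(infected)} routers compromised")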

A Survey of BGP Security Issues and Solutions

As the Internet's de facto interdomain routing protocol, the Border Gateway Protocol (BGP) is the glue that holds the disparate parts of the Internet together. A major limitation of BGP is its failure to adequately address security. Recent high-profile outages and security analyses clearly indicate that the Internet routing infrastructure is highly vulnerable. Moreover, the design of BGP and the ubiquity of its deployment have frustrated past efforts at securing interdomain routing. This paper considers the current vulnerabilities of the interdomain routing system and surveys both research and standardization efforts relating to BGP security. We explore the limitations and advantages of proposed security extensions to BGP, and explain why no solution has yet struck an adequate balance between comprehensive security and deployment cost.

Paper: ieee10.pdf (Jan. 2010)

Leveraging Identity-Based Cryptography for Node ID Assignment in Structured P2P Systems

Structured peer-to-peer (P2P) systems have grown enormously because of their scalability, efficiency, and reliability. These systems assign a unique identifier to each user and object. However, current assignment schemes allow an adversary to carefully select user IDs and/or simultaneously obtain many pseudo-identities, ultimately leading to an ability to disrupt the P2P system in very targeted and dangerous ways. In this paper, we propose novel ID assignment protocols based on identity-based cryptography. This approach permits the acquisition of node IDs to be tightly regulated without many of the complexities and costs associated with traditional certificate solutions. We broadly consider the security requirements of ID assignment and present three protocols representing distinct threat and trust models. A detailed empirical study of the protocols is given. Our analysis shows that the cost of our identity-based protocols is nominal, and that the associated identity services can scale to millions of users using a limited number of servers.
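
The essence of the approach is that overlay identifiers are derived from verified identities rather than chosen by the joining node. A sketch of that core idea follows; the paper's protocols additionally use identity-based cryptography so peers can verify assignments without certificates:

    # Node IDs bound to verified identities (illustrative core idea only).
    import hashlib

    def node_id(verified_identity: str, bits: int = 160) -> int:
        # The ID is a hash of the identity: an adversary cannot choose where a
        # given identity lands on the ring, and amassing pseudo-identities
        # requires controlling correspondingly many real identities.
        digest = hashlib.sha1(verified_identity.encode()).digest()
        return int.from_bytes(digest, "big") % (1 << bits)

    print(hex(node_id("alice@example.com")))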

Paper: nodeid-tpdsfinal.pdf (Dec. 2009)

Configuration Management at Massive Scale: System Design and Experience

The development and maintenance of network device configurations is one of the central challenges faced by large network providers. Current network management systems fail to meet this challenge primarily because of their inability to adapt to rapidly evolving customer and provider-network needs, and because of mismatches between the conceptual models of the tools and the services they must support. In this paper, we present the PRESTO configuration management system that attempts to address these failings in a comprehensive and flexible way. Developed for and used during the last 5 years within a large ISP network, PRESTO constructs device-native configurations based on the composition of configlets representing different services or service options. Configlets are compiled by extracting and manipulating data from external systems as directed by the PRESTO configuration scripting and template language. We outline the configuration management needs of large-scale network providers, introduce the PRESTO system and configuration language, and reflect upon our experiences developing PRESTO-configured VPN and VoIP services. In doing so, we describe how PRESTO promotes healthy configuration management practices.

Understanding Android Security

Google's Android platform is a widely anticipated open source operating system for mobile phones. This article describes Android's security model and attempts to unmask the complexity of secure application development. The authors conclude by identifying lessons and opportunities for future enhancements.

Paper: sp09.pdf (Jan.-Feb. 2009)

Mitigating Attacks on Open Functionality in SMS-Capable Cellular Networks

The transformation of telecommunications networks from homogeneous closed systems providing only voice services to Internet-connected open networks that provide voice and data services presents significant security challenges. For example, recent research illustrated that a carefully crafted DoS attack via text messaging could incapacitate all voice communications in a metropolitan area with little more than a cable modem. This attack highlights a growing threat to these systems; namely, cellular networks are increasingly exposed to adversaries both in and outside the network. In this paper, we use a combination of modeling and simulation to demonstrate the feasibility of targeted text messaging attacks. Under realistic network conditions, we show that adversaries can achieve blocking rates of more than 70% with only limited resources. We then develop and characterize five techniques from within two broad classes of countermeasures - queue management and resource provisioning. Our analysis demonstrates that these techniques can eliminate or extensively mitigate even the most intense targeted text messaging attacks. We conclude by considering the tradeoffs inherent to the application of these techniques in current and next generation telecommunications networks.
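
As a rough illustration of why saturating shared control channels blocks voice service (a back-of-the-envelope aid, not a calculation from the paper), the standard Erlang-B formula shows blocking rising sharply with offered load on a fixed number of channels:

    # Erlang-B blocking probability for m channels offered E Erlangs of traffic.
    def erlang_b(offered_load: float, channels: int) -> float:
        b = 1.0
        for m in range(1, channels + 1):
            b = offered_load * b / (m + offered_load * b)
        return b

    # A sector with 12 control channels: rising attack load drives blocking past 70%.
    for load in (6, 12, 24, 48):
        print(f"{load:>3} Erlangs on 12 channels -> blocking {erlang_b(load, 12):.0%}")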

Paper: traynor_ton08.pdf (2008)

Non-Invasive Methods for Host Certification

Determining whether a user or system is exercising appropriate security practices is difficult in any context. Such difficulties are particularly pronounced when uncontrolled or unknown platforms join public networks. Commonly practiced techniques used to vet these hosts, such as system scans, have the potential to infringe upon the privacy of users. In this paper, we show that it is possible for clients to prove both the presence and proper functioning of security infrastructure without allowing unrestricted access to their system. We demonstrate this approach, specifically applied to anti-virus security, by requiring clients seeking admission to a network to positively identify the presence or absence of malcode in a series of puzzles. The implementation of this mechanism and its application to real networks are also explored. In so doing, we demonstrate that it is not necessary for an administrator to be invasive to determine whether a client implements required security practices.

Exploiting Open Functionality in SMS-Capable Cellular Networks

Cellular networks are a critical component of the economic and social infrastructures in which we live. In addition to voice services, these networks deliver alphanumeric text messages to the vast majority of wireless subscribers. To encourage the expansion of this new service, telecommunications companies offer connections between their networks and the Internet. The ramifications of such connections, however, have not been fully recognized. In this paper, we evaluate the security impact of the SMS interface on the availability of the cellular phone network. Specifically, we describe the ability to deny voice service to cities the size of Washington D.C. and Manhattan with little more than a cable modem. Moreover, attacks targeting the entire United States are feasible with resources available to medium-sized zombie networks. This analysis begins with an exploration of the structure of cellular networks. We then characterize network behavior and explore a number of reconnaissance techniques aimed at effectively targeting attacks on these systems. We conclude by discussing countermeasures that mitigate or eliminate the threats introduced by these attacks.

Paper: traynor_jcs08.pdf (2007)

Round-Eye: A System for Tracking Nearest Surrounders in Moving Object Environments

This paper presents "Round-Eye", a system for tracking nearest surrounding objects (or nearest surrounders) in moving object environments. This system provides a platform for surveillance applications. The core part of this system is the continuous nearest surrounder (NS) query, which maintains views of the nearest objects at distinct angles from query points. This query differs from conventional spatial queries such as range queries and nearest neighbor queries, as the NS query considers both the distance and angular aspects of objects with respect to a query point at the same time. In our system framework, a centralized server is dedicated (1) to collect location updates of both objects and queries, (2) to determine which NS queries are invalidated in the presence of object/query location changes, along with the corresponding result changes, if any, and (3) to refresh the affected query answers. To enhance the system performance in terms of processing time and network bandwidth consumption, we propose various techniques, namely, safe region, partial query reevaluation, and incremental query result update. Through simulations, we evaluate our system with the proposed techniques over a wide range of settings.

TARP: Ticket-based Address Resolution Protocol

IP networks fundamentally rely on the Address Resolution Protocol (ARP) for proper operation. Unfortunately, vulnerabilities in ARP enable a raft of Internet Protocol (IP)-based impersonation, man-in-the-middle, or Denial of Service (DoS) attacks. Proposed countermeasures to these vulnerabilities have yet to simultaneously address backward compatibility and cost requirements. This paper introduces the Ticket-based Address Resolution Protocol (TARP). TARP implements security by distributing centrally issued secure IP/Medium Access Control (MAC) address mapping attestations through existing ARP messages. We detail TARP and its implementation within the Linux operating system. We also detail the integration of TARP with the Dynamic Host Configuration Protocol (DHCP) for dynamic ticket distribution. Our experimental analysis shows that TARP improves the costs of implementing ARP security by as much as two orders of magnitude over existing protocols. We conclude by exploring a range of operational issues associated with deploying and administering ARP security.
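
The ticket mechanism is easy to sketch. Real TARP tickets are public-key signatures issued by a Local Ticketing Agent and carried in ARP replies; the shared-key HMAC below is a simplification for illustration:

    # Sketch of TARP-style IP/MAC attestation tickets (simplified).
    import hashlib
    import hmac
    import time

    LTA_KEY = b"local-ticketing-agent-key"   # hypothetical issuer key

    def issue_ticket(ip: str, mac: str, validity: int = 3600) -> dict:
        expiry = int(time.time()) + validity
        msg = f"{ip}|{mac}|{expiry}".encode()
        return {"ip": ip, "mac": mac, "expiry": expiry,
                "tag": hmac.new(LTA_KEY, msg, hashlib.sha256).hexdigest()}

    def verify_ticket(t: dict) -> bool:
        msg = f"{t['ip']}|{t['mac']}|{t['expiry']}".encode()
        expected = hmac.new(LTA_KEY, msg, hashlib.sha256).hexdigest()
        return time.time() < t["expiry"] and hmac.compare_digest(expected, t["tag"])

    ticket = issue_ticket("10.0.0.5", "aa:bb:cc:dd:ee:ff")
    print(verify_ticket(ticket))  # True; a spoofed binding without a ticket fails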

Paper: sdarticle.pdf (2007)

Efficient Hybrid Security Mechanisms for Heterogeneous Sensor Networks

Many applications that make use of sensor networks require secure communication. Because asymmetric-key solutions are difficult to implement in such a resource-constrained environment, symmetric-key methods coupled with a priori key distribution schemes have been proposed to achieve the goals of data secrecy and integrity. These approaches typically assume that all nodes are similar in terms of capabilities and, hence, deploy the same number of keys in all sensors in a network to provide the aforementioned protections. In this paper, we demonstrate that a probabilistic unbalanced distribution of keys throughout the network that leverages the existence of a small percentage of more capable sensor nodes can not only provide an equal level of security, but also reduce the consequences of node compromise. To fully characterize the effects of the unbalanced key management system, we design, implement, and measure the performance of a complementary suite of key establishment protocols known as LIGER. Using their predeployed keys, nodes operating in isolation from external networks can securely and efficiently establish keys with each other. Should resources such as a backhaul link to a key distribution center (KDC) become available, networks implementing LIGER automatically incorporate and benefit from such facilities. Detailed experiments demonstrate that the unbalanced distribution in combination with the multimodal LIGER suite offers a robust and practical solution to the security needs in sensor networks.
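
The payoff of unbalanced predistribution is visible in the standard key-sharing probability for random key predistribution: with a pool of P keys and ring sizes k1 and k2, two nodes share at least one key with probability 1 - C(P-k1, k2)/C(P, k2). The numbers below are illustrative, not the paper's parameters:

    # Key-sharing probability under random key predistribution.
    from math import comb

    def p_share(P: int, k1: int, k2: int) -> float:
        return 1 - comb(P - k1, k2) / comb(P, k2)

    P = 10_000
    print(p_share(P, 200, 200))  # balanced: two ordinary nodes, ~0.98
    print(p_share(P, 50, 800))   # unbalanced: ordinary vs. capable node, ~0.98
    # Similar connectivity, but the constrained node stores 4x fewer keys,
    # which also shrinks what a compromised ordinary node reveals.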

Paper: tkczc07.pdf (June 2007)

ASR: Anonymous and Secure Reporting of Traffic Forwarding Activity in Mobile Ad Hoc Networks

Nodes forward data on behalf of each other in mobile ad hoc networks. In a civilian application, nodes are assumed to be selfish and rational, i.e., they pursue their own self-interest. Hence, the ability to accurately measure traffic forwarding is critical to ensure proper network operation. These measurements are also often used to credit nodes based on their level of participation, or to detect loss. Past solutions employ neighbor monitoring and reporting on the traffic forwarding of nodes. These methods are not applicable in civilian networks in which neighbor nodes lack the desire or ability to perform the monitoring function. Such environments occur frequently when neighbor hosts are resource constrained, or in networks where directional antennas are used and reliable eavesdropping is difficult or impossible.

In this paper, we propose a protocol that uses nodes on the data path to securely produce packet forwarding reports. Reporting nodes are chosen randomly and secretly so that malicious nodes cannot modify their behavior based upon the monitoring point. The integrity and authenticity of reports are preserved through the use of secure link layer acknowledgments and monitoring reports. The robustness of the reporting mechanism is strengthened by forwarding the report to both endpoints (source and destination). We explore the security, cost, and accuracy of our protocol.

Paper: asr_winet.pdf (May 2007)

Origin Authentication in Interdomain Routing

Attacks against Internet routing are increasing in number and severity. Contributing greatly to these attacks is the absence of origin authentication; there is no way to validate claims of address ownership or location. The lack of such services not only enables attacks by malicious entities, but also indirectly allows seemingly inconsequential misconfigurations to disrupt large portions of the Internet. This paper considers the semantics, design, and costs of origin authentication in interdomain routing. We formalize the semantics of address delegation and use on the Internet, and develop and characterize original, broad classes of origin authentication proof systems. We estimate the address delegation graph representing the current use of IPv4 address space using available routing data. This effort reveals that current address delegation is dense and relatively static: as few as 16 entities perform 80% of the delegation on the Internet. We conclude by evaluating the proposed services via trace-based simulation, which demonstrates that the enhanced proof systems can significantly reduce resource costs associated with origin authentication.
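
Structurally, origin authentication is a check over a delegation chain: each attestation may delegate only address space its issuer holds, and the announced prefix must fall within what the origin AS ultimately received. A sketch with a hypothetical chain and signature verification elided:

    # Shape of an address delegation chain check (illustrative).
    from ipaddress import ip_network

    # (delegator, delegatee, prefix) attestations, from the root downward.
    chain = [("IANA", "ARIN", "12.0.0.0/8"),
             ("ARIN", "AS7018", "12.0.0.0/9")]

    def valid_origin(chain, origin_as, announced):
        held = ip_network("0.0.0.0/0")        # the root holds the whole space
        for _delegator, _delegatee, prefix in chain:
            prefix = ip_network(prefix)
            if not prefix.subnet_of(held):    # can only delegate what you hold
                return False
            held = prefix
        return chain[-1][1] == origin_as and ip_network(announced).subnet_of(held)

    print(valid_origin(chain, "AS7018", "12.1.0.0/16"))  # True
    print(valid_origin(chain, "AS666", "12.1.0.0/16"))   # False: wrong origin AS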

Paper: comnet06.pdf (November 2006)

Enforcing Provisioning and Authorization Policy in the Antigone System

Works in communication security policy have recently focused on general-purpose policy languages and evaluation algorithms. However, because the supporting frameworks often defer enforcement, the correctness of a realization of these policies in software is limited by the quality of domain-specific implementations. This paper introduces the Antigone communication security policy enforcement framework. The Antigone framework fills the gap between representations and enforcement by implementing and integrating the diverse security services needed by policy. Policy is enforced by the run-time composition, configuration, and regulation of security services. We present the Antigone architecture, and demonstrate non-trivial applications and policies. A profile of policy enforcement performance is developed, and key architectural enhancements identified. We conclude by considering the advantages and disadvantages of a broad range of software architectures appropriate for policy enforcement.

Paper: jcs05.pdf (November 2006)

Methods and Limitations of Security Policy Reconciliation

A security policy specifies session participant requirements. However, existing frameworks provide limited facilities for the automated reconciliation of participant policies. This paper considers the limits and methods of reconciliation in a general-purpose policy model. We identify an algorithm for efficient two-policy reconciliation, and show that, in the worst-case, reconciliation of three or more policies is intractable. Further, we suggest efficient heuristics for the detection and resolution of intractable reconciliation. Based upon the policy model, we describe the design and implementation of the Ismene policy language. The expressiveness of Ismene, and indirectly of our model, is demonstrated through the representation and exposition of policies supported by existing policy languages. We conclude with brief notes on the integration and enforcement of Ismene policy within the Antigone communication system.

Paper: tissec06.pdf (August 2006)

The Sleep Deprivation Attack in Sensor Networks: Analysis and Methods of Defense

The ability of sensor nodes to enter a low power sleep mode is very useful for extending network longevity. We show how adversary nodes can exploit clustering algorithms to ensure their selection as cluster heads for the purpose of launching attacks that prevent victim nodes from sleeping. We present two such attacks: the barrage attack and the sleep deprivation attack. The barrage attack bombards victim nodes with legitimate requests, whereas the sleep deprivation attack makes requests of victim nodes only as often as is necessary to keep the victims awake. We show that while the barrage attack causes its victims to spend slightly more energy, it is more easily detected and requires more effort on the part of the attacker. Thus we have focused our research on the sleep deprivation attack. Our analysis indicates that this attack can nullify any energy savings obtained by allowing sensor nodes to enter sleep mode. We also analyze three separate methods for mitigating this attack: the random vote scheme, the round robin scheme, and the hash-based scheme. We have evaluated these schemes based upon their ability to reduce the adversary's attack, the amount of time required to select a cluster head, and the amount of energy required to perform each scheme. We have found that of the three clustering methods analyzed, the hash-based scheme is the best at mitigating the sleep deprivation attack.
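
The flavor of the hash-based approach is that the cluster head for each round is fixed by a public hash rather than by messages an adversary can manipulate. An illustrative sketch, not the paper's exact scheme:

    # Hash-based cluster-head election (illustrative).
    import hashlib

    def elect_head(members, round_no):
        # The head is the member minimizing a public hash of (round, id):
        # verifiable by every node, roughly uniform across rounds, and not
        # influenced by an adversary's lobbying during the election.
        def score(node):
            h = hashlib.sha256(f"{round_no}|{node}".encode()).digest()
            return int.from_bytes(h[:8], "big")
        return min(members, key=score)

    members = ["n01", "n02", "n03", "n04", "n05"]
    for r in range(3):
        print(f"round {r}: head = {elect_head(members, r)}")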

Paper: ijdsn05.pdf (June 2005)

Analysis of Security Vulnerabilities in the Movie Production and Distribution Process

Unauthorized copying of movies is a major concern for the motion picture industry. While unauthorized copies of movies have been distributed via portable physical media for some time, low-cost, high-bandwidth Internet connections and peer-to-peer file sharing networks provide highly efficient distribution media. Many movies are showing up on file sharing networks shortly after, and in some cases prior to, theatrical release. It has been argued that the availability of unauthorized copies directly affects theater attendance and DVD sales, and hence represents a major financial threat to the movie industry. Our research attempts to determine the source of unauthorized copies by studying the availability and characteristics of recent popular movies in file sharing networks. We developed a data set of 312 popular movies and located one or more samples of 183 of these movies on file sharing networks, for a total of 285 movie samples. 77% of these samples appear to have been leaked by industry insiders. Most of our samples appeared on file sharing networks prior to their official consumer DVD release date. Indeed, of the movies that had been released on DVD as of the time of our study, only 5% first appeared after their DVD release date on a web site that indexes file sharing networks, indicating that consumer DVD copying currently represents a relatively minor factor compared with insider leaks. We perform a brief analysis of the movie production and distribution process and identify potential security vulnerabilities that may lead to unauthorized copies becoming available to those who may wish to redistribute them. Finally, we offer recommendations for reducing security vulnerabilities in the movie production and distribution process.

Paper: tcpolicy04.pdf (August 2004)

Consistency Analysis of Authorization Hook Placement in the Linux Security Modules Framework

We present a consistency analysis approach to assist the Linux community in verifying the correctness of authorization hook placement in the Linux Security Modules (LSM) framework. The LSM framework consists of a set of authorization hooks inserted into the Linux kernel to enable additional authorizations to be performed (e.g., for mandatory access control). When compared to system call interposition, authorization within the kernel has both security and performance advantages, but it is more difficult to verify that placement of the LSM hooks ensures that all the kernel's security-sensitive operations are authorized. Static analysis has been used previously to verify mediation (i.e., that some hook mediates access to a security-sensitive operation), but that work did not determine whether the necessary set of authorizations was checked. In this paper, we develop an approach to test the consistency of the relationships between security-sensitive operations and LSM hooks. The idea is that whenever a security-sensitive operation is performed as part of a specifiable event, a particular set of LSM hooks must have mediated that operation. This work demonstrates that the number of events that impact consistency is manageable and that the notion of consistency is useful for verifying correctness. We describe our consistency approach for performing verification, the implementation of run-time tools that implement this approach, the anomalous situations found in an LSM-patched Linux 2.4.16 kernel, and an implementation of a static analysis version of this approach.

Paper: tissec2004.pdf (May 2004)

Policy Management Using Access Control Spaces

We present the concept of an access control space and investigate how it may be useful in managing access control policies. An access control space represents the permission assignment state of a subject or role. For example, the set of permissions explicitly assigned to a role defines its specified subspace, and the set of constraints precluding assignment to that role defines its prohibited subspace. In analyzing these subspaces, we identify two problems: (1) often a significant portion of an access control space has unknown assignment semantics, which indicates that the policy is underspecified; and (2) often high-level assignments and constraints that are easily understood result in conflicts, where resolution often leads to significantly more complex specifications. We have developed a prototype system, called Gokyo, that computes access control spaces. Gokyo identifies the unknown subspace to assist system administrators in developing more complete policy specifications. Also, Gokyo identifies conflicting subspaces and enables system administrators to resolve conflicts in a variety of ways in order to preserve the simplicity of constraint specification. We demonstrate Gokyo by analyzing a Web server policy example and examine its utility by applying it to the SELinux example policy. Even for the extensive SELinux example policy, we find that only eight additional expressions are necessary to resolve Apache administrator policy conflicts.
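
Access control spaces reduce naturally to set arithmetic, as in this toy example over a hypothetical permission universe:

    # Access control spaces as sets (illustrative permission universe).
    ALL = {(obj, op) for obj in ("config", "log", "content")
                     for op in ("read", "write")}

    specified  = {("content", "read"), ("log", "write")}    # assigned to the role
    prohibited = {("config", "read"), ("config", "write")}  # precluded by constraints

    unknown = ALL - specified - prohibited
    print(sorted(unknown))         # assignments with no stated semantics: underspecified
    print(specified & prohibited)  # non-empty would signal an assignment/constraint conflict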

Paper: tissec2003.pdf (August 2003)

Practical safety in flexible access control models

Assurance that an access control configuration will not result in the leakage of a right to an unauthorized principal, called safety, is fundamental to ensuring that the most basic of access control policies can be enforced. It has been proven that the safety of an access control configuration cannot be decided for a general access control model, such as Lampson's access matrix, so safety is achieved either through the use of limited access control models or the verification of safety via constraints. Currently, almost all safety critical systems use limited access control models, such as Bell-LaPadula or Domain and Type Enforcement, because constraint expression languages are far too complex for typical administrators to use properly. However, researchers have identified that most constraints belong to one of a few basic types, so our goal is to develop a constraint expression model in which these constraints can be expressed in a straightforward way and extensions can be made to add other constraints, if desired. Our approach to expressing constraints has the following properties: (1) an access control policy is expressed using a graphical model in which the nodes represent sets (e.g., of subjects, objects, etc.) and the edges represent binary relationships on those sets and (2) constraints are expressed using a few, simple set operators on graph nodes. The basic graphical model is very simple, and we extend this model only as necessary to satisfy the identified constraint types. Since the basic graphical model is also general, further extension to support other constraints is possible, but such extensions should be made with caution as each increases the complexity of the model. Our hope is that by keeping the complexity of constraint expression in check, flexible access control models, such as role-based access control, may also be used for expressing access control policy for safety-critical systems.

Paper: tissec2001.pdf (2001)

Demand Transformation Analysis for Concurrent Constraint Programs

This paper presents a demand transformation analysis that maps a predicate's output demands to its input demands. This backward dataflow analysis for concurrent constraint programs is constructed in the framework of abstract interpretation. In the context of stream parallelism, this analysis identifies an amount of input data for which predicate execution can safely wait without danger of introducing deadlock. We assume that programs are well-moded and prove that our analysis is safe. We have constructed an implementation of this analysis and tested it on some small, illustrative programs and have determined that it gives useful results in practice. We identify several applications of the analysis results to distributed implementations of concurrent constraint languages, including thread construction and communication granularity. This analysis will enable existing computation cost estimation analyses to be applied to stream-parallel logic languages.

Higher-Order Uncurrying

We present a formal specification of uncurrying for a higher-order, functional language with ML-style let-polymorphism. This specification supports the general uncurrying of functions, even for functions that are passed as arguments or returned as values. The specification also supports partial uncurrying of any consecutive parameters of a function, rather than only uncurrying all of a function's parameters. We present the specification as a deductive system that axiomatizes a judgment relating a source term with an uncurried form of the term. We prove that this system relates only typable terms and that it is correct with respect to an operational semantics. We define a practical algorithm, based on algorithm W, that implements uncurrying and prove this algorithm sound and complete with respect to the deductive system. This algorithm generates maximally uncurried forms of source terms. These results provide a declarative framework for reasoning about uncurrying and support a richer form of uncurrying than is currently found in compilers for functional languages.
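
For readers unfamiliar with the transformation, here is uncurrying in miniature. The paper performs it as a type-directed, compile-time transformation; the Python toy below only conveys the shape of full and partial uncurrying:

    # Uncurrying in miniature (illustrative, run-time version).
    def curried_add(a):
        return lambda b: lambda c: a + b + c      # add : int -> int -> int -> int

    def uncurry(f, n):
        """Turn an n-argument curried function into an n-ary function."""
        def g(*args):
            assert len(args) == n
            result = f
            for a in args:
                result = result(a)
            return result
        return g

    add3 = uncurry(curried_add, 3)                # full uncurrying
    print(add3(1, 2, 3))                          # 6

    # Partial uncurrying of consecutive parameters: collapse only the first two.
    add2_then1 = lambda a, b: curried_add(a)(b)   # (int, int) -> int -> int
    print(add2_then1(1, 2)(3))                    # 6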

Flexible Control of Downloaded Executable Content

We present a security architecture that enables system and application access control requirements to be enforced on applications composed from downloaded executable content. Downloaded executable content consists of messages downloaded from remote hosts that contain executables that run, upon receipt, on the downloading principal's machine. Unless restricted, this content can perform malicious actions, including accessing its downloading principal's private data and sending messages on this principal's behalf. Current security architectures for controlling downloaded executable content (e.g., JDK 1.2) enable specification of access control requirements for content based on its provider and identity. Since these access control requirements must cover every legal use of the class, they may include rights that are not necessary for a particular application of content. Therefore, using these systems, an application composed from downloaded executable content cannot enforce its access control requirements without the addition of application-specific security mechanisms. In this paper, we define an access control model with the following properties: (1) system administrators can define system access control requirements on applications and (2) application developers can use the same model to enforce application access control requirements without the need for ad hoc security mechanisms. This access control model uses features of role-based access control models to enable (1) specification of a single role that applies to multiple application instances; (2) selection of a content's access rights based on the content's application and role in the application; (3) consistency maintained between application state and content access rights; and (4) control of role administration. We detail a system architecture that uses this access control model to implement secure collaborative applications. Lastly, we describe an implementation of this architecture, called the Lava security architecture.

Paper: tissec1999.pdf (May 1999)

A flexible security system for using Internet content

The Internet and World Wide Web have introduced a powerful new way to acquire content. Previously, users had to either buy or order content on coded disks. They can now download executable code and applications directly. From the users' viewpoint, this decreases the amount of software stored on their machines. It also lets content providers customize applications by combining different vendors' content. Downloading content from the Internet brings risks: content that appears reputable may be malicious; given system access, it can damage or destroy data. To contend with this risk, the author developed FlexxGuard, a flexible interpreter that dynamically derives protection domains and uses those domains to authorize content operations.

Measuring the Impact and Perception of Acceptable Advertisements

In 2011, Adblock Plus, the most widely used ad-blocking software, began to permit some advertisements as part of its Acceptable Ads program. Under this program, some ad networks and content providers pay to have their advertisements shown to users. Such practices have been controversial among both users and publishers. In a step towards informing the discussion about these practices, we present the first comprehensive study of the Acceptable Ads program. Specifically, we characterize which advertisements are allowed and how the whitelisting has changed since its introduction in 2011. We show that the list of filters used to whitelist acceptable advertisements has been updated on average every 1.5 days and grew from 9 filters in 2011 to over 5,900 in the Spring of 2015. More broadly, the current whitelist triggers filters on 59% of the top 5,000 websites. Our measurements also show that the program allows advertisements on 2.6 million parked domains. Lastly, we take the lessons learned from our analysis and suggest ways to improve the transparency of the whitelisting process.

Paper: imc15.pdf (October 2015)

Composite Constant Propagation: Application to Android Inter-Component Communication Analysis

Many program analyses require statically inferring the possible values of composite types. However, current approaches either do not account for correlations between object fields or do so in an ad hoc manner. In this paper, we introduce the problem of composite constant propagation. We develop the first generic solver that infers all possible values of complex objects in an interprocedural, flow and context-sensitive manner, taking field correlations into account. Composite constant propagation problems are specified using COAL, a declarative language. We apply our COAL solver to the problem of inferring Android Inter-Component Communication (ICC) values, which is required to understand how the components of Android applications interact. Using COAL, we model ICC objects in Android more thoroughly than the state-of-the-art. We compute ICC values for 460 applications from the Play store. The ICC values we infer are substantially more precise than previous work. The analysis is efficient, taking slightly over two minutes per application on average. While this work can be used as the basis for many whole-program analyses of Android applications, the COAL solver can also be used to infer the values of composite objects in many other contexts.

Paper: octeau-icse15.pdf (May 2015)

IccTA: Detecting Inter-Component Privacy Leaks in Android Apps

Shake Them All is a popular “Wallpaper” application exceeding millions of downloads on Google Play. At installation, this application is given permission to (1) access the Internet (for updating wallpapers) and (2) use the device microphone (to change background following noise changes). With these permissions, the application could silently record user conversations and upload them remotely. To give more confidence about how Shake Them All actually processes what it records, it is necessary to build a precise analysis tool that tracks the flow of any sensitive data from its source point to any sink, especially if those are in different components. Because Android applications may leak private data carelessly or maliciously, we propose IccTA, a static taint analyzer to detect privacy leaks between components in Android applications. IccTA goes beyond state-of-the-art approaches by supporting inter-component detection. By propagating context information between components, IccTA improves the precision of the analysis. IccTA outperforms existing tools on two benchmarks for ICC-leak detectors: DroidBench and ICC-Bench. Moreover, our approach detects 534 ICC leaks in 108 apps from MalGenome and 2,395 ICC leaks in 337 apps in a set of 15,000 Google Play apps.

Building an Ontology of Cyber Security

Situation awareness depends on a reliable perception of the environment and comprehension of its semantic structures. In this respect, cyberspace presents a unique challenge to the situation awareness of users and analysts, since it is a unique combination of human and machine elements, whose complex interactions occur in a global communication network. Accordingly, we outline the underpinnings of an ontology of secure operations in cyberspace, presenting the ontology framework and providing two modeling examples. We make the case for adopting a rigorous semantic model of cyber security to overcome the current limits of the state of the art.

JIGSAW: Protecting Resource Access by Inferring Programmer Expectations

Processes retrieve a variety of resources, such as files, from the operating system to function. However, securely accessing resources has proven to be a challenging task, due to a variety of resource access attacks that account for 10-15% of the vulnerabilities reported each year. Current defenses address only a subset of these attacks in ad hoc and incomplete ways. In this paper, we provide a comprehensive defense against resource access attacks. First, we identify a fundamental reason that resource access attacks exist: a mismatch between program expectations and the actual environment the program runs in. To address such mismatches, we propose JIGSAW, a system that can automatically derive program expectations and enforce them on the deployment. We evaluated JIGSAW on widely-used programs and found that programmers have many implicit expectations. These mismatches led us to discover two previously unknown vulnerabilities and a default misconfiguration in the Apache webserver. We also found that we could efficiently enforce protection for programs by enforcing programmer expectations on the deployment, with few false positives, thus providing a principled defense against resource access attacks.

Duet: Library Integrity Verification for Android Applications

In recent years, the Android operating system has had an explosive growth in the number of applications containing third-party libraries for different purposes. In this paper, we identify three library-centric threats in real-world Android application markets: (i) the library modification threat, (ii) the masquerading threat, and (iii) the aggressive library threat. These three threats cannot be fully addressed by existing defense mechanisms such as software analysis, anti-virus software, and anti-repackaging techniques. To mitigate these threats, we propose Duet, a library integrity verification tool for Android applications at application stores. This is non-trivial because the Android application build process merges library code and application-specific logic into a single binary file. Our approach uses reverse-engineering to achieve integrity verification. We implemented a full working prototype of Duet. In a dataset of 100,000 Android applications downloaded from Google Play between February 2012 and September 2013, we verify the integrity of 15 libraries. On average, 80.50% of libraries can pass the integrity verification. In-depth analysis indicates that code insertion, obfuscation, and optimization on libraries by application developers are the primary reasons for not passing integrity verification. The evaluation results not only indicate that Duet is an effective tool to mitigate library-centric attacks, but also provide empirical insight into the library integrity situation in the wild.

Policy Models to Protect Resource Retrieval

Processes need a variety of resources from their operating environment in order to run properly, but an adversary may control the inputs to resource retrieval or the end resource itself, leading to a variety of vulnerabilities. Conventional access control methods are not suitable to prevent such vulnerabilities because they use one set of permissions for all system call invocations. In this paper, we define a novel policy model for describing when resource retrievals are unsafe, so they can be blocked. This model highlights two contributions: (1) the explicit definition of adversary models as adversarial roles, which list the permissions that dictate whether one subject is an adversary of another, and (2) the application of data-flow analysis to determine the adversary control of the names used to retrieve resources. An evaluation using multiple adversary models shows that data-flow analysis is necessary to authorize resource retrieval in over 90% of system calls. By making adversary models and the adversary accessibility of all aspects of resource retrieval explicit, we can block resource access attacks system-wide.

A Trusted Safety Verifier for Process Controller Code

Attackers can leverage security vulnerabilities in control systems to make physical processes behave unsafely. Currently, the safe behavior of a control system relies on a Trusted Computing Base (TCB) of commodity machines, firewalls, networks, and embedded systems. These large TCBs, often containing known vulnerabilities, expose many attack vectors which can impact process safety. In this paper, we present the Trusted Safety Verifier (TSV), a minimal TCB for the verification of safety-critical code executed on programmable controllers. No controller code is allowed to be executed before it passes physical safety checks by TSV. If a safety violation is found, TSV provides a demonstrative test case to system operators. TSV works by first translating assembly-level controller code into an intermediate language, ILIL. ILIL allows us to check code containing more instructions and features than previous controller code safety verification techniques. TSV efficiently mixes symbolic execution and model checking by transforming an ILIL program into a novel temporal execution graph that lumps together safety-equivalent controller states. We implemented TSV on a Raspberry Pi computer as a bump-in-the-wire that intercepts all controller-bound code. Our evaluation shows that it can test a variety of programs for common safety properties in an average of less than three minutes, and under six minutes in the worst case, a small one-time addition to the process engineering life cycle.

Pitfalls in the automated strengthening of passwords

Passwords are the most common form of authentication for computer systems, and with good reason: they are simple, intuitive and require no extra device for their use. Unfortunately, users often choose weak passwords that are easy to guess. Various methods of helping users select strong passwords have been deployed, often in the form of requirements for the minimum length and number of character classes to use. Alternatively, a site could modify a user's password in order to make it more secure; strengthening algorithms have been proposed that extend/modify a user-supplied password until achieving sufficient strength. Researchers have suggested that it may be possible to balance password strength with memorability by limiting automated changes to one or two characters while evaluating the generated passwords' strength against known cracking algorithms. This paper shows that passwords that were strengthened against the best known cracking algorithms are still susceptible to attack, provided the adversary knows the strengthening algorithm. We propose two attacks: (1) by strengthening the data sets with the known algorithm, which increases the probability of recovering passwords by a factor of 3-8, and (2) by a brute-force attack on the initial passwords and space of possible changes, recovering all passwords produced when a sufficiently weak initial password was suggested. As a result, we find that the proposed strengthening algorithms do not yet satisfy Kerckhoffs's principle.
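
The second attack is easy to see in miniature. The strengthen() function below is a hypothetical deterministic strengthener standing in for the published (randomized) algorithms; under Kerckhoffs's assumption, the attacker knows the algorithm and its change space, so recovery becomes a replay over candidate weak inputs rather than a blind search:

    # Replaying a known strengthening algorithm over likely weak inputs (sketch).
    def strengthen(pw):
        # Hypothetical deterministic strengthener: capitalize and append a digit.
        return pw[:1].upper() + pw[1:] + "1"

    def crack(target, dictionary):
        for weak in dictionary:
            if strengthen(weak) == target:
                return weak
        return None

    print(crack("Sunshine1", ["password", "sunshine", "letmein"]))  # 'sunshine'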

Stateful Policy Enforcement for Control System Device Usage

Networked control systems used in energy, manufacturing, and transportation combine large, vulnerable attack surfaces with far overprovisioned privileges. Often, compromising a single computer or user account is sufficient to give an attacker free rein over physical machinery. Significant reduction of attack surface size is an ongoing problem, so we shift our focus to reducing the privileges granted to system operators and embedded controllers. To this end, we introduce C2, an enforcement mechanism for policies governing the usage of electromechanical devices. In presenting C2, we address two basic problems: (i) How should a policy for physical device usage be expressed and enforced? This is a challenging question, as the safe usage of physical devices depends on mechanical limitations and the behavior of nearby devices. (ii) What actions should be taken if a physical machine is issued an operation that violates the policy? C2 takes measures to ensure that unsafe behaviors are not introduced when denying slightly erroneous yet legitimate operations. We evaluate C2 against six representative control systems, and show that it can efficiently perform policy checks with less than 3.7% overhead, while not introducing new unsafe behavior into a control system.

Effective Inter-Component Communication Mapping in Android with Epicc: An Essential Step Towards Holistic Security Analysis

Many threats present in smartphones are the result of interactions between application components, not just artifacts of single components. However, current techniques for identifying inter-application communication are ad hoc and do not scale to large numbers of applications. In this paper, we reduce the discovery of inter-component communication (ICC) in smartphones to an instance of the Interprocedural Distributive Environment (IDE) problem, and develop a sound static analysis technique targeted to the Android platform. We apply this analysis to 1,200 applications selected from the Play store and characterize the locations and substance of their ICC. Experiments show that full specifications for ICC can be identified for over 93% of ICC locations for the applications studied. Further, the analysis scales well; analysis of each application took on average 113 seconds to complete. Epicc, the resulting tool, finds ICC vulnerabilities with far fewer false positives than the next best tool. In this way, we develop a scalable vehicle to extend current security analysis to entire collections of applications as well as the interfaces they export.

Process Firewalls: Protecting Processes During Resource Access

Processes retrieve a variety of resources from the operating system in order to execute properly, but adversaries have several ways to trick processes into retrieving resources of the adversaries' choosing. Such resource access attacks use name resolution, race conditions, and/or ambiguities regarding which resources are controlled by adversaries, accounting for 5-10% of CVE entries over the last four years. Programmers have found these attacks extremely hard to eliminate because resources are managed externally to the program, but the operating system does not provide a sufficiently rich system call API to enable programs to block such attacks. In this paper, we present the Process Firewall, a kernel mechanism that protects processes in a manner akin to a network firewall, but for the system call interface. Because the Process Firewall only protects processes, it can examine their internal state to identify the protection rules necessary to block many of these attacks, without the need for program modification or user configuration. We built a prototype Process Firewall for Linux demonstrating: (1) the prevention of several vulnerabilities, including two that were previously unknown; (2) that this defense can be provided system-wide for less than 4% overhead in a variety of macrobenchmarks; and (3) that it can also improve program performance, as Apache throughput is increased by 3-8% when program resource access checks are replaced by Process Firewall rules. These results show that it is practical for the operating system to protect processes by preventing a variety of resource access attacks system-wide.
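
A single rule conveys the idea: the decision keys on the process's state at a particular system call, not merely on subject and object permissions. The rule below is illustrative, not one of the system's actual rules; it targets the classic access(2)/open(2) time-of-check-to-time-of-use race:

    # A Process-Firewall-flavored, call-site-aware check (illustrative).
    import os.path

    def pf_check_open(path, callsite_did_access_check):
        # If the program already vetted this path with access(2), a symbolic
        # link appearing at open(2) time suggests an attacker-planted race.
        if callsite_did_access_check and os.path.islink(path):
            return False   # deny this open() only; other call sites unaffected
        return True

    print(pf_check_open("/tmp/report.txt", callsite_did_access_check=True))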

Using available security policies to automate placement of network intrusion detection

System administrators frequently use Intrusion Detection and Prevention Systems (IDPS) and host security mechanisms, such as firewalls and mandatory access control, to protect their hosts from remote adversaries. The usual techniques for placing network monitoring and intrusion prevention apparatuses in the network do not account for host flows and fail to defend against vulnerabilities resulting from minor modifications to host configurations. Therefore, despite widespread use of these methods, the task of security remains largely reactive. In this paper, we propose an approach to automate a minimal mediation placement for network and host flows. We use an Intrusion Prevention System (IPS) as a replacement for certain host mediations. Due to the large number of flows at the host level, we summarize information flows at the composite network level, using a conservative estimate of the host mediation. Our summary technique reduces the number of relevant network nodes in our example network by 80% and improves mediation placement speed by 87.5%. In this way, we effectively and efficiently compute network-wide defense placement for comprehensive security enforcement.

The Right Files at the Right Time

Programs fetch resources, such as files, from the operating system through the process of name resolution. However, name resolution can be subverted by adversaries to redirect victim processes to resources chosen by the adversaries, leading to a variety of attacks. These attacks are possible because traditional access control treats processes as black boxes, permitting all process permissions to all process system calls, enabling adversaries to trick victims into using resources that are not appropriate for particular system calls. Researchers have examined methods for enforcing distinct policies on individual system calls, but these methods are difficult to use because programmers must manually specify which permissions apply at each system call. In this work, we examine the generation of system call-specific program policies to augment access control to defend against such name resolution attacks. Our insight in this paper is that system calls can be classified by the properties of the resources accessed to produce policies automatically. Given specific knowledge about name resolution attacks, such a classification may be refined further to prevent many name resolution attacks with little chance of false positives. In this paper, we produce a policy using runtime analysis for an Ubuntu 12.04 distribution, finding that 98.7% of accesses can be restricted to prevent typical name resolution attacks and more than 65% of accesses can be restricted to a single file without creating false positives. We also examine three programs in detail to evaluate the efficacy of using the provided package test suites to generate policies, finding that administrators can produce effective policies automatically.
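
As a rough illustration of such system call-specific policies (all program names, call sites, and resource classes below are invented), a generated policy might restrict each call site either to the exact files seen during profiling or to a class of resources sharing safety-relevant attributes:

    # Toy policy: (program, call_site) -> allowed resource description.
    policy = {
        ("cupsd", "open_config"): {"kind": "exact",
                                   "paths": {"/etc/cups/cupsd.conf"}},
        ("cupsd", "open_spool"):  {"kind": "class", "owner": "root",
                                   "via_untrusted_dir": False},
    }

    def permitted(program, call_site, path, attrs):
        entry = policy.get((program, call_site))
        if entry is None:
            return True             # unprofiled: defer to ordinary access control
        if entry["kind"] == "exact":
            return path in entry["paths"]
        return (attrs["owner"] == entry["owner"]
                and attrs["via_untrusted_dir"] == entry["via_untrusted_dir"])

    print(permitted("cupsd", "open_config", "/etc/cups/cupsd.conf", {}))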

AMIDS: A Multi-Sensor Energy Theft Detection Framework for Advanced Metering Infrastructures

The advanced metering infrastructure (AMI) is a crucial component of the smart grid, replacing traditional analog devices with computerized smart meters. Smart meters have not only allowed for efficient management of many end-users, but also have made AMI an attractive target for remote exploits and local physical tampering with the end goal of stealing energy. While smart meters possess multiple sensors and data sources that can indicate energy theft, in practice, the individual methods exhibit many false positives. In this paper, we present AMIDS, an AMI intrusion detection system that uses information fusion to combine the sensors and consumption data from a smart meter to more accurately detect energy theft. AMIDS combines meter audit logs of physical and cyber events with consumption data to more accurately model and detect theft-related behavior. Our experimental results on normal and anomalous load profiles show that AMIDS can identify energy theft efforts with high accuracy. Furthermore, AMIDS correctly identified legitimate load profile changes that more elementary analyses classified as malicious.
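
A toy illustration of the fusion idea, with made-up weights and thresholds rather than AMIDS's actual model: each indicator alone is noisy, but agreement across sensors raises confidence:

    # Combine three imperfect theft indicators into one score.
    def fused_theft_score(cyber_alerts, tamper_events, consumption_anomaly):
        score = 0.0
        score += 0.4 * min(cyber_alerts / 3.0, 1.0)    # remote exploit attempts
        score += 0.3 * min(tamper_events / 2.0, 1.0)   # physical tamper log entries
        score += 0.3 * consumption_anomaly             # 0..1 from load-profile model
        return score

    if fused_theft_score(cyber_alerts=1, tamper_events=2,
                         consumption_anomaly=0.8) > 0.6:
        print("flag meter for investigation")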

Hi-Fi: Collecting High-Fidelity Whole-System Provenance

Data provenance - a record of the origin and evolution of data in a system - is a useful tool for forensic analysis. However, existing provenance collection mechanisms fail to achieve sufficient breadth or fidelity to provide a holistic view of a system's operation over time. We present Hi-Fi, a kernel-level provenance system which leverages the Linux Security Modules framework to collect high-fidelity whole-system provenance. We demonstrate that Hi-Fi is able to record a variety of malicious behavior within a compromised system. In addition, our benchmarks show the collection overhead from Hi-Fi to be less than 1% for most system calls and 3% in a representative workload, while simultaneously generating a system measurement that fully reflects system evolution. In this way, we show that we can collect broad, high-fidelity provenance data which is capable of supporting detailed forensic analysis.

Transforming Commodity Security Policies to Enforce Clark-Wilson Integrity

Modern distributed systems are composed from several off-the-shelf components, including operating systems, virtualization infrastructure, and application packages, upon which some custom application code (e.g., web application) is deployed. While several commodity systems now include mandatory access control (MAC) enforcement to protect the individual system, the complexity of distributed systems composed in this manner makes it difficult to identify how threats may be propagated throughout the system. As a result, security practitioners react to vulnerabilities when they are identified by adversaries, rather than proactively protecting the system's data integrity. In this paper, the goal is to develop a mostly-automated method to transform a set of commodity MAC policies into a system-wide policy that proactively provides classical integrity protection, in particular satisfying an approximation of the Clark-Wilson integrity model. The Clark-Wilson model prescribes integrity verification of security-critical data and mediation at program entrypoints to protect the integrity of processing that operates on security-critical data, features that are currently missing in commodity system MAC policies. By solving a graph-cut problem over the information flows produced by available MAC policies, we identify a near-minimal placement for integrity verification and program mediation necessary to satisfy Clark-Wilson. We demonstrate the practicality of producing Clark-Wilson policies for distributed systems on a web application running on virtualized SELinux hosts, where our method finds that only 27 additional entrypoint mediators are sufficient to mediate the threats of remote adversaries, mediators for possible local threats can be computed automatically, and 20 integrity verification procedures are necessary to provide information flow integrity approximating Clark-Wilson integrity.
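
A minimal sketch of the graph-cut reduction, with invented flows and unit mediation costs (a standard max-flow/min-cut solver stands in for the paper's machinery):

    # Edges are information flows from untrusted sources toward
    # security-critical data; a minimum s-t cut yields candidate points
    # for integrity verification or entrypoint mediation.
    import networkx as nx

    G = nx.DiGraph()
    flows = [("remote_net", "httpd"), ("httpd", "webapp"),
             ("webapp", "db"), ("local_tmp", "webapp")]
    for u, v in flows:
        G.add_edge(u, v, capacity=1)        # unit cost per mediated flow
    G.add_edge("s", "remote_net", capacity=float("inf"))
    G.add_edge("s", "local_tmp", capacity=float("inf"))
    G.add_edge("db", "t", capacity=float("inf"))

    cut_value, (reachable, non_reachable) = nx.minimum_cut(G, "s", "t")
    cut_edges = [(u, v) for u in reachable
                 for v in G[u] if v in non_reachable]
    print(cut_edges)    # flows at which mediation should be placed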

Leveraging "Choice" to Automate Authorization Hook Placement

When servers manage resources on behalf of multiple, mutually-distrusting clients, they must mediate access to those resources to ensure that each client request complies with an authorization policy. This goal is typically achieved by placing authorization hooks at appropriate locations in server code. The goal of authorization hook placement is to completely mediate all security-sensitive operations on shared resources. To date, authorization hook placement in code bases, such as the X server and postgresql, has largely been a manual procedure, driven by informal analysis of server code and discussions on developer forums. Often, there is a lack of consensus about basic concepts, such as what constitutes a security-sensitive operation. In this paper, we propose an automated hook placement approach that is motivated by a novel observation -- that the deliberate choices made by clients for objects from server collections and for processing those objects must all be authorized. We have built a tool that uses this observation to statically analyze the server source. Using real-world examples (the X server and postgresql), we show that the hooks placed by our method are just as effective as hooks that were manually placed over the course of years while greatly reducing the burden on programmers.

SABOT: Specification-based Payload Generation for Programmable Logic Controllers

Programmable Logic Controllers (PLCs) drive the behavior of industrial control systems according to uploaded programs. It is now known that PLCs are vulnerable to the uploading of malicious code that can have severe physical consequences. What is not understood is whether an adversary with no knowledge of the PLC's interface to the control system can execute a damaging, targeted, or stealthy attack against a control system using the PLC. In this paper, we present SABOT, a tool that automatically maps the control instructions in a PLC to an adversary-provided specification of the target control system's behavior. This mapping recovers sufficient semantics of the PLC's internal layout to instantiate arbitrary malicious controller code. This lowers the prerequisite knowledge needed to tailor an attack to a control system. SABOT uses an incremental model checking algorithm to map a few plant devices at a time, until a mapping is found for all adversary-specified devices. At this point, a malicious payload can be compiled and uploaded to the PLC. Our evaluation shows that SABOT correctly compiles payloads for all tested control systems when the adversary correctly specifies full system behavior, and for 4 out of 5 systems in most cases where there were unspecified features. Furthermore, SABOT completed all analyses in under 2 minutes.

Retargeting Android Applications to Java Bytecode

The Android OS has emerged as the leading platform for smartphone applications. However, because Android applications are compiled from Java source into platform-specific Dalvik bytecode, existing program analysis tools cannot be used to evaluate their behavior. This paper develops and evaluates algorithms for retargeting Android applications received from markets to Java class files. The resulting Dare tool uses a new intermediate representation to enable fast and accurate retargeting. Dare further applies strong constraint solving to infer typing information and translates the 257 DVM opcodes using only 9 translation rules. It also handles cases where the input Dalvik bytecode is unverifiable. We evaluate Dare on 1,100 of the top applications found in the free section of the Android market and successfully retarget 99.99% of the 262,110 associated classes. Further, whereas existing tools can only fully retarget about half of these applications, Dare can recover over 99% of them. In this way, we open the door to users, developers and markets to use the vast array of program analysis tools to ensure the correct operation of Android applications.

STING: Finding Name Resolution Vulnerabilities in Programs

The process of name resolution, where names are resolved into resource references, is fundamental to computer science, but its use has resulted in several classes of vulnerabilities. These vulnerabilities are difficult for programmers to eliminate because their cause is external to the program: the adversary changes namespace bindings in the system to redirect victim programs to a resource of the adversary's choosing. Researchers have also found that these attacks are very difficult to prevent systematically. Any successful defense must have both knowledge about the system namespace and the program intent to eradicate such attacks. As a result, finding and fixing program vulnerabilities to such attacks is our best defense. In this paper, we propose the STING test engine, which finds name resolution vulnerabilities in programs by performing a dynamic analysis of name resolution processing to produce directed test cases whenever an attack may be possible. The key insight is that such name resolution attacks are possible whenever an adversary has write access to a directory shared with the victim, so STING automatically identifies when such directories will be accessed in name resolution to produce test cases that are likely to indicate a true vulnerability if undefended. Using STING, we found 21 previously-unknown vulnerabilities in a variety of Linux programs on Ubuntu and Fedora systems, demonstrating that comprehensive testing for name resolution vulnerabilities is practical.
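
The key insight above can be sketched as a check for untrusted-writable directories along a resolved path; this is a simplified heuristic for illustration, not STING itself:

    # Flag path components where another user could rebind names
    # (directory writable by others and not sticky-protected).
    import os, stat

    def adversary_accessible_dirs(path):
        risky = []
        cur = os.sep
        for comp in os.path.normpath(path).split(os.sep):
            if not comp:
                continue
            cur = os.path.join(cur, comp)
            if not os.path.isdir(cur):
                break                         # reached the leaf name
            st = os.stat(cur)
            world_writable = bool(st.st_mode & stat.S_IWOTH)
            sticky = bool(st.st_mode & stat.S_ISVTX)
            if world_writable and not sticky:
                risky.append(cur)             # adversary could plant a symlink
        return risky

    print(adversary_accessible_dirs("/tmp/app/session.lock"))

STING goes further: rather than merely flagging such directories, it actively plants bindings there and observes whether the victim follows them, turning a potential race into a concrete test case.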

Structured Security Testing in the Smartgrid

The advanced metering infrastructure (AMI) is revolutionizing electrical grids. Intelligent AMI "smart meters" report real time usage data that enables efficient energy generation and use. However, aggressive deployments often outpace security efforts: new devices from a dizzying array of vendors are being introduced into grids with limited understanding of the security problems they represent. In this paper we develop an archetypal attack tree approach to guide penetration testing across multiple-vendor implementations of a technology class. In this, we graft archetypal attack trees modeling broad adversary goals and attack vectors to vendor-specific concrete attack trees. Evaluators then use the grafted trees as a roadmap to penetration testing. Our experiments with multiple vendors generate real attack scenarios using vulnerabilities identified during directed penetration testing, e.g., manipulation of energy usage data, spoofing meters, and extracting sensitive data from internal registers. We provide a detailed example of one such attack as tested using our developed methodology.

Scalable Integrity-Guaranteed AJAX

Interactive web systems are the de facto vehicle for implementing sensitive applications, e.g., personal banking, business workflows. Existing web services provide little protection against compromised servers, leaving users to blindly trust that the system is functioning correctly, without being able to verify this trust. Document integrity systems support stronger guarantees by binding a document to the (non-compromised) integrity state of the machine from whence it was received, at the cost of substantially higher latencies. Such latencies render interactive applications unusable. This paper explores cryptographic constructions and systems designs for providing document integrity in AJAX-style interactive web systems. The Sporf system exploits pre-computation to offset runtime costs and support negligible latencies. We detail the design of an Apache-based server supporting content integrity proofs, and perform a detailed empirical study of realistic web workloads. Our evaluation shows that a software-only solution results in latencies of just over 200 milliseconds on a loaded system. An analytical model reveals that with a nominal hardware investment, the latency can be lowered to just over 81 milliseconds, achieving nearly the same throughput as an unmodified system.

Integrity Walls: Finding Attack Surfaces from Mandatory Access Control Policies

Adding new programs or configuration options to a system often leads to new exploits because it provides adversaries with new ways to access possible vulnerabilities. As a result, application developers often must react to exploits as they occur. One proactive defense is to protect programs at their attack surfaces, the program entry points (e.g., system calls) accessible to adversaries. However, experience has shown that developers often fail to defend these entry points because they do not locate all of these points where programs access system resources controlled by attackers. In this paper, we develop a runtime analysis method to compute program attack surfaces in system deployments, which uses a novel approach to computing program adversaries to determine which program entry points access adversary-controlled objects. We implemented our design as a Linux kernel mechanism capable of identifying entry points for both binary and interpreted programs. Using this mechanism, we computed the attack surfaces for all the programs in the Ubuntu Linux 10.04 Desktop distribution automatically. Our results indicate that accurate location of attack surface entry points requires considering a program in relation to the system's access control policy. On examining located attack surfaces, we discovered previously unknown vulnerabilities in an X Windows startup script available since 2006 and the GNU Icecat web browser. Our tools enable developers to find attack surfaces for their programs quickly and to produce defenses prior to the emergence of attacks, potentially moving us away from the penetrate-and-patch rut.

Protecting Consumer Privacy from Electric Load Monitoring

The smart grid introduces concerns for the loss of consumer privacy; recently deployed smart meters retain and distribute highly accurate profiles of home energy use. These profiles can be mined by Non Intrusive Load Monitors (NILMs) to expose much of the human activity within the served site. This paper introduces a new class of algorithms and systems, called Non-Intrusive Load Leveling (NILL) to combat potential invasions of privacy. NILL uses an in-residence battery to mask variance in load on the grid, thus eliminating exposure of the appliance-driven information used to compromise consumer privacy. We use real residential energy use profiles to drive four simulated deployments of NILL. The simulations show that NILL exposes only 1.1 to 5.9 useful energy events per day hidden amongst hundreds or thousands of similar battery-suppressed events. Thus, the energy profiles exhibited by NILL are largely useless for current NILM algorithms. Surprisingly, such privacy gains can be achieved using battery systems whose storage capacity is far lower than the residence’s aggregate load average. We conclude by discussing how the costs of NILL can be offset by energy savings under tiered energy schedules.
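
A toy simulation of the load-leveling idea (all parameters invented): the battery absorbs deviations from a steady target draw, so only battery-saturation events remain visible to the meter:

    # Grid-visible load stays near target_kw while the battery can
    # charge/discharge; appliance edges are hidden inside the residence.
    def level(loads_kw, target_kw, capacity_kwh, dt_h=1/60):
        soc = capacity_kwh / 2                 # start half charged
        metered = []
        for load in loads_kw:
            want = target_kw - load            # + charge, - discharge
            energy = want * dt_h
            energy = max(-soc, min(energy, capacity_kwh - soc))
            soc += energy
            metered.append(load + energy / dt_h)   # grid sees leveled draw
        return metered

    print(level([0.3, 2.1, 2.0, 0.4], target_kw=1.2, capacity_kwh=2.0))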

A Rose by Any Other Name or an Insane Root? Adventures in Name Resolution

Namespaces are fundamental to computing systems. Each namespace maps the names that clients use to retrieve resources to the actual resources themselves. However, the indirection that namespaces provide introduces avenues of attack through the name resolution process. Adversaries can trick programs into accessing unintended resources by changing the binding between names and resources and by using names whose target resources are ambiguous. In this paper, we explore whether a unified system approach may be found to prevent many name resolution attacks. For this, we examine attacks on various namespaces and use these to derive invariants to defend against these attacks. Four prior techniques are identified that enforce aspects of name resolution, so we explore how these techniques address the proposed invariants. We find that each of these techniques is incomplete in itself, but a combination could provide effective enforcement of the invariants. We implement a prototype system that combines these techniques for the Linux filesystem namespace, and show that invariant rules specific to each individual program system call can be enforced with a small overhead (less than 3%), indicating that fine-grained name resolution enforcement may be practical.

A Study of Android Application Security

The fluidity of application markets complicates smartphone security. Although recent efforts have shed light on particular security issues, there remains little insight into broader security characteristics of smartphone applications. This paper seeks to better understand smartphone application security by studying 1,100 popular free Android applications. We introduce the ded decompiler, which recovers Android application source code directly from its installation image. We design and execute a horizontal study of smartphone applications based on static analysis of 21 million lines of recovered code. Our analysis uncovered pervasive use/misuse of personal/phone identifiers, and deep penetration of advertising and analytics networks. However, we did not find evidence of malware or exploitable vulnerabilities in the studied applications. We conclude by considering the implications of these preliminary findings and offer directions for future analysis.

Multi-vendor Penetration Testing in the Advanced Metering Infrastructure

The advanced metering infrastructure (AMI) is revolutionizing electrical grids. Intelligent AMI "smart meters" report real time usage data that enables efficient energy generation and use. However, aggressive deployments are outpacing security efforts: new devices from a dizzying array of vendors are being introduced into grids with little or no understanding of the security problems they represent. In this paper we develop an archetypal attack tree approach to guide penetration testing across multiple-vendor implementations of a technology class. In this, we graft archetypal attack trees modeling broad adversary goals and attack vectors to vendor-specific concrete attack trees. Evaluators then use the grafted trees as a roadmap to penetration testing. We apply this approach within AMI to model attacker goals such as energy fraud and denial of service. Our experiments with multiple vendors generate real attack scenarios using vulnerabilities identified during directed penetration testing, e.g., manipulation of energy usage data, spoofing meters, and extracting sensitive data from internal registers. More broadly, we show how we can reuse efforts in penetration testing to efficiently evaluate the increasingly large body of AMI technologies being deployed in the field.
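
A structural sketch of the grafting step (tree contents are illustrative, not any vendor's actual vulnerabilities): archetypal leaves act as graft points for vendor-specific concrete subtrees, and the grafted leaves become the penetration-testing roadmap:

    # Trees are (name, [children]); empty children mark a leaf/graft point.
    archetype = ("commit energy fraud", [
        ("tamper with demand data", []),
        ("spoof meter identity", []),
    ])

    vendor_subtrees = {
        "tamper with demand data": ("tamper with demand data", [
            ("rewrite usage registers via optical port", []),
        ]),
        "spoof meter identity": ("spoof meter identity", [
            ("replay captured meter login exchange", []),
        ]),
    }

    def graft(node):
        name, children = node
        if not children and name in vendor_subtrees:
            return vendor_subtrees[name]          # splice in concrete subtree
        return (name, [graft(c) for c in children])

    def leaves(node):
        name, children = node
        return [name] if not children else sum((leaves(c) for c in children), [])

    print(leaves(graft(archetype)))   # concrete tests to attempt per vendor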

Porscha: policy oriented secure content handling in Android

The penetration of cellular networks worldwide and emergence of smart phones has led to a revolution in mobile content. Users consume diverse content when, for example, exchanging photos, playing games, browsing websites, and viewing multimedia. Current phone platforms provide protections for user privacy, the cellular radio, and the integrity of the OS itself. However, few protect the content once it enters the phone. For example, MP3-based MMS or photo content placed on Android smart phones can be extracted and shared with impunity. In this paper, we explore the requirements and enforcement of digital rights management (DRM) policy on smart phones. An analysis of the Android market shows that DRM services should ensure: a) protected content is accessible only by authorized phones, b) content is only accessible by provider-endorsed applications, and c) access is regulated by contextual constraints, e.g., used for a limited time, a maximum number of viewings, etc. The Porscha system developed in this work places content proxies and reference monitors within the Android middleware to enforce DRM policies embedded in received content. A pilot study controlling content obtained over SMS, MMS, and email illustrates the expressibility and enforcement of Porscha policies. Our experiments demonstrate that Porscha is expressive enough to articulate needed DRM policies and that their enforcement has limited impact on performance.
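
As a sketch of the kind of contextual policy described (the encoding is hypothetical, not Porscha's actual policy format), a reference monitor check might combine phone binding, application endorsement, and contextual constraints:

    # Toy DRM policy bound to a piece of content.
    import time

    policy = {
        "bound_phone": "IMSI-310150123456789",
        "endorsed_apps": {"com.example.player"},
        "expires": time.time() + 7 * 24 * 3600,   # valid for one week
        "max_views": 3,
    }

    def may_render(phone_id, app, views_so_far):
        return (phone_id == policy["bound_phone"]        # requirement (a)
                and app in policy["endorsed_apps"]       # requirement (b)
                and time.time() < policy["expires"]      # requirement (c)
                and views_so_far < policy["max_views"])

    print(may_render("IMSI-310150123456789", "com.example.player", 1))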

Paper: acsac10b.pdf (December 2010)

Kells: a protection framework for portable data

Portable storage devices, such as key-chain USB devices, are ubiquitous. These devices are often used with impunity, with users repeatedly using the same storage device in open computer laboratories, Internet cafes, and on office and home computers. Consequently, they are the target of malware that exploit the data present or use them as a means to propagate malicious software. This paper presents the Kells mobile storage system. Kells limits untrusted or unknown systems from accessing sensitive data by continuously validating the accessing host's integrity state. We explore the design and operation of Kells, and implement a proof-of-concept USB 2.0 storage device on experimental hardware. Our analysis of Kells is twofold. We first prove the security of device operation (within a freshness security parameter Δt) using the LS2 logic of secure systems. Second, we empirically evaluate the performance of Kells. These experiments indicate nominal overheads associated with host validation, showing a worst case throughput overhead of 1.22% for read operations and 2.78% for writes.
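
The freshness parameter Δt can be illustrated with a simple gate on sensitive I/O (a sketch of the idea, not the Kells firmware logic):

    # Sensitive-partition I/O is allowed only while the host's most
    # recent successful attestation is within the freshness window.
    import time

    DELTA_T = 10.0                 # freshness window, seconds
    last_valid_attestation = 0.0   # updated by the device's verifier

    def on_attestation(ok, now=None):
        global last_valid_attestation
        if ok:
            last_valid_attestation = now if now is not None else time.time()

    def allow_sensitive_io(now=None):
        now = now if now is not None else time.time()
        return (now - last_valid_attestation) <= DELTA_T

    on_attestation(True)
    print(allow_sensitive_io())    # True until DELTA_T elapses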

Paper: acsac10.pdf (December 2010)

Constructing Secure Localization Systems with Adjustable Granularity

Proof of a user's identity is not always a sufficient means for making an authorization decision. In an increasing set of circumstances, knowledge of physical location provides additional and necessary context for making decisions about resource access. For example, sensitive information stored on a laptop (e.g., customer records, social security numbers, etc.) may require additional protections if a user operates outside of an approved area. However, current localization techniques based on signal strength reporting or specialized hardware fail to achieve this goal. In this paper, we design, develop, deploy and measure a system which securely determines the location of a user to within one meter using only off-the-shelf 802.11 and Bluetooth equipment. We apply this equipment in a two-phased challenge-response protocol: first determining the general area of the client in the Regionalization phase and then pinpointing it in the Localization phase. Using nearly 32,000 data points collected over 75 days, we argue that the stability of wireless networks over time creates easily distinguishable location profiles by which a client can be positioned. Additionally, we demonstrate the inherent ability of a two-phased protocol to discern a client's location information at a level of granularity no finer than is necessitated by policy. After discussing a number of applications, we build a location-based access control framework that automatically protects a white-listed set of resources through encryption when the user leaves specified areas. Our analyses show that this system provides a realistic and efficient means of incorporating unforgeable location information at the appropriate level of granularity into many authorization decisions.

TaintDroid: An Information-Flow Tracking System for Realtime Privacy Monitoring on Smartphones

Today’s smartphone operating systems frequently fail to provide users with adequate control over and visibility into how third-party applications use their private data. We address these shortcomings with TaintDroid, an efficient, system-wide dynamic taint tracking and analysis system capable of simultaneously tracking multiple sources of sensitive data. TaintDroid provides realtime analysis by leveraging Android's virtualized execution environment. TaintDroid incurs only 14% performance overhead on a CPU-bound micro-benchmark and imposes negligible overhead on interactive third-party applications. Using TaintDroid to monitor the behavior of 30 popular third-party Android applications, we found 68 instances of potential misuse of users' private information across 20 applications. Monitoring sensitive data with TaintDroid provides informed use of third-party applications for phone users and valuable input for smartphone security service firms seeking to identify misbehaving applications.
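
A toy model of the propagation rule (TaintDroid tracks taint per register and variable inside the Dalvik interpreter; this sketch only conveys the idea that tags follow data):

    # Each value carries a set of source tags; operations union the
    # tags of their operands, and sinks check for non-empty tag sets.
    class Tainted:
        def __init__(self, value, tags=frozenset()):
            self.value, self.tags = value, frozenset(tags)
        def __add__(self, other):
            return Tainted(self.value + other.value, self.tags | other.tags)

    imei = Tainted("356938035643809", {"PHONE_ID"})
    msg = Tainted("id=") + imei        # taint flows through concatenation

    def network_send(data):
        if data.tags:                  # sink: report tainted data leaving
            print("ALERT: leaking", set(data.tags))

    network_send(msg)                  # -> ALERT: leaking {'PHONE_ID'}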

DAuth: Fine-grained Authorization Delegation for Distributed Web Application Consumers

Web applications are becoming the predominant means by which users interact with online content. However, current authentication approaches use a single authentication credential to manage access permissions, which is too inflexible for distributed programs with unique security and privacy requirements for each component. In this paper, we introduce DAuth, an authorization mechanism that allows fine-grained and flexible control of access permissions derived from a single authentication credential for distributed consumers of web applications. We implement DAuth as a proxy for a Twitter social networking application within our distributed Elastic Application framework and find it introduces negligible overhead and requires only minor modification of existing applications. Through our evaluation, we demonstrate DAuth improves on existing web authentication mechanisms to support distributed web application consumers and can be implemented as a proxy to web applications that do not wish to develop their own implementation.
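
One plausible way to derive scoped sub-credentials from a single credential is keyed-hash derivation, sketched below; this illustrates the concept and is not necessarily DAuth's actual construction:

    # Derive per-component tokens whose scope is cryptographically bound,
    # so each distributed component can exercise only its delegated rights.
    import hmac, hashlib

    MASTER = b"master-credential-from-auth-server"   # hypothetical secret

    def delegate(scope):
        """Derive a token valid only for the named scope."""
        tag = hmac.new(MASTER, scope.encode(), hashlib.sha256).hexdigest()
        return {"scope": scope, "tag": tag}

    def verify(token):
        good = hmac.new(MASTER, token["scope"].encode(),
                        hashlib.sha256).hexdigest()
        return hmac.compare_digest(good, token["tag"])

    t = delegate("twitter:read_timeline")
    print(verify(t), t["scope"])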

Automating Security Mediation Placement

We present a framework that automatically produces resolution placement suggestions for type errors in security-typed programs, enabling legacy code to be retrofitted with comprehensive security policy mediation. Resolving such type errors requires selecting a placement of mediation statements that implement runtime security decisions, such as declassifiers and authorization checks. Manually placing mediation statements in legacy code can be difficult, as there may be several, interacting type errors. In this paper, we solve this problem by constructing a graph that has the property that a vertex cut is equivalent to the points at which mediation statements can be inserted to allow the program to satisfy the type system. We build a framework that produces suggestions that are minimum cuts of this graph, and the framework can be customized to find suggestions that satisfy functional and security requirements. We have built a tool that implements this framework for Java programs that computes suggestions for 20,000-line programs in less than 100 seconds. Even without additional guidance, we find that this tool selects suggestions where more than 80 percent of the selected mediation points were classified as similar to those placed by expert programmers, and reduces the number of locations that a programmer would otherwise have to inspect using current security-typed language tools by approximately 90 percent.

An Architecture for Enforcing End-to-End Access Control Over Web Applications

The web is now being used as a general platform for hosting distributed applications like wikis, bulletin board messaging systems and collaborative editing environments. Data from multiple applications originating at multiple sources all intermix in a single web browser, making sensitive data stored in the browser subject to a broad milieu of attacks (cross-site scripting, cross-site request forgery and others). The fundamental problem is that existing web infrastructure provides no means for enforcing end-to-end security on data. To solve this we design an architecture using mandatory access control (MAC) enforcement. We overcome the limitations of traditional MAC systems, implemented solely at the operating system layer, by unifying MAC enforcement across virtual machine, operating system, networking and application layers. We implement our architecture using Xen virtual machine management, SELinux at the operating system layer, labeled IPsec for networking and our own label-enforcing web browser, called FlowwolF. We tested our implementation and find that it performs well, supporting data intermixing while still providing end-to-end security guarantees.

Disk-Enabled Authenticated Encryption

Storage is increasingly becoming a vector for data compromise. Solutions for protecting on-disk data confidentiality and integrity to date have been limited in their effectiveness. Providing authenticated encryption, or simultaneous encryption with integrity information, is important to protect data at rest. In this paper, we propose that disks augmented with non-volatile storage (e.g., hybrid hard disks) and cryptographic processors (e.g., FDE drives) may provide a solution for authenticated encryption, storing security metadata within the drive itself to eliminate dependencies on other parts of the system. We augment the DiskSim simulator with a flash simulator to evaluate the costs associated with managing operational overheads. These experiments show that proper tuning of system parameters can eliminate many of the costs associated with managing security metadata, with less than a 2% decrease in IOPS versus regular disks.

Justifying Integrity Using a Virtual Machine Verifier

Emerging distributed computing architectures, such as grid and cloud computing, depend on the high integrity execution of each system in the computation. While integrity measurement enables systems to generate proofs of their integrity to remote parties, we find that current integrity measurement approaches are insufficient to prove runtime integrity for systems in these architectures. Integrity measurement approaches that are flexible enough have an incomplete view of runtime integrity, possibly leading to false integrity claims, and approaches that provide comprehensive integrity do so only for computing environments that are too restrictive. In this paper, we propose an architecture for building comprehensive runtime integrity proofs for general purpose systems in distributed computing architectures. In this architecture, we strive for classical integrity, using an approximation of the Clark-Wilson integrity model as our target. Key to building such integrity proofs is a carefully crafted host system whose long-term integrity can be justified easily using current techniques and a new component, called a VM verifier, that can enforce our integrity target on VMs comprehensively. We have built a prototype based on the Xen virtual machine system for SELinux VMs, and find distributed compilation can be implemented, providing accurate proofs of our integrity target with less than 4% overhead.

Scalable Web Content Attestation

The web is a primary means of information sharing for most organizations and people. Currently, a recipient of web content knows nothing about the environment in which that information was generated other than the specific server from whence it came (and even that information can be unreliable). In this paper, we develop and evaluate the Spork system that uses the Trusted Platform Module (TPM) to tie the web server integrity state to the web content delivered to browsers, thus allowing a client to verify that the origin of the content was functioning properly when the received content was generated and/or delivered. We discuss the design and implementation of the Spork service and its browser-side Firefox validation extension. In particular, we explore the challenges and solutions of scaling the delivery of mixed static and dynamic content using exceptionally slow TPM hardware. We perform an in-depth empirical analysis of the Spork system within Apache web servers. This analysis shows Spork can deliver nearly 8,000 static or over 7,000 dynamic integrity-measured web objects per second. More broadly, we identify how TPM-based content web services can scale to large client loads and deliver integrity-measured content with manageable overhead.

Semantically Rich Application-Centric Security in Android

Smartphones are now ubiquitous. However, the security requirements of these relatively new systems and the applications they support are still being understood. As a result, the security infrastructure available in current smartphone operating systems is largely underdeveloped. In this paper, we consider the security requirements of smartphone applications and augment the existing Android operating system with a framework to meet them. Named Secure Application INTeraction (Saint), this modified infrastructure governs install-time permission assignment and their run-time use as dictated by application provider policy. An in-depth description of the semantics of application policy is presented. The architecture and technical detail of Saint are given, and areas for extension, optimization, and improvement are explored. The paper concludes by illustrating the usefulness of this framework via a concrete example.

On Lightweight Mobile Phone Application Certification

Users have begun downloading an increasingly large number of mobile phone applications in response to advancements in handsets and wireless networks. The increased number of applications results in a greater chance of installing Trojans and similar malware. In this paper, we propose the Kirin security service for Android, which performs lightweight certification of applications to mitigate malware at install time. Kirin certification uses security rules, which are templates designed to conservatively match undesirable properties in security configuration bundled with applications. We use a variant of security requirements engineering techniques to perform an in-depth security analysis of Android to produce a set of rules that match malware characteristics. In a sample of 311 of the most popular applications downloaded from the official Android Market, Kirin and our rules found 5 applications that implement dangerous functionality and therefore should be installed with extreme caution. Upon close inspection, another five applications asserted dangerous rights, but were within the scope of reasonable functional needs. These results indicate that security configuration bundled with Android applications provides a practical means of detecting malware.
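
A minimal sketch of install-time rule matching (the rules shown are illustrative, not Kirin's published rule set): a rule fires when an application requests every permission in a dangerous combination:

    # Each rule is a forbidden combination of Android permissions.
    RULES = [
        {"android.permission.RECEIVE_SMS", "android.permission.WRITE_SMS"},
        {"android.permission.RECORD_AUDIO", "android.permission.INTERNET",
         "android.permission.PROCESS_OUTGOING_CALLS"},
    ]

    def certify(requested_permissions):
        requested = set(requested_permissions)
        violations = [r for r in RULES if r <= requested]   # subset match
        return (len(violations) == 0, violations)

    ok, why = certify({"android.permission.RECEIVE_SMS",
                       "android.permission.WRITE_SMS",
                       "android.permission.INTERNET"})
    print(ok, why)    # -> False, with the matched rule

Because the check is a set comparison over the application's declared manifest, certification remains lightweight enough to run on the handset at install time.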

On Cellular Botnets: Measuring the Impact of Malicious Devices on a Cellular Network Core

The vast expansion of interconnectivity with the Internet and the rapid evolution of highly-capable but largely insecure mobile devices threatens cellular networks. In this paper, we characterize the impact of the large-scale compromise and coordination of mobile phones in attacks against the core of these networks. Through a combination of measurement, simulation and analysis, we demonstrate the ability of a botnet composed of as few as 11,750 compromised mobile phones to degrade service to area-code sized regions by 93%. As such attacks are accomplished through the execution of network service requests and not a constant stream of phone calls, users are unlikely to be aware of their occurrence. We then investigate a number of significant network bottlenecks, their impact on the density of compromised nodes per base station and how they can be avoided. We conclude by discussing a number of countermeasures that may help to partially mitigate the threats posed by such attacks.

Analysis of Virtual Machine System Policies

The recent emergence of mandatory access control (MAC) enforcement for virtual machine monitors (VMMs) presents an opportunity to enforce a security goal over all its virtual machines (VMs). However, these VMs also have MAC enforcement, so to determine whether the overall system (VM-system) is secure requires an evaluation of whether this combination of MAC policies, as a whole, complies with a given security goal. Previous MAC policy analyses either consider a single policy at a time or do not represent the interaction between different policy layers (VMM and VM). We observe that we can analyze the VMM policy and the labels used for communications between VMs to create an inter-VM flow graph that we use to identify safe, unsafe, and ambiguous VM interactions. A VM with only safe interactions is compliant with the goal; a VM with any unsafe interaction violates the goal. For a VM with ambiguous interactions, we analyze its local MAC policy to determine whether it is compliant or not with the goal. We used this observation to develop an analytical model of a VM-system and to evaluate whether it is compliant with a security goal. We implemented the model and an evaluation tool in Prolog. We evaluate our implementation by checking whether a VM-system running an XSM/Flask policy at the VMM layer and SELinux policies at the VM layer satisfies a given integrity goal. This work is the first step toward developing layered, multi-policy analyses.

Implicit Flows: Can't Live With 'Em, Can't Live Without 'Em

Verifying that programs trusted to enforce security actually enforce security is a practical concern for programmers and administrators. However, there is a disconnect between the kinds of tools that have been successfully applied to real software systems (such as taint mode in Perl and Ruby), and information-flow compilers that enforce a variant of the stronger security property of noninterference. Tools that have been successfully used to find security violations have focused on explicit flows of information, where high-security information is directly leaked to output. Analysis tools that enforce stronger security guarantees also prevent implicit flows of information, where high-security information can be inferred from a program's flow of control. However, these tools have not been used to find bugs in real programs, despite the stronger guarantees that they provide. We investigate both classes of flows that are flagged by a security-typed language compiler when applied to implementations of authentication and cryptographic functions. We conclude that the extremely large number of false alarms makes checking production code for stronger information-flow security properties, such as noninterference, currently infeasible. This false alarm rate must be improved if technology from security-typed languages is to be adopted to guarantee security properties of existing programs.
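
The distinction at the heart of the paper fits in a few lines (Python used for illustration):

    secret = 1

    # Explicit flow: secret data is copied directly to public output.
    # Taint-style tools flag this.
    public = secret

    # Implicit flow: no direct copy, but the branch leaks one bit of the
    # secret through control flow. Noninterference-style tools reject
    # this; taint-style tools typically do not.
    if secret == 1:
        public = 1
    else:
        public = 0

    print(public)

The paper's finding is that rejecting every pattern like the second one, as noninterference requires, produces far too many alarms on real authentication and cryptographic code to be practical today.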

Rootkit-Resistant Disks

Rootkits are now prevalent in the wild. Users affected by rootkits are subject to the abuse of their data and resources, often unknowingly. Such malware becomes even more dangerous when it is persistent--infected disk images allow the malware to exist across reboots and prevent patches or system repairs from being successfully applied. In this paper, we introduce rootkit-resistant disks (RRD) that label all immutable system binaries and configuration files at installation time. During normal operation, the disk controller inspects all write operations received from the host operating system and denies those made for labeled blocks. To upgrade, the host is booted into a safe state and system blocks can only be modified if a security token is attached to the disk controller. By enforcing immutability at the disk controller, we prevent a compromised operating system from infecting its on-disk image. We implement the RRD on a Linksys NSLU2 network storage device by extending the I/O processing on the embedded disk controller running the SlugOS Linux distribution. Our performance evaluation shows that the RRD exhibits an overhead of less than 1% for filesystem creation and less than 1.5% during I/O intensive Postmark benchmarking. We further demonstrate the viability of our approach by preventing a rootkit collected from the wild from infecting the OS image. In this way, we show that RRDs not only prevent rootkit persistence, but do so in an efficient way.
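
The controller-side check reduces to a few lines (a sketch with invented block labels, not the NSLU2 implementation):

    # Deny writes to blocks labeled immutable unless the physical
    # security token is attached to the disk controller.
    immutable_blocks = {0, 1, 2, 4096}     # labeled at install time
    token_attached = False                  # set by controller hardware

    def handle_write(block, data):
        if block in immutable_blocks and not token_attached:
            return "EPERM"                  # compromised OS cannot persist
        # ... commit write to the platter ...
        return "OK"

    print(handle_write(1, b"rootkit"))      # -> EPERM

Because the check lives below the host interface, even a fully compromised kernel cannot bypass it; the only path to labeled blocks is through physical possession of the token.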

Composition Attacks and Auxiliary Information in Data Privacy

Privacy is an increasingly important aspect of data publishing. Reasoning about privacy, however, is fraught with pitfalls. One of the most significant is the auxiliary information (also called external knowledge, background knowledge, or side information) that an adversary gleans from other channels such as the web, public records, or domain knowledge. This paper explores how one can reason about privacy in the face of rich, realistic sources of auxiliary information. Specifically, we investigate the effectiveness of current anonymization schemes in preserving privacy when multiple organizations independently release anonymized data about overlapping populations.
  • We investigate composition attacks, in which an adversary uses independently anonymized releases to breach privacy. We explain why recently proposed models of limited auxiliary information fail to capture composition attacks. Our experiments demonstrate that even a simple instance of a composition attack can breach privacy in practice, for a large class of currently proposed techniques; a toy instance appears after this list. The class includes k-anonymity and several recent variants.
  • On a more positive note, certain randomization-based notions of privacy (such as differential privacy) provably resist composition attacks and, in fact, the use of arbitrary side information. This resistance enables "stand-alone" design of anonymization schemes, without the need for explicitly keeping track of other releases. We provide a precise formulation of this property, and prove that an important class of relaxations of differential privacy also satisfy the property. This significantly enlarges the class of protocols known to enable modular design.
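
A toy composition attack (invented data): intersecting the candidate sets from two independently k-anonymized releases pins down the target's sensitive value even though each release alone is k-anonymous:

    # Quasi-identifier group -> possible sensitive values, per release.
    hospital_a = {("zip 537**", "age 30-39"): {"flu", "hepatitis"}}
    hospital_b = {("zip 5370*", "age 35-39"): {"hepatitis", "broken leg"}}

    # Adversary knows the target (zip 53703, age 36) appears in both.
    candidates = (hospital_a[("zip 537**", "age 30-39")]
                  & hospital_b[("zip 5370*", "age 35-39")])
    print(candidates)    # -> {'hepatitis'}: privacy breached by composition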

Effective Blame for Information-Flow Violations

Programs trusted with secure data should not release this data in ways contrary to system policy. However, when a program contains an illegal flow of information, current information-flow reporting techniques are inadequate for determining the cause of the error. Reasoning about information-flow errors can be difficult, as the flows involved can be quite subtle. We present a general model for information-flow blame that can explain the source of such security errors in code. This model is implemented by changing the information-flow verification procedure to: (1) generate supplementary information to reveal otherwise hidden program dependencies; (2) modify the constraint solver to construct a blame dependency graph; and (3) develop an explanation procedure that returns a complete and minimal error report. Our experiments show that information-flow errors can generally be explained and resolved by viewing only a small fraction of the total code.

Verifying Compliance of Trusted Programs

Every system depends on trusted programs for its secure operation. A trusted program is trusted to only perform safe operations despite having the authority to perform unsafe operations; examples include administrative programs, root network daemons, and many others. Currently, these programs are trusted without concrete justification. The emergence of tools for building applications that guarantee policy enforcement, such as security-typed languages (STLs), and mandatory access control systems, such as user-level policy servers, finally offer a basis for justifying trust in such programs. If the operations authorized by the program policy are also authorized by system policy, then the program complies with the system. However, program policy is not directly comparable with system policy, making compliance checking difficult in general. We observe that the integrity of trusted programs must dominate the integrity of system data (the PIDSI approach). Using this insight, we assign trusted program objects to a higher integrity level than system data objects, resulting in a simplified program policy that enables automated compliance verification. We find that this assumption is consistent with the SELinux reference policy for its trusted programs.

Measuring Integrity on Mobile Phone Systems

Mobile phone security is a relatively new field that is gathering momentum in the wake of rapid advancements in phone system technology. Mobile phones are now becoming sophisticated smart phones that provide services beyond basic telephony, such as supporting third-party applications. Such third-party applications may be security-critical, such as mobile banking, or may be untrusted applications, such as downloaded games. Our goal is to protect the integrity of such critical applications from potentially untrusted functionality, but we find that existing mandatory access control approaches are too complex and do not provide formal integrity guarantees. In this work, we leverage the simplicity inherent to phone system environments to develop a compact SELinux policy that can be used to justify the integrity of a phone system using the Policy Reduced Integrity Measurement Architecture (PRIMA) approach. We show that the resultant policy enables systems to be proven secure to remote parties, supports the desired functionality for installing and running trusted programs, and is over 90% smaller in size. We envision that this approach can provide an outline for how to build high integrity phone systems.

Realizing Massive-Scale Conditional Access Systems Through Attribute-Based Cryptosystems

The enormous growth in the diversity of content services such as IPTV has highlighted the inadequacy of the accompanying content security: existing security mechanisms scale poorly, require complex and often costly dedicated hardware, or fail to meet basic security requirements. New security methods are needed. In this paper, we explore the ability of attribute-based encryption (ABE) to meet the unique performance and security requirements of conditional access systems such as subscription radio and pay-per-view television. We show through empirical study that the costs of ABE make its direct application inappropriate, but we present constructions that mitigate its incumbent costs. We develop an extensive simulation that allows us to explore the performance of a number of virtual hardware configurations and construction parameters over workloads developed from real subscription and television audiences. These simulations show that we can securely deliver high quality content to viewerships of the highest rated shows being broadcast today, some in excess of 26,000,000 viewers. It is through these experiments that we expose the viability of not only ABE-based content delivery, but applicability of ABE systems to large-scale distributed systems.

Paper: traynor_ndss08.pdf (February 2008)

Channels: Runtime System Infrastructure for Security-typed Languages

Security-typed languages (STLs) are powerful tools for provably implementing policy in applications. The programmer maps policy onto programs by annotating types with information flow labels, and the STL compiler guarantees that data always obeys its label as it flows within an application. As data flows into or out of an application, however, a runtime system is needed to mediate between the information flow world within the application and the non-information flow world of the operating system. In the few existing STL applications, this problem has been handled in ad hoc ways that hindered software engineering and security analysis. In this paper, we present a principled approach to STL runtime system development along with policy infrastructure and class abstractions for the STL, Jif, that implement these principles. We demonstrate the effectiveness of our approach by using our infrastructure to develop a firewall application, FLOWWALL, that provably enforces its policy.

Paper: hmm07.pdf (December 2007)

Establishing and Sustaining System Integrity via Root of Trust Installation

Integrity measurements provide a means by which distributed systems can assess the trustability of potentially compromised remote hosts. However, current measurement techniques simply assert the identity of software, but provide no indication of the ongoing status of the system or its data. As a result, a number of significant vulnerabilities can result if the system is not configured and managed carefully. To improve the management of a system's integrity, we propose a Root of Trust Installation (ROTI) as a foundation for high integrity systems. A ROTI is a trusted system installer that also asserts the integrity of the trusted computing base software and data that it installs to enable straightforward, comprehensive integrity verification for a system. The ROTI addresses a historically limiting problem in integrity measurement: determining what constitutes a trusted system state in any heterogeneous, evolving environment. Using the ROTI, a high integrity system state is defined by its installer, thus enabling a remote party to verify integrity guarantees that approximate classical integrity models (e.g., Biba). In this paper, we examine what is necessary to prove the integrity of the trusted computing base (sCore) of a distributed security architecture, called the SRM. We describe the design and implementation of our custom ROTI sCore installer and study the costs and effectiveness of binding system integrity to installation in the distributed SRM. This demonstration shows that strong integrity guarantees can be efficiently achieved in large, diverse environments with limited administrative overhead.

Paper: acsac07.pdf (December 2007)

Smooth Sensitivity and Sampling in Private Data Analysis

We introduce a new, generic framework for private data analysis. The goal of private data analysis is to release aggregate information about a data set while protecting the privacy of the individuals whose information the data set contains. Our framework allows one to release functions f of the data with instance-based additive noise. That is, the noise magnitude is determined not only by the function we want to release, but also by the database itself. One of the challenges is to ensure that the noise magnitude does not leak information about the database. To address that, we calibrate the noise magnitude to the smooth sensitivity of f on the database x -- a measure of variability of f in the neighborhood of the instance x. The new framework greatly expands the applicability of output perturbation, a technique for protecting individuals' privacy by adding a small amount of random noise to the released statistics. To our knowledge, this is the first formal analysis of the effect of instance-based noise in the context of data privacy.

Our framework raises many interesting algorithmic questions. Namely, to apply the framework one must compute or approximate the smooth sensitivity of f on x. We show how to do this efficiently for several different functions, including the median and the cost of the minimum spanning tree. We also give a generic procedure based on sampling that allows one to release f(x) accurately on many databases x. This procedure is applicable even when no efficient algorithm for approximating smooth sensitivity of f is known or when f is given as a black box. We illustrate the procedure by applying it to k-SED (k-means) clustering and learning mixtures of Gaussians.
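
For reference, the noise calibration can be summarized as follows (standard formulation; notation may differ slightly from the paper):

    % Local sensitivity: the largest change in f over neighbors of x
    LS_f(x) = \max_{y :\, d(x,y) = 1} \| f(x) - f(y) \|
    % beta-smooth sensitivity: a smooth upper bound on local sensitivity
    S^{*}_{f,\beta}(x) = \max_{y} \left( LS_f(y) \cdot e^{-\beta\, d(x,y)} \right)
    % Released statistic: instance-based additive noise scaled to the bound
    A(x) = f(x) + \frac{S^{*}_{f,\beta}(x)}{\alpha} \cdot Z

Here Z is drawn from an admissible noise distribution and alpha is a privacy-dependent scaling; smoothing via the exponential factor is what prevents the noise magnitude itself from leaking information about x.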

Paper: nrs07.pdf (August 2007)

On Attack Causality in Internet-Connected Cellular Networks

The emergence of connections between telecommunications networks and the Internet creates significant avenues for exploitation. For example, through the use of small volumes of targeted traffic, researchers have demonstrated a number of attacks capable of denying service to users in major metropolitan areas. While such investigations have explored the impact of specific vulnerabilities, they neglect to address a larger issue - how the architecture of cellular networks makes these systems susceptible to denial of service attacks. As we show in this paper, these problems have little to do with a mismatch of available bandwidth. Instead, they are the result of the pairing of two networks built on fundamentally opposing design philosophies. We support this claim by presenting two new attacks on cellular data services. These attacks are capable of preventing the use of high-bandwidth cellular data services throughout an area the size of Manhattan with less than 200Kbps of malicious traffic. We then examine the characteristics common to these and previous attacks as a means of explaining why such vulnerabilities are artifacts of design rigidity. Specifically, we show that the shoehorning of data communications protocols onto a network rigorously optimized for the delivery of voice causes that network to fail under modest loads.

Paper: usec07.pdf (August 2007)

Email Communities of Interest

Email has become an integral and sometimes overwhelming part of users' personal and professional lives. In this paper, we measure the flow and frequency of user email toward the identification of communities of interest (COI)--groups of users that share common goals or associations. If detectable, such associations will be useful in automating email management, e.g., topical classification, flagging important missives, and SPAM mitigation. An analysis of a large corpus of university email is used to drive the generation and validation of algorithms for automatically determining COIs. We examine the effect of the structure and transience of COIs with the algorithms and validate algorithms using user-labeled data. Our analysis shows that the proposed algorithms correctly identify email as being sent from the human-identified COI with high accuracy. The structure and characteristics of COIs are explored analytically and broader conclusions about email use are posited.

Paper: jrb07_emailcoi.pdf (August 2007)

Analysis of IPv4 Address Space Delegation Structure

The Internet has grown tremendously in terms of the number of users who rely on it and the number of organizations that are connected to it. Characterizing how this growth affects its structure and topology is vitally important to determine the fundamental characteristics and limitations that must be handled, such as address space exhaustion; understanding the process of allocating and delegating address space can help to answer these questions. In this paper, we analyze BGP routing data to study the structure and growth of IPv4 address space allocation, fragmentation and usage. We explore the notion of delegation relationships among prefixes and use this information to construct an autonomous system (AS) delegation tree. We show that delegation in the Internet is not significantly correlated to the underlying topology or AS customer-provider relationships. We also analyze the fragmentation and usage of address space over a period of five years and examine prefixes that are delegated by organizations vs. those that are not delegated. We notice that the address space usage due to delegating prefixes is increasing at the same rate as the address space usage due to non-delegating prefixes. This indicates that the fragmentation rate of the address space is almost constant with respect to total address usage. Additionally, we show that most delegation is performed by a small number of organizations, which may aid in the implementation of a public-key infrastructure for the Internet.

Paper: iscc07.pdf (July 2007)

From Trusted to Secure: Building and Executing Applications that Enforce System Security

Commercial operating systems have recently introduced mandatory access controls (MAC) that can be used to ensure system-wide data confidentiality and integrity. These protections rely on restricting the flow of information between processes based on security levels. The problem is, there are many applications that defy simple classification by security level, some of them essential for system operation. Surprisingly, the common practice among these operating systems is simply to mark these applications as "trusted", and thus allow them to bypass label protections. This compromise is not a limitation of MAC or the operating system services that enforce it, but simply a fundamental inability of any operating system to reason about how applications treat sensitive data internally---and thus the OS must either restrict the data that they receive or trust them to handle it correctly. These practices were developed prior to the advent of security-typed languages. These languages provide a means of reasoning about how the OS's sensitive data is handled within applications. Thus, applications can be shown to enforce system security by guaranteeing, in advance of execution, that they will adhere to the OS's MAC policy. In this paper, we provide an architecture for an operating system service that integrates security-typed languages with operating system MAC services. We have built an implementation of this service, called SIESTA, which handles applications developed in the security-typed language, Jif, running on the SELinux operating system. We also provide some sample applications to demonstrate the security, flexibility and efficiency of our approach.

Paper: tech07-hicks.pdf (June 2007)

Configuration Management at Massive Scale: System Design and Experience

The development and maintenance of network device configurations is one of the central challenges faced by large network providers. Current network management systems fail to meet this challenge primarily because of their inability to adapt to rapidly evolving customer and provider-network needs, and because of mismatches between the conceptual models of the tools and the services they must support. In this paper, we present the PRESTO configuration management system that attempts to address these failings in a comprehensive and flexible way. Developed for and deployed over the last 4 years within a large ISP network, PRESTO constructs device-native configurations based on the composition of configlets representing different services or service options. Configlets are compiled by extracting and manipulating data from external systems as directed by the PRESTO configuration scripting and template language. We outline the configuration management needs of large-scale network providers, introduce the PRESTO system and configuration language, and demonstrate the use, workflows, and ultimately the platform's flexibility via an example of VPN service. We conclude by considering future work and reflect on the operators' experiences with PRESTO.

Paper: usenix07.pdf (June 2007)

A Logical Specification and Analysis for SELinux MLS Policy

The SELinux mandatory access control (MAC) policy has recently added a multi-level security (MLS) model which is able to express a fine granularity of control over a subject's access rights. The problem is that the richness of this policy makes it impractical to verify, by hand, that a given policy has certain important information flow properties or is compliant with another policy. To address this, we have modeled the SELinux MLS policy using a logical specification and implemented that specification in the Prolog language. Furthermore, we have developed some analyses for testing the properties of a given policy as well as an algorithm to determine whether one policy is compliant with another. We have implemented these analyses in Prolog and compiled our implementation into a tool for SELinux MLS policy analysis, called PALMS. Using PALMS, we verified some important properties of the SELinux MLS reference policy, namely that it satisfies the simple security condition and *-property defined by Bell and LaPadula.
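
The paper's analyses are implemented in Prolog; the following Python sketch merely illustrates the two Bell-LaPadula checks over an MLS-style dominance relation (the policy encoding is an assumption for the example, not PALMS itself):

    def dominates(l1, l2):
        # l1 dominates l2: sensitivity at least as high, categories a superset
        (s1, c1), (s2, c2) = l1, l2
        return s1 >= s2 and c1 >= c2          # c1, c2 are frozensets

    def simple_security_ok(subject, obj, perm):
        # "no read up": subjects may read only objects they dominate
        return perm != "read" or dominates(subject, obj)

    def star_property_ok(subject, obj, perm):
        # "no write down": subjects may write only objects that dominate them
        return perm != "write" or dominates(obj, subject)

    secret = (2, frozenset({"crypto"}))
    public = (1, frozenset())
    assert simple_security_ok(secret, public, "read")     # read down: allowed
    assert not star_property_ok(secret, public, "write")  # write down: denied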

Paper: sacmat07.pdf (June 2007)

Managing the Risk of Covert Information Flows in Virtual Machine Systems

Flexible mandatory access control (MAC) enforcement is now available for virtual machine systems. For example, the sHype MAC system for the Xen virtual machine monitor is part of the mainline Xen distribution. Such systems offer the isolation of VM systems with the flexible security of MAC enforcement. A problem is that such MAC VM systems will only be assured at modest levels (e.g., Common Criteria EAL4), so they may contain covert channels. Covert channels are often difficult to identify and harder to remove, so we propose an approach to manage possible covert leakage to enable verification of security guarantees. Typically, covert channels are outside of access control policies, but we propose an approach that includes both overt flows and covert flows to assess the possible risk of information leakage due to their combination. We define the concept of a risk flow policy that describes the authorized risks due to covert flows. In this paper, we evaluate the ability of four policy models to express risk flow policies. Further, we examine how such policies will be enforced in VM systems. We find that variants of the Chinese Wall model and Bell-LaPadula model have features necessary to express risk flow policies. Further, we find that such policies can be enforced in the context of sHype's Type Enforcement model.

Privacy Preserving Communication in MANETs

Mobile ad hoc networks often support sensitive applications. These applications may require that users' identity, location, and correspondents be kept secret. This is a challenge in a MANET because of the cooperative nature of the network and broadcast nature of the communication media. In this paper, we propose a Privacy Preserving Communication System (PPCS) which provides a comprehensive solution to anonymize communication endpoints, keep the location and identifier of a node unlinkable, and mask the existence of communication flows. We present an analysis of the security of PPCS against passive internal attackers, provide a qualitative discussion on its strength against external attackers, and characterize its performance trade-offs. The simulation results demonstrate that PPCS has only 3% lower packet delivery ratio than existing multi-path routing protocols, while effectively providing privacy service in MANETs.

Paper: smac07.pdf (June 2007)

Toward Valley-Free Inter-domain Routing

ASes in inter-domain routing receive little information about the quality of the routes they receive. This lack of information can lead to inefficient and even incorrect routing. In this paper, we quantitatively characterize BGP announcements that violate the so-called valley-free property - an indicator that universal best practices are not being preserved in the propagation of routes. Our analysis indicates that valley announcements are more pervasive than expected. Approximately ten thousand valley announcements appear every day and involve a substantial number of prefixes. 11% of provider ASes propagate valley announcements, with a majority of violations happening at intermediate providers. We find that large surges of violating announcements can be attributed to transient configuration errors. We further propose a dynamic mechanism that provides route propagation information as transitive attributes of BGP. This information implicitly reflects the policies of the ASes along the path, without revealing the relationship of each AS pair. BGP-speaking routers use this information to identify (and presumably avoid) routes that violate the valley-free property.
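
For concreteness, a minimal sketch of the valley-free check, assuming the relationship of each AS pair is already known (something the paper's mechanism deliberately avoids revealing):

    def valley_free(path, rel):
        # rel(a, b) is "c2p" (a is b's customer), "p2c", or "p2p"
        going_down = False            # True once the path has crossed its summit
        for a, b in zip(path, path[1:]):
            r = rel(a, b)
            if going_down and r != "p2c":
                return False          # uphill or peering edge after the summit
            if r in ("p2c", "p2p"):
                going_down = True
        return True

    # toy relationship table for ASes 1..4 (hypothetical)
    rels = {(1, 2): "c2p", (2, 3): "p2p", (3, 4): "p2c", (4, 3): "c2p"}
    print(valley_free([1, 2, 3, 4], lambda a, b: rels[(a, b)]))  # True
    print(valley_free([3, 4, 3], lambda a, b: rels[(a, b)]))     # False: a valley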

Paper: icc07.pdf (June 2007)

Mining Security-Sensitive Operations in Legacy Code using Concept Analysis

This paper presents an approach to statically retrofit legacy servers with mechanisms for authorization policy enforcement. The approach is based upon the observation that security-sensitive operations performed by a server are characterized by idiomatic resource manipulations, called fingerprints. Candidate fingerprints are automatically mined by clustering resource manipulations using concept analysis. These fingerprints are then used to identify security-sensitive operations performed by the server. Case studies with three real-world servers show that the approach can be used to identify security-sensitive operations with a few hours of manual effort and modest domain knowledge.

Paper: icse07.pdf (May 2007)

Leveraging Identity-based Cryptography for Node ID Assignment in Structured P2P Systems

Structured peer-to-peer systems have grown enormously because of their scalability, efficiency and reliability. These systems assign a unique identifier to each user and object. However, current assignment schemes allow an adversary to carefully select user IDs and/or simultaneously obtain many pseudo-identities -- leading ultimately to an ability to disrupt the P2P system in very targeted (and dangerous) ways. In this paper, we propose novel ID assignment protocols based on identity-based cryptography. This approach permits the acquisition of node IDs to be tightly regulated without many of the complexities and costs associated with traditional certificate solutions. We broadly consider the security requirements of ID assignment and present three protocols representing distinct threat and trust models. A detailed empirical study of the protocols is given. Our analysis shows that the cost of our identity-based protocols is nominal, and that the associated identity services can scale to millions of users using a limited number of servers.

Paper: ssnds07.pdf (May 2007)

Limiting Sybil Attacks in Structured Peer-to-Peer Networks

One practical limitation of structured peer-to-peer (P2P) networks is that they are frequently subject to Sybil attacks: malicious parties can compromise the network by generating and controlling large numbers of shadow identities. In this paper, we propose an admission control system that mitigates Sybil attacks by adaptively constructing a hierarchy of cooperative peers. The admission control system vets joining nodes via client puzzles. A node wishing to join the network is serially challenged by the nodes from a leaf to the root of the hierarchy. Nodes completing the puzzles of all nodes in the chain are provided a cryptographic proof of the vetted identity. We evaluate our solution and show that an adversary must perform days or weeks of effort to obtain even a small percentage of nodes in small P2P networks, and that this effort increases linearly with the size of the network. We further show that we can place a ceiling on the number of IDs any adversary may obtain by requiring periodic reassertion of the IDs' continued validity.
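
A minimal sketch of the client-puzzle step, with hypothetical parameters: the joining node searches for a nonce whose hash has a required number of leading zero bits, while verification costs a single hash:

    import hashlib, itertools, os

    def leading_zero_bits(digest, k):
        # True if the top k bits of the digest are all zero
        return int.from_bytes(digest, "big") >> (len(digest) * 8 - k) == 0

    def solve(challenge, difficulty):
        for nonce in itertools.count():
            d = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
            if leading_zero_bits(d, difficulty):
                return nonce

    def verify(challenge, nonce, difficulty):
        d = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        return leading_zero_bits(d, difficulty)

    challenge = os.urandom(16)      # issued by a vetting node in the hierarchy
    nonce = solve(challenge, 16)    # about 2**16 hashes of expected work
    assert verify(challenge, nonce, 16)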

Paper: infocom07.pdf (May 2007)

Integrating SELinux with Security-typed Languages

Traditionally, operating systems have enforced MAC and information flow policies with minimal dependence on application programs. However, there are many cases where systems depend on user-level programs to enforce information flows. Previous approaches to handling this problem, such as privilege-separation of application components or assuming trust in application information flow enforcement, are prone to error and cumbersome to manage. On the other hand, recent advances in the area of security-typed languages have enabled the development of realistic applications with formally and automatically verified information flow controls. In this paper, we examine what it takes to integrate information flow enforcement of applications written in a security-typed extension of Java (called Jif) with SELinux. To this end, we have extended the Jif infrastructure to support interaction with SELinux security contexts, and we describe the SELinux policy and system calls which are necessary for a successful integration. We have also identified the need for further services, such as a means of formally verifying compliance between information flow policies. We have demonstrated the utility, flexibility and security of our approach by constructing a prototype multi-level secure email client.

Paper: jpselinux06.pdf (March 2007)

Scrambling Adversarial Errors Using Few Random Bits

When communicating over a noisy channel, it is typically much easier to deal with random, independent errors with a known distribution than with adversarial errors. This paper looks at how one can use schemes designed for random errors in an adversarial context, at the cost of relatively few additional random bits and without using unproven computational assumptions.

The basic approach is to permute the positions of a bit string using a permutation drawn from a $t$-wise independent family, where $t=o(n)$. This leads to two new results:

1. We construct *computationally efficient* information reconciliation protocols correcting $pn$ adversarial binary Hamming errors with optimal communication and entropy loss $n(h(p)+o(1))$ bits, where $n$ is the length of the strings and $h()$ is the binary entropy function. Information reconciliation protocols are important tools for dealing with noisy secrets in cryptography; they are also used to synchronize remote copies of large files.

2. We improve the randomness complexity (key length) of efficiently decodable capacity-approaching "private codes" from $\Theta(n\log n)$ to $n+o(n)$.

We also present a simplified proof of an existential result on private codes due to Langberg (FOCS '04).

Paper: s07.pdf (January 2007)

Understanding Practical Application Development in Security-typed Languages

Security-typed languages are an evolving tool for implementing systems with provable security guarantees. However, to date, these tools have only been used to build simple "toy" programs. As described in this paper, we have developed the first real-world, security-typed application: a secure email system written in the Java language variant Jif. Real-world policies are mapped onto the information flows controlled by the language primitives, and we consider the process and tractability of broadly enforcing security policy in commodity applications. We find that while the language provided the rudimentary tools to achieve low-level security goals, additional tools, services, and language extensions were necessary to formulate and enforce application policy. We detail the design and use of these tools. We also show how the strong guarantees of Jif in conjunction with our policy tools can be used to evaluate security. This work serves as a starting point--we have demonstrated that it is possible to implement real-world systems and policy using security-typed languages. However, further investigation of the developer tools and supporting policy infrastructure is necessary before they can fulfill their considerable promise of enabling more secure systems.

Paper: hicks-acsac06jpmail.pdf (December 2006)

Shamon: A system for distributed mandatory access control

We define and demonstrate an approach to securing distributed computation based on a shared reference monitor (Shamon) that enforces mandatory access control (MAC) policies across a distributed set of machines. The Shamon enables local reference monitor guarantees to be attained for a set of reference monitors on these machines. We implement a prototype system on the Xen hypervisor with a trusted MAC virtual machine built on Linux 2.6 whose reference monitor design requires only 13 authorization checks, only 5 of which apply to normal processing (others are for policy setup). We show that, through our architecture, distributed computations can be protected and controlled coherently across all the machines involved in the computation.

Paper: acsac06.pdf (December 2006)

Privacy-Preserving Web-Based Email

Recent web-based applications offer users free service in exchange for access to personal communication, such as online email services and instant messaging. The inspection and retention of user communication is generally intended to enable targeted marketing. However, unless specifically stated otherwise by the collecting service's privacy policy, such records have an indefinite lifetime and may be later used or sold without restriction. In this paper, we show that it is possible to protect a user's privacy from these risks by exploiting mutually oblivious, competing communication channels. We create virtual channels over online services (e.g., Google's Gmail, Microsoft's Hotmail) through which messages and cryptographic keys are delivered. The message recipient uses a shared secret to identify the shares and ultimately recover the original plaintext. In so doing, we create a wired "spread-spectrum" mechanism for protecting the privacy of web-based communication. We discuss the design and implementation of our open-source Java applet, Aquinas, and consider ways that the myriad of communication channels present on the Internet can be exploited to preserve privacy.

Paper: iciss06b.pdf (December 2006)

Password Exhaustion: Predicting the End of Password Usefulness

Passwords are currently the dominant authentication mechanism in computing systems. However, users are unwilling or unable to retain passwords with a large amount of entropy. This reality is exacerbated by the increasing ability of systems to mount offline attacks. In this paper, we evaluate the degree to which the previous statements are true and attempt to ascertain the point at which passwords are no longer sufficient to securely mediate authentication. In order to demonstrate this, we develop an analytical model for computation to understand the time required to recover random passwords. Further, an empirical study suggests the situation is much worse. In fact, we found that past systems vulnerable to offline attacks will be obsolete in 5-15 years, and our study suggests that a large number of these systems are already obsolete. We conclude that we must discard or fundamentally change these systems, and to that effect, we suggest a number of ways to prevent offline attacks.
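
A back-of-the-envelope version of such a model (the guess rates below are illustrative placeholders, not the paper's measured figures):

    import math

    def years_to_exhaust(entropy_bits, guesses_per_sec):
        # worst case: the attacker must try the whole keyspace
        return 2 ** entropy_bits / guesses_per_sec / (365.25 * 24 * 3600)

    entropy = 8 * math.log2(26)     # 8-char lowercase password, ~37.6 bits
    for rate in (1e6, 1e9, 1e12):   # lone CPU, GPU rig, large botnet (assumed)
        print(f"{rate:.0e}/s: {years_to_exhaust(entropy, rate):.2e} years")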

Paper: iciss06a.pdf (December 2006)

Optimizing BGP Security by Exploiting Path Stability

The Border Gateway Protocol (BGP) is the de facto interdomain routing protocol on the Internet. While the serious vulnerabilities of BGP are well known, no security solution has been widely deployed. The lack of adoption is largely caused by a failure to find a balance between deployability, cost, and security. In this paper, we consider the design and performance of BGP path authentication constructions that limit resource costs by exploiting route stability. Based on a year-long study of BGP traffic and indirectly supported by findings within the networking community, we observe that routing paths are highly stable. This observation leads to comprehensive and efficient constructions for path authentication. We empirically analyze the resource consumption of the proposed constructions via trace-based simulations. This latter study indicates that our constructions can reduce validation costs by as much as 97.3% over existing proposals while requiring nominal storage resources. We conclude by considering operational issues related to incremental deployment of our solution.

Paper: ccs06a.pdf (November 2006)

Secure Attribute-Based Systems

Attributes define, classify, or annotate the datum to which they are assigned. However, traditional attribute architectures and cryptosystems are ill-equipped to provide security in the face of diverse access requirements and environments. In this paper, we introduce a novel secure information management architecture based on emerging attribute-based encryption (ABE) primitives. A policy system that meets the needs of complex policies is defined and illustrated. Based on the needs of those policies, we propose cryptographic optimizations that vastly improve enforcement efficiency. We further explore the use of such policies in two example applications: a HIPAA compliant distributed file system and a social network. A performance analysis of our ABE system and example applications demonstrates the ability to reduce cryptographic costs by as much as 98% over previously proposed constructions. Through this, we demonstrate that our attribute system is an efficient solution for securely managing information in large, loosely-coupled, distributed systems.

Paper: ccs06b.pdf (November 2006)

Secure Multiparty Quantum Computation with (Only) a Strict Honest Majority

Secret sharing and multiparty computation (also called "secure function evaluation") are fundamental primitives in modern cryptography, allowing a group of mutually distrustful players to perform correct, distributed computations under the sole assumption that some number of them will follow the protocol honestly. This paper investigates how much trust is necessary -- that is, how many players must remain honest -- in order for distributed quantum computations to be possible. We present a verifiable quantum secret sharing (VQSS) protocol, and a general secure multiparty quantum computation (MPQC) protocol, which can tolerate any $\lfloor (n-1)/2 \rfloor$ cheaters among $n$ players. Previous protocols for these tasks tolerated $\lfloor (n-1)/4 \rfloor$ and $\lfloor (n-1)/6 \rfloor$ cheaters, respectively. The threshold we achieve is tight - even in the classical case, "fair" multiparty computation is not possible if any set of $n/2$ players can cheat. Our protocols rely on approximate quantum error-correcting codes, which can tolerate a larger fraction of errors than traditional, exact codes. We introduce new families of authentication schemes and approximate codes tailored to the needs of our protocols, as well as new state purification techniques along the lines of those used in fault-tolerant quantum circuits.

Paper: BCGHS06.pdf (October 2006)

Mitigating Open Functionality in SMS-Capable Cellular Networks

The transformation of telecommunications networks from homogeneous closed systems providing only voice services to Internet-connected open networks that provide voice and data services presents significant security challenges. For example, recent research illustrated that a carefully crafted DoS attack via text messaging could incapacitate all voice communications in a metropolitan area with little more than a cable modem. This attack highlights a growing threat to these systems; namely, cellular networks are increasingly exposed to adversaries both in and outside the network. In this paper, we use a combination of modeling and simulation to demonstrate the feasibility of targeted text messaging attacks. Under realistic network conditions, we show that adversaries can achieve blocking rates of more than 70% with only limited resources. We then develop and characterize five techniques from within two broad classes of countermeasures - queue management and resource provisioning. Our analysis demonstrates that these techniques can eliminate or extensively mitigate even the most intense targeted text messaging attacks. We conclude by considering the tradeoffs inherent to the application of these techniques in current and next generation telecommunications networks.

Paper: mobicom06.pdf (September 2006)

Efficient Group Mobility for Heterogeneous Sensor Networks

Mobility management protocols allow wireless devices to move between networks. These protocols have traditionally supported the mobility of individual nodes and are therefore not optimized to support the migration of groups. Accordingly, the time required to re-establish connectivity, frequency of dropped packets and contention for the air interface increase significantly for mobile groups. We propose a protocol for mobile groups that reduces all of the above by allowing a single node to perform handoffs on behalf of all group members. This "gateway" node eliminates the need for multiple handoff messages by obscuring group membership to external parties. Through extensive simulation and implementation, we show significant reduction in handoff times, message complexity and packet loss for groups of heterogeneous, mobile sensors running AODV and DSDV. By leveraging the naturally occurring hierarchy, we demonstrate that it is possible for groups to efficiently use traditional mobility protocols to support their collective movements.

Paper: vtc06.pdf (September 2006)

Tight Bounds for Unconditional Authentication Protocols in the Manual Channel and Shared key Models

We address the message authentication problem in two seemingly different communication models. In the first model, the sender and receiver are connected by an insecure channel and by a low-bandwidth auxiliary channel that enables the sender to "manually" authenticate one short message to the receiver (for example, by typing a short string or comparing two short strings). We consider this model in a setting where no computational assumptions are made, and prove that for any $0 < \epsilon < 1$ there exists a $\log^* n$-round protocol for authenticating $n$-bit messages, in which only $2 \log(1 / \epsilon) + O(1)$ bits are manually authenticated, and any adversary (even computationally unbounded) has probability of at most $\epsilon$ to cheat the receiver into accepting a fraudulent message. Moreover, we develop a proof technique showing that our protocol is essentially optimal by providing a lower bound of $2 \log(1 / \epsilon) - O(1)$ on the required length of the manually authenticated string.

The second model we consider is the traditional message authentication model. In this model the sender and the receiver share a short secret key; however, they are connected only by an insecure channel. We apply the proof technique above to obtain a lower bound of $2 \log(1 / \epsilon) - 2$ on the required Shannon entropy of the shared key. This settles an open question posed by Gemmell and Naor (CRYPTO '93).

Finally, we prove that one-way functions are necessary (and sufficient) for the existence of protocols breaking the above lower bounds in the computational setting.

Paper: 175.pdf (August 2006)

Leveraging IPsec for Distributed Authorization

Mandatory access control (MAC) enforcement is becoming available for commercial environments. For example, Linux 2.6 includes the Linux Security Modules (LSM) framework that enables the enforcement of MAC policies (e.g., Type Enforcement or Multi-Level Security) for individual systems. While this is a start, we envision that MAC enforcement should span multiple machines. The goal is to be able to control interaction between applications on different machines based on MAC policy. In this paper, we describe a recent extension of the LSM framework that enables labeled network communication via IPsec that is now available in mainline Linux as of version 2.6.16. This functionality enables machines to control communication with processes on other machines based on the security label assigned to an IPsec security association. We outline a security architecture based on labeled IPsec to enable distributed MAC authorization. In particular, we examine the construction of a xinetd service that uses labeled IPsec to limit client access on Linux 2.6.16 systems. We also discuss the application of labeled IPsec to distributed storage and virtual machine access control.

Paper: securecomm06.pdf (August 2006)

Non-Invasive Methods for Host Certification

Determining whether a user or system is exercising appropriate security practices is difficult in any context. Such difficulties are particularly pronounced when uncontrolled or unknown platforms join public networks. Commonly practiced techniques used to vet these hosts, such as system scans, have the potential to infringe upon the privacy of users. In this paper, we show that it is possible for clients to prove both the presence and proper functioning of security infrastructure without allowing unrestricted access to their system. We demonstrate this approach, specifically applied to anti-virus security, by requiring clients seeking admission to a network to positively identify the presence or absence of malcode in a series of puzzles. The implementation of this mechanism and its application to real networks are also explored. In so doing, we demonstrate that it is not necessary for an administrator to be invasive to determine whether a client implements good security practices.

Paper: securecomm06b.pdf (August 2006)

LIGER: Implementing Efficient Hybrid Security Mechanisms for Heterogeneous Sensor Networks

The majority of security schemes available for sensor networks assume deployment in areas without access to a wired infrastructure. More specifically, nodes in these networks are unable to leverage key distribution centers (KDCs) to assist them with key management. In networks with a heterogeneous mix of nodes, however, it is not unrealistic to assume that some more powerful nodes have at least intermittent contact with a backbone network. For instance, an air-deployed battlefield network may have to operate securely for some time until uplinked friendly forces move through the area. We therefore propose LIGER, a hybrid key management scheme for heterogeneous sensor networks that allows systems to operate in both the presence and absence of a KDC. Specifically, when no KDC is available, nodes communicate securely with each other based upon a probabilistic unbalanced method of key management. The ability to access a KDC allows nodes to probabilistically authenticate neighboring devices with which they are communicating. We also demonstrate that this scheme is robust to the compromise of both low and high capability nodes and that the same keys can be used for both modes of operation. Detailed experiments and simulations are used to show that LIGER is a highly practical solution for the current generation of sensors and the unbalanced approach can significantly reduce network initialization time.

Paper: mobisys06.pdf (June 2006)

Characterizing Address Use Structure and Stability of Origin Advertisement in Interdomain Routing

The stability and robustness of BGP remains one of the most critical elements in sustaining today's Internet. In this paper, we study the structure and stability of origin advertisements in inter-domain routing. We visualize and quantitatively characterize the frequency, size, and effect of address assignment and origin changes by analyzing real-world BGP updates for a period of one year from multiple vantage points. Broad classes of prefix behaviors are developed. We show that a significant portion of BGP traffic is due to prefix flapping and explore the contributing factors which include a number of prefixes with abnormal short up-and-down cycles. A significant portion of prefixes have high origin stability. Most ASes are involved in few, if any, prefix movement events, while a small number of ASes are responsible for most of the origin churn. Additionally, we find that a high volume of new prefixes can be attributed to actively evolving countries, that some abnormal prefix flapping is most likely due to misconfiguration, and that some culprit ASes characterize the places where multi-origin prefixes oscillate.

Paper: icss06.pdf (June 2006)

Retrofitting Legacy Code for Authorization Policy Enforcement

Researchers have argued that the best way to construct a secure system is to proactively integrate security into the design of the system. However, this tenet is rarely followed because of economic and practical considerations. Instead, security mechanisms are added as the need arises, by retrofitting legacy code. Existing techniques to do so are manual and ad hoc, and often result in security holes. We present program analysis techniques to assist the process of retrofitting legacy code for authorization policy enforcement. These techniques can be used to retrofit legacy servers, such as X window, web, proxy, and cache servers. Because such servers manage multiple clients simultaneously, and offer shared resources to clients, they must have the ability to enforce authorization policies. A developer can use our techniques to identify security-sensitive locations in legacy servers, and place reference monitor calls to mediate these locations. We demonstrate our techniques by retrofitting the X11 server to enforce authorization policies on its X clients.

Paper: oakland06.pdf (May 2006)

Establishing Pair-Wise Keys In Heterogeneous Sensor Networks

Many applications that make use of sensor networks require secure communication. Because asymmetric-key solutions are difficult to implement in such a resource-constrained environment, symmetric-key methods coupled with a priori key distribution schemes have been proposed to achieve the goals of data secrecy and integrity. These approaches typically assume that all sensors are similar in terms of capabilities, and hence deploy the same number of keys in all sensors in a network to provide the aforementioned protections. In this paper we demonstrate that a probabilistic unbalanced distribution of keys throughout the network that leverages the existence of a small percentage of more capable sensor nodes can not only provide an equal level of security but also reduce the consequences of node compromise. We demonstrate the effectiveness of this approach on small networks using a variety of trust models and then demonstrate the application of this method to very large systems. The approach and analysis presented in this paper can be applied to all protocols that use probabilistic keys including those that employ broadcast mechanisms, hash functions or polynomials for the generation of keys.
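
The connectivity arithmetic behind unbalanced predistribution is easy to sketch (the pool and key-ring sizes below are illustrative): two nodes can communicate only if their randomly drawn key rings intersect, and giving the more capable node a larger ring raises that probability:

    from math import comb

    def p_share(pool, k_a, k_b):
        # probability that two random key rings drawn from one pool intersect
        return 1 - comb(pool - k_a, k_b) / comb(pool, k_b)

    POOL = 10_000                      # illustrative key-pool size
    print(p_share(POOL, 50, 50))       # balanced: two ordinary sensors, ~0.22
    print(p_share(POOL, 500, 50))      # unbalanced: sensor + capable node, ~0.92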

Paper: infocom06.pdf (April 2006)

The Effects of Probabilistic Key Management on Secure Routing in Sensor Networks

Secure data dissemination in wireless ad hoc and sensor networks has recently received a great deal of attention. A variety of protocols have been proposed in order to ensure secure data delivery across these systems; however, the majority of these schemes assume the presence of public or pre-established symmetric keys. Accordingly, the cost of key management has not been incorporated into secure routing mechanisms in this setting. This paper considers the expenses incurred by sensor networks implementing secure routing schemes on top of probabilistic symmetric key management schemes. Specifically, we examine the overhead observed from proactive and reactive key establishment mechanisms for networks using a balanced method of key management. Through extensive simulation, we quantify more realistic costs for the application of secure hop-by-hop routing in sensor networks.

Paper: wcnc06.pdf (April 2006)

Calibrating Noise to Sensitivity in Private Data Analysis

We continue a line of research initiated in [10, 11] on privacy-preserving statistical databases. Consider a trusted server that holds a database of sensitive information. Given a query function f mapping databases to reals, the so-called true answer is the result of applying f to the database. To protect privacy, the true answer is perturbed by the addition of random noise generated according to a carefully chosen distribution, and this response, the true answer plus noise, is returned to the user.

Previous work focused on the case of noisy sums, in which $f = \sum_i g(x_i)$, where $x_i$ denotes the $i$th row of the database and $g$ maps database rows to $[0, 1]$. We extend the study to general functions $f$, proving that privacy can be preserved by calibrating the standard deviation of the noise according to the sensitivity of the function $f$. Roughly speaking, this is the amount that any single argument to $f$ can change its output. The new analysis shows that for several particular applications substantially less noise is needed than was previously understood to be the case.

The first step is a very clean characterization of privacy in terms of indistinguishability of transcripts. Additionally, we obtain separation results showing the increased value of interactive sanitization mechanisms over non-interactive ones.
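
A minimal sketch of the calibration principle, using a counting query of sensitivity 1 (the Laplace noise follows the paper's approach; the data and privacy parameter are illustrative):

    import random

    def laplace(scale):
        # the difference of two exponentials is Laplace(0, scale)
        return random.expovariate(1 / scale) - random.expovariate(1 / scale)

    def noisy_query(db, f, sensitivity, epsilon):
        # calibrate the noise to the query's sensitivity
        return f(db) + laplace(sensitivity / epsilon)

    db = [0, 1, 1, 0, 1, 1, 1]          # one sensitive bit per row (toy data)
    count = lambda rows: sum(rows)      # one row changes the count by at most 1
    print(noisy_query(db, count, sensitivity=1, epsilon=0.5))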

Paper: sensitivity-tcc-final.pdf (March 2006)

Enterprise Security: A Community of Interest Based Approach

Enterprise networks today carry a range of mission critical communications. A successful worm attack within an enterprise network can be substantially more devastating to most companies than attacks on the larger Internet. In this paper we explore a brownfield approach to hardening an enterprise network against active malware such as worms. The premise of our approach is that if future communication patterns are constrained to historical "normal" communication patterns, then the ability of malware to exploit vulnerabilities in the enterprise can be severely curtailed. We present techniques for automatically deriving individual host profiles that capture historical communication patterns (i.e., community of interest (COI)) of end hosts within an enterprise network. Using traces from a large enterprise network, we investigate how a range of different security policies based on these profiles impact usability (as valid communications may get restricted) and security (how well the policies contain malware). Our evaluations indicate that a simple security policy comprised of our Extended COI-based profile and Relaxed Throttling Discipline can effectively contain worm behavior within an enterprise without significantly impairing normal network operation.

Paper: ndss06.pdf (February 2006)

Toward Automated Information-Flow Integrity Verification for Security-Critical Applications

We provide a largely automated system for verifying Clark-Wilson interprocess information-flow integrity. Information-flow integrity properties are essential to isolate trusted processes from untrusted ones by removing unwanted input dependences. For example, an untrusted user process should not be able to write to sshd_config via a cron script. A useful notion of integrity is the Clark-Wilson integrity model [7], which allows trusted processes to accept necessary untrusted inputs (e.g., network data or print jobs) via filtering interfaces that sanitize the data. However, Clark-Wilson has the requirement that programs undergo formal semantic verification; in practice, this kind of burden has meant that no information-flow integrity property is verified on most widely-used systems. We define a weaker version of Clark-Wilson integrity, called CW-Lite, which has the same interprocess information-flow guarantees, but which requires less filtering, only small changes to existing applications, and which we can check using automated tools. We modify the SELinux user library and kernel module in order to support CW-Lite integrity verification and develop new software tools to aid developers in finding and enabling filtering interfaces. Using our toolset, we found and fixed several integrity-violating configuration errors in the default SELinux policies for OpenSSH and vsftpd.

Paper: ndss06.pdf (February 2006)

Understanding Mutable Internet Pathogens, or How I Learned to Stop Worrying and Love Parasitic Behavior

Worms are becoming increasingly hostile. The exponential growth of infection rates allows small outbreaks to have worldwide consequences within minutes. Moreover, the collateral damage caused by infections can cripple the entire Internet. While harmful, such behaviors have historically been short-lived. We assert the future holds much more caustic malware. Attacks based on mutation and covert propagation are likely to be ultimately more damaging and long lasting. This assertion is supported by observations of natural systems, where similarly behaving parasites represent by far the most successful class of living creatures. This talk considers a parasite for the Internet, providing biological metaphors for its behavior and demonstrating the structure of pathogens. Through simulation, we show that even with low infection rates, a mutating pathogen will eventually infect an entire community. We posit the inevitability of such parasites, and consider ways that they can be mitigated.

Paper: iciss05.pdf (December 2005)

Building a MAC-based Security Architecture for the Xen Open-Source Hypervisor

We present the sHype hypervisor security architecture and examine in detail its mandatory access control facilities. While existing hypervisor security approaches aiming at high assurance have been proven useful for high-security environments that prioritize security over performance and code reuse, our approach aims at commercial security where near-zero performance overhead, non-intrusive implementation, and usability are of paramount importance. sHype enforces strong isolation at the granularity of a virtual machine, thus providing a robust foundation on which higher software layers can enact finer-grained controls. We provide the rationale behind the sHype design and describe and evaluate our implementation for the Xen open-source hypervisor.

Paper: acsac05.pdf (December 2005)

TARP: Ticket-Based Address Resolution Protocol

IP networks fundamentally rely on the Address Resolution Protocol (ARP) for proper operation. Unfortunately, vulnerabilities in the ARP protocol enable a raft of IP-based impersonation, man-in-the-middle, or DoS attacks. Proposed countermeasures to these vulnerabilities have yet to simultaneously address backward compatibility and cost requirements. This paper introduces the Ticket-based Address Resolution Protocol (TARP). TARP implements security by distributing centrally issued secure MAC/IP address mapping attestations through existing ARP messages. We detail the TARP protocol and its implementation within the Linux operating system. Our experimental analysis shows that TARP reduces the cost of implementing ARP security by as much as two orders of magnitude over existing protocols. We conclude by exploring a range of operational issues associated with deploying and administering ARP security.
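
A conceptual sketch of ticket issuance and validation (all parameters are hypothetical, and an HMAC stands in for the centrally issued, publicly verifiable attestations the protocol actually distributes):

    import hashlib, hmac, time

    LTA_KEY = b"ticketing-agent-secret"        # hypothetical issuer key

    def issue_ticket(mac, ip, validity=3600):
        msg = f"{mac}|{ip}|{int(time.time()) + validity}".encode()
        return msg, hmac.new(LTA_KEY, msg, hashlib.sha256).digest()

    def verify_ticket(msg, tag):
        # check freshness, then the attestation on the MAC/IP binding
        expiry = int(msg.decode().split("|")[2])
        expected = hmac.new(LTA_KEY, msg, hashlib.sha256).digest()
        return expiry > time.time() and hmac.compare_digest(tag, expected)

    msg, tag = issue_ticket("00:11:22:33:44:55", "192.0.2.7")
    print(verify_ticket(msg, tag))      # the ticket rides along in the ARP reply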

Paper: acsac05.pdf (December 2005)

Automatic placement of authorization hooks in the Linux security modules framework

We present a technique for automatic placement of authorization hooks, and apply it to the Linux security modules (LSM) framework. LSM is a generic framework which allows diverse authorization policies to be enforced by the Linux kernel. It consists of a kernel module which encapsulates an authorization policy, and hooks into the kernel module placed at appropriate locations in the Linux kernel. The kernel enforces the authorization policy using hook calls. In current practice, hooks are placed manually in the kernel. This approach is tedious, and as prior work has shown, is prone to security holes.

Our technique uses static analysis of the Linux kernel and the kernel module to automate hook placement. Given a non-hook-placed version of the Linux kernel, and a kernel module that implements an authorization policy, our technique infers the set of operations authorized by each hook, and the set of operations performed by each function in the kernel. It uses this information to infer the set of hooks that must guard each kernel function. We describe the design and implementation of a prototype tool called TAHOE (Tool for Authorization Hook Placement) that uses this technique. We demonstrate the effectiveness of TAHOE by using it with the LSM implementation of security-enhanced Linux (SELinux). While our exposition in this paper focuses on hook placement for LSM, our technique can be used to place hooks in other LSM-like architectures as well.

Paper: ccs05.pdf (November 2005)

Exploiting Open Functionality in SMS-Capable Cellular Networks

Cellular networks are a critical component of the economic and social infrastructures in which we live. In addition to voice services, these networks deliver alphanumeric text messages to the vast majority of wireless subscribers. To encourage the expansion of this new service, telecommunications companies offer connections between their networks and the Internet. The ramifications of such connections, however, have not been fully recognized. In this paper, we evaluate the security impact of the SMS interface on the availability of the cellular phone network. Specifically, we demonstrate the ability to deny voice service to cities the size of Washington D.C. and Manhattan with little more than a cable modem. Moreover, attacks targeting the entire United States are feasible with resources available to medium-sized zombie networks. This analysis begins with an exploration of the structure of cellular networks. We then characterize network behavior and explore a number of reconnaissance techniques aimed at effectively targeting attacks on these systems. We conclude by discussing countermeasures that mitigate or eliminate the threats introduced by these attacks.

Paper: ccs05.pdf (November 2005)

The Sleep Deprivation Attack in Sensor Networks: Analysis and Methods of Defense

The ability of sensor nodes to enter a low power sleep mode is very useful for extending network longevity. We show how adversary nodes can exploit clustering algorithms to ensure their selection as cluster heads for the purpose of launching attacks that prevent victim nodes from sleeping. We present two such attacks: the barrage attack and the sleep deprivation attack. The barrage attack bombards victim nodes with legitimate requests, whereas the sleep deprivation attack makes requests of victim nodes only as often as is necessary to keep the victims awake. We show that while the barrage attack causes its victims to spend slightly more energy, it is more easily detected and requires more effort on behalf of the attacker. Thus we have focused our research on the sleep deprivation attack. Our analysis indicates that this attack can nullify any energy savings obtained by allowing sensor nodes to enter sleep mode. We also analyze three separate methods for mitigating this attack: the random vote scheme, the round robin scheme, and the hash-based scheme. We have evaluated these schemes based upon their ability to reduce the adversary's attack, the amount of time required to select a cluster head, and the amount of energy required to perform each scheme. We have found that of the three clustering methods analyzed, the hash-based scheme is the best at mitigating the sleep deprivation attack.

Paper: ICA_DSN_05b.pdf (October 2005)

Privacy Preserving Clustering

The freedom and transparency of information flow on the Internet has heightened concerns of privacy. Given a set of data items, clustering algorithms group similar items together. Clustering has many applications, such as customer-behavior analysis, targeted marketing, forensics, and bioinformatics. In this paper, we present the design and analysis of a privacy-preserving k-means clustering algorithm, where only the cluster means at the various steps of the algorithm are revealed to the participating parties. The crucial step in our privacy-preserving k-means is privacy-preserving computation of cluster means. We present two protocols (one based on oblivious polynomial evaluation and the second based on homomorphic encryption) for privacy-preserving computation of cluster means. We have a Java implementation of our algorithm. Using our implementation, we have performed a thorough evaluation of our privacy-preserving clustering algorithm on three data sets. Our evaluation demonstrates that privacy-preserving clustering is feasible, i.e., our homomorphic-encryption based algorithm finished clustering a large data set in approximately 66 seconds.
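
A toy sketch of the homomorphic-encryption route to cluster means (Paillier with insecure demo primes; the protocol framing is illustrative, not the paper's protocol): ciphertexts multiply to an encryption of the sum, so only the final mean need be revealed:

    from math import gcd

    p, q = 293, 433                               # demo primes, far too small
    n, n2 = p * q, (p * q) ** 2
    lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)  # lcm(p-1, q-1)
    g = n + 1

    def L(u):
        return (u - 1) // n

    mu = pow(L(pow(g, lam, n2)), -1, n)           # decryption constant

    def enc(m, r):                                # r must be coprime to n
        return (pow(g, m, n2) * pow(r, n, n2)) % n2

    def dec(c):
        return (L(pow(c, lam, n2)) * mu) % n

    # two parties contribute encrypted partial sums for one cluster coordinate
    c = (enc(40, 17) * enc(25, 29)) % n2          # multiply = add under encryption
    print(dec(c), "-> mean over 5 points:", dec(c) / 5)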

Paper: esorics05.pdf (September 2005)

Secure Reporting of Traffic Forwarding Activity in Mobile Ad Hoc Networks

Nodes forward data on behalf of each other in mobile ad hoc networks. In a civilian application, nodes are assumed to be selfish and rational, i.e., they pursue their own self-interest. Hence, the ability to accurately measure traffic forwarding is critical to ensure proper network operation. These measurements are often used to credit nodes based on their level of participation, or to detect loss. Past solutions employ neighbor monitoring and reporting on node forwarding traffic. These methods are not applicable in civilian networks where neighbor nodes lack the desire or ability to perform the monitoring function. Such environments occur frequently when neighbor hosts are resource constrained, or in networks where directional antennas are used and reliable monitoring is difficult or impossible.

In this paper, we propose a protocol that uses nodes on the data path to securely produce packet forwarding reports. Reporting nodes are chosen randomly and secretly so that malicious nodes cannot modify their behavior based upon the monitoring point. The integrity and authenticity of reports are preserved through the use of secure link layer acknowledgments and monitoring reports. The robustness of the reporting mechanism is strengthened by forwarding the report to multiple destinations (source and destination). We explore the security, cost, and accuracy of our protocol.

Paper: mobiq05.pdf (July 2005)

Correcting Errors Without Leaking Partial Information

This paper explores what kinds of information two parties must communicate in order to correct errors which occur in a shared secret string W. Any bits they communicate must leak a significant amount of information about W - that is, from the adversary's point of view, the entropy of W will drop significantly. Nevertheless, we construct schemes with which Alice and Bob can prevent an adversary from learning any useful information about W. Specifically, if the entropy of W is sufficiently high, then there is no function f(W) which the adversary can learn from the error-correction information with significant probability. This leads to several new results: (a) the design of noise-tolerant "perfectly one-way" hash functions in the sense of Canetti et al. [7], which in turn leads to obfuscation of proximity queries for high entropy secrets W; (b) private fuzzy extractors [11], which allow one to extract uniformly random bits from noisy and nonuniform data W, while also insuring that no sensitive information about W is leaked; and (c) noise tolerance and stateless key re-use in the Bounded Storage Model, resolving the main open problem of Ding [10].

The heart of our constructions is the design of strong randomness extractors with the property that the source W can be recovered from the extracted randomness and any string W' which is close to W.

Paper: proxobf-stoc-proc.pdf (May 2005)

Approximate Quantum Error Correcting Codes and Verifiable Secret Sharing

It is a standard result in the theory of quantum error-correcting codes that no code of length n can fix more than n/4 arbitrary errors, regardless of the dimension of the coding and encoded Hilbert spaces. However, this bound only applies to codes which recover the message exactly. Naively, one might expect that correcting errors to very high fidelity would only allow small violations of this bound. This intuition is incorrect: in this paper we describe quantum error-correcting codes capable of correcting up to $\lfloor (n-1)/2 \rfloor$ arbitrary errors with fidelity exponentially close to 1, at the price of increasing the size of the registers (i.e., the coding alphabet). This demonstrates a sharp distinction between exact and approximate quantum error correction. The codes have the property that any t components reveal no information about the message, and so they can also be viewed as error-tolerant secret sharing schemes.

The construction has several interesting implications for cryptography and quantum information theory. First, it suggests that secret sharing is a better classical analogue to quantum error correction than is classical error correction. Second, it highlights an error in a purported proof that verifiable quantum secret sharing (VQSS) is impossible when the number of cheaters t is n/4. In particular, the construction directly yields an honest-dealer VQSS scheme for $t = \lfloor (n-1)/2 \rfloor$. We believe the codes could also potentially lead to improved protocols for dishonest-dealer VQSS and secure multi-party quantum computation.

More generally, the construction illustrates a difference between exact and approximate requirements in quantum cryptography and (yet again) the delicacy of security proofs and impossibility results in the quantum model.

Paper: aqecc-proc-Eurocrypt-2005.pdf (May 2005)

Secure Remote Authentication Using Biometric Data

Biometric data offer a potential source of high-entropy, secret information that can be used in cryptographic protocols provided two issues are addressed: (1) biometric data are not uniformly distributed; and (2) they are not exactly reproducible. Recent work, most notably that of Dodis, Reyzin, and Smith, has shown how these obstacles may be overcome by allowing some auxiliary public information to be reliably sent from a server to the human user. Subsequent work of Boyen has shown how to extend these techniques, in the random oracle model, to enable unidirectional authentication from the user to the server without the assumption of a reliable communication channel.

We show two efficient techniques enabling the use of biometric data to achieve mutual authentication or authenticated key exchange over a completely insecure (i.e., adversarially controlled) channel. In addition to achieving stronger security guarantees than the work of Boyen, we improve upon his solution in a number of other respects: we tolerate a broader class of errors and, in one case, improve upon the parameters of his solution and give a proof of security in the standard model.

Paper: fuzzy-remote-proceedings-Eurocrypt-2005.pdf (May 2005)

Towards Privacy in Public Databases

We initiate a theoretical study of the census problem. Informally, in a census individual respondents give private information to a trusted party (the census bureau), who publishes a sanitized version of the data. There are two fundamentally conflicting requirements: privacy for the respondents and utility of the sanitized data. Unlike in the study of secure function evaluation, in which privacy is preserved to the extent possible given a specific functionality goal, in the census problem privacy is paramount; intuitively, things that cannot be learned "safely" should not be learned at all.

An important contribution of this work is a definition of privacy (and privacy compromise) for statistical databases, together with a method for describing and comparing the privacy offered by specific sanitization techniques. We obtain several privacy results using two different sanitization techniques, and then show how to combine them via cross training. We also obtain two utility results involving clustering.

Paper: sdb-tcc-2005-almost-final-proceedings.pdf (February 2005)

Entropic Security and the Encryption of High Entropy Messages

Russell and Wang (Eurocrypt 2002) recently introduced an elegant, information-theoretic notion called entropic security of encryption: they required that the ciphertext leak no predicate of the plaintext (similar to semantic security (Goldwasser and Micali, JCSS 1984)) but only as long as the distribution on messages has high entropy from the adversary's point of view. They show that this notion of security can be achieved with very short keys for entropically rich message spaces. Canetti, Micciancio and Reingold (STOC, 1998) had previously constructed hash functions which satisfy a similar entropic security condition. The output of such a hash function leaks no partial information about the input, provided the input has sufficiently high entropy. This paper studies entropic security in general, and its application to the encryption of high-entropy messages.

(1) We elucidate the notion of entropic security. Our results apply to all entropically-secure primitives, including both encryption and hash functions. We strengthen the formulation of Canetti and Russell and Wang: we require that an entropically-secure primitive leak no function whatsoever of its input, while the previous works focus only on predicates. This stronger formulation is much closer to the original notion of semantic security. We also show that entropic security is equivalent to indistinguishability on pairs of input distributions of sufficiently high entropy. This equivalence generalizes the conventional equivalence between indistinguishability and semantic security. Indistinguishability, in turn, is related to randomness extraction from non-uniform distributions. The proof of the equivalence of hiding predicates to hiding all functions is quite involved, and requires very different techniques from those of Goldwasser and Micali.

(2) We use the equivalence above, and the connection to randomness extraction, to prove several new results on entropically-secure encryption. We obtain:

(a) Two general frameworks for constructing entropically secure encryption schemes, one based on expander graphs and the other on XOR-universal hash functions. These schemes generalize the schemes of Russell and Wang, yielding simpler constructions and proofs as well as improved parameters. The best scheme uses a key of length k = n - t + 2 log(1/eps) + O(1), where eps is a measure of leakage and t is the initial min-entropy of the message.

(b) Lower bounds on the key length k for entropic security and indistinguishability. In particular, we show near tightness of the Russell-Wang constructions: k > n - t. (In fact, for a large class of schemes, k >= n - t + log(1/eps).) A numerical illustration of these parameters follows.
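
As a rough numeric illustration of these bounds (a minimal sketch: the values of n, t, and eps below are assumed for illustration, and the O(1) term is dropped):

    # Plugging hypothetical numbers into the key-length expressions above.
    import math

    n = 1024          # message length in bits (assumed)
    t = 768           # min-entropy of the message distribution (assumed)
    eps = 2 ** -40    # leakage parameter (assumed)

    k_scheme = n - t + 2 * math.log2(1 / eps)   # best scheme, O(1) dropped
    k_bound = n - t + math.log2(1 / eps)        # lower bound, large class

    print(f"scheme key length ~ {k_scheme:.0f} bits")   # ~336 bits
    print(f"lower bound       ~ {k_bound:.0f} bits")    # ~296 bits

Both values are far below the n bits a one-time pad would require, which is the point of entropic security for high-entropy messages.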

Paper: 219.pdf (February 2005)

Attestation-based policy enforcement for remote access

Intranet access has become an essential function for corporate users. At the same time, corporations' security administrators have little ability to control access to corporate data once it is released to remote clients. At present, no confidentiality or integrity guarantees about the remote access clients are made, so it is possible that an attacker has compromised a client process and is now downloading or modifying corporate data. Even though we have corporate-wide access control over remote users, the access control approach is currently insufficient to stop these malicious processes. We have designed and implemented a novel system that empowers corporations to verify client integrity properties and to establish trust in the client's policy enforcement before allowing clients (remote) access to corporate Intranet services. Client integrity is measured using a Trusted Platform Module (TPM), a new security technology that is becoming broadly available on client systems, and our system uses these measurements for access policy decisions enforced upon the client's processes. We have implemented a Linux 2.6 prototype system that utilizes TPM measurement and attestation, existing Linux network control (Netfilter), and existing corporate policy management tools in the Tivoli Access Manager to control remote client access to corporate data. This prototype illustrates that our solution integrates seamlessly into scalable corporate policy management and introduces only a minor performance overhead.

Paper: ccs04.pdf ( 2004)

Design and Implementation of a TCG-based Integrity Measurement Architecture

We present the design and implementation of a secure integrity measurement system for Linux. All executable content that is loaded onto the Linux system is measured before execution and these measurements are protected by the Trusted Platform Module (TPM) that is part of the Trusted Computing Group (TCG) standards. Our system is the first to extend the TCG trust measurement concepts to dynamic executable content from the BIOS all the way up into the application layer. In effect, we show that many of the Microsoft NGSCB guarantees can be obtained on today's hardware and today's software and that these guarantees do not require a new CPU mode or operating system but merely depend on the availability of an independent trusted entity, a TPM for example. We apply our trust measurement architecture to a web server application where we show how our system can detect undesirable invocations, such as rootkit programs, and that our measurement architecture is practical in terms of the number of measurements taken and the performance impact of making them.

Paper: usenix_sec2004.pdf ( 2004)

Resolving Constraint Conflicts

In this paper, we define constraint conflicts and examine properties that may aid in guiding their resolution. A constraint conflict is an inconsistency between the access control policy and the constraints specified to limit that policy. For example, a policy that permits a high integrity subject to access low integrity data is in conflict with a Biba integrity constraint. Constraint conflicts differ from typical policy conflicts in that constraints are never supposed to be violated. That is, a conflict with a constraint results in a policy compilation error, whereas policy conflicts are resolved at runtime. As we have found in the past, when constraint conflicts occur in a specification a variety of resolutions are both possible and practical. In this paper, we detail some key formal properties of constraint conflicts and show how these are useful in guiding conflict resolution. We use the SELinux example policy for Linux 2.4.19 as the source of our constraint conflicts and resolution examples. The formal properties are used to guide the selection of resolutions and provide a basis for a resolution language that we apply to resolve conflicts in the SELinux example policy.

Paper: sacmat04.pdf (June 2004)

Origin Authentication in Interdomain Routing

Attacks against Internet routing are increasing in number and severity. Contributing greatly to these attacks is the absence of origin authentication: there is no way to validate claims of address ownership or location. The lack of such services not only enables attacks by malicious entities, but also indirectly allows seemingly inconsequential misconfigurations to disrupt large portions of the Internet. This paper considers the semantics, design, and costs of origin authentication in interdomain routing. We formalize the semantics of address delegation and use on the Internet, and develop and characterize broad classes of origin authentication proof systems. We estimate the address delegation graph representing the current use of IPv4 address space using available routing data. This effort reveals that current address delegation is dense and relatively static: as few as 16 entities perform 80% of the delegation on the Internet. We conclude by evaluating the proposed services via trace-based simulation. This simulation shows that the enhanced proof systems can reduce resource consumption by an order of magnitude over currently proposed solutions.

Paper: ccs03a.pdf (October 2003)

On the Performance, Feasibility, and Use of Forward Secure Signatures

Forward-secure signatures (FSSs) have recently received much attention from the cryptographic theory community as a potentially realistic way to mitigate many of the difficulties digital signatures face with key exposure. However, no previous works have explored the practical performance of these proposed constructions in real-world applications, nor have they compared FSS to traditional, non-forward-secure, signatures in a non-asymptotic way.

We present an empirical evaluation of several FSS schemes that looks at the relative performance among different types of FSS as well as between FSS and traditional signatures. Our study provides the following contributions: first, a new methodology for comparing the performance of signature schemes, and second, a thorough examination of the practical performance of FSS. We show that for many cases the best FSS scheme has essentially identical performance to traditional schemes, and even in the worst case is only 2-4 times slower. On the other hand, we also show that if the wrong FSS configuration is used, the performance can be orders of magnitude slower. Our methodology provides a way to prevent such misconfigurations, and we examine common applications of digital signatures using it. In addition to the evaluation methodology and empirical study, a third contribution of this paper is the open-source FSS library developed for the study.

We conclude that not only are forward-secure signatures a useful theoretical construct as previous works have shown, but they are also, when used correctly, a very practical solution to some of the problems associated with key exposure in real-world applications. Through our metrics and our reference implementation we provide the tools necessary for developers to efficiently use forward-secure signatures.

Paper: ccs03b.pdf (October 2003)

Analyzing Integrity Protection in the SELinux Example Policy

In this paper, we present an approach for analyzing the integrity protection in the SELinux example policy. The SELinux example policy is intended as a starting point that administrators customize to meet their site's security goals, but the complexity of the model and the size of the policy make this quite difficult. Our aim is to provide an access control model to express site security goals and resolve them against the SELinux policy. Ultimately, we aim to define a minimal trusted computing base (TCB) that satisfies Clark-Wilson integrity, by first testing for the more restrictive Biba integrity policy and resolving conflicts using Clark-Wilson semantics. Our policy analysis tool, Gokyo, implements the following approach: (1) it represents the SELinux example policy and our integrity goals; (2) it identifies conflicts between them; (3) it estimates the resolutions to these conflicts; and (4) it provides information for deciding upon a resolution. Using Gokyo, we derive a proposal for a minimal TCB for SELinux that includes 30 subject types, and we identify the work remaining to ensure that this TCB is integrity-protected. Our analysis is performed on the SELinux example policy for Linux 2.4.19.

Paper: jsz03.pdf (August 2003)

Working Around BGP: An Incremental Approach to Improving Security and Accuracy of Interdomain Routing

BGP is essential to the operation of the Internet, but is vulnerable to both accidental failures and malicious attacks. We propose a new protocol that works in concert with BGP, which Autonomous Systems will use to help detect and mitigate accidentally or maliciously introduced faulty routing information. The protocol differs from previous efforts at securing BGP in that it is receiver-driven, meaning that there is a mechanism for recipients of BGP UPDATE messages to corroborate the information they receive and to provide feedback. We argue that our new protocol can be adopted incrementally, and we show that there is incentive for network operators to do so. We also describe our prototype implementation.

Paper: ndss03.pdf (February 2003)

Runtime Verification of Authorization Hook Placement for the Linux Security Modules Framework

We present runtime tools to assist the Linux community in verifying the correctness of the Linux Security Modules (LSM) framework. The LSM framework consists of a set of authorization hooks inserted into the Linux kernel to enable additional authorizations to be performed (e.g., for mandatory access control). When compared to system call interposition, authorization within the kernel has both security and performance advantages, but it is more difficult to verify that the placement of the LSM hooks ensures that all the kernel's security-sensitive operations are authorized. We have examined both static and runtime analysis techniques for this verification, and have found them to be complementary. Static analysis is more complex to implement and tends to generate more false positives, but coverage of all type-safe execution paths is possible. Runtime analysis lacks the code and input coverage of static analysis, but it is simpler to implement and to gather useful information with. The major simplifying factor in our runtime verification approach is that we can leverage the fact that most of the LSM hooks are properly placed to identify misplaced hooks. Our runtime verification tools collect the current LSM authorizations and find inconsistencies in these authorizations. We describe our approach for performing runtime verification, the design of the tools that implement this approach, and the anomalous situations found in an LSM-patched Linux 2.4.16 kernel.

Paper: ccs02.pdf (October 2002)

Using CQUAL for Static Analysis of Authorization Hook Placement

The Linux Security Modules (LSM) framework is a set of authorization hooks for implementing flexible access control in the Linux kernel. While much effort has been devoted to defining the module interfaces, little attention has been paid to verifying the correctness of hook placement. This paper presents a novel approach to the verification of LSM authorization hook placement using CQUAL, a type-based static analysis tool. With a simple CQUAL lattice configuration and some GCC-based analyses, we are able to verify complete mediation of operations on key kernel data structures. Our results reveal some potential security vulnerabilities of the current LSM framework, one of which we demonstrate to be exploitable. Our experiences demonstrate that combinations of conceptually simple tools can be used to perform fairly complex analyses.
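
As a sketch of the type-qualifier idea underlying this kind of analysis (a minimal sketch only, not CQUAL itself or its actual lattice; all names below are hypothetical):

    # Values carry a qualifier ("unchecked" < "checked"); every
    # security-sensitive operation must receive only "checked" values.
    LATTICE = {"unchecked": 0, "checked": 1}

    def flows_to(actual, required):
        # A qualifier is acceptable if it sits at or above the requirement.
        return LATTICE[actual] >= LATTICE[required]

    # qualifiers inferred for variables on a toy kernel path
    env = {"inode_from_syscall": "unchecked",   # no hook on this path
           "inode_after_hook": "checked"}       # mediated by an LSM hook

    calls = [("vfs_write", "inode_from_syscall", "checked"),
             ("vfs_read", "inode_after_hook", "checked")]

    for fn, arg, required in calls:
        if not flows_to(env[arg], required):
            print(f"possible missing authorization hook: {fn}({arg})")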

Paper: usenix_sec2002.pdf (August 2002)

Methods and Limitations of Security Policy Reconciliation

A security policy is a means by which participant session requirements are specified. However, existing frameworks provide limited facilities for the automated reconciliation of participant policies. This paper considers the limits and methods of reconciliation in a general-purpose policy model. We identify an algorithm for efficient two-policy reconciliation, and show that, in the worst-case, reconciliation of three or more policies is intractable. Further, we suggest efficient heuristics for the detection and resolution of intractable reconciliation. Based upon the policy model, we describe the design and implementation of the Ismene policy language. The expressiveness of Ismene, and indirectly of our model, is demonstrated through the representation and exposition of policies supported by existing policy languages. We conclude with brief notes on the integration and enforcement of Ismene policy within the Antigone communication system.

Paper: oak2002.pdf (MAY 2002)

Flexibly Constructing Secure Groups in Antigone 2.0

Group communication is increasingly used as a low cost building block for the development of highly available and survivable services in dynamic environments. However, contemporary frameworks often provide limited facilities for the definition and enforcement of precise security policies. This paper presents the Antigone 2.0 framework that allows the flexible specification and enforcement of group security policies. Enforcement is achieved through the policy-directed composition and configuration of sets of basic security services implementing the group. We summarize the design of the Antigone 2.0 architecture, its use, and the Application Programming Interface (API). The use of the API is illustrated through two applications built on Antigone: a reliable multicast system and a host-level multicast security service. We conclude with a description of current status and plans for future work.

Paper: discex01.pdf (June 2001)

Managing access control complexity using metrics

General access control models enable flexible expression of access control policies, but they make the verification of whether a particular access control configuration is safe (i.e., prevents the leakage of a permission to an unauthorized subject) difficult. The current approach to expressing safety policy in such models is to use constraints. When the constraints are verified, then the configuration is verified to be safe. However, the addition of constraints to an access control configuration significantly increases its complexity, so it quickly becomes difficult to understand the access control policy expressed in the configuration such that future changes can be made correctly. We propose an approach whereby the complexity of each access control configuration is estimated, so the administrators can see the effect of a configuration change on the future ability to maintain the configuration. We identify metrics for making complexity estimates and evaluate these metrics on some constraint examples. Our goal is to enable the use of flexible access control models for safety-critical systems by permitting limited use of constraints that do not complicate the configuration beyond a maintainable complexity.

Paper: sacmat01.pdf (May 2001)

Principles of Policy in Secure Groups

Security policy is increasingly being used as a vehicle for specifying complex entity relationships. When used to define group security, policy must be extended to state the entirety of the security context. For this reason, the policy requirements of secure groups are more complex than those found in traditional peer communication; group policies convey information about associations greater and more abstract than their pair-wise counterparts. This paper identifies and illustrates universal requirements of secure group policy and reasons about the adherence of the Group Security Association Key Management Protocol (GSAKMP) to these principles.

Paper: ndss01.pdf (February 2001)

Windowed Certificate Revocation

The advent of electronic commerce and personal communications on the Internet heightens concerns over the lack of privacy and security. Network services providing a wide range of security related guarantees are increasingly based on public key certificates. A fundamental problem inhibiting the wide acceptance of existing certificate distribution services is the lack of a scalable certificate revocation mechanism. We argue in this paper that the resource requirements of extant revocation mechanisms place significant burden on certificate servers and network resources. We propose a novel mechanism called windowed revocation that satisfies the security policies and requirements of existing mechanisms and, at the same time, reduces the burden on certificate servers and network resources. We include a proof of correctness of windowed revocation and analyze worst case performance scenarios.

Paper: info00.pdf (March 2000)

A Response to `Can We Eliminate Certificate Revocation Lists?'

The massive growth of electronic commerce on the Internet heightens concerns over the lack of meaningful certificate management. One issue limiting the availability of such services is the absence of scalable certificate revocation. The use of certificate revocation lists (CRLs) to convey revocation state in public key infrastructures has long been the subject of debate. Centrally, opponents of the technology attribute a range of semantic and technical limitations to CRLs. In this paper, we consider arguments advising against the use of CRLs made principally by Rivest in his paper "Can we eliminate certificate revocation lists?" [1]. Specifically, the assumptions and environments on which these arguments are based are separated from those features inherent to CRLs. We analyze the requirements and potential solutions for three distinct PKI environments. The fundamental tradeoffs between revocation technologies are identified. From the case study analysis we show how, in some environments, CRLs are the most efficient vehicle for distributing revocation state. The lessons learned from our case studies are applied to a realistic PKI environment. The result, revocation on demand, is a CRL-based mechanism providing timely revocation information.

Paper: finc00.pdf (February 2000)

Secure Distributed Virtual Conferencing

We describe a secure distributed virtual conferencing application (SDVC) that provides high quality streaming video and audio using IP multicast for efficient distribution, uses strong authentication via cryptographic means, and (optionally) provides fully encrypted communication without sacrificing the quality of the medium or the user experience. We summarize our experiences with SDVC in a recent live demonstration and conclude with a discussion of future plans.

Paper: cms99.pdf (September 1999)

Antigone: A Flexible Framework for Secure Group Communication

Many emerging applications on the Internet requiring group communication have varying security requirements. Significant strides have been made in achieving strong semantics and security guarantees within group environments. However, in existing solutions, the scope of available security policies is often limited. This paper presents the Antigone framework. Antigone provides a suite of mechanisms from which flexible application security policies may be implemented. Using this approach, developers may choose a policy that best addresses their security and performance requirements. We describe Antigone's mechanisms, consisting of a set of microprotocols, and show how different security policies can be implemented using those mechanisms. We also present a performance study illustrating the security/performance tradeoffs that can be made using Antigone.

Paper: usec99.pdf (August 1999)

SABOT: Specification-based Payload Generation for Programmable Logic Controllers

Programmable Logic Controllers (PLCs) drive the behavior of industrial control systems according to uploaded programs. It is now known that PLCs are vulnerable to the uploading of malicious code that can have severe physical consequences. What is not understood is whether an adversary with no knowledge of the PLC's interface to the control system can execute a damaging, targeted, or stealthy attack against a control system using the PLC. In this paper, we present SABOT, a tool that automatically maps the control instructions in a PLC to an adversary-provided specification of the target control system's behavior. This mapping recovers sufficient semantics of the PLC's internal layout to instantiate arbitrary malicious controller code. This lowers the prerequisite knowledge needed to tailor an attack to a control system. SABOT uses an incremental model checking algorithm to map a few plant devices at a time, until a mapping is found for all adversary-specified devices. At this point, a malicious payload can be compiled and uploaded to the PLC. Our evaluation shows that SABOT correctly compiles payloads for all tested control systems when the adversary correctly specifies full system behavior, and for 4 out of 5 systems in most cases where there were unspecified features. Furthermore, SABOT completed all analyses in under 2 minutes.

Paper: NAS-TR-0162-2012.pdf (July 2012)

Designing for Attack Surfaces: Keep Your Friends Close, but Your Enemies Closer

It is no surprise to say that attackers have the upper hand on security practitioners today when it comes to host security. There are several causes for this problem, ranging from unsafe programming languages to the complexity of modern systems at large, but fundamentally, all of the parties involved in constructing and deploying systems lack a methodology for reasoning about the security impact of their design decisions. Previous position papers have focused on identifying particular parties as being “enemies” of security (e.g., users and application developers), and proposed removing their ability to make security-relevant decisions. In this position paper, we take this approach a step further by “keeping the enemies closer,” whereby the security ramifications of design and deployment decisions of all parties must be evaluated to determine if they violate security requirements or are inconsistent with other parties' assumptions. We propose a methodology whereby application developers, OS distributors, and system administrators propose, evaluate, repair, and test their artifacts to provide a defensible attack surface, the set of entry points available to an attacker. We propose the use of a hierarchical state machine (HSM) model as a foundation for automatically evaluating attack surfaces for programs, OS access control policies, and network policies. We examine how the methodology tasks can be expressed as problems in the HSM model for each artifact, motivating the possibility of a comprehensive, coherent, and mostly-automated methodology for deploying systems to manage accessibility to attackers.

Protecting Consumer Privacy from Electric Load Monitoring

The smart grid introduces concerns for the loss of consumer privacy; recently deployed smart meters retain and distribute highly accurate profiles of home energy use. These profiles can be mined by Non-Intrusive Load Monitors (NILMs) to expose much of the human activity within the served site. This paper introduces a new class of algorithms and systems, called Non-Intrusive Load Leveling (NILL), to combat potential invasions of privacy. NILL uses an in-residence battery to mask variance in load on the grid, thus eliminating exposure of the appliance-driven information used to compromise consumer privacy. We use real residential energy use profiles to drive four simulated deployments of NILL. The simulations show that NILL exposes only 1.1 to 5.9 useful energy events per day hidden amongst hundreds or thousands of similar battery-suppressed events. Thus, the energy profiles exhibited by NILL are largely useless for current NILM algorithms. Surprisingly, such privacy gains can be achieved using battery systems whose storage capacity is far lower than the residence's aggregate load average. We conclude by discussing how the costs of NILL can be offset by energy savings under tiered energy schedules.
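
A minimal sketch of the load-leveling idea (a fixed target load and a toy capacity model with assumed numbers, not the paper's algorithm):

    # The battery charges when the home draws less than the target and
    # discharges when it draws more, so the meter sees a flat load until
    # the battery's capacity limits are hit.
    TARGET_KW = 1.0
    CAPACITY_KWH = 2.0
    STEP_H = 1.0 / 60.0   # one-minute samples

    def level(load_kw, soc_kwh=1.0):
        for demand in load_kw:
            energy = (TARGET_KW - demand) * STEP_H   # + charge, - discharge
            if 0.0 <= soc_kwh + energy <= CAPACITY_KWH:
                soc_kwh += energy
                yield TARGET_KW        # appliance event masked
            else:
                yield demand           # battery exhausted: event leaks

    profile = [0.3] * 30 + [2.5] * 30 + [0.3] * 30   # a toy appliance cycle
    visible = list(level(profile))
    print(sum(a != b for a, b in zip(profile, visible)), "samples masked")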

Outlasting Attestation with Integrity Verified Channels

Increasingly, users are depending on remote and distributed services to provide integrity-sensitive inputs and computations. Unfortunately, users must blindly accept data from these services without any justification of their integrity. What we would like is something akin to a secure communication channel that enforces data integrity requirements for all transmissions sent on that channel, in addition to protecting the data in transit. We call this an integrity-verified channel (IVC). With the introduction of hardware-based reporting devices, researchers have begun exploring integrity measurement techniques to prove a system's integrity to remote parties. Unfortunately, these approaches are not suitable for validating an IVC. They prove a partial view of integrity, under implicit assumptions, that is valid only up to the proof generation time, but users want an accurate assessment of integrity for the lifetime of the channel. In this paper, we present the Arete integrity verification model suitable for constructing IVCs. Arete enables reasoning about enforcement mechanisms and their policies to relate a system's runtime state to approximations of classical integrity requirements (Biba, Clark-Wilson). We apply the Arete model to several existing integrity measurement approaches and identify where they introduce ambiguities that require either additional effort on the part of the verifier or blind acceptance that certain events have not occurred. We then demonstrate this model by constructing IVCs to a single remote VM and to an entire distributed system that are active only as long as a user's integrity criteria are satisfied. As a result, users can obtain a practical justification for the integrity of transmissions received from remote services.

A Study of Android Application Security

The fluidity of application markets complicates smartphone security. Although recent efforts have shed light on particular security issues, there remains little insight into broader security characteristics of smartphone applications. This paper seeks to better understand smartphone application security by studying 1,100 popular free Android applications. We introduce the ded decompiler, which recovers Android application source code directly from its installation image. We design and execute a horizontal study of smartphone applications based on static analysis of 21 million lines of recovered code. Our analysis uncovered pervasive use/misuse of personal/phone identifiers, and deep penetration of advertising and analytics networks. However, we did not find evidence of malware or exploitable vulnerabilities in the studied applications. We conclude by considering the implications of these preliminary findings and offer directions for future analysis.

The ded Decompiler

Smartphone applications are frequently incompletely vetted, poorly isolated, and installed by users without restraint. Such behavior is fraught with peril: applications containing malicious logic or critical vulnerabilities are likely to be identified only after substantial damage has already occurred. Unfortunately, the limitations of application markets make them a poor agent for certifying that applications are secure. This paper presents a certification process that allows the consumers of applications to validate application security directly. Built for the Android mobile phone platform, we reverse engineer downloaded application images into application source code and use static analysis to detect vulnerabilities. We develop and document a multi-stage process for VM retargeting and code recovery. A study of the top 1100 free Android market applications recovers source code for over 95 percent of the 143 thousand class files containing over 12 million lines of code. A preliminary analysis of the recovered source code identified over 3100 potential vulnerabilities involving a broad range of program features.

Managing Attack Risks in Virtual Machine Systems

Mandatory access control enforcement is now available in multiple software layers of conventional systems, including VMM, OS, and individual programs. Policies for each layer are developed independently, so the problem is to determine whether these policies, when integrated, prevent attackers from compromising a VM system. One prior approach to find possible attacks is to generate attack graphs for the system, but the number of paths in a VM system makes graphs difficult to use. Instead, we propose to compute a slightly different view based on attack risks, the components that enable an attacker to compromise higher integrity data, enabling propagation of an attack. In this paper, we propose a method to find such attack risks in multiple, independent MAC policies based on a reachability analysis over a hierarchical data structure, called a hierarchical state machine. We implement a tool, called Hippocrates, that constructs and solves attack risk problems from little user input and enables administrators to manage attack risks using responsibilities, simple constraints that state how components are expected to protect each other. Using Hippocrates, complex attack graphs are converted to a graph of clustered components with the same attack risk containing fewer than 10% of the nodes and 3% of the edges, enabling an administrator to manage responsibilities without delving into fine-grained policy details.
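
The underlying reachability computation can be sketched as follows (a minimal sketch; the flow edges below are hypothetical, not drawn from real XSM/Flask or SELinux policies):

    # Given permitted write flows from the combined MAC policies, find the
    # components an attacker-controlled component can eventually influence.
    from collections import deque

    flows = {                        # writer -> readers it may influence
        "network": {"vm_app"},
        "vm_app": {"vm_os"},
        "vm_os": {"vmm_log"},
        "vmm_core": {"vm_os"},
    }

    def at_risk(start):
        seen, work = set(), deque([start])
        while work:
            for nxt in flows.get(work.popleft(), ()):
                if nxt not in seen:
                    seen.add(nxt)
                    work.append(nxt)
        return seen

    print(sorted(at_risk("network")))   # ['vm_app', 'vm_os', 'vmm_log']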

Federated Information Flow Control for Mobile Phones

Smartphone applications have attracted millions of consumers worldwide. Previous work has focused on well-defined threats to phone-specific resources such as placing phone calls and eavesdropping on the user while the phone is perceived to be "off." However, researchers have failed to discuss threats to and protection of ill-defined, application-specific information that currently litters smartphones. Current smartphone information protection is error prone, difficult to administer, and in many cases, insufficient. We propose Federated Information Flow Control.

Network-based Root of Trust for Installation

Administrators of large data centers often require network installation mechanisms, such as disk cloning over the network, to manage the integrity of their machines. However, network-based installation is vulnerable to a variety of attacks, including compromised machines responding to installation requests with malware. To enable verification that running machines were installed correctly, we propose a network-based Root of Trust for Installation (netROTI), an installer that binds the state of a system to its installer and disk image. Our evaluation demonstrates that a netROTI installation adds about 8 seconds overhead plus 3% of image download time to a standard network install and thwarts many known attacks against the installation process.

Kells: A Protection Framework for Portable Data

Portable storage devices, such as key-chain USB devices, are ubiquitous. These devices are often used with impunity, with users repeatedly using the same storage device in open computer laboratories, Internet cafes, and on office and home computers. Consequently, they are the target of malware that exploit the data present or use them as a means to propagate malicious software. This paper presents the Kells mobile storage system. Kells limits untrusted or unknown systems from accessing sensitive data by continuously validating the accessing host's integrity state. We explore the design and operation of Kells, and implement a proof-of-concept USB 2.0 storage device on experimental hardware. Our analysis of Kells is twofold. We first prove the security of device operation (within a freshness security parameter Δt) using the LS² logic of secure systems. Second, we empirically evaluate the performance of Kells. These experiments indicate nominal overheads associated with host validation, showing a worst case throughput overhead of 1.22 percent for read operations and 2.78 percent for writes.
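
A minimal sketch of the freshness-window check (hypothetical names and an assumed window, not the Kells protocol itself):

    # The device serves sensitive blocks only while the host's most recent
    # successful attestation is younger than a freshness window delta_t.
    import time

    DELTA_T = 30.0            # freshness window in seconds (assumed)
    last_attestation = None   # time of the last verified host state

    def record_attestation(ok):
        global last_attestation
        last_attestation = time.monotonic() if ok else None

    def allow_sensitive_io():
        return (last_attestation is not None
                and time.monotonic() - last_attestation < DELTA_T)

    record_attestation(ok=True)
    print(allow_sensitive_io())   # True while within the window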

Multi-vendor Penetration Testing in the Advanced Metering Infrastructure

The advanced metering infrastructure (AMI) is revolutionizing electrical grids. Intelligent AMI “smart meters” report real time usage data that enables efficient energy generation and use. However, aggressive deployments are outpacing security efforts: new devices from a dizzying array of vendors are being introduced into grids with little or no understanding of the security problems they represent. In this paper we develop an archetypal attack tree approach to guide penetration testing across multiple-vendor implementations of a technology class. In this, we graft archetypal attack trees modeling broad adversary goals and attack vectors to vendor-specific concrete attack trees. Evaluators then use the grafted trees as a roadmap to penetration testing. We apply this approach within AMI to model attacker goals such as energy fraud and denial of service. Our experiments with multiple vendors generate real attack scenarios using vulnerabilities identified during directed penetration testing, e.g., manipulation of energy usage data, spoofing meters, and extracting sensitive data from internal registers. More broadly, we show how we can reuse efforts in penetration testing to efficiently evaluate the increasingly large body of AMI technologies being deployed in the field.

Porscha: Policy Oriented Secure Content Handling in Android

The penetration of cellular networks worldwide and the emergence of smart phones has led to a revolution in mobile content. Users consume diverse content when, for example, exchanging photos, playing games, browsing websites, and viewing multimedia. Current phone platforms provide protections for user privacy, the cellular radio, and the integrity of the OS itself. However, few offer protections for the content itself once it enters the phone. For example, MP3-based MMS or photo content placed on Android smart phones can be extracted and shared with impunity. In this paper, we explore the requirements and enforcement of digital rights management (DRM) policy on smart phones. An analysis of the Android market shows that DRM services should ensure that: (a) protected content is accessible only by authorized phones; (b) content is only accessible by provider-endorsed applications; and (c) access is regulated by contextual constraints, e.g., use for a limited time, a maximum number of viewings, etc. The Porscha system developed in this work places content proxies and reference monitors within the Android middleware to enforce DRM policies embedded in received content. A pilot study controlling content obtained over SMS, MMS, and email illustrates the expressibility and enforcement of Porscha policies. Our experiments demonstrate that Porscha is expressive enough to articulate needed DRM policies and that their enforcement has limited impact on performance.

System-Wide Information Flow Enforcement

This report presents two sets of research results generated for this project: (1) the design and implementation of a VM-based system and information flow-aware browser, called FlowwolF, for deploying web client software that enforces information flow policies, and (2) the design and implementation of a policy analysis tool, called Hippocrates, for detecting and removing unauthorized attack surfaces in VM system mandatory access control (MAC) policies. First, the FlowwolF VM system and browser were built to compose information flow enforcement across virtual machine monitor, virtual machine, and browser MAC policies, resulting in coherent end-to-end enforcement of information flow for web applications. Second, the Hippocrates policy analysis tool was constructed to evaluate XSM/Flask virtual machine monitor policies and SELinux virtual machine policies, integrated with network policies, to find the attack surfaces that result from the overall system. As a result of this work, software tools have been developed for designing and deploying privileged programs and applications on virtual machine systems that enable practical enforcement of secrecy and integrity information flow goals.

Seeding Clouds with Trust Anchors

We are finding that customers with security-critical data processing needs are beginning to push back strongly against using clouds, so we examine how clouds may prove that they enforce a customer's data security requirements. Researchers have proposed integrity measurement methods for building proofs of system integrity, but while there have been some successes on single-purpose systems or for specific threats, such methods have not been adopted broadly. However, we find that the centralized management of cloud data centers, combined with our proposed use of integrity enforcement, yields a practical approach for customers to verify data security in the cloud. Specifically, we propose a cloud verifier service that generates integrity proofs for customers to verify the integrity and access control enforcement abilities of the cloud platform, which protect the integrity of the customer's application VMs in IaaS clouds. While a cloud-wide verifier service could present a significant system bottleneck, we demonstrate that aggregating proofs enables significant overhead reductions. As a result, we find several promising research directions to pursue for building cloud systems for security-critical applications and the skeptics that run them.

DAuth: Fine-grained Authorization Delegation for Distributed Web Application Consumers

Web applications are becoming the predominant means by which users interact with online content. However, current authentication approaches use a single authentication credential to manage access permissions, which is too inflexible for distributed programs with unique security and privacy requirements for each component. In this paper, we introduce DAuth, an authorization mechanism that allows fine-grained and flexible control of access permissions derived from a single authentication credential for distributed consumers of web applications. We implement DAuth as a proxy for a Twitter social networking application within our distributed Elastic Application framework and find that it introduces negligible overhead and requires only minor modification of existing applications. Through our evaluation, we demonstrate that DAuth improves on existing web authentication mechanisms to support distributed web application consumers and can be implemented as a proxy to web applications that do not wish to develop their own implementation.

Hierarchical Policy Compliance for Virtual Machine Systems

Mandatory access control (MAC) enforcement mechanisms are now available for virtual machine monitors (VMMs) and user-level programs, in addition to operating systems, presenting an opportunity for comprehensive and flexible access control. However, MAC policies for these individual components are often developed independently. As a consequence, they may enforce inconsistent or contradictory security goals. At present, we have no approach for understanding the security risks that may result from these inconsistencies, so VMs are deployed with an unknown number and variety of risks. In this paper, we develop a method for identifying integrity risks in multiple, independent MAC policies in VM systems that has the following properties: (1) it is comprehensive in that all risky information flows in a VM system are identified; (2) it is incremental in that individual MAC policies (or even portions of MAC policies) can be evaluated independently; and (3) it is useful because risk identification shows where constraints are necessary to resolve risks. We demonstrate this approach using the hierarchical compliance analysis testing system Hippocrates that we constructed on Xen systems enforcing XSM/Flask policies in the hypervisor and SELinux policies in each of the VMs. Despite the complexity of the MAC policies, we are able to identify a comprehensive set of integrity risks in 9-14 seconds per VM evaluated.

Integrity Walls: Finding Attack Surfaces from Mandatory Access Control Policies

An attack surface defines the entry points that attackers can use to launch attacks against a system. Researchers have examined attack surfaces in the context of systems, by assessing the reachability among processes given an access control policy, and in the context of programs, by assessing the reachability of program interfaces. However, an attacker's ability to reach a particular program interface depends on the system's access control policy to allow such access. In this report, we examine how a system's access control policy may be used to guide a more accurate assessment of attack surfaces. We propose an approach for defining an integrity wall for every program that identifies the set of untrusted operations that the program may perform. Also, we develop a runtime monitoring tool for collecting the interfaces actually used to perform such untrusted operations. We find that even with a conservative SELinux MAC policy, the system TCB and server programs have several interfaces that access untrusted data. Further, using automated techniques, we identified interfaces that have led to several known vulnerabilities, some recently discovered, and we also identified interfaces in programs that were not found by previous manual analysis. We envision that such an approach will be useful for OS distributors who must make decisions about which programs should be trusted with permissions that authorize untrusted operations.

Automating Security Mediation Placement

We present a framework that automatically produces resolution placement suggestions for type errors in security-typed programs, enabling legacy code to be retrofit with comprehensive security policy mediation. Resolving such type errors requires selecting a placement of mediation statements that implement runtime security decisions, such as declassifiers and authorization checks. Manually placing mediation statements in legacy code can be difficult, as there may be several interacting type errors. In this paper, we solve this problem by constructing a graph that has the property that a vertex cut is equivalent to the points at which mediation statements can be inserted to allow the program to satisfy the type system. We build a framework that produces suggestions that are minimum cuts of this graph, and the framework can be customized to find suggestions that satisfy functional and security requirements. We have built a tool that implements this framework for Java programs and computes suggestions for 20,000-line programs in less than 100 seconds. Even without additional guidance, we find that this tool selects suggestions where more than 80 percent of the selected mediation points were classified as similar to those placed by expert programmers, and reduces the number of locations that a programmer would otherwise have to inspect using current security-typed language tools by approximately 90 percent.
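
To illustrate the cut idea on a toy example (a minimal sketch using an edge-cut formulation and networkx's generic minimum_cut, not the paper's framework; the graph and statement names are hypothetical):

    import networkx as nx

    # Nodes are program statements; an edge means a security label flows
    # between them. A minimum s-t cut yields a smallest set of flows at
    # which mediation statements could be inserted.
    G = nx.DiGraph()
    G.add_edge("secret_src", "parse", capacity=1)
    G.add_edge("parse", "format", capacity=1)
    G.add_edge("parse", "log", capacity=1)
    G.add_edge("format", "public_sink", capacity=1)
    G.add_edge("log", "public_sink", capacity=1)

    cut_value, (s_side, t_side) = nx.minimum_cut(G, "secret_src", "public_sink")
    suggestions = [(u, v) for u, v in G.edges
                   if u in s_side and v in t_side]
    print(cut_value, suggestions)   # 1 [('secret_src', 'parse')]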

TaintDroid: An Information-Flow Tracking System for Realtime Privacy Monitoring on Smartphones

Today's smartphone operating systems fail to provide users with adequate control and visibility into how third-party applications use their private data. We present TaintDroid, an efficient, system-wide dynamic taint tracking and analysis system for the popular Android platform that can simultaneously track multiple sources of sensitive data. TaintDroid's ability to perform efficient, real-time analysis stems from its novel system design that leverages the mobile platform's virtualized system architecture. TaintDroid incurs only 14% performance overhead on a CPU-bound micro-benchmark with little, if any, perceivable overhead when running third-party applications. We use TaintDroid to study the behavior of 30 popular third-party Android applications and find several instances of misuse of users' private information. We believe that TaintDroid is the first working prototype demonstrating that dynamic taint tracking and analysis provides informed use of third-party applications in existing smartphone operating systems.
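
A minimal sketch of the taint-propagation idea (TaintDroid itself tracks taint inside the Dalvik interpreter; the Python below is only an analogy, with hypothetical labels):

    # Each value carries a bitvector of taint sources; taint is OR-ed
    # through operations and reported at sinks such as the network.
    TAINT_LOCATION = 0x1
    TAINT_IMEI = 0x2

    class Tainted:
        def __init__(self, value, taint=0):
            self.value, self.taint = value, taint
        def __add__(self, other):
            return Tainted(self.value + other.value, self.taint | other.taint)

    def network_send(data):
        if data.taint:
            print(f"leak detected, taint mask {data.taint:#x}")

    loc = Tainted(42.0, TAINT_LOCATION)
    pad = Tainted(0.001)
    network_send(loc + pad)   # leak detected, taint mask 0x1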

Justifying Integrity Using a Virtual Machine Verifier

Emerging distributed computing architectures, such as grid and cloud computing, depend on the high integrity execution of each system in the computation. While integrity measurement enables systems to generate proofs of their integrity to remote parties, we find that current integrity measurement approaches are insufficient to prove runtime integrity for systems in these architectures. Integrity measurement approaches that are flexible enough have an incomplete view of runtime integrity, possibly leading to false integrity claims, and approaches that provide comprehensive integrity do so only for computing environments that are too restrictive. In this paper, we propose an architecture for building comprehensive runtime integrity proofs for general purpose systems in distributed computing architectures. In this architecture, we strive for classical integrity, using an approximation of the Clark-Wilson integrity model as our target. Key to building such integrity proofs is a carefully crafted host system whose long-term integrity can be justified easily using current techniques and a new component, called a VM verifier, that can enforce our integrity target on VMs comprehensively. We have built a prototype based on the Xen virtual machine system for SELinux VMs, and find distributed compilation can be implemented, providing accurate proofs of our integrity target with less than 4% overhead.

Cloud Integrity Enforcement via Integrity-Verified Channels

Ensuring that cloud computations protect their integrity is a challenge because of the large number of components involved, many of which are opaque to the cloud customers. Researchers have proposed using hardware-based integrity measurement for justifying cloud integrity, but careful understanding of current integrity measurement architectures and the desired integrity goals is necessary to ensure integrity is protected. We introduce the concept of an integrity-verified channel (IVC), which associates all data sent on such a channel with a proof of integrity of its distributed computation. While extending integrity across cloud systems introduces some issues, cloud architectures provide structure to distributed computing that is helpful for reasoning about integrity. We envision that comprehensive integrity judgments can be made practical in cloud systems.

Energy Theft in the Advanced Metering Infrastructure

Global energy generation and delivery systems are transitioning to a new computerized "smart grid". One of the principal components of the smart grid is an advanced metering infrastructure (AMI). AMI replaces the analog meters with computerized systems that report usage over digital communication interfaces, e.g., phone lines. However, with this infrastructure comes new risk. In this paper, we consider adversarial means of defrauding the electrical grid by manipulating AMI systems. We document the methods adversaries will use to attempt to manipulate energy usage data, and validate the viability of these attacks by performing penetration testing on commodity devices. Through these activities, we demonstrate that not only is fraud still possible in AMI systems, but that current AMI devices introduce a myriad of new vectors for achieving it.

Firma: Disk-Based Foundations for Trusted Operating Systems

Secure boot mechanisms aim to provide guarantees of the integrity of a system as it loads, ensuring that if a system is running, all of its processes will satisfy integrity verification requirements. While secure boot has been available for a long time, it is not available in commodity systems due to the high cost of secure hardware. In this paper, we describe Firma, an architecture that provides secure boot functionality based on a storage root of trust. Unlike previous secure boot mechanisms, use of the disk can protect data secrecy by only releasing data to systems trusted not to leak data, while also providing data integrity through release to high integrity systems. We implement a prototype of Firma and show how it may be used to provide a trusted virtual machine monitor (TVMM) capable of supporting strong security guarantees for running VMs. Only minimal administration is required, and we detail the tasks necessary to support the architecture, showing new systems can be configured with a small number of automated steps. Our evaluation shows that Firma requires additional overhead of just over 1 second for the boot process.

On Lightweight Mobile Phone App Certification

Users have begun downloading an increasingly large number of mobile phone applications in response to advancements in handsets and wireless networks. The increased number of applications results in a greater chance of installing Trojans and similar malware. In this paper, we propose the Kirin security service for Android, which performs lightweight certification of applications to mitigate malware at install time. Kirin certification uses security rules, which are templates designed to conservatively match undesirable properties in the security configuration bundled with applications. We use a variant of security requirements engineering techniques to perform an in-depth security analysis of Android to produce a set of rules that match malware characteristics. In a sample of 311 of the most popular applications downloaded from the official Android Market, Kirin and our rules found 5 applications that implement dangerous functionality and therefore should be installed with extreme caution. Upon close inspection, another five applications asserted dangerous rights, but were within the scope of reasonable functional needs. These results indicate that security configuration bundled with Android applications provides a practical means of detecting malware.
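
A minimal sketch of install-time rule matching in this spirit (the rules and permission combinations below are hypothetical, not Kirin's actual rule set):

    # A rule names permissions that are dangerous in combination; an app
    # fails certification if it requests every permission in some rule.
    RULES = {
        "eavesdropper": {"RECORD_AUDIO", "INTERNET", "RECEIVE_BOOT_COMPLETED"},
        "location_spy": {"ACCESS_FINE_LOCATION", "INTERNET"},
    }

    def certify(app_permissions):
        return [name for name, rule in RULES.items()
                if rule <= set(app_permissions)]   # empty list: app passes

    print(certify({"ACCESS_FINE_LOCATION", "INTERNET", "VIBRATE"}))
    # ['location_spy'] -> install only with extreme caution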

On Cellular Botnets: Measuring the Impact of Malicious Devices on the Network Core

The vast expansion of interconnectivity with the Internet and the rapid evolution of highly-capable but largely insecure mobile devices threatens cellular networks. In this paper, we characterize the impact of the large scale compromise and coordination of mobile phones in attacks against the core of these networks. Through a combination of measurement, simulation and analysis, we demonstrate the ability of a botnet composed of as few as 11,750 compromised mobile phones to degrade service to area-code sized regions by 93%. As such attacks are accomplished through the execution of network service requests and not a constant stream of phone calls, users are unlikely to be aware of their occurrence. We then investigate a number of network bottlenecks, their impact on the density of compromised nodes per base station and how they can be avoided. We conclude by discussing a number of countermeasures that may help to partially mitigate the threats posed by such attacks.

Exact and Fuzzy Sensor-Task Assignment

Sensor networks introduce new resource allocation problems in which sensors need to be assigned to the tasks they best help. In the past, such problems have been studied in simplified models in which utility from multiple sensors is assumed to combine additively. In this paper we study more complex utility models, focusing on two particular applications: event detection and target localization. We develop distributed algorithms to assign sensing resources of different types to multiple simultaneous tasks that have different information needs. We show that our schemes perform well using either exact location information or fuzzy location information, which may be desirable to save on computational overhead and/or for privacy reasons.
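
As a toy illustration only (a greedy heuristic over an assumed additive utility table, which is precisely the simplified model the paper moves beyond):

    # Repeatedly grant the unassigned sensor its highest-utility task;
    # sensors, tasks, and utilities below are hypothetical.
    utilities = {
        ("s1", "detect"): 0.9, ("s1", "localize"): 0.4,
        ("s2", "detect"): 0.5, ("s2", "localize"): 0.8,
    }

    assignment, used = {}, set()
    for (sensor, task), u in sorted(utilities.items(), key=lambda kv: -kv[1]):
        if sensor not in used:
            assignment[sensor] = task
            used.add(sensor)

    print(assignment)   # {'s1': 'detect', 's2': 'localize'}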

Retrofitting Authorization in Legacy Programs

A recent trend is the emergence of systems support for applications to enforce security policies. From security-typed languages to user-level reference monitoring (e.g., for SELinux) to access control models that enable application enforcement (e.g., DIFC), a likely result is that many applications will soon be retrofit to enable security. However, retrofitting applications manually for security is a time-consuming and error-prone task. As a result, few programs have actually been retrofit to leverage such technologies. In recent work, we have found that placing mediation points in programs is equivalent to computing a graph-cut, so we propose to apply this technology to automate (mostly) the task of retrofitting programs for authorization. In this paper, we propose a system based on a graph-cut approach that uses placement policy to compute mediation points for a program to build an authorization system. We argue that this work shows that automated tool support for authorization system construction will soon be feasible.

An Architecture for Enforcing End-to-End Security Over Web Applications

The web is now being used as a general platform for hosting distributed applications like wikis, bulletin board messaging systems, and collaborative editing environments. Data from multiple applications originating at multiple sources all intermixes in a single web browser, making sensitive data stored in the browser subject to a broad range of attacks (cross-site scripting, cross-site request forgery, and others). The fundamental problem is that existing web infrastructure provides no means for enforcing end-to-end security on data. To solve this, we design an architecture using mandatory access control (MAC) enforcement. We overcome the limitations of traditional MAC systems, implemented solely at the operating system layer, by unifying MAC enforcement across virtual machine, operating system, networking, and application layers. We implement our architecture using Xen virtual machine management, SELinux at the operating system layer, labeled IPsec for networking, and our own label-enforcing web browser, called FlowwolF. We test our implementation on two web applications and find that it performs well, maintaining the rich user experience of intermixing data in the browser, while still providing end-to-end security guarantees.

No Node Is an Island: Shamon Integrity Monitoring Approach

Many users obtain critical information from distributed systems. As more Internet services leverage distributed computing approaches, like mashups and service-oriented architectures, the need for users to verify the authenticity of data from distributed systems will increase. While a variety of mechanisms are broadly used to authenticate principals in distributed systems, mechanisms to verify the integrity of the distributed computation are only used in limited cases. Hardware-based attestation is still the most promising approach for integrity verification of remote systems, but current attestation mechanisms only verify a single node or only apply to limited distributed systems. In this paper, we propose a Shamon Integrity Monitoring Approach (SHIMA) for integrity verification in distributed systems. SHIMA prescribes that specialized services, called Virtual Machine Verifiers, on each distributed node enforce an integrity criterion over the execution of system VMs and collaborate to maintain the integrity of the distributed system. This approach enables any system node to build a global integrity proof that the entire distributed system satisfies that integrity criterion. SHIMA is based on two insights: (1) that having a specialized component whose long-term integrity can be verified using traditional attestation provides a foundation for distributed system verification and (2) that the specialized component can be entrusted with the task of enforcing integrity and reporting enforcement status to others via attestations. We deploy a SHIMA prototype to verify that a Tor anonymity system runs only Tor servers that satisfy an integrity criterion. The resultant Tor system detects malware and policy misconfigurations that would enable compromise of a Tor server while adding negligible overhead to request processing. Thus, with SHIMA, we provide a low overhead mechanism for integrity verification of distributed systems that enforces a meaningful integrity criterion.

Towards Automated Security Mediation Placement

It is important to verify that programs protect the security of the data they process. Security-typed languages use type systems that augment the types of data with security labels to statically verify that a program satisfies a security property. However, many programs exhibit behavior that is not compatible with a static type system. For example, the actual security labels of data may not be known until runtime, requiring an authorization check. Also, secret data may need to be released under controlled conditions, requiring a declassification that changes the data's security label. To resolve these conflicts within the type system, programmers insert mediation statements. Placing these statements manually can be difficult, especially in legacy code, because of the complexity of security type inference. In this paper, we present a method that automatically determines locations in a program where mediation statements should be placed to resolve type system conflicts. To achieve this, we construct a graph that has the property that a vertex cut is equivalent to the points at which mediation statements can be inserted to allow the program to satisfy the type system. We output suggestions associated with minimum cuts of this graph, and find that this technique selects mediation points that are similar to mediation points selected by expert programmers. In our experiments, our tool reduces the number of locations that a programmer would otherwise have to inspect using current security-typed language tools by 85%, and more than 95% of the selected mediation points were classified as similar to those placed by expert programmers.
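
A minimal sketch of the cut-based placement idea, assuming the networkx library; the flow graph and vertex names are hypothetical, and this is an illustration of the technique rather than the authors' tool. Vertices model program statements, an edge means a labeled value flows between them, and a minimum vertex cut separating a secret source from a public sink gives a set of candidate mediation points that every illegal flow must pass through.

    # Illustration only: minimum vertex cut as mediation-point suggestion.
    import networkx as nx

    G = nx.DiGraph()
    # Hypothetical flow graph: 'input' carries a runtime-labeled secret
    # that eventually reaches 'output', which must be public.
    G.add_edges_from([
        ("input", "parse"), ("parse", "check"), ("parse", "format"),
        ("check", "log"), ("format", "log"), ("log", "output"),
    ])

    # Every input-to-output flow must cross the cut, so inserting a
    # mediation statement at each cut vertex resolves the conflicts.
    cut = nx.minimum_node_cut(G, "input", "output")
    print("suggested mediation points:", cut)  # e.g. {'parse'} or {'log'}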

Mitigating Android Software Misuse Before It Happens

Mobile phones running open operating systems such as Google Android will soon be the norm in cellular networks. These systems expose previously unavailable phone and network resources to application developers. However, with increased exposure comes increased risk. Poorly or maliciously designed applications can compromise the phone and network. While Android defines a base set of permissions to protect phone resources and core applications, it does not define what a secure phone is, relying on the applications to act together securely. In this paper, we develop the Kirin security framework to enforce policies that transcend applications, called policy invariants, and to provide an at-installation self-certification process ensuring that only policy-compliant applications will be installed. We begin by describing the Google Android security model and formally model its existing policy. Using relatively simple policy invariants describing realistic security requirements, Kirin identified insecure policy configurations within Android leading to vulnerabilities in core phone services, thereby motivating an additional security framework for defining system-wide policy.
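
A minimal sketch of install-time invariant checking in the spirit of Kirin; the permission names and the two rules are hypothetical examples, not Kirin's actual rule set or rule language. Each invariant names a combination of permissions that no single application may hold, and certification rejects any application whose requested permissions cover a forbidden set.

    # Illustration only: each invariant is a forbidden permission set.
    INVARIANTS = [
        # Hypothetical rule: recording audio plus network access could
        # silently exfiltrate conversations.
        {"RECORD_AUDIO", "INTERNET"},
        # Hypothetical rule: reading and writing SMS enables hijacking.
        {"RECEIVE_SMS", "WRITE_SMS"},
    ]

    def certify(app_name, requested_permissions):
        """Return True if the app violates no policy invariant."""
        requested = set(requested_permissions)
        for invariant in INVARIANTS:
            if invariant <= requested:  # app holds the whole forbidden set
                print(f"{app_name}: rejected, violates {sorted(invariant)}")
                return False
        return True

    certify("voicenotes", ["RECORD_AUDIO", "WRITE_STORAGE"])  # accepted
    certify("spyware", ["RECORD_AUDIO", "INTERNET"])          # rejected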

Protecting Telephony Services in Mobile Phones

Mobile phones have evolved into indispensable devices encompassing a wide variety of features and enabling exciting new applications. The adoption of general-purpose operating systems for mobile devices, like Symbian, Windows, and Linux, has heralded the emergence of third-party applications, some of which may be malware or themselves vulnerable to attack. The growing reliance on mobile phones implies that we need to be able to protect fundamental functionality from being misused by these untrusted applications. However, much of the fundamental functionality of mobile phones is implemented in user-level services, such as audio, windowing, and telephony services. The problem is that these services have historically been designed on systems where the vendors controlled all the code, so they trust all their clients. Further, the operating system trusts all the clients as well, so the system is quite vulnerable to confused deputy attacks. In this paper, we focus on how to convert mobile phone systems to protect the fundamental function of telephony. Our insight is that the telephony services and the operating system controls need to be coordinated to provide comprehensive protection from attack. We extend an Openmoko Linux mobile phone system with SELinux protection over the system objects, and integrate a reference monitor into the Openmoko GSM telephony server that can enforce an SELinux MAC policy on telephony requests. We examine how the resultant system is secure against the threats from untrusted applications and measure its performance to show that this additional mediation introduces negligible overhead. While this is just one of several user-level services that must be secured on mobile phone devices, we hope that our approach provides guidance for how to build secure mobile systems.
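
A minimal sketch of the reference-monitor idea: the telephony server consults a MAC policy before acting on a client request. The SELinux-style labels, operations, and policy table below are hypothetical; the real system queries the SELinux policy rather than a Python dictionary.

    # Illustration only: mediate telephony requests against a MAC policy.
    POLICY = {
        # (subject label, telephony operation) -> allowed?
        ("trusted_dialer_t", "dial"): True,
        ("trusted_dialer_t", "hangup"): True,
        ("untrusted_app_t", "dial"): False,  # e.g. premium-rate fraud
        ("untrusted_app_t", "hangup"): False,
    }

    def mediate(subject_label, operation):
        """Reference-monitor hook invoked before the server acts."""
        if POLICY.get((subject_label, operation), False):
            return "granted"
        return "denied"

    print(mediate("trusted_dialer_t", "dial"))  # granted
    print(mediate("untrusted_app_t", "dial"))   # denied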

Scalable Asynchronous Web Content Attestation

The web is the primary means of information sharing for most organizations and people. However, at present, there are few means of validating the authenticity and integrity of the received content beyond identifying the source from whence it was received (i.e., via server-side SSL certificate validation). In this paper, we develop and evaluate a system enabling the use of the TPM platform to tie the web server integrity state to the web content delivered to browsers. We discuss the design and implementation of the web server authentication services and the browser-side Firefox validation extension. Our empirical evaluation shows that the system incurs a modest 12.5% overhead, sustaining the retrieval of quoted documents at near line speed. More broadly, we identify how TPM-based content services can scale without unmanageable overheads.
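
A minimal sketch of the client-side check this enables: the server binds a document to its integrity state by quoting a digest computed over the content hash and a freshness nonce, and the browser extension recomputes and verifies it. An HMAC stands in for the TPM's signed quote here; the real system verifies a signature made with the TPM's attestation key, and all key and variable names below are hypothetical.

    # Illustration only: HMAC stands in for a TPM quote signature.
    import hashlib, hmac

    AIK_KEY = b"stand-in-for-TPM-attestation-key"  # hypothetical

    def make_quote(content, nonce):
        digest = hashlib.sha1(hashlib.sha1(content).digest() + nonce).digest()
        return hmac.new(AIK_KEY, digest, hashlib.sha1).digest()

    def browser_validate(content, nonce, quote):
        """What the Firefox extension would check on receipt."""
        expected = make_quote(content, nonce)
        return hmac.compare_digest(expected, quote)

    page = b"<html>hello</html>"
    nonce = b"client-freshness-nonce"
    quote = make_quote(page, nonce)              # produced server-side
    print(browser_validate(page, nonce, quote))  # True: content untampered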

Implicit Flows: Can't Live With 'Em, Can't Live Without 'Em

Verifying that programs trusted to enforce security actually enforce security is a practical concern for programmers and administrators. However, there is a disconnect between the kinds of tools that have been successfully applied to real software systems (such as taint mode in Perl and Ruby), and information-flow compilers that enforce a variant of the stronger security property of noninterference. Tools that have been successfully used to find security violations have focused on explicit flows of information, where high-security information is directly leaked to output. Analysis tools that enforce stronger security guarantees also prevent implicit flows of information, where high-security information can be inferred from a program's flow of control. We investigate both classes of flows that are flagged by a security-typed language compiler when applied to an SSH server and conclude that the extremely large number of false alarms makes checking production code for stronger information-flow security properties currently infeasible. This false alarm rate must be improved if technology from security-typed languages is to be adopted for the retrofitting of legacy code in mainstream programs.
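
A minimal illustration of the two flow classes the abstract distinguishes; the snippet is not from the paper, just a conventional textbook-style example.

    # Illustration only: explicit vs. implicit information flows.
    secret = 1  # a high-security bit

    # Explicit flow: the secret is assigned directly to public output.
    public = secret

    # Implicit flow: nothing is assigned directly, but after this branch
    # the value of 'leaked' reveals the secret through control flow alone.
    leaked = 0
    if secret == 1:
        leaked = 1

    # Taint-style tools catch the first flow; noninterference-enforcing
    # compilers must also reject the second, which is the source of the
    # false-alarm volume discussed above.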

Trusted Declassification: Policy Infrastructure for a Security-Typed Language

Security-typed languages are a powerful tool for developing verifiably secure software applications. Programs written in these languages enforce a strong, global policy of noninterference which ensures that high-security data will not be observable on low-security channels. Because noninterference is typically too strong a property, most programs use some form of declassification to selectively leak high-security information, e.g., when performing a password check or data encryption. Unfortunately, such a declassification is often expressed as an operation within a given program, rather than as part of a global policy, making reasoning about the security implications of a policy difficult. In this paper, we propose a simple idea we call trusted declassification in which special declassifier functions are specified as part of the global policy. In particular, individual principals declaratively specify which declassifiers they trust so that all information flows implied by the policy can be reasoned about in the absence of a particular program. We formalize our approach for a Java-like language and prove a modified form of noninterference which we call noninterference modulo trusted methods. We have implemented our approach as an extension to Jif, a security-typed variant of Java, and provide our experience using it to build a secure email client, JPmail.
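
A minimal sketch of the trusted-declassification idea outside any particular program: a two-point label lattice plus a global policy naming the declassifiers each principal trusts. The lattice, principal, and declassifier names are hypothetical, and this is a flavor of the policy semantics rather than Jif syntax.

    # Illustration only: downward flows are legal only via trusted declassifiers.
    LATTICE = {"public": 0, "secret": 1}

    # Global policy: principal -> declassifier functions it trusts.
    TRUSTED_DECLASSIFIERS = {"alice": {"password_check", "encrypt"}}

    def flow_allowed(src, dst, principal, via=None):
        """A secret-to-public flow is legal only through a trusted declassifier."""
        if LATTICE[src] <= LATTICE[dst]:
            return True  # ordinary upward or same-level flow
        return via in TRUSTED_DECLASSIFIERS.get(principal, set())

    print(flow_allowed("secret", "public", "alice"))                 # False
    print(flow_allowed("secret", "public", "alice", via="encrypt"))  # True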

A Logical Specification and Analysis for SELinux MLS Policy

The SELinux mandatory access control (MAC) policy has recently added a multilevel security (MLS) model which is able to express a fine granularity of control over a subject's access rights. The problem is that the richness of the SELinux MLS model makes it impractical to manually evaluate that a given policy meets certain specific properties. To address this issue, we have modeled the SELinux MLS model using a logical specification and implemented that specification in the Prolog language. Furthermore, we have developed some analyses for testing information flow properties of a given policy as well as an algorithm to determine whether one policy is compliant with another. We have implemented these analyses in Prolog and compiled our implementation into a tool for SELinux MLS policy analysis, called PALMS. Using PALMS, we verified some important properties of the SELinux MLS reference policy, namely that it satisfies the simple security condition and *-property defined by Bell and LaPadula. We also evaluated whether the policy associated with a given application is compliant with the policy of the SELinux system in which it would be deployed.
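
A minimal sketch of the two Bell-LaPadula checks named in the abstract, rendered in Python rather than the paper's Prolog. An MLS level is modeled as a (sensitivity, category set) pair with the usual dominance order; the levels below are hypothetical.

    # Illustration only: the simple security condition and the *-property.
    def dominates(a, b):
        (sens_a, cats_a), (sens_b, cats_b) = a, b
        return sens_a >= sens_b and cats_a >= cats_b

    def simple_security(subject_level, object_level):
        """No read up: a subject may read only objects it dominates."""
        return dominates(subject_level, object_level)

    def star_property(subject_level, object_level):
        """No write down: a subject may write only objects dominating it."""
        return dominates(object_level, subject_level)

    secret_ab = (2, {"a", "b"})
    confid_a = (1, {"a"})
    print(simple_security(secret_ab, confid_a))  # True: reading down is fine
    print(star_property(secret_ab, confid_a))    # False: write down refused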

On Automatic Placement of Declassifiers for Information-Flow Security

Security-typed languages can be used to build programs that are information-flow secure, meaning that they do not allow secret data to leak. Declassification allows programs to leak secret information in carefully prescribed ways. Manually placing declassifiers to authorize certain flows of information can be dangerous because an incorrectly placed declassifier can leak far more secret data than intended. Additionally, the sheer number of runtime flows that can cause an error means that determining where to place declassifiers can be difficult. We present a new approach for constructing information-flow secure programs where declassifiers are placed such that no unintended leakage occurs. Leakage restrictions are specified using hard constraints, and potential declassifier locations are ranked using soft constraints. Finally, the placement problem is submitted to a pseudo-Boolean optimizing SAT solver that selects a minimal set of declassifiers that prevent unauthorized data leakage. These declassifiers can be reviewed by the programmer to ensure that they correspond with acceptable declassification points; if not, new hard constraints can be added and the optimization framework can be re-invoked. Our experimental results indicate that our analysis suggests declassifiers that will cause no more leakage than those placed by programmers, in a fraction of the time it would take to perform a manual analysis. This work provides a foundation for less expert programmers to build information-flow secure programs and to convert existing programs to be information-flow secure.

Towards Automated Privilege Separation

Applications are subject to threat from a number of attack vectors, and limiting their attack surface is vital. By using privilege separation to constrain application access to protected resources, we can mitigate the threats against the application. Previous examinations of privilege separation either entailed significant manual effort or required access to the source code. We consider a method of performing privilege separation through black-box analysis. We examine applications similar to the target, infer states of execution, and determine the unique trigger system calls that cause state transitions. We use these as the basis of state-based policy enforcement by leveraging the Systrace policy enforcement mechanism. Our results show that we can infer state transitions with a high degree of accuracy, while our modifications to Systrace result in more granular protection by limiting system calls depending on the application's state. The modified Systrace increases the size of the Apache web server's policy file by less than 17.5%.

Effective Blame for Information-Flow Violations

Programs trusted with secure data should not release this data in ways contrary to system policy. However, when a program contains an illegal flow of information, current information-flow reporting techniques are inadequate for determining the cause of the error. Reasoning about information-flow errors can be difficult, as the flows involved can be quite subtle. We present a general model for information-flow blame that can explain the source of such security errors in code. This model is implemented by changing the information-flow verification procedure to: (1) generate supplementary information to reveal otherwise hidden program dependencies; (2) modify the constraint solver to construct a blame dependency graph; and (3) develop an explanation procedure that returns a complete and minimal error report. Our experiments show that information-flow errors can generally be explained and resolved by viewing only a small fraction of the total code.

Establishing and Sustaining System Integrity via Root of Trust Installation

Integrity measurements provide a means by which distributed systems can assess the trustability of potentially compromised remote hosts. However, current measurement techniques simply assert the identity of software, but provide no indication of the ongoing status of the system or its data. As a result, a number of significant vulnerabilities can result if the system is not configured and managed carefully. In this paper, we examine what is necessary to prove the integrity of the core trusted computing base (sCore) of a distributed security architecture, called the Shamon. We identify a specific set of vulnerabilities that would result from the use of current integrity measurement techniques, and propose a Root of Trust Installation (ROTI) as a foundation for high integrity systems. A ROTI binds the trusted computing base software and data to a trusted installer to enable straightforward, comprehensive integrity verification for a system. We built a custom installer for our sCore and show that it is possible to bind system integrity to installation for a usable sCore system with nominal performance penalties. The resulting approach shows that practical systems management approaches can achieve high degrees of integrity with a modest administrative effort and modest overhead at installation and boot.

PinUP: Protecting User Files by Reducing Application Access

Users commonly download, patch, and use applications such as email clients, office applications, and media players from the Internet. Such applications are run with the user's full permissions. Because system protections do not differentiate applications from each other, any malcode present in the downloaded software can compromise or otherwise leak all user data. Interestingly, our investigations show that common applications adhere to recognizable workflows on user data. In this paper, we take advantage of this reality by developing protection mechanisms that "pin" user files to the applications that may use them. These mechanisms restrict access to user data to explicitly stated workflows--thus preventing malcode from exploiting user data not associated with that application. We describe our implementation of PinUP on the Linux Security Modules framework, explore its performance, and study several practical use cases. Through these activities, we show that user data can be protected from untrusted applications while retaining the ability to receive the benefits of those applications.
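
A minimal sketch of the pinning idea; the file suffixes and application names are hypothetical, and the real PinUP enforces this inside the kernel via the Linux Security Modules framework rather than in user space.

    # Illustration only: pin user files to the applications that may use them.
    PINS = {
        # file suffix -> applications allowed to open it
        ".mbox": {"thunderbird"},
        ".odt": {"openoffice"},
        ".mp3": {"amarok", "backup-tool"},
    }

    def may_access(application, path):
        for suffix, apps in PINS.items():
            if path.endswith(suffix):
                return application in apps
        return True  # unpinned files fall back to ordinary permissions

    print(may_access("thunderbird", "/home/u/mail.mbox"))  # True
    print(may_access("malware", "/home/u/mail.mbox"))      # False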

Grains of SANs: Building Storage Area Networks from Memory Spots

Emerging storage platforms, such as HP's Memory Spot, are increasingly becoming smaller, faster, and less expensive. Whether intended for holding digital media or personal documents, such systems currently function as independent receptacles of data. As we demonstrate in this paper, however, the true power of these devices is their ability to form the flexible building blocks of larger logical storage systems. Such systems allow for the creation of continuously reconfigurable storage devices, capable of the dynamic addition, removal, and repurposing of component nodes. Moreover, such changes can be made transparent to the user's view of the storage system. To illustrate these properties, we design and implement a granular storage system based on memory spots. In so doing, we identify and address significant challenges in areas including organization, security and reliability. We then conduct an extensive analysis of performance and demonstrate the ability to achieve throughputs of greater than 3 Mbps to our unoptimized logical storage device. These results demonstrate the potential for new applications and systems built on granular storage.

From Mobile Phones to Responsible Devices

Mobile phones have evolved from simple voice terminals into highly-capable, general-purpose computing platforms. While people are becoming increasingly more dependent on such devices to perform sensitive operations, protect secret data, and be available for emergency use, it is clear that phone operating systems are not ready for such responsibilities. Through a pair of vulnerabilities, we demonstrate that there are a myriad of unmanaged mechanisms on a phone system, and that control of these mechanisms is vital to achieving reliable use. Further, we find that a number of features of mobile phones are beyond the controls of traditional operating systems as well. In this paper, we examine the requirements for providing effective mediation, access control, resource management and quality of service for phone systems. Also, we investigate the convergence of cellular networks with the Internet, and its impact on effective resource management. Based on these results, we argue for phone systems that enable predictable behavior in a network -- they can protect themselves and ensure predictable network impact from their applications.

Managing the Risk of Covert Information Flows in Virtual Machine Systems

Flexible mandatory access control (MAC) enforcement is now available for virtual machine systems. For example, the sHype MAC system for the Xen virtual machine monitor is part of the mainline Xen distribution. Such systems offer the isolation of VM systems with the flexible security of MAC enforcement. A problem is that such MAC VM systems will only be assured at modest levels (e.g., Common Criteria EAL4), so they may contain covert channels. Covert channels are often difficult to identify and harder to remove, so we propose an approach to manage possible covert leakage to enable verification of security guarantees. Typically, covert channels are outside of access control policies, but we propose an approach that includes both overt flows and covert flows to assess the possible risk of information leakage due to their combination. We define the concept of a risk flow policy that describes the authorized risks due to covert flows. In this paper, we evaluate the ability of four policy models to express risk flow policies. Further, we examine how such policies can be enforced in VM systems. We find that variants of the Chinese Wall model and Bell-LaPadula model have features that enable such policies to be enforced in the context of sHype's Type Enforcement model.
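
A minimal sketch of a Chinese Wall style risk flow check, since the abstract names that model as a candidate: once a VM has touched data from one company in a conflict-of-interest class, it may not host (and thus covertly leak to) a competitor's workload. The conflict classes and labels below are hypothetical.

    # Illustration only: Chinese Wall flavored co-residency check.
    CONFLICT_CLASSES = [{"bank_a", "bank_b"}, {"oil_x", "oil_y"}]

    def may_colocate(history, new_label):
        """Permit the new workload only if no conflict class would
        contain both a previously seen label and the new one."""
        for conflict in CONFLICT_CLASSES:
            if new_label in conflict and history & (conflict - {new_label}):
                return False
        return True

    vm_history = {"bank_a"}
    print(may_colocate(vm_history, "oil_x"))   # True: different class
    print(may_colocate(vm_history, "bank_b"))  # False: covert-leak risk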

Paper: rc24154.pdf (January 2007)

Sum of the Parts: Composing Trust from Validation Primitives

A recent proposal advocates the construction of a shared reference monitor, Shamon, as the basis for enforcing mandatory access control in Internet-wide distributed systems. In a sense, the Shamon effort explored how distributed system policies are enforced on individual systems, where the trust in those systems is based on hardware-based integrity measurement. However, the detailed requirements for trust in individual systems are not defined, and importantly, the development of that trust across large distributed systems is not discussed. In this paper, we develop the requirements for verifying trust in individual systems, and show how the methods used to verify trust may enable scalable trust in large, distributed systems. Based on an initial prototype for conveying attestations during IPsec negotiation, we outline the integrity measurements that we would take to justify trust to a remote party. We identify measurements that convey trust in the system's enforcement mechanism that can be verified by as many remote parties as possible, and further can be used to convey trust transitively to other parties. We also identify distributed system (coalition) measurements that enable members of the distributed system to verify that coalition security requirements are being enforced correctly on each component.

Information flow control in database security: A case study for secure programming with JIF

Because of the increasing demands for privacy and integrity guarantees in applications that handle sensitive, electronic data, there is a need for automated software development tools and techniques for enforcing this security. Although there have been many attempts to apply security to existing systems after the fact, manifold failures indicate that this approach should be revisited. On the contrary, experience indicates that secure systems must be designed with explicit policies from the beginning and that there should be some automated, mathematically-verifiable mechanism to aid programmers in doing this correctly. Recent research in language-based security has made great strides towards the development of sophisticated and robust languages for programming with explicit security policies. Insufficient experience with these languages, however, has left them untested and impractical for writing real, distributed applications. In this paper, we present our experiences of working with Jif, a Java-based, security-typed language, in building a distributed, database application. Our experience has indicated the current impracticality of programming in Jif, but has helped us to identify language development tools and automation algorithms that could aid in making Jif more practical for developing real, distributed applications.

Paper: hicks05tr0011-2005.pdf (April 2005)

Declassification with Cryptographic Functions in a Security-Typed Language

Security-typed languages are powerful tools for provably enforcing noninterference. Real computing systems, however, often intentionally violate noninterference by deliberately releasing (or declassifying) sensitive information. These systems frequently trust cryptographic functions to achieve declassification while still maintaining confidentiality. We introduce the notion of trusted functions that implicitly act as declassifiers within a security-typed language. Proofs of the new language's soundness and its enforcement of a weakened form of noninterference are given. Additionally, we implement trusted functions used for declassification in the Jif language. This represents a step forward in making security-typed languages more practical for use in real systems.

Path Authentication in Interdomain Routing

Interdomain routing is implemented on the Internet through the Border Gateway Protocol (BGP). Many approaches have been proposed to mitigate or solve the many problems of BGP security; yet, none of the proposed solutions have been widely deployed. The lack of adoption is largely caused by a failure to find an acceptable balance between deployability, cost, and security. In this paper, we study one aspect of the BGP security puzzle: path validation. We develop formal semantics for path and route attestations, which provide the basis for a suite of cryptographic proof systems for path validation. We analyze the security relevant stability of paths in the Internet and profile resource consumption of the proposed constructions via trace-based simulations. Our constructions are shown through these experiments to reduce signature validation costs by as much as 97.3% over existing proposals while requiring nominal storage resources. We conclude by considering how our solution can be deployed in the Internet.

Paper: nas-tr-0002-2004.pdf (November 2004)

Security and Science of Agility

Moving target defenses alter the environment in response to adversarial action and perceived threats. Such defenses are a specific example of a broader class of system management techniques called system agility. In its fullest generality, agility is any reasoned modification to a system or environment in response to a functional, performance, or security need. This paper details a recently launched 10-year Cyber-Security Collaborative Research Alliance effort focused in part on the development of a new science of system agility, of which moving target defenses are a central theme. In this context, the consortium seeks to address the questions of when, what, and how to employ changes to improve the security of an environment, as well as consider how to measure and weigh the effectiveness of different approaches to agility. We discuss several fundamental challenges in developing and using MTD maneuvers, and outline several broad classes of mechanisms that can be used to implement them. We conclude by detailing specific MTD mechanisms used to adaptively quarantine vulnerable code in Android applications, and consider ways of comparing cost and payout of its use.

SPROBES: Enforcing Kernel Code Integrity on the TrustZone Architecture

Many smartphones now deploy conventional operating systems, so the rootkit attacks so prevalent on desktop and server systems are now a threat to smartphones. While researchers have advocated using virtualization to detect and prevent attacks on operating systems (e.g., VM introspection and trusted virtual domains), virtualization is not practical on smartphone systems due to the lack of virtualization support and/or the expense of virtualization. Current smartphone processors do have hardware support for running a protected environment, such as the ARM TrustZone extensions, but such hardware does not control the operating system operations sufficiently to enable VM introspection. In particular, a conventional operating system running with TrustZone still retains full control of memory management, which a rootkit can use to prevent traps on sensitive instructions or memory accesses necessary for effective introspection. In this paper, we present SPROBES, a novel primitive that enables introspection of operating systems running on ARM TrustZone hardware. Using SPROBES, an introspection mechanism protected by TrustZone can instrument individual operating system instructions of its choice, receiving an unforgeable trap whenever any SPROBE is executed. The key challenge in designing SPROBES is preventing the rootkit from removing them, but we identify a set of five invariants whose enforcement is sufficient to restrict rootkits to execute only approved, SPROBE-injected kernel code. We implemented a proof-of-concept version of SPROBES for the ARM Fast Models emulator, demonstrating that in Linux kernel 2.6.38, only 12 SPROBES are sufficient to enforce all five of these invariants. With SPROBES, we show that it is possible to leverage the limited TrustZone extensions to limit conventional kernel execution to approved code comprehensively.

Cloud Verifier: Verifiable Auditing Service for IaaS Clouds

Cloud computing has commoditized compute, storage, and networking resources, creating an on-demand utility. Despite the attractiveness of this new paradigm, its adoption has been stymied by cloud platforms' lack of transparency, which leaves customers unsure whether their sensitive data and computation can be entrusted to the cloud. While techniques like encryption can protect customers' data at rest, clouds still lack mechanisms for customers to verify that their computations are being executed as expected, a guarantee they could obtain by running the computation in their own data center. In this paper, we present the cloud verifier (CV), a flexible framework that cloud vendors can configure to provide cloud monitoring services for customers to validate that their computations are configured and being run as expected in Infrastructure as a Service (IaaS) clouds. The CV builds a chain of trust from the customer to their hosted virtual machine (VM) instances through the cloud platform, enabling it to check customer-specified requirements against a comprehensive view of both the VM's load-time and run-time properties. In addition, the CV enables cloud vendors to provide more responsive remediation techniques than traditional attestation mechanisms. We built a proof-of-concept CV for the OpenStack cloud platform whose evaluation demonstrates that a single CV enables over 20,000 simultaneous customers to verify numerous properties with little impact on cloud application performance. As a result, the CV gives cloud customers a low-overhead method for assuring that their instances are running according to their requirements.

On Dynamic Malware Payloads Aimed at Programmable Logic Controllers

With the discovery of the Stuxnet attack, increasing attention is being paid to the potential for malware to target Programmable Logic Controllers (PLCs). Despite much speculation about threats from PLC malware, the popular opinion is that automated attacks against PLCs are not practical without having a priori knowledge of the target physical process. In this paper, we explore the problem of designing PLC malware that can generate a dynamic payload based on observations of the process taken from inside the control system. This significantly lowers the bar for attacks against PLCs. We evaluate how PLC malware may infer the structure of the physical plant and how it can use this information to construct a dynamic payload to achieve an adversary's end goal. We find that at the very least, a dynamic payload can be constructed that causes unsafe behavior for an arbitrary process definition.

Embedded Firmware Diversity for Smart Electric Meters

Smart meters are now being aggressively deployed worldwide, with tens of millions of meters in use today and hundreds of millions more to be deployed in the next few years. These low-cost (≤ $50) embedded devices have not fared well under security analysis: experience has shown that the majority of current devices can be exploited by unsophisticated attackers. The potential for large-scale attacks that target a single or a few vulnerabilities is thus very real. In this paper, we consider how diversity techniques can limit large-scale attacks on smart meters. We show how current meter designs do not possess the architectural features needed to support existing diversity approaches such as address space randomization. In response, we posit a new return address encryption technique suited to the computationally and resource limited smart meters. We conclude by considering analytically the effect of diversity on an attacker wishing to launch a large-scale attack, showing how a lightweight diversity scheme can force the time needed for a large compromise into the scale of years.
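
A minimal sketch of return-address encryption as a diversity transform; the paper targets meter firmware, so this Python snippet only illustrates the arithmetic, and the key and addresses are hypothetical. Each device holds a secret per-device key; the call prologue stores the return address XORed with the key, and the epilogue decrypts it before returning, so a payload built for one meter's address layout misfires on every other meter.

    # Illustration only: per-device XOR encryption of return addresses.
    DEVICE_KEY = 0x5A3C  # hypothetical per-device secret set at manufacture

    def encrypt_ret(addr):
        return addr ^ DEVICE_KEY    # done when the return address is saved

    def decrypt_ret(stored):
        return stored ^ DEVICE_KEY  # done in the epilogue, before return

    ret = 0x8F40                    # hypothetical return address
    stored = encrypt_ret(ret)
    assert decrypt_ret(stored) == ret
    # An attacker overwriting 'stored' with a raw target address sees
    # control transfer to (target ^ DEVICE_KEY): a different, unknown
    # location on each device, defeating a single mass-produced payload.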

Towards a Secure and Efficient System for End-to-End Provenance

Work on the End-to-End Provenance System (EEPS) began in the late summer of 2009. The EEPS effort seeks to explore the three central questions in provenance systems: (1) Where and how do I design secure host-level provenance collecting instruments (called provenance monitors)?; (2) How do I extend completeness and accuracy guarantees to distributed systems and computations?; and (3) What are the costs associated with provenance collection? This position paper discusses our initial exploration into these issues and posits several challenges to the realization of the EEPS vision.

Paper: tapp10.pdf (February 2010)

Energy Theft in the Advanced Metering Infrastructure

Global energy generation and delivery systems are transitioning to a new computerized 'smart grid'. One of the principal components of the smart grid is an advanced metering infrastructure (AMI). AMI replaces the analog meters with computerized systems that report usage over digital communication interfaces, e.g., phone lines. However, with this infrastructure comes new risk. In this paper, we consider the means by which adversaries may defraud the electrical grid by manipulating AMI systems. We document the methods adversaries will use to attempt to manipulate energy usage data, and validate the viability of these attacks by performing penetration testing on commodity devices. Through these activities, we demonstrate that not only is theft still possible in AMI systems, but that current AMI devices introduce a myriad of new vectors for achieving it.

Systemic Issues in the Hart InterCivic and Premier Voting Systems: Reflections Following Project EVEREST

The State of Ohio commissioned the EVEREST study in late summer of 2007. The study participants were charged with an analysis of the usability, stability, and security of all voting systems used in Ohio elections. This paper details the approach and results of the security analysis of the Premier and Hart systems within the EVEREST effort. As in previous studies, we found the election systems to be critically flawed in ways that are practically and easily exploitable. Such exploits could affect election results, prevent legitimate votes from being cast, or simply cast doubt on the legitimacy of the election itself. In this effort we identified new areas of concern including novel exploitable failures of software and election data integrity protection and the discovery of dangerous hidden software features. We begin by describing in depth our systematic methodology for identifying and validating vulnerabilities appropriate for the current complex political climate, and detail and illustrate broad classes of vulnerabilities uncovered using this approach. We conclude by considering the impact of this study both in terms of the tangible vulnerabilities discovered and as a model for performing future analyses.

Protecting Users from "Themselves"

Computer usage and threat models have changed drastically since the advent of access control systems in the 1960s. Instead of multiple users sharing a single file system, each user has many devices with their own storage. Thus, a user's fear has shifted away from other users' impact on the same system to the threat of malice in the software they intentionally or even inadvertently run. As a result, we propose a new vision for access control: one where individual users are isolated by default and where the access of individual user applications is carefully managed. A key question is how much user administration effort would be required if a system implementing this vision were constructed. In this paper, we outline our work on just such a system, called PinUP, which manages file access on a per application basis for each user. We use historical data from our lab's users to explore how much user and system administration effort is required. Since administration is required for user sharing in PinUP, we find that sharing via mail and file repositories requires a modest amount of administrative effort, a system policy change every couple of days and a small number of user administrative operations a day. We are encouraged that practical administration on such a scale is possible given an appropriate and secure user approach.

Paper: csaw07.pdf (November 2007)

Non-Volatile Memory and Disks: Avenues for Policy Architectures

As computing models change, so too do the demands on storage. Distributed and virtualized systems introduce new vulnerabilities, assumptions, and performance requirements on disks. However, traditional storage systems have very limited capacity to implement needed "advanced storage" features such as integrity and data isolation. This is largely due to the simple interfaces and limited computing resources provided by commodity hard-drives. A new generation of storage devices affords better opportunities to meet these new models, but little is known about how to exploit them. In this paper, we show that the recently introduced fast-access non-volatile RAM-enhanced hybrid (HHD) disk architectures can be used to implement a range of valuable storage-security services. We specifically discuss the use of these new architectures to provide data integrity, capability-based access control, and labeled information flow at the disk access layer. In this, we introduce systems that place a security perimeter at the disk interface--and deal with the parent operating system only as a largely untrusted entity.

Paper: csaw07.pdf (November 2007)

Sublinear Algorithms for Approximating String Compressibility

This work raises the question of approximating the compressibility of a string with respect to a fixed compression scheme, in sublinear time. This question is studied in detail for two popular lossless compression schemes: run-length encoding (RLE) and Lempel-Ziv (LZ). Sublinear algorithms are presented for approximating compressibility with respect to both schemes. Several lower bounds are also proven that show that our algorithms for both schemes cannot be improved significantly.

This investigation of LZ yields results whose interest goes beyond the initial questions it set out to study. In particular, it leads to combinatorial structural lemmas that relate the compressibility of a string with respect to Lempel-Ziv to the number of distinct short substrings contained in it. In addition, it is shown that approximating the compressibility with respect to LZ is related to approximating the support size of a distribution.
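
A minimal sketch conveying the flavor of sublinear compressibility estimation, using RLE since it is the simpler of the two schemes; this is an illustration of the sampling idea, not the paper's construction or its guarantees. A position starts a run exactly when it differs from its predecessor, the run count is what RLE's output size tracks, and a uniform sample of positions estimates the run density without reading the whole string.

    # Illustration only: estimate RLE run count by position sampling.
    import random

    def estimate_rle_runs(s, samples=1000):
        n = len(s)
        hits = sum(
            1
            for i in (random.randrange(n) for _ in range(samples))
            if i == 0 or s[i] != s[i - 1]  # position i starts a run
        )
        return hits / samples * n  # scale sampled run density to length n

    s = "aaaabbbccddddddd" * 500
    print(estimate_rle_runs(s))  # close to the true run count of 2000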

Paper: comp-random-full-Nov-07.pdf (August 2007)

Jifclipse: Development Tools for Security-Typed Applications

Security-typed languages such as Jif require the programmer to label variables with information flow security policies as part of application development. The compiler then flags errors wherever information leaks may occur. Resolving these information leaks is a critical task in security-typed language application development. Unfortunately, because information flows can be quite subtle, simple error messages tend to be insufficient for finding and resolving the source of information leaks; more sophisticated development tools are needed for this task. To this end we provide a set of principles to guide the development of such tools. Furthermore, we implement a subset of these principles in an integrated development environment (IDE) for Jif, called Jifclipse, which is built on the Eclipse extensible development platform. Our plug-in provides a Jif programmer with additional tools to view hidden information generated by a Jif compilation, to suggest fixes for errors, and to get more specific information behind an error message. Better development tools are essential for making security-typed application development practical; Jifclipse is a first step in this process.

Paper: plas07.pdf (June 14 2007)

Design, implementation and evaluation of security in iSCSI-based network storage systems

This paper studies the performance and security aspects of the iSCSI protocol in a network storage based system. Ethernet speeds have been improving rapidly and network throughput is no longer considered a bottleneck when compared to Fibre-channel based storage area networks. However, when security of the data traffic is taken into consideration, existing protocols like IPSec prove to be a major hindrance to the overall throughput. In this paper, we evaluate the performance of iSCSI when deployed over standard security protocols and suggest lazy crypto approaches to alleviate the processing needs at the server. The testbed consists of a cluster of Linux machines directly connected to the server through a Gigabit Ethernet network. Micro and application benchmarks like BTIO and dbench were used to analyze the performance and scalability of the different approaches. Our proposed lazy approaches improved throughput by as much as 46% for microbenchmarks and 30% for application benchmarks in comparison to the IPSec based approaches.

Paper: storss06.pdf (October 2006)

Shame on Trust in Distributed Systems

Approaches for building secure, distributed systems have fundamental limitations that prevent the construction of dynamic, Internet-scale systems. In this paper, we propose a concept of a shared reference monitor or Shamon that we believe will provide a basis for overcoming these limitations. First, distributed systems lack a principled basis for trust in the trusted computing bases of member machines. In most distributed systems, a trusted computing base is assumed. However, the fear of compromise due to misconfiguration or vulnerable software limits the cases where this assumption can be applied in practice. Where such trust is not assumed, current solutions are not scalable to large systems [7, 20]. Second, current systems do not ensure the enforcement of the flexible, distributed system security goals. Mandatory access control (MAC) policies aim to describe enforceable security goals, but flexible MAC solutions, such as SELinux, do not even provide a scalable solution for a single machine (due to the complexity of UNIX systems), much less a distributed system. A significant change in approach is necessary to develop a principled trusted computing base that enforces system security goals and scales to large distributed systems.

Paper: hotsec06.pdf (July 2006)

Testing Large Scale BGP Security in Replayable Network Environments

Understanding the operation of BGP and providing for its security is essential to the well-being of the Internet. To date, however, simulating every autonomous system comprising the Internet, in order to test proposals and security solutions, has been infeasible. We have developed lseb, a large-scale BGP simulator that generates realistic network topologies from routing data, provides the ability to replay network events from any point in the past, and allows what-if scenarios, such as simulated attacks or defense mechanisms, to test the resilience of the critical network infrastructure. We describe the topology extraction tools that we have developed and the design and implementation of lseb. We also discuss visualization tools that allow a graphical topology representation and provide an example of an attack scenario that can be modeled with lseb.

Paper: deter06.pdf (June 2006)

BGPRV: Retrieving and Processing BGP Data with Efficiency and Convenience

BGPRV is a tool aimed at aiding the analysis of BGP updates and routing table snapshots. It provides a set of library functions that make it possible to retrieve and process archived BGP data with efficiency and convenience. It encapsulates the functions of scanning the Route Views route repository, downloading data for a specified time frame, processing the binary MRT format, and filtering incomplete or undesired data, and returns the BGP data as a single stream. With this abstraction of operations and simplified usage, it provides users with clean and organized BGP data that is ready for further processing and analysis.

Paper: deter06b.pdf (June 2006)

Trusted Declassification: High-level policy for a security-typed language

Security-typed languages promise to be a powerful tool with which provably secure software applications may be developed. Programs written in these languages enforce a strong, global policy of noninterference which ensures that high-security data will not be observable on low-security channels. Because noninterference is typically too strong a property, most programs use some form of declassification to selectively leak high-security information, e.g., when performing a password check or data encryption. Unfortunately, such a declassification is often expressed as an operation within a given program, rather than as part of a global policy, making reasoning about the security implications of a policy more difficult.

In this paper, we propose a simple idea we call trusted declassification in which special declassifier functions are specified as part of the global policy. In particular, individual principals declaratively specify which declassifiers they trust so that all information flows implied by the policy can be reasoned about in absence of a particular program. We formalize our approach for a Java-like language and prove a modified form of noninterference which we call noninterference modulo trusted methods. We have implemented our approach as an extension to Jif and provide some of our experience using it to build a secure e-mail client.

Paper: hicks06trustedDeclass.pdf (June 2006)

Blocking in Private Information Matching

In this paper, the problem of quickly matching records (i.e., the record linkage problem) from two autonomous sources without revealing private information to the other parties is considered. In particular, our focus is to devise a secure blocking scheme that significantly improves the performance of record linkage while remaining secure. Although there has been work on private record linkage, none has considered adopting the blocking framework. Our proposed blocking-aware private record linkage can therefore perform large-scale record linkage without revealing private information. Preliminary experimental results showing the potential of the proposal are reported.
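
A minimal sketch of blocking for record linkage: only records sharing a blocking key are compared in detail, shrinking the quadratic comparison space. Exchanging a keyed hash of the blocking key conveys the flavor of what a secure scheme might reveal (bucket membership rather than field values), though the paper's actual protocol may differ; the field names, key derivation, and shared secret below are hypothetical.

    # Illustration only: keyed-hash blocking keys for private linkage.
    import hashlib, hmac
    from collections import defaultdict

    SHARED_KEY = b"agreed-out-of-band"  # hypothetical pairwise secret

    def blocking_key(record):
        # Crude blocking key: first letter of name plus ZIP prefix.
        raw = (record["name"][0] + record["zip"][:3]).lower().encode()
        return hmac.new(SHARED_KEY, raw, hashlib.sha256).hexdigest()

    def build_blocks(records):
        blocks = defaultdict(list)
        for r in records:
            blocks[blocking_key(r)].append(r)
        return blocks

    a = [{"name": "Smith", "zip": "16801"}, {"name": "Jones", "zip": "90210"}]
    b = [{"name": "Smyth", "zip": "16802"}]
    # Only records landing in the same bucket are compared in detail.
    shared = build_blocks(a).keys() & build_blocks(b).keys()
    print(len(shared))  # 1: Smith and Smyth share the bucket 's'+'168'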

Paper: iqis05.pdf (July 2005)

Analysis of Communities Of Interest in Data Networks

Communities of interest (COI) have been applied in a variety of environments ranging from characterizing the online buying behavior of individuals to detecting fraud in telephone networks. The common thread among these applications is that the historical COI of an individual can be used to predict future behavior as well as the behavior of other members of the COI. It would clearly be beneficial if COIs could be used in the same manner to characterize and predict the behavior of hosts within a data network. In this paper, we introduce a methodology for evaluating various aspects of COIs of hosts within an IP network. In the context of this study, we broadly define a COI as a collection of interacting hosts. We apply our methodology using data collected from a large enterprise network over an eleven-week period. First, we study the distributions and stability of the size of COIs. Second, we evaluate multiple heuristics to determine a stable core set of COIs and determine the stability of these sets over time. Third, we evaluate how much of the communication is not captured by these core COI sets.
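
A minimal sketch of the methodology's basic objects: a host's COI in a time window is the set of peers it talked to, and stability across windows can be scored with, say, Jaccard similarity. The flow tuples and the similarity measure are hypothetical stand-ins for the paper's data and heuristics.

    # Illustration only: COI extraction and a simple stability score.
    def coi(flows, host):
        return {dst for (src, dst) in flows if src == host}

    def stability(flows_week1, flows_week2, host):
        a, b = coi(flows_week1, host), coi(flows_week2, host)
        return len(a & b) / len(a | b) if a | b else 1.0

    week1 = [("h1", "mail"), ("h1", "web"), ("h1", "db")]
    week2 = [("h1", "mail"), ("h1", "web"), ("h1", "printer")]
    print(stability(week1, week2, "h1"))  # 0.5: half the COI persists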

Paper: pam05.pdf (March 2005)

Dynamic updating of information-flow policies

Applications that manipulate sensitive information should ensure end-to-end security by satisfying two properties: sound execution and some form of noninterference. By the former, we mean the program should always perform actions in keeping with its current policy, and by the latter we mean that these actions should never cause high-security information to be visible to a low-security observer. Over the last decade, security-typed languages have been developed that exhibit these properties, increasingly improving so as to model important features of real programs. No current security-typed language, however, permits general changes to security policies in use by running programs. This paper presents a simple information-flow type system that allows for dynamic security policy updates while ensuring sound execution and a relaxed form of noninterference we term noninterference between updates. We see this work as an important step toward using language-based techniques to ensure end-to-end security for realistic applications.

Paper: dynUpdateInfoflow.pdf (March 2005)

Security Policy Reconciliation in Distributed Computing Environments

A major hurdle in sharing resources between organizations is heterogeneity. Therefore, in order for two organizations to collaborate, their policies have to be reconciled. The process of resolving different policies is known as policy reconciliation, which in general is an intractable problem. This paper addresses policy reconciliation in the context of security. We present a formal framework and hierarchical representation for security policies. Our hierarchical representation exposes the structure of the policies and leads to an efficient reconciliation algorithm. We also demonstrate that agent preferences for security mechanisms can be readily incorporated into our framework. We have implemented our reconciliation algorithm in a library called the Policy Reconciliation Engine, or PRE. In order to test the implementation and measure the overhead of our reconciliation algorithm, we have integrated PRE into a distributed high-throughput system called Condor.
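
A minimal sketch of reconciliation over a simple two-level policy structure: each organization ranks acceptable mechanisms per security service, and reconciliation picks, per service, a mechanism both accept, honoring one party's preference order. The service and mechanism names are hypothetical and this is far simpler than PRE's hierarchical representation.

    # Illustration only: per-service reconciliation of ranked mechanisms.
    POLICY_A = {"confidentiality": ["aes", "3des"], "integrity": ["hmac-sha1"]}
    POLICY_B = {"confidentiality": ["3des", "aes"],
                "integrity": ["hmac-sha1", "hmac-md5"]}

    def reconcile(pa, pb):
        agreed = {}
        for service in pa.keys() & pb.keys():
            common = [m for m in pa[service] if m in pb[service]]
            if not common:
                return None  # irreconcilable for this service
            agreed[service] = common[0]  # first party's top shared choice
        return agreed

    print(reconcile(POLICY_A, POLICY_B))
    # -> {'confidentiality': 'aes', 'integrity': 'hmac-sha1'}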

Paper: policy04.pdf (June 2004)

Searching for Privacy: Design and Implementation of a P3P-Enabled Search Engine

Although the number of online privacy policies is increasing, it remains difficult for Internet users to understand them, let alone to compare policies across sites or identify sites with the best privacy practices. The World Wide Web Consortium (W3C) developed the Platform for Privacy Preferences (P3P 1.0) specification to provide a standard computer-readable format for privacy policies. This standard enables web browsers and other user agents to interpret privacy policies on behalf of their users. This paper introduces our prototype P3P-enabled Privacy Bird Search engine. Users of this search service are given visual indicators of the privacy policies at sites included in query results. Our system acts as a front end to a general search engine by evaluating the P3P policies associated with search results against a user's privacy preference settings. To improve system performance we cache unexpired P3P policy information (including information about the absence of P3P policies) for thousands of the most popular sites as well as for sites that have been returned in previous search results. We discuss the system architecture and its implementation, and consider the work necessary to evolve our prototype into a fully functional and efficient service.

Paper: pets04.pdf (May 2004)

Analysis of Security Vulnerabilities in the Movie Production and Distribution Process

Unauthorized copying of movies is a major concern for the motion picture industry. While unauthorized copies of movies have been distributed via portable physical media for some time, low-cost, high-bandwidth Internet connections and peer-to-peer file sharing networks provide highly efficient distribution media. Many movies are showing up on file sharing networks shortly after, and in some cases prior to, theatrical release. It has been argued that the availability of unauthorized copies directly affects theater attendance and DVD sales, and hence represents a major financial threat to the movie industry. Our research attempts to determine the source of unauthorized copies by studying the availability and characteristics of recent popular movies in file sharing networks. We developed a data set of 312 popular movies and located one or more samples of 183 of these movies on file sharing networks, for a total of 285 movie samples. 77% of these samples appear to have been leaked by industry insiders. Most of our samples appeared on file sharing networks prior to their official consumer DVD release date. Indeed, of the movies that had been released on DVD as of the time of our study, only 5% first appeared after their DVD release date on a web site that indexes file sharing networks, indicating that consumer DVD copying currently represents a relatively minor factor compared with insider leaks. We perform a brief analysis of the movie production and distribution process and identify potential security vulnerabilities that may lead to unauthorized copies becoming available to those who may wish to redistribute them. Finally, we offer recommendations for reducing security vulnerabilities in the movie production and distribution process.

Paper: drm03.pdf (October 2003)

Integrated Constraints and Inheritance in DTAC

Inheritance and constraints are two common techniques for safely managing the complexity of large access control configurations. Inheritance is used to help factor the model, while constraints are used to help ensure that the complexity will not result in an unsafe configuration arising in the future evolution of the system. In this paper we develop an integrated mathematical approach to defining both inheritance and constraints in the dynamically typed access control (DTAC) model. In the process we identify several useful relationships among DTAC objects. The combination of DTAC and our new relationships allow us to graphically construct a greater variety and complexity of efficiently verifiable separation of duty constraints than any other model we are aware of.

How To Schedule Unlimited Memory Pinning of Untrusted Processes, Or Provisional Ideas about Service-Neutrality

You can read it as a paper that treats a concrete problem motivated in the first section: how can we permit untrusted user processes to pin their virtual pages in memory as flexibly and with as few limits as possible? From this point of view, the paper presents a general solution that is theoretically and experimentally reasonably substantiated. However, you can also read the paper as an approach to solving the more general problem of how an existing system can be extended by new operations while preserving the original system's QoS properties. From this point of view, the paper is highly speculative. The presented principle of service-neutral operations can successfully solve the concrete problem of dynamic pinning. However, we still have no sound evidence that it is useful for a much broader class of problems. Nevertheless, we strongly suspect it.

Flexible Access Control using IPC Redirection

We present a mechanism for inter-process communication (IPC) redirection that enables efficient and flexible access control for micro-kernel systems. In such systems, services are implemented at user level, so IPC is the only means of communication between them. Thus, the system must be able to mediate IPCs to enforce its access control policy. Such mediation must enable enforcement of security policy with as little performance overhead as possible, but current mechanisms either: (1) place significant access control functionality in the kernel which increases IPC cost or (2) are static and require more IPCs than necessary to enforce access control. We define an IPC redirection mechanism that makes two improvements: (1) it removes the management of redirection policy from the kernel, so access control enforcement can be implemented outside the kernel; and (2) it separates the notion of who controls the redirection policy from the redirections themselves, so redirections can be configured arbitrarily and dynamically. We define our redirection mechanism, demonstrate its use, and examine possible efficient implementations.
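
A minimal sketch of the redirection idea: the kernel keeps only a table mapping intended IPC destinations to actual ones, so a user-level monitor can interpose on untrusted senders by installing itself as their target while trusted senders keep the direct path. The task and server names are hypothetical.

    # Illustration only: IPC redirection table consulted on each send.
    REDIRECT = {
        # (sender, intended receiver) -> actual receiver
        ("untrusted_task", "file_server"): "security_monitor",
    }

    def send_ipc(sender, receiver, message):
        actual = REDIRECT.get((sender, receiver), receiver)
        print(f"{sender} -> {actual}: {message}")
        return actual

    # The monitor sees the untrusted request first, applies policy, and
    # forwards or denies it; the trusted task's IPC goes straight through.
    send_ipc("untrusted_task", "file_server", "open /etc/passwd")
    send_ipc("trusted_task", "file_server", "open /etc/motd")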