Detecting and Managing User-Level Compromises in Corporate Networks
- Craig Shue (WPI)
- Curtis Taylor (WPI)
- Mohamed Najd (WPI)
- Joshua Pritchett (WPI)
Organizations are under constant attack, and occasionally their computer systems are compromised by an adversary. For example, phishing and drive-by download attacks can be used to gain access to a network that is otherwise well fortified. Corporate networks following best practices, such as least privilege, can limit these attacks to a single user-level account and prevent a system-wide or kernel-level compromise. However, even these user-level compromises can be challenging to mitigate.
This talk will discuss ways to improve computer network security by making it easier to understand a corporate network’s traffic and detect activity that may be due to malware or a network intrusion. We will describe ways to enhance networks with techniques from the software-defined networking, network function virtualization, and distributed systems fields to get deeper insight and to distinguish user traffic from attacks.
The Dover Architecture: Hardware Enforcement of Software-Defined Security Policies
- Greg Sullivan (Draper)
- André DeHon (UPenn)
- Eli Boling (Draper)
- Marco Ciaffi (Draper)
- Steve Milburn (Draper)
- Nirmal Nepal (Draper)
- Jothy Rosenberg (Draper)
- Andrew Sutherland (Draper)
The Dover project (2015-) at Draper is an attempt to adapt, extend, and commercialize earlier work funded under the DARPA CRASH SAFE project (2010-2015). The SAFE project attempted to define a formally verified “inherently secure processor”. Whereas SAFE created a custom instruction set architecture (ISA), custom programming languages, and a custom runtime, Dover is extending the open source RISC-V ISA and will protect existing C/C++ applications, requiring only recompilation.
Dover associates every register and addressable unit of memory (down to the byte level) with an arbitrary amount of metadata. Furthermore, we define a set of “micro-policies” that, at runtime, each take a set of metadata for an instruction (e.g., an ADD instruction and the metadata for the PC, instruction word, and all registers involved) and calculate whether the instruction should be allowed and, if so, what the metadata of the outputs should be. Semantically, at every instruction, policies are invoked with the metadata for every value (including PC and instruction word) related to that instruction. In practice, we use caching to reuse the results from tuples of tags (pointers to metadata) that have been seen before. Hardware interlocks prevent user-land (application, OS) code from accessing policy-land code or metadata, and policy code cannot access user-land code or data.
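The tag-and-cache mechanism described above can be illustrated with a toy sketch. Everything here is a simplified illustration of the idea, not Dover's actual implementation or hardware: the color-based heap policy, the tuple encoding, and all names are hypothetical, and real Dover evaluates policies in protected policy-land with a hardware rule cache.

```python
# Toy sketch of tag-based micro-policy evaluation with a rule cache.
# All names and the color-matching heap policy are illustrative only.

rule_cache = {}  # (opcode, pc_tag, instr_tag, rs1_tag, rs2_tag) -> decision

def heap_safety_policy(opcode, pc_tag, instr_tag, rs1_tag, rs2_tag):
    """Toy micro-policy: a LOAD through a pointer is allowed only if the
    pointer's color tag matches the color tag of the memory cell it reads."""
    if opcode == "LOAD":
        ptr_color, cell_color = rs1_tag, rs2_tag
        if ptr_color != cell_color:
            return None                      # disallow: color mismatch
        return ("allowed", cell_color)       # output inherits cell's tag
    return ("allowed", "default")

def check_instruction(opcode, pc_tag, instr_tag, rs1_tag, rs2_tag):
    """Consult the cache first; on a miss, invoke the policy code."""
    key = (opcode, pc_tag, instr_tag, rs1_tag, rs2_tag)
    if key not in rule_cache:                # miss: run the micro-policy
        rule_cache[key] = heap_safety_policy(*key)
    return rule_cache[key]                   # hit: reuse cached decision

# A load through a red pointer into red-tagged memory is allowed...
ok = check_instruction("LOAD", "pc", "instr", "red", "red")
# ...but the same load into blue-tagged memory is trapped.
bad = check_instruction("LOAD", "pc", "instr", "red", "blue")
```

Repeating a check with the same tag tuple is then a single dictionary lookup, mirroring how a hardware rule cache avoids re-invoking policy code for previously seen tag combinations.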
In this talk, we will focus on applications of software-defined security policies. An earlier paper (from the SAFE project) outlined the wide range of policies enforceable by a Dover system, as well as an approach to formally verifying policy definitions. We will first present a few policies that we have already worked out (heap memory safety, simple stack protection, simple control flow integrity, RWX). After reviewing baseline policies, we will present some proposed (more speculative) uses of Dover’s policy enforcement mechanisms, in order to spur discussion.
Privacy Preserving Federated Search and Sharing
- Shane Clark (BBN)
- Partha Pal (BBN)
- Aaron Adler (BBN)
- Nathaniel Lageman (BBN)
- Lalana Kagal (MIT)
- K. Krasnow Waterman (MIT)
The Privacy Preserving Federated Search and Sharing (PPFS2) project is building a distributed database middleware that allows data owners to share potentially sensitive data without violating the legal or regulatory measures that protect that data. PPFS2 also protects the privacy of the party searching a database by not revealing any sensitive information contained in a query. These are important practical concerns in contexts such as government inter-agency sharing, where the relevant statutes can be extraordinarily complex.
PPFS2 is an ongoing collaboration between BBN Technologies and MIT CSAIL. MIT’s AIR Policy Reasoner performs query analysis to check compliance with applicable legal/regulatory policy and gives users a human-readable justification for the decision. BBN provides the cryptographic protections necessary to execute compliant queries with strong policy enforcement. PPFS2 uses ciphertext-policy attribute-based encryption to enforce fine-grained access to data based on user roles and predicate-based encryption to support approximate ciphertext matching that protects the privacy of the query itself.
The integrated PPFS2 prototype enables seamless sharing subject to constraints that would normally require human expertise to interpret and enforce on a case-by-case basis while simultaneously protecting the privacy of searchers and those whose sensitive information may be stored in a given database.
A Study on Designing Video Tutorials for Promoting Security Features: A Case Study in the Context of Two-Factor Authentication (2FA)
- Mohammad Maifi Hasan Khan (University of Connecticut)
- Yusuf Albayram (University of Connecticut)
- Michael Fagan (University of Connecticut)
This work investigates the effectiveness of informational videos designed to explain two-step verification (i.e., 2FA) and thereby improve users’ adoption of 2FA. Toward that end, eight video tutorials based on three themes (Risk, Self-efficacy, and Contingency) were designed, and a three-way between-group study with 399 participants was conducted on Amazon MTurk. Furthermore, a follow-up study was run to observe changes in participants’ behavior (e.g., enabling 2FA). The Self-efficacy and Risk themes were found to be most effective in making the videos more interesting, informative, and useful. Willingness to try 2FA was higher among participants who were exposed to both the Risk and Self-efficacy themes. Participants’ decisions about actually enabling 2FA were significantly correlated with how interesting, informative, and useful they found the videos.
AnonRep: Towards Tracking-Resistant Anonymous Reputation
- Ennan Zhai (Yale University)
- David Isaac Wolinsky (Facebook)
- Ruichuan Chen (Nokia Bell Labs)
- Ewa Syta (Trinity College)
- Chao Teng (Facebook)
- Bryan Ford (EPFL)
Reputation systems help users evaluate information quality and incentivize civilized behavior, often by tallying feedback from other users such as “likes” or votes and linking these scores to a user’s long-term identity. This identity linkage enables user tracking, however, and appears at odds with strong privacy or anonymity. This work presents AnonRep, a practical anonymous reputation system offering the benefits of reputation without enabling long-term tracking. AnonRep users anonymously post messages, which they can verifiably tag with their reputation scores without leaking sensitive information. AnonRep reliably tallies other users’ feedback (e.g., likes or votes) without revealing the user’s identity or exact score to anyone, while maintaining security against score tampering or duplicate feedback. A working prototype demonstrates that AnonRep scales linearly with the number of participating users. Our experiments show that the latency for a user to generate anonymous feedback is less than ten seconds in a 10,000-user anonymity group.
Hyp3rArmor: Reducing Web Application Exposure to Automated Attacks
- William Koch (Boston University)
- Azer Bestavros (Boston University)
End-to-end IoT Security and Privacy
- Xinwen Fu (University of Massachusetts Lowell)
The emergence of the Internet of Things (IoT) makes it possible to connect smart devices, small actuators, and people to the Internet anywhere and anytime. However, on Oct. 21, 2016, a massive DDoS attack hit Dyn’s DNS servers and caused the shutdown of many network services, including Twitter. Behind it were hundreds of thousands of compromised IoT devices. In the wake of such attacks, IoT security and privacy is attracting great attention from both the academic research community and industry. To secure IoT, we have to look at the issues from the perspective of end-to-end security and privacy, investigating the vulnerabilities of various IoT devices in terms of their communication protocols, software security, and hardware security.
In this talk, we use an example IoT device, a smart plug, to demonstrate our work on attacks against IoT security and privacy. Smart plugs, one type of rapidly emerging IoT device, are gaining popularity in home automation: users can remotely monitor and control their homes, and various applications can be implemented on top of such a system. In this research, we present a case study of the security problems of a typical smart plug system from Edimax. The system has three components: the plug itself, cloud servers, and a control app running on devices such as smartphones. Through reverse engineering, we disclose its entire communication protocols and identify vulnerabilities that could open the door to a variety of attacks. We notified Edimax of the smart plug’s vulnerabilities in May 2016 and are the first to expose the plug’s security issues; the company is patching its system based on the information we provided.
Catena: Preventing Lies with Bitcoin
- Alin Tomescu (MIT)
- Srinivas Devadas (MIT)
We present Catena, an efficiently verifiable Bitcoin witnessing scheme. Catena enables any number of thin clients, such as mobile phones, to efficiently agree on a log of application-specific statements managed by an adversarial server. Catena implements a log as an OP_RETURN transaction chain and prevents forks in the log by leveraging Bitcoin’s security against double spends. Specifically, if a log server wants to equivocate, it has to double spend a Bitcoin transaction output. Thus, Catena logs are as hard to fork as the Bitcoin blockchain: an adversary without a large fraction of the network’s computational power cannot fork Bitcoin and thus cannot fork a Catena log either. However, unlike previous Bitcoin-based work, Catena decreases the bandwidth requirements of log auditors from 90 GB to only tens of megabytes. More precisely, our clients only need to download all Bitcoin block headers (currently less than 35 MB) and a small, 600-byte proof for each statement in a block. We implemented Catena in Java using the bitcoinj library and used it to extend CONIKS, a recent key transparency scheme, to witness its public-key directory in the Bitcoin blockchain, where it can be efficiently verified by auditors. We show that Catena can be used to secure many systems today, such as public-key directories, Tor directory servers, or software transparency schemes.
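The chain invariant that turns equivocation into a double spend can be sketched in a few lines. This is a hypothetical illustration of the idea only, not the authors' bitcoinj-based Java implementation: the transaction representation, field names, and function are all simplified placeholders.

```python
# Illustrative sketch of the invariant a Catena-style client checks:
# each log statement rides in a transaction that spends exactly the
# previous log transaction's output, so publishing two successors for
# the same statement would require a Bitcoin double spend.

from collections import namedtuple

# spends = txid of the previous log transaction whose output this tx consumes
Tx = namedtuple("Tx", ["txid", "spends", "statement"])

def verify_log(txs, genesis_txid):
    """Check that txs form a single unbroken spend chain from genesis_txid."""
    prev = genesis_txid
    for tx in txs:
        if tx.spends != prev:
            return False   # fork or gap in the chain
        prev = tx.txid
    return True

log = [Tx("a1", "genesis", "stmt-1"),
       Tx("b2", "a1", "stmt-2"),
       Tx("c3", "b2", "stmt-3")]
complete = verify_log(log, "genesis")            # intact chain verifies
gapped = verify_log([log[0], log[2]], "genesis") # missing link is detected
```

A real client would additionally verify that each transaction is confirmed in the Bitcoin blockchain (via block headers and Merkle proofs), which is what bounds the auditor's bandwidth to the headers plus a small per-statement proof.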
Anomaly Detection in Computer Networks Using Robust Principal Component Analysis
- Randy Paffenroth (WPI)
- Anura Jayasumana (CSU)
- Kathleen Kay (WPI)
- Louis Scharf (CSU)
- Chong Zhou (WPI)
In this talk we will present theory and algorithms for detecting weak distributed patterns in network data. The patterns we seek are sparse correlations between signals recorded at sensor nodes across a network. We are especially interested in detecting weak patterns in computer networks where the nodes (terminals, routers, servers, etc.) are sensors that provide measurements (of packet rates, user activity, CPU usage, IDS logs, etc.). One of the key concepts in our work is a focus on detecting anomalous correlations between sensors rather than targeting outliers, or other similar phenomena, for individual sensors. In particular, we use robust matrix completion and second-order analysis to detect distributed patterns that are not discernible at the level of individual sensors. When viewed independently, the data at each node cannot provide a definitive determination of the underlying pattern, but when fused with data from across the network the relevant patterns emerge. The approach is applicable to many other types of sensor networks, including wireless networks, mobile sensor networks, and social networks where correlated phenomena are of interest.
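As a rough illustration of the low-rank-plus-sparse decomposition underlying this style of anomaly detection, the sketch below implements basic principal component pursuit via ADMM: observations are split into a low-rank "background" component and a sparse component that captures anomalies. The parameter choices follow common defaults from the robust PCA literature; this is not the authors' algorithm, which further employs robust matrix completion and second-order analysis.

```python
# Minimal robust PCA sketch: M ~ L (low-rank) + S (sparse), solved by
# ADMM for principal component pursuit. Illustrative defaults only.
import numpy as np

def shrink(X, tau):
    """Elementwise soft-thresholding (proximal operator of the l1 norm)."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svd_shrink(X, tau):
    """Singular-value thresholding (proximal operator of the nuclear norm)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * shrink(s, tau)) @ Vt

def rpca(M, n_iter=500):
    """Decompose M into a low-rank part L and a sparse part S."""
    lam = 1.0 / np.sqrt(max(M.shape))        # standard sparsity weight
    mu = M.size / (4.0 * np.abs(M).sum())    # common step-size heuristic
    S = np.zeros_like(M)
    Y = np.zeros_like(M)                     # scaled dual variable
    for _ in range(n_iter):
        L = svd_shrink(M - S + Y / mu, 1.0 / mu)
        S = shrink(M - L + Y / mu, lam / mu)
        Y = Y + mu * (M - L - S)             # enforce M = L + S
    return L, S

# Rank-1 "normal traffic" background plus two injected anomalies.
rng = np.random.default_rng(0)
background = np.outer(rng.standard_normal(30), rng.standard_normal(30))
anomalies = np.zeros((30, 30))
anomalies[5, 7] = 10.0
anomalies[20, 3] = -10.0
M = background + anomalies
L_hat, S_hat = rpca(M)
```

Entries with large magnitude in the recovered sparse component flag sensor/time cells that deviate from the correlated background, which is the kind of distributed signal that per-sensor outlier detection would miss.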
In Depth Enforcement of Dynamic Integrity Taint Analysis
- Christian Skalka (University of Vermont)
- Sepehr Amir-Mohammadian (University of Vermont)
Dynamic taint analysis can be used as a defense against low-integrity data in applications with untrusted user interfaces. An important example is defense against XSS and injection attacks in programs with web interfaces. Data sanitization is commonly used in this context, and can be treated as a precondition for endorsement in a dynamic integrity taint analysis. However, sanitization is often incomplete in practice. We develop a model of dynamic integrity taint analysis for Java that addresses imperfect sanitization by combining access control and audit-based measures in a uniform policy framework. We then use this policy to establish correctness conditions for a program rewriting algorithm that instruments code for the analysis. The taint analysis specification can also be shown to support a security property related to so-called explicit secrecy.
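The combination of sanitization-as-endorsement with both access control and audit can be sketched compactly. This toy Python example (the authors' system targets Java) uses hypothetical names and a trivial HTML-escaping sanitizer; it is a sketch of the policy idea, not the authors' program-rewriting instrumentation.

```python
# Toy sketch: values from untrusted sources carry an integrity taint;
# sanitization is the precondition for endorsement; sinks block
# un-endorsed tainted data (access control) and log endorsed data
# (audit), in case the sanitizer turns out to be incomplete.
import html

audit_log = []

class Tainted(str):
    endorsed = False     # default: low integrity, not endorsed

def from_user(s):
    return Tainted(s)    # source: mark user input as tainted

def sanitize(v):
    clean = Tainted(html.escape(v))
    clean.endorsed = True            # endorsement follows sanitization
    return clean

def render(v):
    """Sink: refuse un-endorsed tainted data; audit endorsed data."""
    if isinstance(v, Tainted):
        if not v.endorsed:
            raise PermissionError("tainted value reached a sink")
        audit_log.append(str(v))     # audit trail for later review
    return str(v)

page = render(sanitize(from_user("<script>alert(1)</script>")))

blocked = False
try:
    render(from_user("<b>raw</b>"))  # unsanitized input is rejected
except PermissionError:
    blocked = True
```

The audit record is what addresses imperfect sanitization: even endorsed values remain accountable after the fact, rather than being trusted unconditionally once they pass the sanitizer.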
Intra-cloud and Inter-cloud Authentication is No Good
- Kevin Walsh (College of the Holy Cross)
- Tom Roeder (Google)
- John Manferdelli (Google)
Authentication mechanisms available in existing cloud platforms are inadequate and poorly suited to modern cloud-based systems. Building a system as a collection of small, mutually-suspicious components is a fundamental and well-established technique to make systems easier to analyze and to improve robustness to partial failures or compromises. In the cloud, these ideas underlie trends towards micro-services and containers, in which systems are built as collections of potentially many semi-independent, interconnected components, and trends towards Software-as-a-Service, where the components of a system are not only architecturally isolated from each other, but execute under different management and security domains. To realize the potential benefits of such approaches, a component must be able to reliably determine the source of messages it receives. Yet in many deployed cloud systems, good mechanisms for authentication are sorely lacking. When components authenticate messages at all, they often rely on ad hoc, retrofitted mechanisms. With few exceptions, the mechanisms in use were not designed for the cloud and are poorly suited for use in such environments. A wide variety of mutually-incompatible authentication mechanisms are used where a single mechanism would suffice, and implicit dependencies enlarge the trusted computing base unnecessarily. As evidence, we survey authentication mechanisms available in major cloud platforms, and we detail the many and varied ways authentication is actually performed in one large open-source cloud-based service. From this we draw observations and make suggestions for improving inter-cloud and intra-cloud authentication. In essence, we argue that authentication mechanisms for the cloud should take as an initial premise that the principals being authenticated are code, not people, and the differences between these should be explicitly accounted for.
Beyond “I have a bad feeling about this”: Jedi CWEs for parser weaknesses
- Sergey Bratus (Dartmouth)
- Falcon Momot
- Sven Hallberg
- Meredith L. Patterson
Many famous exploitable bugs of the past few years, such as Heartbleed, Android Master Key, etc., have been parser bugs. These parsers tended to give experienced code auditors the proverbial “bad feeling about this”. However, a feeling is not a finding and is not actionable. Until there is an agreement on which programming patterns are likely to lead to parser vulnerabilities, auditors are powerless to stop the spread of weak parser code.
We propose a system of CWEs that capture the most pernicious patterns of parser design, in the hope of giving auditors a weapon to dissect and excise dangerous input-handling code.
A Vision for Trustworthy Bare Metal Clouds
- Jason Hennessey (Boston University)
- Nabil Schear (MIT Lincoln Labs)
- Trammell Hudson (Two Sigma)
- Orran Krieger (Boston University)
- Gerardo Ravago (Boston University)
- Kyle Hogan (Boston University)
- Ravi S. Gudimetla (Northeastern University)
- Larry Rudolph (Two Sigma)
- Mayank Varia (Boston University)
Most modern production clouds use Virtual Machines (VMs) as the primary unit of cloud computing. This offers resource utilization benefits to the cloud provider and cuts costs to the tenants, but may not be appropriate for all use cases. Tenants concerned about the large trusted computing base of virtualized clouds, side-channel attacks, privacy against the cloud providers themselves, maximum performance, use of specialized hardware like GPUs and InfiniBand, or predictable latency, performance, and network availability would benefit from bare metal resource allocation. This need has been recognized, leading to the development of tools for managing bare metal clusters and commercial implementations.
Providing tenants with remote root-level access to the hardware, however, creates a trust issue between past and future tenants of a machine. Future tenants would be able to read any state a previous tenant leaves in RAM or on disk. Also, with root access to the machine, tenants may have the capability to modify its system or peripheral firmware, allowing a tenant to introduce malware that could compromise future tenants of that machine.
We are exploring how a cloud provider could design and implement a security-focused bare metal cloud that would prevent these issues while still allowing tenants root access to bare metal nodes. We plan to make available to tenants the primitives necessary to enable them to achieve the level of assurance they require while reducing the level of trust tenants are required to have in the cloud provider. Such primitives could also add defense in depth to virtualization-based clouds in the case of a hypervisor vulnerability.
Measuring Protocol Strength with Security Goals
- Joshua Guttman (WPI and MITRE)
- Moses D. Liskov
- Paul D. Rowe
Flaws in published standards for security protocols are found regularly, often after systems implementing those standards have been deployed. Because of deployment constraints and disagreements among stakeholders, different fixes may be proposed and debated. In this process, security improvements must be balanced with issues of functionality and compatibility.
This paper provides a family of rigorous metrics for protocol security improvements. These metrics are sets of first-order formulas in a goal language associated with a protocol. The semantics of the goal language is compatible with many ways to analyze protocols, and some metrics in this family are supported by many protocol analysis tools. Other metrics are supported by our Cryptographic Protocol Shapes Analyzer (CPSA).
This family of metrics refines several “hierarchies” of security goals in the literature. Our metrics are applicable even when, to mitigate a flaw, participants must enforce policies that constrain protocol execution. We recommend that protocols submitted to standards groups characterize their goals using formulas in the language, and that discussions comparing alternative protocol refinements measure their security in these terms.
Characterizing the Nature and Dynamics of Tor Exit Blocking
- Rachee Singh (UMass Amherst)
- Rishab Nithyanand (Stony Brook University)
- Sadia Afroz (UC Berkeley and ICSI)
- Michael Tschantz (ICSI)
- Phillipa Gill (UMass Amherst)
- Vern Paxson (UC Berkeley and ICSI)
The growth of the Tor network has provided a boon both for users seeking to protect their anonymity and, unfortunately, for miscreants who leverage the network to cloak abusive activities. These dual uses have led to tensions between Tor users and content providers, who—in their efforts to block undesired traffic—often end up blocking Tor users en masse.
In this study we characterize the blocking imposed on Tor exit nodes with consideration of the nature of the traffic originating from the Tor network—a task complicated by the fact that measuring activities of Tor users is antithetical to the goals of the system. We grapple with this challenge by leveraging multiple independent data sources to understand problematic traffic originating from the Tor network. To understand the impact of malicious Tor usage, we characterize the discrimination of Tor users by Web sites, including more subtle forms than complete blocking.