Projects

Bypassing Tor Exit Blocking

Tor exit blocking, in which websites disallow clients arriving from Tor, is a growing and potentially existential threat to the anonymity network. We introduce two architectures that provide ephemeral exit bridges for Tor which are difficult to enumerate and block. Our techniques employ a micropayment system that compensates exit bridge operators for their services, and a privacy-preserving reputation scheme that prevents freeloading. We show that our exit bridge architectures effectively thwart server-side blocking of Tor with little performance overhead.

SecLab participants:

More info…

Security and Privacy for Distributed Machine Learning and Optimization

Consider a network of agents in which each agent has a private cost function. In the context of distributed machine learning, an agent’s private cost function may represent the “loss function” over that agent’s local data. The objective is to identify parameters that minimize the total cost across all agents. In classification, for example, the cost function is designed so that minimizing it yields model parameters that achieve high classification accuracy. Similar optimization problems arise in other applications as well.
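
The objective above can be sketched with a toy decentralized gradient method (this is an illustrative sketch, not the project’s algorithm; the quadratic costs, ring topology, and step size are all assumptions). Each agent holds a private cost, mixes its estimate with its neighbors’, and takes a local gradient step; all agents approach the minimizer of the total cost, the mean of their private targets.

```python
import numpy as np

# Hypothetical toy setup: agent i holds the private cost
# f_i(x) = 0.5 * (x - c_i)^2, so the global objective sum_i f_i(x)
# is minimized at the mean of the c_i. Only gradients are exchanged
# implicitly; no agent reveals its raw target c_i.
rng = np.random.default_rng(0)
targets = rng.uniform(-5, 5, size=4)  # each agent's private data
x = np.zeros(4)                       # each agent's local estimate

# Doubly stochastic mixing matrix for a 4-agent ring.
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])

step = 0.1
for _ in range(500):
    grads = x - targets           # gradient of each agent's local cost
    x = W @ x - step * grads      # consensus step + local gradient step

# With a constant step size the agents agree only up to O(step),
# but their average matches the global minimizer.
print(x, targets.mean())
```

A diminishing step size (or gradient tracking) would drive the agents to exact consensus; the constant step is kept here for brevity.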

This project addresses the privacy and security of distributed optimization, with applications to machine learning. For privacy, the goal is to optimize the model parameters correctly while preserving the privacy of each agent’s local data. For security, the goal is to identify the correct model parameters while tolerating adversarial agents that may supply incorrect information. As more agents participate in distributed optimization, the compromise or failure of some agents becomes increasingly likely.
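
One standard way to tolerate adversarial agents (not necessarily the scheme studied in this project) is robust aggregation of the agents’ updates. The sketch below uses a coordinate-wise trimmed mean, which discards the extremes in each coordinate so that up to f Byzantine agents cannot drag the aggregate arbitrarily far; the gradient values are made up for illustration.

```python
import numpy as np

def trimmed_mean(updates, f):
    """Coordinate-wise trimmed mean: sort each coordinate across agents,
    drop the f largest and f smallest values, and average the rest.
    Requires n > 2f so at least one value survives per coordinate."""
    arr = np.sort(np.asarray(updates), axis=0)
    return arr[f:len(updates) - f].mean(axis=0)

# Hypothetical example: 5 honest agents report gradients near [1, -2];
# 2 Byzantine agents report wildly incorrect values.
honest = [np.array([1.0, -2.0]) + 0.01 * i for i in range(5)]
byzantine = [np.array([1e6, -1e6]), np.array([-1e6, 1e6])]

agg = trimmed_mean(honest + byzantine, f=2)
print(agg)  # close to [1, -2] despite the outliers
```

A plain average of the same seven updates would be dominated by the two adversarial reports, which is exactly the failure mode robust aggregation guards against.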

SecLab participants:

Reliable Anonymous Communication Evading Censors And Repressors (RACECAR)

The Reliable Anonymous Communication Evading Censors And Repressors (RACECAR) project develops unobservable and compromise-resistant obfuscation channels (sometimes called pluggable transports) for censorship-resistant communication.

RACECAR is a collaborative project among Georgetown University, the U.S. Naval Research Laboratory, and the Tor Project, and is funded through the DARPA RACE project.

SecLab participants:

  • Micah Sherr (PI)
  • Eric Burger
  • Clay Shields
  • Samanta Troper
  • Ryan Wails
  • Wenchao Zhou

More info…

Smart DNS

Smart DNS (SDNS) services advertise access to geofenced content, or content that is normally inaccessible unless the client is within a prescribed geographic region (typically, video streaming sites such as Netflix or Hulu). SDNS is simple to use and involves no software installation. Instead, it requires only that users modify their DNS settings to point to an SDNS resolver. The SDNS resolver “smartly” identifies geofenced domains and, in lieu of their proper DNS resolutions, returns IP addresses of proxy servers located within the geofence. These servers then transparently proxy traffic between the users and their intended destinations, allowing users to bypass these geographic restrictions. In this project, we explore the architecture of SDNS systems, the ecosystem of SDNS service providers, and the privacy and security risks associated with using these systems.
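
The resolution behavior described above can be sketched as a plain lookup function (a minimal sketch: the domain names and addresses are hypothetical, and a real SDNS service implements this inside a recursive DNS resolver rather than as a Python function):

```python
# Domains the service treats as geofenced, mapped to the in-region
# proxy server that will transparently relay the client's traffic.
# Addresses are from documentation-reserved ranges (hypothetical).
GEOFENCED_PROXIES = {
    "streaming.example.com": "203.0.113.10",  # proxy inside the geofence
}

def normal_resolution(domain):
    # Stand-in for an ordinary recursive DNS lookup.
    return "198.51.100.7"

def sdns_resolve(domain):
    """Return the proxy's address for geofenced domains; otherwise
    resolve normally, like any other recursive resolver."""
    if domain in GEOFENCED_PROXIES:
        return GEOFENCED_PROXIES[domain]
    return normal_resolution(domain)

print(sdns_resolve("streaming.example.com"))  # proxy address
print(sdns_resolve("unrelated.example.org"))  # normal resolution
```

Note the privacy implication visible even in this sketch: the SDNS operator sees every domain the client queries, and for geofenced domains it also sits on the path of the client’s subsequent traffic.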

SecLab participants:

  • Rahel Fainchtein
  • Micah Sherr
  • Adam Aviv (The George Washington University)

Hidden Voice Commands

Voice interfaces are becoming increasingly ubiquitous and are now the primary input method for many devices. In this project, we explore how they can be attacked with hidden voice commands that are unintelligible to human listeners but are interpreted as commands by devices. We evaluate these attacks under two threat models. In the black-box model, an attacker uses the speech recognition system as an opaque oracle; we show that the adversary can produce difficult-to-understand commands that are effective against existing systems. In the white-box model, the attacker has full knowledge of the internals of the speech recognition system and uses it to create attack commands that, as we demonstrate through user testing, are not understandable by humans. We then evaluate several defenses, including notifying the user when a voice command is accepted, a verbal challenge-response protocol, and a machine learning approach that detects our attacks with 99.8% accuracy.
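
The black-box model can be illustrated with a toy search loop (every function here is a hypothetical stand-in, not the actual attack: the “audio” is a list of samples, the oracle is a fake threshold recognizer, and the mangling step is simple attenuation plus noise). The attacker only observes accept/reject decisions and keeps degrading the audio until just before the recognizer stops accepting it:

```python
import random

random.seed(1)

def recognizer(audio):
    # Stand-in opaque oracle: "recognizes" the command while the mean
    # amplitude stays above a threshold the attacker does not know.
    return sum(abs(s) for s in audio) / len(audio) > 0.3

def degrade(audio, noise=0.05):
    # Stand-in mangling step: attenuate and slightly perturb each sample,
    # making the audio harder for a human to understand.
    return [0.95 * s + random.uniform(-noise, noise) for s in audio]

audio = [1.0] * 100                # clean command audio (toy signal)
while True:
    candidate = degrade(audio)
    if not recognizer(candidate):  # stop just before recognition fails
        break
    audio = candidate

assert recognizer(audio)           # maximally degraded, still accepted
```

The real attack searches over audio features rather than raw amplitude, but the structure is the same: the oracle’s binary feedback alone is enough to push a command toward human unintelligibility.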

SecLab participants:

  • Micah Sherr (PI)
  • Clay Shields
  • Tavish Vaidya
  • Yuankai Zhang
  • Wenchao Zhou

More info…