MC2 Researchers Have Six Papers Accepted to USENIX Security Symposium
Faculty, postdocs and students in the Maryland Cybersecurity Center (MC2) had six papers accepted to the 2018 USENIX Security Symposium.
The annual event—held this year from August 15–17 in Baltimore—brings together researchers, practitioners, system administrators, system programmers and others to share and explore the latest advances in the security and privacy of computer systems and networks.
The MC2 researchers are presenting on a wide array of security-related topics, including targeted poisoning attacks against machine learning systems, compromised code-signing certificates, enterprise-level cyberattacks, and more.
“The USENIX Security Symposium is one of the top conferences in the field of cybersecurity,” says Jonathan Katz, professor of computer science and director of MC2. “Having six papers accepted is an incredible accomplishment, and speaks to the quality and diversity of work being done in MC2.”
One paper creating a lot of buzz at the conference involves a processor vulnerability issue that could put millions of computers at risk.
“Foreshadow: Extracting the Keys to the Intel SGX Kingdom with Transient Out-of-Order Execution” examines a newly discovered processor vulnerability that could have potentially put secure information at risk in almost every Intel-based machine manufactured since 2008. The flaw is similar to Spectre and Meltdown, the hardware-based attacks that shook the computer security world earlier this year.
The paper is co-authored by Daniel Genkin, who just completed a postdoctoral fellowship in MC2 and is starting as an assistant professor in the Department of Electrical Engineering and Computer Science at the University of Michigan in September.
Genkin and his team—which includes researchers from the Technion–Israel Institute of Technology, University of Michigan, University of Adelaide, Data61 and imec-DistriNet at KU Leuven—were able to break several security safeguards present in most Intel-based machines.
One example was their exploitation of a security feature called Software Guard Extensions (SGX). At a high level, SGX creates a digital lockbox called an “enclave” inside the user’s machine, allowing applications to safely run inside the lockbox, while being completely isolated from the rest of the machine. Even if a security vulnerability compromises the entire machine, the data protected by SGX remains secure and inaccessible to anyone besides the owner. But Foreshadow is able to break that lockbox, the researchers say.
Another MC2 paper, “When Does Machine Learning FAIL? Generalized Transferability for Evasion and Poisoning Attacks,” introduces StingRay, a practical poisoning attack against machine-learning systems that causes the misclassification of a specific target.
The paper is authored by Tudor Dumitras, an assistant professor of electrical and computer engineering and member of MC2; Hal Daumé III, a UMD professor of computer science; computer science doctoral students Octavian Suciu and Yiğitcan Kaya; and Radu Mărginean, who previously completed an internship under Dumitras.
Machine-learning techniques drive the success of many applications, such as malware detectors tasked with classifying programs as benign or malicious. These techniques start from a training set that includes a few known benign and malicious examples, and learn models of malicious activity without requiring a predetermined description. In a poisoning attack, an adversary introduces a few specially crafted examples into the training set that cause the algorithm to learn a skewed model.
The StingRay attack developed by the researchers can induce targeted misclassifications, for example, causing a competitor’s benign application to be flagged as malware. The researchers demonstrated practical attacks of this type against four machine-learning systems: an Android malware detector, a Twitter-based vulnerability exploit detector, a data breach prediction system and an image recognition system. To achieve this, they developed a model of realistic adversaries, called FAIL, that evaluates the effectiveness of attacks against machine-learning systems. They ultimately found that StingRay is able to bypass two existing anti-poisoning defenses.
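The basic mechanic can be pictured with a toy sketch, far simpler than StingRay itself: a handful of mislabeled points placed next to a chosen target flip that target's prediction while leaving most other predictions unchanged. The classifier, data and numbers below are illustrative assumptions, not taken from the paper.

```python
# Toy illustration of a targeted poisoning attack (not the StingRay algorithm
# from the paper): a few mislabeled points placed next to one chosen sample
# flip that sample's prediction while most other predictions stay the same.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.RandomState(0)
X, y = make_classification(n_samples=200, n_features=2, n_redundant=0,
                           n_clusters_per_class=1, class_sep=2.0,
                           random_state=0)
target, true_label = X[0], y[0]   # the "benign app" the attacker wants flagged

clean = KNeighborsClassifier(n_neighbors=5).fit(X, y)
print("before poisoning:", clean.predict(target.reshape(1, -1))[0],
      "true label:", true_label)

# Craft a few poison points: near-copies of the target carrying the wrong label.
poison_X = target + 0.01 * rng.randn(8, 2)
poison_y = np.full(8, 1 - true_label)

poisoned = KNeighborsClassifier(n_neighbors=5).fit(
    np.vstack([X, poison_X]), np.concatenate([y, poison_y]))
print("after poisoning: ", poisoned.predict(target.reshape(1, -1))[0],
      "true label:", true_label)
```

Real systems leave room for this kind of manipulation because they routinely retrain on data gathered from sources an attacker can influence.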
Other MC2-authored papers being presented at the symposium are:
“The Broken Shield: Measuring Revocation Effectiveness in the Windows Code-Signing PKI” seeks to understand whether compromised code-signing certificates are revoked promptly and effectively.
The paper is authored by Dumitras; Doowon Kim, a computer science doctoral student; and Bum Jun Kwon, an electrical and computer engineering doctoral student.
Malware signed with compromised certificates can bypass operating-system protections and anti-virus scanners; the research team explains that the ability to revoke these certificates is crucial for protecting users. MC2 researchers had previously studied revocations of SSL certificates, which protect websites, but barriers to data collection on code signing have prevented studies of the revocation process in this ecosystem.
The team, which includes a former UMD intern now at Masaryk University as well as a researcher from Symantec Research Labs, collected seven datasets—including the largest corpus of code-signing certificates—and combined them to analyze the entire revocation process. The research shows that effective revocation is more challenging for code-signing certificates than it is for SSL certificates.
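For a single certificate, the revocation lookup itself is conceptually simple; a minimal sketch in Python (using the requests and cryptography libraries, with a hypothetical certificate file, and ignoring certificate chains, trusted timestamps and OCSP) might look like this:

```python
# Minimal sketch: check whether a code-signing certificate appears on the CRL
# published at its CRL distribution point. Assumes a PEM-encoded certificate
# on disk and a DER-encoded CRL at the advertised URL; real Authenticode
# validation (chains, timestamps, OCSP) is far more involved.
import requests
from cryptography import x509

def first_crl_url(cert: x509.Certificate) -> str:
    ext = cert.extensions.get_extension_for_class(x509.CRLDistributionPoints)
    for dp in ext.value:
        for name in dp.full_name or []:
            if isinstance(name, x509.UniformResourceIdentifier):
                return name.value
    raise ValueError("certificate advertises no CRL URL")

with open("signer.pem", "rb") as f:   # hypothetical path to the signer's certificate
    cert = x509.load_pem_x509_certificate(f.read())

crl = x509.load_der_x509_crl(requests.get(first_crl_url(cert), timeout=10).content)
entry = crl.get_revoked_certificate_by_serial_number(cert.serial_number)
if entry is None:
    print("not listed as revoked on this CRL")
else:
    # The revocation date matters: signatures time-stamped before it may still validate.
    print("revoked on", entry.revocation_date)
```

The paper's focus is on everything that surrounds this check at ecosystem scale: whether compromised certificates are revoked at all, how promptly, and whether the revocations actually take effect against already-signed malware.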
“From Patching Delays to Infection Symptoms: Using Risk Profiles for an Early Discovery of Vulnerabilities Exploited in the Wild” presents a method for inferring which vulnerabilities are being exploited in the wild, before observing an exploit.
The paper is authored by Dumitras and researchers from Harvard University, UC Santa Cruz, University of Illinois Urbana–Champaign and the University of Michigan.
The research team explains that their method is based on the intuition that variations in the patching rates for different vulnerabilities indicate which vulnerabilities present a higher risk of exploitation. The researchers utilized data previously collected by MC2 researchers through a large-scale measurement of patching delays for more than 1,000 vulnerabilities.
They also used measurements of infection symptoms (e.g., a large number of blacklisted IP addresses) to discover clusters of networks that exhibit similar patterns of compromise. They then created risk profiles for these clusters, using their recorded patching delays. Finally, they used those profiles to determine when a vulnerability is actively exploited. By observing up to 10 days’ worth of patching data, they found it is possible to detect vulnerability exploitation with a true-positive rate of 90 percent and a false-positive rate of only 10 percent.
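The clustering-and-profiling step can be pictured with a schematic sketch; the features, distributions and clustering method below are illustrative assumptions rather than the paper's actual model.

```python
# Schematic sketch (not the paper's model): group networks by infection-symptom
# features, then summarize each group's patching behavior as a simple risk profile.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.RandomState(1)
n_networks = 300
# Hypothetical symptom features per network, e.g. blacklisted-IP counts from two feeds.
symptoms = rng.poisson(lam=[3.0, 5.0], size=(n_networks, 2)).astype(float)
# Hypothetical median patching delay (in days) observed in each network.
patch_delay_days = rng.gamma(shape=2.0, scale=10.0, size=n_networks)

labels = KMeans(n_clusters=4, n_init=10, random_state=1).fit_predict(symptoms)
for c in range(4):
    profile = patch_delay_days[labels == c]
    print(f"cluster {c}: {len(profile):3d} networks, "
          f"median patch delay {np.median(profile):5.1f} days")
```

In the paper, profiles like these, built from real measurements rather than synthetic data, are combined with per-vulnerability patching observations to flag vulnerabilities that are likely being exploited.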
“Meltdown: Reading Kernel Memory from User Space” is also co-authored by Genkin, and brings to light a serious security flaw that allows hackers to bypass the hardware barrier between applications run by users and the computer’s core memory, which is normally highly protected. Meltdown exploits side-channel information available on most modern processors, including Intel microarchitectures released since 2010 and potentially processors from other vendors.
Genkin collaborated on this paper with researchers from Graz University of Technology, University of Adelaide, Data61, Rambus, and Cyberus Technology GmbH.
The research team found that the countermeasure KAISER (Kernel Address Isolation to have Side-channels Efficiently Removed)—originally developed to prevent side-channel attacks targeting KASLR (Kernel Address Space Layout Randomization) on Linux—had the important but inadvertent side effect of impeding Meltdown. Since the release of the paper earlier this year, patches have also been developed for Windows and macOS.
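On Linux, the kernel now reports the mitigation status for these issues through sysfs; a short, Linux-only check (assuming a kernel recent enough to expose these entries) looks like this:

```python
# Read the Linux kernel's self-reported mitigation status for Meltdown and
# related issues, including l1tf (the class of flaws Foreshadow belongs to).
# Older kernels simply do not expose these files.
from pathlib import Path

VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")
for name in ("meltdown", "spectre_v1", "spectre_v2", "l1tf"):
    entry = VULN_DIR / name
    status = entry.read_text().strip() if entry.exists() else "not reported by this kernel"
    print(f"{name:10s}: {status}")
```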
“The Battle for New York: A Case Study of Applied Digital Threat Modeling at the Enterprise Level” contains a case study in which formal threat modeling was introduced to the New York City Cyber Command, the city’s primary digital defense organization.
The paper is authored by Michelle Mazurek, an assistant professor of computer science and member of MC2, as well as computer science doctoral students Daniel Votipka, Elissa Redmiles and Rock Stevens.
The team, which includes researchers from NYC Cyber Command and Wake Forest University, explains that digital security professionals use threat modeling to assess and improve the security posture of an organization or product. However, no threat-modeling techniques have been systematically evaluated in a real-world, enterprise environment.
In their case study, the team found that threat modeling improved self-efficacy; 20 of 25 participants regularly incorporated it into their daily duties 30 days after training, without further prompting. After 120 days, the threat mitigation strategies that participants designed and implemented had provided tangible security benefits for the city, including blocking 541 unique intrusion attempts, preventing the hijacking of five privileged user accounts, and addressing three public-facing server vulnerabilities.
In addition to their work in MC2, Dumitras and Mazurek have appointments in the University of Maryland Institute for Advanced Computer Studies (UMIACS), as does Daumé.
MC2 is supported by the College of Computer, Mathematical, and Natural Sciences and the A. James Clark School of Engineering. It is one of 16 centers and labs in UMIACS.
—Story by Melissa Brachfeld
Published August 14, 2018