Remote PhD Dissertation Defense: Abhishek Chakraborty

Friday, April 9, 2021
11:00 a.m.
Zoom link
Maria Hoo
301 405 3681

ANNOUNCEMENT: Remote PhD Dissertation Defense
Name: Abhishek Chakraborty
Professor Ankur Srivastava, Chair/Advisor
Professor Dana Dachman-Soled
Professor Manoj Franklin
Professor Gang Qu
Professor John Aloimonos

Date/Time: Friday, April 9, 2021 at 11:00 am

Location: Zoom link
Title: Design Techniques for Enhancing Hardware-Oriented Security Using Obfuscation


The increasing trend of outsourcing hardware designs to offshore foundries to reduce fabrication costs has raised several security concerns related to intellectual property (IP) piracy, reverse engineering, and counterfeiting. The exposure of chip designs to a potentially malicious offshore foundry is a major concern for both government and private organizations, and hence there has been extensive research on the security and privacy of the integrated circuit (IC) supply chain. In this dissertation, we study the effectiveness of hardware-oriented obfuscation approaches for enhancing security and trust at different levels of design abstraction.

At the circuit level of design abstraction, we analyze the security offered by a state-of-the-art technique called delay locking, which uses a secret key to obfuscate both the functionality and the timing profile of a circuit so that critical design details are not exposed to an untrusted foundry. We propose a novel Boolean satisfiability (SAT) formulation-based attack that defeats the delay-locking countermeasure by utilizing the detailed timing characterization of the gates in a circuit. Subsequently, we develop a new circuit-level obfuscation technique called stripped-functionality delay locking, which is provably secure against all known attacks on logic locking. In addition, we analyze the vulnerability of circuit-level obfuscation schemes to power side-channel analysis attacks.
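To give a flavor of how SAT formulation-based attacks on logic locking operate, the sketch below shows the classic oracle-guided key-pruning loop on a toy XOR-locked gate. This is an illustration only, not the dissertation's attack: the circuit, the key, and the brute-force search over distinguishing inputs (DIPs) are hypothetical stand-ins for what a real attack encodes as SAT instances over the netlist.

```python
from itertools import product

# Toy locked circuit: a 2-input AND gate whose inputs are XOR-masked
# by a 2-bit key (a common XOR-based logic-locking illustration).
def locked_circuit(x, key):
    a = x[0] ^ key[0]
    b = x[1] ^ key[1]
    return a & b

CORRECT_KEY = (0, 1)           # hypothetical secret key

def oracle(x):
    # An activated chip serves as a black-box oracle for correct outputs.
    return locked_circuit(x, CORRECT_KEY)

def sat_style_attack(n_inputs=2, n_keys=2):
    # Start with every key hypothesis alive, then prune with
    # distinguishing inputs (DIPs), mirroring the SAT attack's
    # miter loop (which finds DIPs with a SAT solver instead).
    candidates = set(product((0, 1), repeat=n_keys))
    inputs = list(product((0, 1), repeat=n_inputs))
    while len(candidates) > 1:
        # A DIP is an input on which two surviving keys disagree.
        dip = next(
            (x for x in inputs
             if len({locked_circuit(x, k) for k in candidates}) > 1),
            None)
        if dip is None:          # all remaining keys are equivalent
            break
        truth = oracle(dip)      # one oracle query per DIP
        candidates = {k for k in candidates
                      if locked_circuit(dip, k) == truth}
    return candidates

print(sat_style_attack())  # → {(0, 1)}: only the correct key survives
```

Each iteration eliminates every key that disagrees with the oracle on the current DIP, so the candidate set shrinks monotonically until only functionally correct keys remain.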

Next, we study the limitations of circuit-level obfuscation approaches in providing reasonable security guarantees at the architecture level of design abstraction. We demonstrate the applicability of an iterative SAT formulation-based attack strategy against a many-core processor design (obfuscated using circuit-level techniques) to find an approximate key that runs applications with almost no errors. Such an attack poses a major threat to the supply chain of processor designs because, unlike earlier attack strategies, it does not require any activated hardware for the SAT formulation-based analysis. Subsequently, we develop two efficient architecture-level locking techniques that are highly resilient to SAT-based attacks.

Finally, we develop a hardware-assisted obfuscation framework for protecting the IP of neural network (NN) models, thus enhancing application-level security. Generating production-level NN models is not a trivial task: it requires long training times on high-power computing resources as well as massive amounts of labeled training data. Hence, protecting the IP rights of well-trained NN models has become a matter of major concern for model owners. In this research direction, we demonstrate a hardware root-of-trust-based obfuscation approach to safeguard the IP of such NN models. Our proposed framework ensures that only authorized end users who possess trusted edge devices are able to run the intended applications with high accuracy.
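The core idea of key-based model IP protection can be sketched as follows: the released weights are masked with a keystream derived from a device-held secret, so only hardware that holds the secret recovers a usable model. This is a minimal illustration under assumed details, not the dissertation's framework; the secret name, the SHA-256-based keystream, and the additive masking scheme are all hypothetical.

```python
import hashlib
import struct

# Hypothetical device-bound secret (e.g., derived from a hardware
# root of trust); the name and derivation here are illustrative only.
DEVICE_SECRET = b"trusted-edge-device-key"

def keystream(secret, n):
    # Derive n deterministic pseudo-random floats in [-1, 1) from
    # the secret by hashing a counter (a simple CTR-style expansion).
    out = []
    counter = 0
    while len(out) < n:
        h = hashlib.sha256(secret + counter.to_bytes(4, "big")).digest()
        for i in range(0, 32, 4):
            (u,) = struct.unpack(">I", h[i:i + 4])
            out.append(u / 2**31 - 1.0)
        counter += 1
    return out[:n]

def obfuscate(weights, secret):
    # The model owner adds a secret mask to each weight before release.
    mask = keystream(secret, len(weights))
    return [w + m for w, m in zip(weights, mask)]

def deobfuscate(obf_weights, secret):
    # Only a device holding the secret recovers usable weights.
    mask = keystream(secret, len(obf_weights))
    return [w - m for w, m in zip(obf_weights, mask)]

weights  = [0.5, -1.25, 2.0, 0.125]
released = obfuscate(weights, DEVICE_SECRET)
restored = deobfuscate(released, DEVICE_SECRET)
wrong    = deobfuscate(released, b"attacker-guess")
print(restored)  # recovers the original weights (up to float rounding)
print(wrong)     # heavily perturbed weights: accuracy collapses
```

With the correct secret the mask cancels out, while any other secret yields a different keystream and hence effectively random weights, which is what ties model accuracy to possession of the trusted device.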

Audience: Graduate Faculty


