Date/Time
Date(s) - 02/16/2022
12:00 PM - 1:00 PM
To watch the recorded webinar, click on the recording.
Speaker:
Dr. Aydin Aysu, North Carolina State University
Abstract:
Intellectual Property (IP) theft of trained machine learning (ML) models through side-channel attacks on inference engines is becoming a major threat. Indeed, several recent works have demonstrated reverse engineering of model internals using such attacks, but research on building defenses remains largely unexplored. There is a critical need to efficiently and securely transfer those defenses from cryptography to ML frameworks. A common defense technique is called masking, which randomizes all intermediate computations while preserving the same functionality. Although masking is well understood in cryptography, its extension to ML is non-trivial. In this talk, I will explain different mechanisms to mask neural networks in hardware and describe the related opportunities and challenges. I will first discuss how a straightforward masking adaptation still leaks side-channel information about neural networks and how to address this vulnerability. I will then describe a fundamentally new approach that redefines neural networks to make them easier to mask in hardware.
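To illustrate the masking idea mentioned in the abstract, the sketch below shows additive (arithmetic) masking applied to a neural-network dot product over fixed-point integers. This is a minimal illustration under assumed choices (a two-share split, a 16-bit modulus, and names such as mask, masked_dot, and unmask), not the speaker's implementation.

import secrets

MOD = 2**16  # illustrative modulus for fixed-point arithmetic

def mask(x: int) -> tuple[int, int]:
    # Split x into two additive shares whose sum mod MOD equals x.
    # Each share on its own is uniformly random, so intermediate
    # values processed by the hardware carry no information about x.
    r = secrets.randbelow(MOD)
    return r, (x - r) % MOD

def masked_dot(weights, x_shares):
    # Linear operations can be applied to each share independently,
    # so the true (unmasked) partial sums never appear in memory.
    acc0, acc1 = 0, 0
    for w, (s0, s1) in zip(weights, x_shares):
        acc0 = (acc0 + w * s0) % MOD
        acc1 = (acc1 + w * s1) % MOD
    return acc0, acc1

def unmask(shares) -> int:
    # Recombining the shares recovers the same functional result.
    s0, s1 = shares
    return sum(shares) % MOD

weights = [3, 5, 7]
inputs = [10, 20, 30]
x_shares = [mask(x) for x in inputs]
assert unmask(masked_dot(weights, x_shares)) == sum(w * x for w, x in zip(weights, inputs)) % MOD

Linear layers can be computed share-wise as above, but non-linear activations such as ReLU cannot be split so simply, which is one reason extending masking from cryptography to neural networks is non-trivial.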
Speaker Bio:
Registration
Bookings are closed for this event.