SafetyNets: Verifiable Execution of Deep Neural Networks on an Untrusted Cloud

Zahra Ghodsi, Tianyu Gu, and Siddharth Garg

Inference using deep neural networks is often outsourced to the cloud because it is computationally demanding. However, this raises a fundamental question of trust: how can a client be sure that the cloud has performed inference correctly? A lazy cloud provider might substitute a simpler but less accurate model to reduce its own computational load, or worse, maliciously modify the inference results sent to the client. We propose SafetyNets, a framework that enables an untrusted server (the cloud) to provide a client with a short mathematical proof of the correctness of the inference tasks it performs on behalf of the client.
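SafetyNets' actual construction is a specialized interactive proof protocol, but the core intuition, that verifying a result can be far cheaper than recomputing it, can be illustrated with a simpler, classic technique. The sketch below uses Freivalds' randomized check to verify a single linear layer's output Y = W·X using only matrix-vector products; this is a stand-in for intuition, not SafetyNets' protocol, and all names, shapes, and the trial count are illustrative.

```python
import numpy as np

def freivalds_verify(W, X, Y, trials=20):
    """Probabilistically check that Y == W @ X without recomputing the product.

    Each trial multiplies by a random 0/1 vector, costing O(n^2) work instead
    of the O(n^3) needed to recompute W @ X. An incorrect Y is accepted with
    probability at most 2**(-trials).
    """
    n = Y.shape[1]
    for _ in range(trials):
        r = np.random.randint(0, 2, size=(n, 1))   # random challenge vector
        # Both sides reduce to cheap matrix-vector products.
        if not np.array_equal(W @ (X @ r), Y @ r):
            return False                            # caught an incorrect result
    return True                                     # consistent with Y == W @ X

# Example: a "server" computes a layer's pre-activations; the client spot-checks.
rng = np.random.default_rng(0)
W = rng.integers(-5, 5, size=(64, 128))   # layer weights
X = rng.integers(-5, 5, size=(128, 32))   # batch of inputs
Y = W @ X                                 # honest server's result
assert freivalds_verify(W, X, Y)

Y_bad = Y.copy()
Y_bad[0, 0] += 1                          # tampered result
assert not freivalds_verify(W, X, Y_bad)  # rejected with overwhelming probability
```

The example works over integers so that exact equality is well-defined; a verifiable-inference protocol likewise needs arithmetic where the check is exact rather than subject to floating-point rounding.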