
Zero-Knowledge Proofs of Training for Deep Neural Networks

Abstract

A zero-knowledge proof of training (zkPoT) enables a party to prove that they have correctly trained a committed model on a committed dataset without revealing any additional information about the model or the dataset. An ideal zkPoT should offer provable security and privacy guarantees, succinct proof size and verifier runtime, and practical prover efficiency. In this work, we present Kaizen, a zkPoT targeted at deep neural networks (DNNs) that achieves all of the above ideals at once. In particular, our construction enables a prover to iteratively train their model by the (mini-batch) gradient-descent algorithm, where the number of iterations need not be fixed in advance; at the end of each iteration, the prover generates a commitment to the trained model, accompanied by a succinct zkPoT attesting to the correctness of the entire training process. The proof size and verifier time are independent of the number of iterations.
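To make the iterative structure concrete, here is a minimal, hypothetical sketch in Python of the commit-per-iteration pattern the abstract describes: a committed dataset, mini-batch gradient descent, and a running commitment that is folded once per iteration so its size stays constant no matter how many iterations are run. The plain hash commitments and the toy least-squares model are stand-ins of my own; a real zkPoT like Kaizen would use hiding, succinct cryptographic commitments and attach an actual zero-knowledge proof at each step, which this sketch does not do.

```python
import hashlib
import json

def commit(obj) -> str:
    # Toy hash-based commitment; a real scheme would use a hiding,
    # succinct commitment (e.g. a polynomial commitment), not a bare hash.
    return hashlib.sha256(json.dumps(obj, sort_keys=True).encode()).hexdigest()

def gradient_step(w, batch, lr=0.05):
    # One mini-batch gradient-descent step for least squares on y = w * x.
    grad = sum(2 * x * (w * x - y) for x, y in batch) / len(batch)
    return w - lr * grad

# Committed (toy) dataset: points on the line y = 2x.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0), (4.0, 8.0)]
c_data = commit(data)

w = 0.0
chain = commit({"data": c_data, "w": w})  # running commitment
for it in range(3):  # the number of iterations need not be fixed in advance
    start = (it % 2) * 2          # alternate between two mini-batches
    w = gradient_step(w, data[start:start + 2])
    # Fold the new model state into the chain: the commitment stays a
    # single fixed-size digest regardless of how many iterations ran.
    chain = commit({"prev": chain, "w": w})

# `chain` is a constant-size digest; `w` has moved toward the true slope 2.
```

This is only the bookkeeping half of the idea: the zero-knowledge part, which lets a verifier check the chain was built by honest gradient steps without seeing `data` or `w`, is exactly what the paper's construction provides.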


How can you know that a NN was trained correctly without having trained it already? This feels like the halting problem.
That's the whole point of ZKPs. The prover trains the model and generates the proof; anyone can then verify that the training happened correctly without learning anything else about the model or the dataset.
What’s the benefit?
Did you read the abstract?