
Machine Learning with Adversaries: Byzantine Tolerant Gradient Descent

Peva Blanchard (EPFL), El Mahdi El Mhamdi (EPFL), Rachid Guerraoui (EPFL), Julien Stainer (EPFL)

We study the resilience to Byzantine failures of distributed implementations of Stochastic Gradient Descent (SGD). So far, distributed machine learning frameworks have largely ignored the possibility of failures, especially arbitrary (i.e., Byzantine) ones. Causes of failures include software bugs, network asynchrony, biases in local datasets, as well as attackers trying to compromise the entire system. Assuming a set of n workers, up to f of them Byzantine, we ask how resilient SGD can be, without limiting the dimension nor the size of the parameter space. We first show that no gradient aggregation rule based on a linear combination of the vectors proposed by the workers (i.e., current approaches) tolerates a single Byzantine failure. We then formulate a resilience property of the aggregation rule capturing the basic requirements to guarantee convergence despite f Byzantine workers.
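To make the impossibility result for linear aggregation concrete, here is a minimal sketch (an assumed illustration in NumPy, not code from the paper) of why plain averaging fails: a single Byzantine worker that can observe or estimate the honest gradients can force the aggregate to equal any vector it chooses.

```python
import numpy as np

# Minimal sketch (not the paper's code): under simple averaging, one
# Byzantine worker that knows the honest gradients can steer the
# aggregated gradient to an arbitrary target vector.
rng = np.random.default_rng(0)
n, d = 10, 5                              # workers, parameter dimension
honest = rng.normal(size=(n - 1, d))      # gradients from n-1 honest workers
target = np.full(d, 1e6)                  # arbitrary vector the attacker wants

# Propose n*target - sum(honest) so the mean of all n vectors equals target.
byzantine = n * target - honest.sum(axis=0)
aggregate = np.vstack([honest, byzantine]).mean(axis=0)

assert np.allclose(aggregate, target)     # averaging gives no resilience
```

The same construction adapts to any fixed linear combination that puts a nonzero weight on the Byzantine worker's vector, which is why the paper rules out the entire class of linear aggregation rules.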

An aggregation rule that selects, among the proposed vectors, the vector "closest to the barycenter" (for example, by taking the vector that minimizes the sum of the squared distances to every other vector) might look appealing. Yet such a squared-distance-based aggregation rule tolerates only a single Byzantine worker.
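For reference, a minimal sketch of the squared-distance-based rule described above (a medoid-style selection; the function name is hypothetical and this is an assumed illustration, not the paper's implementation):

```python
import numpy as np

def closest_to_barycenter(vectors: np.ndarray) -> np.ndarray:
    """Return the proposed vector that minimizes the sum of squared
    Euclidean distances to every other proposed vector (medoid-style).

    vectors: array of shape (n_workers, dimension), one proposal per row.
    """
    # Pairwise differences between all proposals, shape (n, n, d).
    diffs = vectors[:, None, :] - vectors[None, :, :]
    # Score of each proposal: sum of squared distances to all others
    # (the distance to itself contributes zero).
    scores = (diffs ** 2).sum(axis=-1).sum(axis=1)
    return vectors[int(np.argmin(scores))]
```

Roughly speaking, the rule's weakness is that two colluding Byzantine workers can defeat it: one proposes a distant vector that shifts the barycenter, so that the other colluder's malicious proposal ends up with the lowest score and is selected.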


