Notes for "Large Scale Distributed Deep Networks" paper

Large Scale Distributed Deep Networks

Introduction

  • In machine learning, accuracy tends to increase with increases in both the number of training examples and the number of model parameters.
  • For large models and datasets, training becomes slow even on a GPU (due to increased CPU-GPU data transfer).
  • Solution: Distributed training and inference - DistBelief
  • Link to paper

DistBelief

  • Framework for parallel and distributed computation in neural networks.
  • Exploits both model parallelism and data parallelism.
  • Two methods implemented over DistBelief - Downpour SGD and Sandblaster L-BFGS

Previous work in this domain

  • Distributed Gradient Computation using relaxed synchronization requirements and delayed gradient descent.
  • Lock-less asynchronous stochastic gradient descent in the case of sparse gradients.
  • Deep Learning
    • Most focus has been on training small models on a single machine.
    • Train multiple, small models on a GPU farm and combine their predictions.
    • Alternate approach is to modify standard deep networks to make them more parallelizable.
  • Distributed computational tools
    • MapReduce - Not suited for iterative computations
    • GraphLab - Cannot exploit the computing efficiencies available in the structure of deep networks.

Model Parallelism

  • User specifies the computation at each node in each layer of the neural network and the messages to be exchanged between different nodes.
  • Model parallelism supported using multithreading (within a machine) and message passing (across machines).
  • Model can be partitioned across machines with DistBelief taking care of all communication/synchronization tasks.
  • Diminishing returns in speed-up if too many machines/partitions are used: partitioning helps only as long as the network communication overhead stays small (a toy illustration of the partitioning idea follows this list).
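
As a toy illustration of the model-parallel idea (not DistBelief's actual API, which the paper does not expose), the sketch below splits a two-layer network across two hypothetical partitions; only the activations that cross the partition boundary would need to be sent between machines. The class names and dimensions are made up for illustration.

```python
# Minimal sketch of model parallelism (hypothetical, not DistBelief's API):
# a two-layer network is split across two "machines"; only the activations
# crossing the partition boundary need to be communicated.
import numpy as np

class PartitionA:
    """Holds the first layer; would live on machine A."""
    def __init__(self, in_dim, hidden_dim):
        self.W = np.random.randn(in_dim, hidden_dim) * 0.01

    def forward(self, x):
        # The output of this partition is the "message" sent to machine B.
        return np.maximum(0, x @ self.W)

class PartitionB:
    """Holds the second layer; would live on machine B."""
    def __init__(self, hidden_dim, out_dim):
        self.W = np.random.randn(hidden_dim, out_dim) * 0.01

    def forward(self, h):
        return h @ self.W

# In DistBelief the framework handles the cross-machine communication;
# here it is just a plain function call.
a, b = PartitionA(784, 256), PartitionB(256, 10)
x = np.random.randn(32, 784)          # a mini-batch
logits = b.forward(a.forward(x))      # activations cross the boundary once
print(logits.shape)                   # (32, 10)
```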

Data Parallelism

  • Distributed training across multiple model instances (or replicas) to optimize a single function.

Distributed Optimization Algorithms

Downpour SGD

  • Asynchronous stochastic gradient descent.
  • Divide the training data into subsets and run a copy of the model on each subset.
  • Centralized sharded parameter server
    • Keeps the current state of all the parameters of the model.
    • Used by model instances to share parameters.
  • Asynchronous approach as both model replicas and parameter server shards run independently.
  • Workflow (a toy sketch follows this list)
    • A model replica requests an updated copy of the model parameters from the parameter server.
    • It processes mini-batches and sends the gradients to the server.
    • The server updates the parameters.
  • Communication can be reduced by limiting the rate at which parameters are requested and updates are pushed.
  • Robust to machine failure since multiple instances are running.
  • Relaxing consistency requirements works very well in practice.
  • Warm starting - Start training with a single replica and add other replicas later.
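
Below is a single-process toy sketch of the Downpour SGD workflow, with no real networking and made-up names (`ParameterShard`, `replica_step`): parameters are split across shards, and each replica repeatedly fetches them, computes a gradient on its own data subset (here a simple least-squares problem), and pushes the gradient back for an SGD update on the shard.

```python
import numpy as np

class ParameterShard:
    """Holds one slice of the model parameters (toy stand-in for a server shard)."""
    def __init__(self, dim, lr=0.1):
        self.w = np.zeros(dim)
        self.lr = lr

    def get(self):
        return self.w.copy()

    def push_gradient(self, g):
        self.w -= self.lr * g            # plain (fixed learning rate) SGD update

def replica_step(shards, x, y):
    # 1. fetch an updated copy of the parameters from every shard
    w = np.concatenate([s.get() for s in shards])
    # 2. gradient of a least-squares loss on this replica's mini-batch
    grad = 2 * x.T @ (x @ w - y) / len(y)
    # 3. push each shard its slice of the gradient
    for s, g in zip(shards, np.array_split(grad, len(shards))):
        s.push_gradient(g)

# Two shards of 5 parameters each; two "replicas" working on different data subsets.
shards = [ParameterShard(5), ParameterShard(5)]
rng = np.random.default_rng(0)
w_true = rng.normal(size=10)
for step in range(200):
    for _ in range(2):                   # each replica processes a mini-batch
        x = rng.normal(size=(16, 10))
        replica_step(shards, x, x @ w_true)
w_est = np.concatenate([s.get() for s in shards])
print(np.linalg.norm(w_est - w_true))    # error shrinks as updates accumulate
```

Because replicas fetch and push independently, a replica may compute its gradient against slightly stale parameters; this is exactly the relaxed consistency the paper accepts.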

Sandblaster L-BFGS

  • Sandblaster is a framework for distributed batch optimization.

  • Provides distributed implementation of L-BFGS.

  • A single 'coordinator' process interacts with replicas and the parameter server to coordinate the batch optimization process.

  • Workflow (a simplified sketch follows this list)

    • Coordinator issues commands like dot product to the parameter server shards.
    • Shards execute the operation, store the results locally, and maintain additional information such as a history cache.
    • Parameters are fetched at the beginning of each batch and the gradients are sent every few completed cycles.
  • Saves the overhead of sending all parameters and gradients to a single central server.

  • Robust to machine failure just like Downpour SGD.

  • Coordinator uses a load balancing scheme similar to "backup tasks" in MapReduce.
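
A simplified sketch of the Sandblaster pattern, with made-up class names (`Shard`, `Coordinator`): the coordinator only issues small commands such as a dot product or a scaled vector addition, each shard executes them on its local parameter slice, and only scalars travel back, so full parameter and gradient vectors never flow through a single machine.

```python
import numpy as np

class Shard:
    """Holds one slice of each named vector (parameters, gradients, history, ...)."""
    def __init__(self, values):
        self.store = {"w": np.array(values, dtype=float)}

    def dot(self, a, b):
        # local partial dot product; only a scalar goes back to the coordinator
        return float(self.store[a] @ self.store[b])

    def axpy(self, out, alpha, x, y):
        # out = alpha * x + y, computed entirely on the shard
        self.store[out] = alpha * self.store[x] + self.store[y]

class Coordinator:
    def __init__(self, shards):
        self.shards = shards

    def dot(self, a, b):
        # a global dot product is just the sum of the shards' partial results
        return sum(s.dot(a, b) for s in self.shards)

shards = [Shard([1.0, 2.0]), Shard([3.0, 4.0])]
for s in shards:
    s.store["g"] = np.ones_like(s.store["w"])       # pretend gradient slice
coord = Coordinator(shards)
print(coord.dot("w", "g"))                          # 1+2+3+4 = 10.0, no vectors moved
for s in shards:
    s.axpy("w", -0.1, "g", "w")                     # w <- w - 0.1 * g, shard-local
```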

Observation

Downpour SGD (with Adagrad adaptive learning rate) outperforms Downpour SGD (with fixed learning rate) and Sandblaster L-BFGS. Moreover, Adagrad can be easily implemented locally within each parameter shard. It is surprising that Adagrad works so well with Downpour SGD as it was not originally designed to be used with asynchronous SGD. The paper conjectures that "Adagrad automatically stabilizes volatile parameters in the face of the flurry of asynchronous updates, and naturally adjusts learning rates to the demands of different layers in the deep network."
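
The reason Adagrad drops in so easily is that its only state is a per-parameter accumulator of squared gradients, which can live in the same shard as the parameters it scales, so no extra global state or communication is needed. A minimal sketch of a per-shard Adagrad update (hypothetical code, not the paper's implementation):

```python
import numpy as np

class AdagradShard:
    def __init__(self, dim, lr=0.01, eps=1e-8):
        self.w = np.zeros(dim)
        self.accum = np.zeros(dim)     # running sum of squared gradients, shard-local
        self.lr, self.eps = lr, eps

    def push_gradient(self, g):
        self.accum += g * g
        # per-parameter learning rate: lr / sqrt(sum of past squared gradients)
        self.w -= self.lr * g / (np.sqrt(self.accum) + self.eps)

shard = AdagradShard(3)
shard.push_gradient(np.array([0.5, -1.0, 0.1]))
shard.push_gradient(np.array([0.4, -0.9, 0.2]))
print(shard.w)                         # parameters with volatile coordinates damped
```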

Beyond DistBelief - TensorFlow

  • In November 2015, Google open-sourced TensorFlow as its second-generation machine learning system. DistBelief had limitations: it was tightly coupled to Google's internal infrastructure and was difficult to configure. TensorFlow addresses these issues and is twice as fast as DistBelief. Google also published a white paper on the topic, and the implementation can be found here.