
Learning Horovod, Part 1 -- hvd.init()

Posted: 2018-06-11 13:54:35


This walkthrough of Horovod usage follows https://github.com/uber/horovod#usage:

To use Horovod, make the following additions to your program:

  1. Run hvd.init().

  2. Pin a server GPU to be used by this process using config.gpu_options.visible_device_list. With the typical setup of one GPU per process, this can be set to local rank. In that case, the first process on the server will be allocated the first GPU, second process will be allocated the second GPU and so forth.

  3. Scale the learning rate by number of workers. Effective batch size in synchronous distributed training is scaled by the number of workers. An increase in learning rate compensates for the increased batch size.

  4. Wrap optimizer in hvd.DistributedOptimizer. The distributed optimizer delegates gradient computation to the original optimizer, averages gradients using allreduce or allgather, and then applies those averaged gradients.

  5. Add hvd.BroadcastGlobalVariablesHook(0) to broadcast initial variable states from rank 0 to all other processes. This is necessary to ensure consistent initialization of all workers when training is started with random weights or restored from a checkpoint. Alternatively, if you're not using MonitoredTrainingSession, you can simply execute the hvd.broadcast_global_variables op after global variables have been initialized.

  6. Modify your code to save checkpoints only on worker 0 to prevent other workers from corrupting them. This can be accomplished by passing checkpoint_dir=None to tf.train.MonitoredTrainingSession if hvd.rank() != 0.

Example (see the examples directory for full training examples):

import tensorflow as tf
import horovod.tensorflow as hvd


# Initialize Horovod
hvd.init()

# Pin GPU to be used to process local rank (one GPU per process)
config = tf.ConfigProto()
config.gpu_options.visible_device_list = str(hvd.local_rank())

# Build model...
loss = ...
opt = tf.train.AdagradOptimizer(0.01 * hvd.size())

# Add Horovod Distributed Optimizer
opt = hvd.DistributedOptimizer(opt)

# Add hook to broadcast variables from rank 0 to all other processes during
# initialization.
hooks = [hvd.BroadcastGlobalVariablesHook(0)]

# Make training operation
train_op = opt.minimize(loss)

# Save checkpoints only on worker 0 to prevent other workers from corrupting them.
checkpoint_dir = '/tmp/train_logs' if hvd.rank() == 0 else None

# The MonitoredTrainingSession takes care of session initialization,
# restoring from a checkpoint, saving to a checkpoint, and closing when done
# or an error occurs.
with tf.train.MonitoredTrainingSession(checkpoint_dir=checkpoint_dir,
                                       config=config,
                                       hooks=hooks) as mon_sess:
  while not mon_sess.should_stop():
    # Perform synchronous training.
    mon_sess.run(train_op)

The first step is Horovod's init, so this post focuses on what happens inside hvd.init().

1. hvd.init() invokes the following function in horovod/common/__init__.py:

def init():
    """A function that initializes Horovod.
    """
    return MPI_COMMON_LIB_CTYPES.horovod_init()
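So the Python side simply forwards to a C entry point in the compiled extension, which has been loaded as a shared library through ctypes. A minimal sketch of that binding pattern on POSIX, using libc's getpid as a stand-in for the real horovod_init symbol (the library handle and the my_init wrapper are illustrative assumptions, not Horovod code):

```python
import ctypes
import os

# Load a shared library the way Horovod loads its compiled extension;
# here CDLL(None) exposes the current process's symbols, including libc.
libc = ctypes.CDLL(None)

# Declare the C function's return type before calling it.
libc.getpid.restype = ctypes.c_int

def my_init():
    """Mimic hvd.init(): a thin Python wrapper that forwards
    straight to a C function resolved via ctypes."""
    return libc.getpid()

print(my_init())
```

Calling my_init() returns the same value as os.getpid(), confirming the Python call really executed the C function.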

2. MPI_COMMON_LIB_CTYPES.horovod_init() calls the following C function in horovod/common/operations.cc:

void horovod_init() { InitializeHorovodOnce(); }

3. InitializeHorovodOnce(), also in horovod/common/operations.cc, starts the background thread:

// Start Horovod background thread. Ensure that this is
// only done once no matter how many times this function is called.
void InitializeHorovodOnce() {
  // Ensure background thread is only started once.
  // initialize_flag is a std::atomic_flag: test_and_set() returns true
  // if the flag was already set, and false otherwise.
  if (!horovod_global.initialize_flag.test_and_set()) {
    // Start the background thread. It runs BackgroundThreadLoop()
    // with a reference to horovod_global as its argument.
    horovod_global.background_thread =
        std::thread(BackgroundThreadLoop, std::ref(horovod_global));
  }

  // Wait to ensure that the background thread has finished initializing MPI.
  // BackgroundThreadLoop() sets initialization_done to true once MPI
  // initialization has finished.
  while (!horovod_global.initialization_done) {
    std::this_thread::sleep_for(std::chrono::milliseconds(1));
  }
}
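The control flow above — atomically claim a once-flag, spawn a background thread, then spin until that thread reports initialization is done — can be sketched in Python. This is a simplified analogue (an assumption, not Horovod code): a Lock plus a boolean stands in for std::atomic_flag, and a short sleep stands in for MPI initialization.

```python
import threading
import time

class HorovodGlobalState:
    """Rough analogue of the C++ HorovodGlobalState struct."""
    def __init__(self):
        self.once_lock = threading.Lock()   # stands in for std::atomic_flag
        self.started = False
        self.initialization_done = False
        self.background_thread = None

state = HorovodGlobalState()

def background_thread_loop(st):
    # Stand-in for BackgroundThreadLoop(): perform setup (e.g. MPI_Init),
    # then signal completion. The real loop would then keep running,
    # coordinating allreduce/allgather requests.
    time.sleep(0.01)  # pretend to initialize MPI
    st.initialization_done = True

def initialize_horovod_once():
    # Start the background thread at most once, no matter how many
    # times this function is called.
    with state.once_lock:
        if not state.started:
            state.started = True
            state.background_thread = threading.Thread(
                target=background_thread_loop, args=(state,), daemon=True)
            state.background_thread.start()
    # Wait until the background thread has finished initializing.
    while not state.initialization_done:
        time.sleep(0.001)

initialize_horovod_once()
initialize_horovod_once()  # second call starts no new thread; it only waits
```

After either call returns, state.initialization_done is guaranteed to be true, which is exactly the contract hvd.init() provides to the caller.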



Original post: https://www.cnblogs.com/lixiaolun/p/9166341.html
