Zheyu Ye (zheyuye) - public gists
#!/bin/bash
set -e
set -x
# Environment for fine-tuning a base-size model on SQuAD 2.0 across 4 GPUs
export TASK=SQUAD
export SQUAD_VERSION=2.0
export MODEL_NAME=base
export SQUAD_DATA=/home/ubuntu/SQuAD_data
export NUM_GPUS=4
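The exports above only set up the environment; the launch step itself is not shown in the truncated gist. A minimal sketch of how these variables might feed a Horovod launch follows; the script name run_squad.py and its flags are assumptions for illustration, not taken from the gist.
# Hypothetical launch step (script name and flags are illustrative only)
horovodrun -np ${NUM_GPUS} python run_squad.py \
    --model_name ${MODEL_NAME} \
    --data_dir ${SQUAD_DATA} \
    --version ${SQUAD_VERSION} \
    --comm_backend horovod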
# Check that extending a projection table by concatenation is equivalent to
# splitting the input and projecting each half with its own table.
import mxnet as mx
x = mx.np.random.normal(0, 1, (1, 1, 512))
table = mx.np.random.normal(0, 1, (128, 30))        # original table (first 128 dims)
extra_table = mx.np.random.normal(0, 1, (384, 30))  # extension (remaining 384 dims)
bias = mx.np.random.normal(0, 1, (30,))
y = mx.np.concatenate([table, extra_table], axis=0) # full (512, 30) table
res1 = mx.np.dot(x, y) + bias
a = mx.np.dot(x[:, :, :128], table)
b = mx.np.dot(x[:, :, 128:], extra_table)
res2 = a + b + bias
print(mx.np.abs(res1 - res2).max())  # should be ~0
#!/bin/bash
set -e
set -x
# Environment for fine-tuning a large model on SQuAD 2.0 (batch size 2)
export TASK=SQUAD
export SQUAD_VERSION=2.0
export MODEL_NAME=large
export SQUAD_DATA=/home/ubuntu/SQuAD_data
export BS=2
@zheyuye
zheyuye / gluon_roberta_large.log
Last active July 17, 2020 01:18
Speed comparison: huggingface + torch.distributed (13.6 hours) vs gluonnlp + horovod (8.76 hours). Resources: AWS g4.12xlarge, CUDA 10.1 (V10.1.243). Model: roberta large. Hyper-parameters: global batch size = 48.
gluon: em/f1 = 85.88/88.73; huggingface: em/f1 = 84.88/88.08
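For context, the two jobs in this comparison are launched differently: the huggingface run goes through torch.distributed.launch (see the deprecation notice further down), while the gluonnlp run goes through horovodrun. A minimal sketch, assuming a single 4-GPU g4.12xlarge node; the script names train_hf.py and run_squad.py and the --comm_backend flag are hypothetical placeholders.
# huggingface + torch.distributed (one process per GPU):
python -m torch.distributed.launch --nproc_per_node=4 train_hf.py
# gluonnlp + horovod (one worker per GPU, GPU communication via horovod):
horovodrun -np 4 python run_squad.py --comm_backend horovod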
2020-07-14 08:24:58,197 - root - INFO - GPU communication supported by horovod
2020-07-14 08:24:58,197 - root - INFO - GPU communication supported by horovod
2020-07-14 08:24:58,197 - root - INFO - GPU communication supported by horovod
2020-07-14 08:24:58,197 - root - INFO - GPU communication supported by horovod
2020-07-14 08:25:06,274 - root - INFO - Loading Backbone Model from /home/ubuntu/.mxnet/models/nlp/fairseq_roberta_large/model-6b043b91.params, with total/fixd parameters=354307072/0
2020-07-14 08:25:06,286 - root - INFO - Loading Backbone Model from /home/ubuntu/.mxnet/models/nlp/fairseq_roberta_large/model-6b043b91.params, with total/fixd parameters=354307072/0
2020-07-14 08:25:06,298 - root - INFO - Prepare training data
2020-07-14 08:25:06,317 - root - INFO - Prepare training data
2020-07-14 08:25:06,340 - root - INFO - Loading Backbone Model from /home/ubuntu/.mxnet/models/nlp/fairseq_roberta_large/model-6b043b91.params, with total/fixd parameters=354307072/0
2020-07-14 08:25:06,381 - root - IN
+ unset _mlre _mlIFS _mlshdbg
+ '[' 0 = 1 ']'
+ '[' -n x ']'
+ _mlIFS='
'
+ IFS=' '
+ '[' -n '' ']'
++ /usr/bin/tclsh /usr/lib/x86_64-linux-gnu/modulecmd.tcl bash autoinit
+ eval 'module()' '{
' unset _mlre _mlIFS '_mlshdbg;
@zheyuye
zheyuye / .vimrc
Last active January 24, 2022 14:35
" Configuration file for vim
set modelines=0 " CVE-2007-2438
" Normally we use vim-extensions. If you want true vi-compatibility
" remove change the following statements
set nocompatible " Use Vim defaults instead of 100% vi compatibility
set autoindent
" Tab键的宽度
set tabstop=4
stderr: /usr/local/lib/python3.6/dist-packages/torch/distributed/launch.py:186: FutureWarning: The module torch.distributed.launch is deprecated
stderr: and will be removed in future. Use torchrun.
stderr: Note that --use_env is set by default in torchrun.
stderr: If your script expects `--local_rank` argument to be set, please
stderr: change it to read from `os.environ['LOCAL_RANK']` instead. See
stderr: https://pytorch.org/docs/stable/distributed.html#launch-utility for
stderr: further instructions
stderr:
stderr: FutureWarning,
stderr: WARNING:torch.distributed.run:
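The warning above spells out the migration: with torchrun (where --use_env is the default), the script should stop parsing a --local_rank command-line argument and read the rank from the environment instead. A minimal sketch of that change, assuming a typical NCCL init pattern (not taken from the gist):
import os
import torch

# torchrun / --use_env: the launcher exports LOCAL_RANK instead of
# passing --local_rank on the command line.
local_rank = int(os.environ["LOCAL_RANK"])
torch.cuda.set_device(local_rank)
torch.distributed.init_process_group(backend="nccl")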