HW4 Test Runners
group tests by speed (fast vs. slow/training) and device (cpu vs. cuda); common setup:
# all runs share a timestamped run id tagged with the current git commit
wd="./test_results"
mkdir -p "${wd}"
ts=$(date +"%Y_%m_%d_%H_%M_%S")
git_hash=$(git rev-parse --verify --short HEAD)
echo "run id: ${ts}__${git_hash}"
fast and cpu
fname=fast_and_cpu.log   # log file name for this group
fname_full=${wd}/${ts}__${git_hash}__${fname}
python3 -m pytest -l -v -k "not training and cpu" 2>&1 | tee ${fname_full}
tail -n 1 ${fname_full} | sed 's/=//g'
fast and cuda
fname=fast_and_cuda.log   # log file name for this group
fname_full=${wd}/${ts}__${git_hash}__${fname}
python3 -m pytest -l -v -k "not training and cuda" 2>&1 | tee ${fname_full}
tail ${fname_full}
tail -n 1 ${fname_full} | sed 's/=//g'
slow and cpu
fname=slow_and_cpu.log   # log file name for this group
fname_full=${wd}/${ts}__${git_hash}__${fname}
python3 -m pytest -l -v -k "training and cpu" 2>&1 | tee ${fname_full}
tail -n 1 ${fname_full} | sed 's/=//g'
slow and cuda
fname=slow_and_cuda.log   # log file name for this group
fname_full=${wd}/${ts}__${git_hash}__${fname}
python3 -m pytest -l -v -k "training and cuda" 2>&1 | tee ${fname_full}
tail -n 1 ${fname_full} | sed 's/=//g'
Test logs
- Assert that the ndarray is compact in the reshape method: 3 tests failing.
- Force self.compact() in the reshape method instead: all pass. It also gets me to 10/10 for resnet9 (see the reshape sketch after this list). test_results/2022_1211_0522__fast.log
- Same changes, slow run. test_results/2022_1211_0522__slow.log
- Deployed the above changes to the Debian machine and ran the CUDA group: a lot of failures. All CUDA tests had passed before, but it turns out some of those results were wrong. test_results/2022_12_11_07_25_03__b67842e__fast_and_cuda.log
- Fixed the vanilla CUDA matmul kernel by initializing the output accumulator to 0 explicitly (see the matmul sketch after this list). Still a lot of errors from LSTM. test_results/2022_12_11_12_14_23__1985085__fast_and_cuda.log
- Fixed the CUDA grid setup: from (M, N) to (P, M), also covered in the matmul sketch below! test_results/2022_12_11_12_48_44__fce5edb__fast_and_cuda.log
- Ensured all data/parameters are on the right device (see the device sketch below). CPU and CUDA: all tests pass! Milestone. test_results/2022_12_11_13_51_22__f43d7ab__fast_and_cuda.log
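
A minimal sketch of the idea behind the reshape fix, assuming a needle-style NDArray with is_compact()/compact()/as_strided() and a compact_strides() helper; these method names are assumptions for illustration, not copied from the repo. The change is to copy into a compact buffer when needed instead of asserting compactness.

import math

def reshape(self, new_shape):
    # reshape is only a stride trick over contiguous memory
    if math.prod(new_shape) != math.prod(self.shape):
        raise ValueError(f"cannot reshape {self.shape} to {new_shape}")
    # force a compact copy instead of asserting is_compact(); this is the
    # change that turned the 3 failing reshape tests into passes
    arr = self if self.is_compact() else self.compact()
    return arr.as_strided(new_shape, self.compact_strides(new_shape))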
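
The two CUDA matmul fixes, sketched with numba.cuda rather than the homework's C++ backend, so this is an illustration of the idea, not the actual kernel: the per-thread accumulator starts from an explicit 0, and the launch grid tiles the output matrix (M x P) instead of (M, N). Whether that reads as (P, M) or (M, P) only depends on which grid axis the kernel maps to rows vs. columns.

import math
import numpy as np
from numba import cuda

@cuda.jit
def matmul_naive(a, b, out):
    # one thread per output element out[i, j]
    i, j = cuda.grid(2)
    if i < out.shape[0] and j < out.shape[1]:
        acc = 0.0  # explicit zero init; reading an uninitialized value was the bug
        for k in range(a.shape[1]):
            acc += a[i, k] * b[k, j]
        out[i, j] = acc

M, N, P = 64, 32, 48
a = np.random.randn(M, N).astype(np.float32)
b = np.random.randn(N, P).astype(np.float32)
out = np.zeros((M, P), dtype=np.float32)

threads = (16, 16)
# the grid must cover the output (M x P); sizing it by (M, N) leaves part of
# the result untouched whenever P != N
blocks = (math.ceil(M / threads[0]), math.ceil(P / threads[1]))
matmul_naive[blocks, threads](a, b, out)
np.testing.assert_allclose(out, a @ b, rtol=1e-4, atol=1e-4)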
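
The device-placement fix in the same hedged spirit; ndl, ResNet9, dataloader, loss_fn, and the Tensor constructor signature are stand-ins for the needle homework API and may not match it exactly. The point is to create one device object and route it to both the parameters and every batch.

# hypothetical sketch, not the actual training loop
import needle as ndl

device = ndl.cuda()                              # or ndl.cpu(); pick one, use it everywhere
model = ResNet9(device=device, dtype="float32")  # parameters created on `device`

for X, y in dataloader:
    # batches arrive as host arrays; wrap them on the same device as the
    # parameters, otherwise cpu and cuda buffers get mixed mid-graph
    X = ndl.Tensor(X, device=device)
    y = ndl.Tensor(y, device=device)
    loss = loss_fn(model(X), y)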