
As outlined here, there are a couple of situations where you may want or need to authenticate with GitHub using an Access Token:

  1. If you have Two-Factor Authentication (2FA) enabled.
  2. You are accessing an organisation's protected content that uses SAML Single Sign-On (SSO).

Using an Access Token for the first time

Create an Access Token

In your GitHub account, go to Settings / Developer settings / Personal access tokens and select Generate New Token. Make a note of the token somewhere safe since this is the only chance you get to see it.
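
Once generated, the token can be used in place of your password whenever Git prompts for credentials over HTTPS. A minimal sketch, assuming a repository you have access to (the URL below is a placeholder, and the credential-helper step is optional):

```
# Clone over HTTPS; <org>/<repo> is a placeholder for your repository
git clone https://github.com/<org>/<repo>.git
# Username: your GitHub username
# Password: paste the access token here instead of your account password

# Optional: cache the credentials in memory so you are not prompted on every push/pull
git config --global credential.helper cache
```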

CaoZhongZ / Penn Treebank II Tags.md
Created February 27, 2021 16:13 — forked from nlothian/Penn Treebank II Tags.md
Penn Treebank II Tags
CaoZhongZ / compile_pytorch_with_icx.md
Last active October 6, 2020 08:11
Compile PyTorch with Intel Compiler Next Generation

Intel Compiler Next Generation

Starting with Intel® Parallel Studio XE 2020, you can use icx/icpx to enable the Intel compiler's next-generation code generator. It combines a Clang front end with ICC's cutting-edge optimizations, which makes it ideal for open-source projects that already support open-source compilers.

Setup Intel Compiler Environment

After installing Parallel Studio, set up the compilation environment:

```
. /opt/intel/bin/compilervars.sh intel64
```

Start Compilation
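
The build steps from the original gist are not captured here. As a rough sketch, assuming a PyTorch source checkout and the environment sourced above, the build could be pointed at icx/icpx like this (these exact commands are an assumption, not the gist's own):

```
# In a PyTorch source checkout, with compilervars.sh already sourced.
# The compiler choice here is an illustration, not the gist's exact commands.
export CC=icx
export CXX=icpx
python setup.py develop
```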

CaoZhongZ / freeze_params_in_torchscript.md
Last active August 31, 2019 02:50
Freeze parameters in TorchScript Graph

Freeze 'weight', 'bias', buffers, etc. in TorchScript.

MKL-DNN requires a specific weight format to run convolution faster. By freezing the weight inside a TorchScript module, one can embed more information about the weight tensor into the graph and use constant propagation to move its content closer to its use, possibly avoiding the format-transformation computation at runtime altogether.

Optimized passes for MKL-DNN enabled ops

For example, we insert ops before aten::conv2d that transform the weight into the format MKL-DNN prefers.

We start from an IR like this:

```
%30 : Float(*, *, *, *) = prim::GetAttr[name="weight"]
%289 : Float(*, *, *, *) = aten::conv2d(%x.1, %30, %4, %611, %612, %613, %23)
```