Ma Mingfei (mingfeima)
:octocat: i do not stand by in the presence of evil
  • Intel Asia-Pacific R&D
@mingfeima
mingfeima / part_1_memory_format_and_channels_last_optimization.md
Last active May 6, 2024 05:46
PyTorch CPU Performance Optimization Tutorial - Section I
@mingfeima
mingfeima / pytorch_performance_profiling.md
Last active May 4, 2024 02:51
How to do performance profiling on PyTorch

(Internal Training Material)

Usually the first step in performance optimization is profiling, e.g. identifying the performance hotspots of a workload. This gist covers the basics of performance profiling on PyTorch; you will learn:

  • How to find the bottleneck operator?
  • How to trace the source file of a particular operator?
  • How to identify threading issues (oversubscription)?
  • How to tell whether a specific operator is running efficiently?

This tutorial takes one of my recent projects, pssp-transformer, as an example to guide you through the process of PyTorch CPU performance optimization. The focus will be on Part 1 & Part 2.
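As a starting point for the questions above, the built-in profiler can rank operators by self CPU time; a minimal sketch using `torch.profiler` (the small `nn.Sequential` model here is just an illustrative stand-in, not the pssp-transformer workload):

```python
import torch
from torch.profiler import profile, ProfilerActivity

# a hypothetical small model; any nn.Module is profiled the same way
model = torch.nn.Sequential(
    torch.nn.Linear(128, 256),
    torch.nn.ReLU(),
    torch.nn.Linear(256, 128),
)
x = torch.randn(32, 128)

# record CPU activity for one inference pass
with profile(activities=[ProfilerActivity.CPU]) as prof:
    with torch.no_grad():
        model(x)

# sort by self CPU time to surface the bottleneck operator
print(prof.key_averages().table(sort_by="self_cpu_time_total", row_limit=10))
```

The table printed at the end is usually enough to answer the first question (which operator dominates); tracing the operator back to its source file and checking thread behavior are covered in the later parts.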

@mingfeima
mingfeima / pytorch_cpu_perf_bkm.md
Last active February 16, 2024 21:31
BKM for PyTorch CPU Performance

General guidelines for CPU performance on PyTorch

This file serves as a BKM (Best Known Methods) to get better performance on CPU for PyTorch, mostly focusing on inference or deployment. A Chinese version is available here.

1. Use channels last memory format

Right now, on the PyTorch CPU path, you may choose among 3 types of memory formats.

  • torch.contiguous_format: default memory format, also referred to as NCHW.
  • torch.channels_last: also referred to as NHWC.
  • torch._mkldnn: mkldnn blocked format.
@mingfeima
mingfeima / part_3_vectorization_techniques.md
Last active December 26, 2023 07:16
PyTorch CPU Performance Optimization Tutorial - Section III
@mingfeima
mingfeima / pytorch_channels_last_perf_optimization.md
Last active September 1, 2023 03:02
PyTorch Channels Last memory format perf optimization and oneDNN integration plan.

PyTorch Channels Last Memory Format Performance Optimization on CPU Path

("mkldnn" has been renamed to "oneDNN", but exsiting PyTorch APIs still use "mkldnn", future work will align PyTorch user level APIs to "oneDNN")

Table of Contents

  • PyTorch Channels Last memory format introduction
  • oneDNN API for NHWC layout
  • Generic Channels Last memory format optimization with ATen native
  • oneDNN NHWC integration

NB: Memory format refers to the data representation that describes how a multidimensional (nD) array is stored in linear (1D) memory address space. Memory format has the same semantics as layout in oneDNN. Layout in PyTorch has a different semantic: it describes dense vs. sparse via the attributes 'torch.strided' and 'torch.sparse_coo'.
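The nD-to-1D mapping described above is visible directly in tensor strides; a quick illustration with an arbitrary 2x3x4x5 tensor:

```python
import torch

# an NCHW tensor and its channels-last (NHWC) counterpart have the same
# sizes but different strides, i.e. a different mapping of 4D data to 1D memory
t = torch.randn(2, 3, 4, 5)  # N, C, H, W
print(t.stride())            # (60, 20, 5, 1): W moves fastest (NCHW order)

nhwc = t.contiguous(memory_format=torch.channels_last)
print(nhwc.stride())         # (60, 1, 15, 3): C moves fastest (NHWC order)

# the logical values are unchanged; only the physical storage order differs
print(torch.equal(t, nhwc))  # True
```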

@mingfeima
mingfeima / part_2_parallelization_techniques.md
Last active May 23, 2023 11:45
PyTorch CPU Performance Optimization Tutorial - Section II
@mingfeima
mingfeima / rnn_perf_optimization.md
Last active May 10, 2023 10:58
MKLDNN RNN integration in PyTorch

This gist keeps a record of the MKLDNN RNN integration work in PyTorch and serves as a backup of PR26387; only the inference feature is provided at the moment.

To use MKLDNN RNN in PyTorch:

  1. convert model to mkldnn
  2. (optional) convert input and hx/cx to mkldnn

example: how to enable mkl-dnn RNN

import torch
from torch.utils import mkldnn as mkldnn_utils
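The two steps above can be sketched as follows. Note the hedge: the RNN conversion depends on PR26387 being present in your build; on builds without it, `to_mkldnn` leaves the LSTM unchanged, so the snippet still runs, just on the dense path.

```python
import torch
from torch.utils import mkldnn as mkldnn_utils

# an inference-mode LSTM (only inference is supported by the integration)
rnn = torch.nn.LSTM(input_size=64, hidden_size=128, num_layers=1).eval()

# step 1: convert the model to mkldnn
rnn = mkldnn_utils.to_mkldnn(rnn)

# step 2 (optional, and only on builds with the RNN PR): input and hx/cx
# could also be converted via Tensor.to_mkldnn(); omitted here for portability
x = torch.randn(10, 1, 64)  # (seq_len, batch, input_size)

with torch.no_grad():
    y, (hy, cy) = rnn(x)

print(y.shape)  # torch.Size([10, 1, 128])
```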
@mingfeima
mingfeima / bert_optimization.md
Last active July 8, 2022 06:13
BERT Optimization

benchmark

Performance evaluation is based on the huggingface repo; the actual benchmark run script is placed at repo. How to reproduce the performance:

  1. prepare the dataset according to link.
  2. update GLUE_DIR to the actual dataset path in run_inference.sh.
  3. change the env settings; the default setting uses 20 cores.
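For step 3, the shell scripts typically cap threads via environment variables such as OMP_NUM_THREADS; the same cap can be applied from Python, which is handy when experimenting interactively (the value 20 below matches the default mentioned above; adjust it to your machine):

```python
import torch

# cap intra-op parallelism at 20 threads, matching the benchmark default;
# the run scripts usually achieve the same effect with OMP_NUM_THREADS=20
torch.set_num_threads(20)
print(torch.get_num_threads())  # 20
```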

MKL v.s. MKLDNN

Inference performance results on Xeon 6148 (2x20 cores), single socket and single thread.

@mingfeima
mingfeima / pytorch_check_mkl_mkldnn.md
Last active July 8, 2022 06:09
BKMs to check whether mkl or mkldnn is enabled on PyTorch

BKMs to check whether mkl or mkldnn is enabled on PyTorch

PyTorch can be installed via different channels: conda, pip, docker, source code...

By default, mkl and mkl-dnn are enabled, but this might not always be true, so it is still useful to know how to check this by yourself:

1. How to check whether mkl is enabled?

### check where your torch is installed
python -c 'import torch; print(torch.__path__)'
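Besides locating the install path, the build flags can also be queried directly from Python via `torch.backends`:

```python
import torch

# True when the installed torch build is linked against MKL
print(torch.backends.mkl.is_available())

# True when MKL-DNN (oneDNN) support is compiled in
print(torch.backends.mkldnn.is_available())

# full build configuration string, including the BLAS backend
print(torch.__config__.show())
```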
@mingfeima
mingfeima / part_4_bfloat16_kernel_optimization.md
Last active July 8, 2022 06:04
PyTorch CPU Performance Optimization Tutorial - Section IV