[TOC]

How to Optimize Convolution on GPU with TVM

In this tutorial, we demonstrate how to write a high-performance convolution implementation in TVM. We use square-sized input tensors and filters as an example, and assume the convolution input has a large batch size. In this example, we use a different layout to store the data in order to achieve better data locality. The buffer layout is HWCN, which stands for height, width, channel, batch.

Preparation and Algorithm

We use a fixed-size input tensor of 14 x 14 with 256 channels (HWC) and a batch size of 256. The convolution kernel is 3 x 3 with 512 output filters (and 256 input channels). We use stride = 1 and padding = 1. The following code defines the convolution algorithm in TVM.

import numpy as np
import tvm

# Define the sizes of the inputs and filters
batch = 256
in_channel = 256
out_channel = 512
in_size = 14 
kernel = 3
pad = 1
stride = 1

# The convolution algorithm
A = tvm.placeholder((in_size, in_size, in_channel, batch), name='A')
W = tvm.placeholder((kernel, kernel, in_channel, out_channel), name='W')
# Output feature map size
out_size = (in_size - kernel + 2*pad)//stride + 1
# Pad the input: positions outside the original [in_size, in_size] region are
# set to 0, everything else keeps its own value.
Apad = tvm.compute(
    (in_size + 2*pad, in_size + 2*pad, in_channel, batch),
    lambda yy, xx, cc, nn: tvm.if_then_else(
        tvm.all(yy >= pad, yy - pad < in_size, xx >= pad, xx - pad < in_size),
        A[yy - pad, xx - pad, cc, nn], tvm.const(0., "float32")),
    name='Apad')
# Create reduction axes
rc = tvm.reduce_axis((0, in_channel), name='rc')
ry = tvm.reduce_axis((0, kernel), name='ry')
rx = tvm.reduce_axis((0, kernel), name='rx')
# Compute the convolution
B = tvm.compute(
    (out_size, out_size, out_channel, batch),
    lambda yy, xx, ff, nn: tvm.sum(
        Apad[yy*stride + ry, xx*stride + rx, rc, nn] * W[ry, rx, rc, ff],
        axis=[ry, rx, rc]),
    name='B')
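
As a quick sanity check on these definitions (this check is not part of the original tutorial; it just plugs the constants into the output-size formula):

# out_size = (in_size - kernel + 2*pad) // stride + 1
#          = (14 - 3 + 2*1) // 1 + 1 = 14, so the spatial size is preserved,
# and B has shape (14, 14, 512, 256) in (height, width, out_channel, batch) layout.
assert out_size == 14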

Memory Hierarchy

We first specify the memory hierarchy for the buffers. The figure below shows the GPU memory hierarchy. One important difference from the CPU memory hierarchy is that the GPU provides a buffer called shared memory, which is managed by the programmer. Thus, how to maximize data reuse in shared memory is critical for achieving high performance in GPU kernels.

(figure: GPU memory hierarchy)

In this example, we load both Apad and W into buffers AA and WW, which reside in shared memory. These buffers are later shared by all threads within the same thread block during the convolution. Each thread then loads its own part from the shared buffers into its local registers AL and WL. BL is a local cache of the output B, which is also stored in thread-local registers.

# Specify the memory hierarchy
s = tvm.create_schedule(B.op)
s[Apad].compute_inline()  # compute Apad inline
AA = s.cache_read(Apad, "shared", [B])
WW = s.cache_read(W, "shared", [B])
AL = s.cache_read(AA, "local", [B])
WL = s.cache_read(WW, "local", [B])
BL = s.cache_write(B, "local")
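
If you want to inspect what each scheduling step does, you can lower the schedule to TVM IR at this point. This inspection step is not part of the original tutorial; it assumes the standard tvm.lower utility also works on a partially scheduled s:

# Optional: print the intermediate representation to see the cache stages
# (AA, WW, AL, WL, BL) that cache_read/cache_write inserted into the schedule.
print(tvm.lower(s, [A, W, B], simple_mode=True))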

Blocking

The following code splits the workload into thread blocks and individual threads. We follow the blocking scheme used for matrix multiplication. As shown in the figure below, given a pixel coordinate (y, x), a thread block is responsible for computing a block_factor x block_factor (64 x 64) region of the output channels and the batch. Due to the limited shared memory space, we only load step x block_factor (8 x 64) data from Apad and W into the shared memory buffers each time.

# Tiling constants
tile = 8
num_thread = 8
block_factor = tile * num_thread
step = 8
vthread = 2
# Get GPU thread indices (range, thread tag)
block_x = tvm.thread_axis("blockIdx.x")
block_y = tvm.thread_axis("blockIdx.y")
block_z = tvm.thread_axis("blockIdx.z")
thread_x = tvm.thread_axis((0, num_thread), "threadIdx.x")
thread_y = tvm.thread_axis((0, num_thread), "threadIdx.y")
thread_xz = tvm.thread_axis((0, vthread), "vthread", name="vx")
thread_yz = tvm.thread_axis((0, vthread), "vthread", name="vy")

# Split the workload
hi, wi, fi, ni = s[B].op.axis
bz = s[B].fuse(hi, wi)
by, fi = s[B].split(fi, factor=block_factor)
bx, ni = s[B].split(ni, factor=block_factor)

# Bind iteration variables to GPU thread indices
s[B].bind(bz, block_z)
s[B].bind(by, block_y)
s[B].bind(bx, block_x)
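
For concreteness, the launch configuration implied by this binding follows directly from the constants above (this arithmetic is an illustrative addition, not part of the original text):

# blockIdx.z covers the fused spatial axes: out_size * out_size = 14 * 14 = 196 blocks
# blockIdx.y covers the output channels:   out_channel // block_factor = 512 // 64 = 8 blocks
# blockIdx.x covers the batch:             batch // block_factor       = 256 // 64 = 4 blocks
print("grid dims (z, y, x):", out_size * out_size,
      out_channel // block_factor, batch // block_factor)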

Virtual Thread Split

We further split the workload from a thread block down to individual threads. To avoid memory bank conflicts, we use virtual threads to split the area into 4 parts, and then tile them into 8 x 8 grids. Therefore, as shown in the figure below, each thread computes 4 strided grids, where each grid is of size 4 x 4.

tyz, fi = s[B].split(fi, nparts=vthread)  # virtual thread split
txz, ni = s[B].split(ni, nparts=vthread)  # virtual thread split
ty, fi = s[B].split(fi, nparts=num_thread)
tx, ni = s[B].split(ni, nparts=num_thread)
s[B].reorder(bz, by, bx, tyz, txz, ty, tx, fi, ni)

s[B].bind(tyz, thread_yz)
s[B].bind(txz, thread_xz)
s[B].bind(ty, thread_y)
s[B].bind(tx, thread_x)
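
To see how the 64 x 64 per-block tile is distributed across threads (again an illustrative addition, derived from the constants above):

# vthread (2 x 2) -> threadIdx (8 x 8) -> per-thread tile (fi x ni).
# Each thread therefore owns 64 // 2 // 8 = 4 elements along fi and along ni,
# i.e. it computes 4 strided 4 x 4 grids, matching the figure described above.
print("per-thread tile size:", block_factor // vthread // num_thread)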

Cooperative Fetching

As mentioned before, at each time step we need to transfer step x block_factor data from GPU global memory to shared memory. In order to reduce the memory transfer per thread, the following code lets threads in the same thread block cooperatively fetch the dependent data from global memory.

# Schedule BL local write
s[BL].compute_at(s[B], tx)
yi, xi, fi, ni = s[BL].op.axis
ry, rx, rc = s[BL].op.reduce_axis
rco, rci = s[BL].split(rc, factor=step)
s[BL].reorder(rco, ry, rx, rci, fi, ni)

# Attach computation to iteration variables
s[AA].compute_at(s[BL], rx)
s[WW].compute_at(s[BL], rx)
s[AL].compute_at(s[BL], rci)
s[WL].compute_at(s[BL], rci)

# Schedule for A's shared memory load
yi, xi, ci, ni = s[AA].op.axis
ty, ci = s[AA].split(ci, nparts=num_thread)
tx, ni = s[AA].split(ni, nparts=num_thread)
_, ni = s[AA].split(ni, factor=4)
s[AA].reorder(ty, tx, yi, xi, ci, ni)
s[AA].bind(ty, thread_y)
s[AA].bind(tx, thread_x)
s[AA].vectorize(ni)  # vectorize memory load

# Schedule for W's shared memory load
yi, xi, ci, fi = s[WW].op.axis
ty, ci = s[WW].split(ci, nparts=num_thread)
tx, fi = s[WW].split(fi, nparts=num_thread)
_, fi = s[WW].split(fi, factor=4)
s[WW].reorder(ty, tx, yi, xi, ci, fi)
s[WW].bind(ty, thread_y)
s[WW].bind(tx, thread_x)
s[WW].vectorize(fi)  # vectorize memory load
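
To make the cooperative load concrete (an illustrative calculation, not in the original text): per inner reduction step each block needs a (1, 1, step, block_factor) = (1, 1, 8, 64) slice of Apad, and a slice of the same size from W.

# 512 elements are loaded by the 8 x 8 = 64 threads of the block, so every
# thread fetches 512 / 64 = 8 elements, issued as two 4-wide vectorized loads
# because of the final factor-4 split above.
print("elements per thread per load:",
      step * block_factor // (num_thread * num_thread))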

Generate the CUDA Kernel

Finally, we use TVM to generate and compile the CUDA kernel, and evaluate the latency of the convolution.

func = tvm.build(s, [A, W, B], 'cuda')
ctx = tvm.gpu(0)
a_np = np.random.uniform(size=(in_size, in_size, in_channel, batch)).astype(A.dtype)
w_np = np.random.uniform(size=(kernel, kernel, in_channel, out_channel)).astype(W.dtype)
a = tvm.nd.array(a_np, ctx)
w = tvm.nd.array(w_np, ctx)
b = tvm.nd.array(np.zeros((out_size, out_size, out_channel, batch), dtype=B.dtype), ctx)
func(a, w, b)
evaluator = func.time_evaluator(func.entry_name, ctx, number=1)
print('Convolution: %f ms' % (evaluator(a, w, b).mean * 1e3))

Output:

Convolution: 37.071140 ms #1066
Convolution: 16.331274 ms #1080TI
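
As an optional sanity check (not part of the original tutorial), the GPU result can be compared against a naive NumPy reference; the direct HWCN convolution below is slow but easy to trust:

# Naive direct convolution in HWCN layout, for verification only.
a_pad = np.zeros((in_size + 2*pad, in_size + 2*pad, in_channel, batch), dtype=A.dtype)
a_pad[pad:pad + in_size, pad:pad + in_size, :, :] = a_np
b_ref = np.zeros((out_size, out_size, out_channel, batch), dtype=B.dtype)
for yy in range(out_size):
    for xx in range(out_size):
        for ky in range(kernel):
            for kx in range(kernel):
                # sum over input channels for every (out_channel, batch) pair
                b_ref[yy, xx, :, :] += np.einsum(
                    'cn,cf->fn',
                    a_pad[yy*stride + ky, xx*stride + kx, :, :],
                    w_np[ky, kx, :, :])
np.testing.assert_allclose(b.asnumpy(), b_ref, rtol=1e-3)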