[Paper Notes] Generating High-Performance Micro-kernels on Arm with Exo

Introduction

Tackling the Matrix Multiplication Micro-kernel Generation with EXO, https://arxiv.org/pdf/2310.17408. I am not sure what tier of conference or journal it appeared in, but the author list includes people from MIT and UC Berkeley, so the quality should be fine.
The authors used Exo to write a matrix-multiplication micro-kernel code generator for Arm Neon: https://github.com/adcastel/EXO_ukr_generator
Because the kernels are generated, it is easy to try out kernels of different sizes.
The paper covers the following:

  • The single-threaded BLIS GEMM algorithm
  • An introduction to the Exo language/compiler
  • A detailed walkthrough of using Exo to generate high-performance Arm Neon micro-kernels
  • A performance evaluation showing its advantages

BLIS GEMM

The usual story: blocking, packing, a macro-kernel, and a micro-kernel. Because hand-writing kernels is so laborious, BLIS provides only a single hand-written micro-kernel per platform.
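For context, here is a minimal C sketch (my own, not from the paper) of that structure: the five blocking loops around an MR x NR micro-kernel, with packing simplified to per-micro-panel copies and edge tiles handled by passing the reduced mr/nr sizes to a reference kernel. The blocking parameters NC/KC/MC are illustrative.

enum { NC = 1024, KC = 256, MC = 128, MR = 8, NR = 12 };

static int imin(int a, int b) { return a < b ? a : b; }

/* Reference micro-kernel: C[0:mr, 0:nr] += Ac * Bc, where Ac is a packed
 * KC x MR panel, Bc is a packed KC x NR panel, and C is column-major. */
static void ukernel_ref(int mr, int nr, int kc,
                        const float *Ac, const float *Bc,
                        float *C, int ldc) {
    for (int k = 0; k < kc; ++k)
        for (int j = 0; j < nr; ++j)
            for (int i = 0; i < mr; ++i)
                C[i + j * ldc] += Ac[k * MR + i] * Bc[k * NR + j];
}

/* Five loops around the micro-kernel (A, B, C column-major). Packing is
 * done per micro-panel here for brevity; BLIS hoists it further out. */
void gemm_blis_sketch(int M, int N, int K,
                      const float *A, int lda,
                      const float *B, int ldb,
                      float *C, int ldc) {
    float Ac[KC * MR], Bc[KC * NR];
    for (int jc = 0; jc < N; jc += NC)                               /* loop 5 */
        for (int pc = 0; pc < K; pc += KC) {                         /* loop 4 */
            int kc = imin(KC, K - pc);
            for (int ic = 0; ic < M; ic += MC)                       /* loop 3 */
                for (int jr = jc; jr < imin(jc + NC, N); jr += NR) { /* loop 2 */
                    int nr = imin(NR, N - jr);
                    for (int k = 0; k < kc; ++k)      /* pack a B micro-panel */
                        for (int j = 0; j < nr; ++j)
                            Bc[k * NR + j] = B[(pc + k) + (jr + j) * ldb];
                    for (int ir = ic; ir < imin(ic + MC, M); ir += MR) { /* loop 1 */
                        int mr = imin(MR, M - ir);
                        for (int k = 0; k < kc; ++k)  /* pack an A micro-panel */
                            for (int i = 0; i < mr; ++i)
                                Ac[k * MR + i] = A[(ir + i) + (pc + k) * lda];
                        ukernel_ref(mr, nr, kc, Ac, Bc, &C[ir + jr * ldc], ldc);
                    }
                }
        }
}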

Introduction to Exo

Like most compilation and tuning frameworks (Halide, TVM), you give Exo an operator definition and then, on top of that operator, manually apply a sequence of optimizations (tiling, vectorization) using the scheduling primitives Exo provides.
The difference is that Exo is not tied to LLVM or to any specific hardware; which platform to generate code for is supplied by the user as an input. Exo ultimately emits C code that uses intrinsics.
In this part the authors also take the opportunity to criticize TVM and stress the benefits of emitting C code directly (TVM can also emit C code, but apparently only C code that calls into a library): 1. no runtime dependencies are needed, and 2. the output can easily be integrated into an existing library in source form.
They also argue that parameter search is unnecessary, giving several examples to support the claim that approaches like AutoTVM, which search for parameters on top of user-defined templates, waste a great deal of effort.

Using Exo to generate high-performance Arm Neon micro-kernels

A function is marked as schedulable with the @proc annotation. The user then explicitly applies optimizations and constraints through Exo's scheduling functions, for example setting the tile size (8x12) and splitting the loops (8x12 -> (2x4)x(3x4)). This yields the scheduled Exo code, from which C code is finally generated together with the supplied inputs (architecture-specific intrinsic functions).

The original, unoptimized and unscheduled GEMM micro-kernel defined in Exo

The authors assume here that alpha and beta are both 1. The code below is copied directly from the paper. By default the C tile is column-major, the A panel is column-major, and the B panel is row-major, which guarantees contiguous accesses.

@proc
def ukernel_ref( MR: size, NR: size, KC: size,
                 alpha: f32[1], Ac: f32[KC, MR] @ DRAM,
                 Bc: f32[KC, NR] @ DRAM, beta: f32[1],
                 C: f32[NR, MR] @ DRAM,
):

    # C += Ac * Bc
    for k in seq(0, KC):
        for j in seq(0, NR):
            for i in seq(0, MR):
                C[j, i] += Ac[k, i] * Bc[k, j]

Setting MR and NR to 8x12

The USER CODE here is written by hand; the RESULTING EXO GENERATED CODE below it is said to be the intermediate Exo representation produced by the schedule.

# USER CODE
MR, NR = 8, 12
p = rename(ukernel_ref, "uk{}x{}".format(MR, NR))
p = p.partial_eval(MR, NR)

# RESULTING EXO GENERATED CODE
def uk_8x12( KC: size, alpha: f32[1] @ DRAM,
             Ac: f32[KC, 8] @ DRAM, Bc: f32[KC, 12] @ DRAM,
             beta: f32[1] @ DRAM, C: f32[12, 8] @ DRAM
):

    # C += Ac * Bc
    for k in seq(0, KC):
        for j in seq(0, 12):
            for i in seq(0, 8):
                C[j, i] += Ac[k, i] * Bc[k, j]
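Partial evaluation here is essentially constant specialization: once MR and NR are fixed, every downstream transformation (and eventually the C compiler) sees fixed trip counts that can be unrolled and vectorized. A rough C analogy (my own, not the paper's):

// Generic kernel: mr and nr are runtime values, so the compiler cannot
// fully unroll or vectorize the inner loops.
void ukernel_generic(int mr, int nr, int kc,
                     const float *Ac, const float *Bc, float *C) {
    for (int k = 0; k < kc; ++k)
        for (int j = 0; j < nr; ++j)
            for (int i = 0; i < mr; ++i)
                C[j * mr + i] += Ac[k * mr + i] * Bc[k * nr + j];
}

// "Partially evaluated" kernel: MR = 8 and NR = 12 are baked in as
// constants, so the fixed-trip-count loops can be unrolled/vectorized.
void uk_8x12_specialized(int kc, const float *Ac, const float *Bc, float *C) {
    for (int k = 0; k < kc; ++k)
        for (int j = 0; j < 12; ++j)
            for (int i = 0; i < 8; ++i)
                C[j * 8 + i] += Ac[k * 8 + i] * Bc[k * 12 + j];
}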

Splitting the loops by the vector length

The m dimension is split into 2x4 and the n dimension into 3x4; Exo automatically generates the addressing expressions for the tiled A, B, and C matrices.

# USER CODE
p = divide_loop(p, 'i', 4, ['it', 'itt'], perfect=True)
p = divide_loop(p, 'j', 4, ['jt', 'jtt'], perfect=True)

# RESULTING EXO GENERATED CODE
def uk_8x12( KC: size, alpha: f32[1] @ DRAM,
             Ac: f32[KC, 8] @ DRAM, Bc: f32[KC, 12] @ DRAM,
             beta: f32[1] @ DRAM, C: f32[12, 8] @ DRAM
):

    # C += Ac * Bc
    for k in seq(0, KC):
        # Loop splits to match the vector length (4)
        for jt in seq(0, 3):
            for jtt in seq(0, 4):
                for it in seq(0, 2):
                    for itt in seq(0, 4):
                        C[jtt + 4 * jt, itt + 4 * it] +=
                            Ac[k, itt + 4 * it] *
                            Bc[k, jtt + 4 * jt]

So far this still makes no use of registers or vectorization.

Binding the 8x12 C tile to vector registers and using vectorized loads/stores

According to the authors, this is one of the most complex steps, because Exo leaves so much freedom in how it can be written.

# USER CODE
# 1) Map C buffer to vectorial register C_reg
Cp = 'C[4 * jt + jtt, 4 * it + itt]'
p = stage_mem(p, 'C[_] += _', Cp, 'C_reg')

# 2) Build a 3D structure of C_reg
p = expand_dim(p, 'C_reg', 4, 'itt', ...)
p = expand_dim(p, 'C_reg', MR//4, 'it', ...)
p = expand_dim(p, 'C_reg', NR, 'jt*4+jtt', ...)

# 3) Move the register declaration to the top
p = lift_alloc(p, 'C_reg', n_lifts=5)

# 4) Extract the C load and store from the k-loop
p = autofission(p, p.find('C_reg[_] = _').after(),
                n_lifts=5)
p = autofission(p, p.find('C[_] = _').before(),
                n_lifts=5)

# 5) Replace the indicated loops by Neon intrinsics
p = replace(p, 'for itt in _: _', neon_vld_4xf32)
p = replace(p, 'for itt in _: _', neon_vst_4xf32)

# 6) Set the C_reg memory to Neon
p = set_memory(p, 'C_reg', Neon)

# RESULTING EXO GENERATED CODE
def uk_8x12( KC: size, alpha: f32[1] @ DRAM,
             Ac: f32[KC, 8] @ DRAM, Bc: f32[KC, 12] @ DRAM,
             beta: f32[1] @ DRAM, C: f32[12, 8] @ DRAM
):

    # Registers for C
    C_reg: f32[12, 2, 4] @ Neon

    # Load C to registers
    for jt in seq(0, 3):
        for jtt in seq(0, 4):
            for it in seq(0, 2):
                neon_vld_4xf32(
                    C_reg[4 * jt + jtt, it, 0:4],
                    C[4 * jt + jtt, 4 * it:4 * it + 4]
                )

    # C += Ac * Bc
    for k in seq(0, KC):
        for jt in seq(0, 3):
            for jtt in seq(0, 4):
                for it in seq(0, 2):
                    for itt in seq(0, 4):
                        C_reg[jt * 4 + jtt, it, itt] +=
                            Ac[k, itt + 4 * it] *
                            Bc[k, jtt + 4 * jt]

    # Store C from registers
    for jt in seq(0, 3):
        for jtt in seq(0, 4):
            for it in seq(0, 2):
                neon_vst_4xf32(
                    C[jtt + 4 * jt, 4 * it:4 * it + 4],
                    C_reg[jtt + 4 * jt, it, 0:4]
                )
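To make the intrinsics concrete: in the emitted C code, a value placed in Neon memory such as C_reg becomes float32x4_t registers, and neon_vld_4xf32 / neon_vst_4xf32 roughly correspond to the plain arm_neon.h loads and stores. A hand-written sketch of what this stage amounts to in C (my own illustration, not Exo's actual output):

#include <arm_neon.h>

// C is the 12 x 8 micro-tile (column-major, as in the Exo proc),
// i.e. element C[j, i] lives at C[j * 8 + i].
void c_tile_load_store_sketch(float *C) {
    float32x4_t C_reg[12][2];              // C_reg: f32[12, 2, 4] @ Neon

    // Load C into registers, 4 floats per vld1q_f32
    for (int jt = 0; jt < 3; ++jt)
        for (int jtt = 0; jtt < 4; ++jtt)
            for (int it = 0; it < 2; ++it)
                C_reg[4 * jt + jtt][it] =
                    vld1q_f32(&C[(4 * jt + jtt) * 8 + 4 * it]);

    // ... accumulation into C_reg happens here ...

    // Store the registers back to C
    for (int jt = 0; jt < 3; ++jt)
        for (int jtt = 0; jtt < 4; ++jtt)
            for (int it = 0; it < 2; ++it)
                vst1q_f32(&C[(4 * jt + jtt) * 8 + 4 * it],
                          C_reg[4 * jt + jtt][it]);
}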

Binding the A and B operands to vector registers and using vectorized loads

The X in the USER CODE is presumably there because A and B are handled analogously and the authors did not want to write the sequence twice in the paper; in the actual code it would be A and B respectively.

# USER CODE
# 1) Map Xc buffer to vectorial register X_reg
p = bind_expr(p, 'Xc[_]', 'X_reg')

# 2) Build a 2D structure of X_reg
p = expand_dim(p, 'X_reg', 4, 'xtt', ...)
p = expand_dim(p, 'X_reg', XR//4, 'xt', ...)

# 3) Move the register declaration to the top
p = lift_alloc(p, 'X_reg', n_lifts=5)

# 4) Move the Xc load to the k-loop
p = autofission(p, p.find('X_reg[_] = _').after(),
                n_lifts=4)

# 5) Replace the xtt loop by Neon intrinsics
p = replace(p, 'for xtt in _: _', neon_vld_4xf32)

# 6) Set the X_reg memory to Neon
p = set_memory(p, 'X_reg', Neon)

# RESULTING EXO GENERATED CODE
def uk_8x12( KC: size, alpha: f32[1] @ DRAM,
             Ac: f32[KC, 8] @ DRAM, Bc: f32[KC, 12] @ DRAM,
             beta: f32[1] @ DRAM, C: f32[12, 8] @ DRAM
):

    # Registers for C and load C as in the
    # previous figure. Omitted for brevity

    # Registers for Ac and Bc
    A_reg: R[2, 4] @ Neon
    B_reg: R[3, 4] @ Neon
    for k in seq(0, KC):
        # Load Ac and Bc to registers
        for it in seq(0, 2):
            neon_vld_4xf32(
                A_reg[it, 0:4],
                Ac[k, 4 * it:4 + 4 * it])
        for jt in seq(0, 3):
            neon_vld_4xf32(
                B_reg[jt, 0:4],
                Bc[k, 4 * jt:4 + 4 * jt])
        for jt in seq(0, 3):
            for jtt in seq(0, 4):
                for it in seq(0, 2):
                    for itt in seq(0, 4):
                        C_reg[jtt + 4 * jt, it, itt] +=
                            A_reg[it, itt] * B_reg[jt, jtt]

    # Store C from registers as in the
    # previous figure. Omitted for brevity

Reordering loops and using the fmla instruction for the multiply-add

The it and jtt loops are swapped, so the nest becomes jt -> it -> jtt -> itt -> scalar multiply-add; the itt loop with its scalar multiply-add is then replaced by a vector multiply-add (fmla), giving the final nest jt -> it -> jtt -> fmla.

# USER CODE
p = reorder_loops(p, 'jtt it')
p = replace(p, 'for itt in _: _',
            neon_vfmla_4xf32_4xf32)

# RESULTING EXO GENERATED CODE
def uk_8x12( KC: size, alpha: f32[1] @ DRAM,
             Ac: f32[KC, 8] @ DRAM, Bc: f32[KC, 12] @ DRAM,
             beta: f32[1] @ DRAM, C: f32[12, 8] @ DRAM
):

    # Registers for C, Ac and Bc and loads as in
    # previous figures. Omitted for brevity

    # C += Ac * Bc
    for k in seq(0, KC):
        # Load Ac and Bc to registers as in
        # previous figures. Omitted for brevity

        # Computation with registers
        for jt in seq(0, 3):
            for it in seq(0, 2):
                for jtt in seq(0, 4):
                    neon_vfmla_4xf32_4xf32(
                        C_reg[jtt + 4 * jt, it, 0:4],
                        A_reg[it, 0:4], B_reg[0:4, jt], jtt)

    # Store C from registers as in
    # previous figures. Omitted for brevity
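In plain arm_neon.h terms, a fused multiply-add against a single lane of another vector is vfmaq_laneq_f32(acc, a, b, lane), which computes acc + a * b[lane]. The lane index must be a compile-time constant, so in real C the jtt loop has to be written out (or unrolled). Assuming neon_vfmla_4xf32_4xf32 lowers to something of this form, one (jt, it) iteration of the nest above looks roughly like this hand-written sketch:

#include <arm_neon.h>

// One (jt, it) iteration of the computation above, with the jtt loop
// written out because vfmaq_laneq_f32 needs an immediate lane index:
// C_reg[4*jt + jtt][it] += A_reg[it] * B_reg[jt][jtt], jtt = 0..3.
static inline void fmla_block(float32x4_t C_reg[12][2],
                              const float32x4_t A_reg[2],
                              const float32x4_t B_reg[3],
                              int jt, int it) {
    C_reg[4 * jt + 0][it] = vfmaq_laneq_f32(C_reg[4 * jt + 0][it], A_reg[it], B_reg[jt], 0);
    C_reg[4 * jt + 1][it] = vfmaq_laneq_f32(C_reg[4 * jt + 1][it], A_reg[it], B_reg[jt], 1);
    C_reg[4 * jt + 2][it] = vfmaq_laneq_f32(C_reg[4 * jt + 2][it], A_reg[it], B_reg[jt], 2);
    C_reg[4 * jt + 3][it] = vfmaq_laneq_f32(C_reg[4 * jt + 3][it], A_reg[it], B_reg[jt], 3);
}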

Unrolling the loops inside the k loop

# USER CODE
p = unroll_loop(p, 'it')
p = unroll_loop(p, 'jt')

# RESULTING EXO GENERATED CODE
def uk_8x12( KC: size, alpha: f32[1] @ DRAM,
             Ac: f32[KC, 8] @ DRAM, Bc: f32[KC, 12] @ DRAM,
             beta: f32[1] @ DRAM, C: f32[12, 8] @ DRAM
):

    # Registers for C and load C as in
    # previous figures. Omitted for brevity

    # Registers for Ac and Bc
    A_reg: R[2, 4] @ Neon
    B_reg: R[3, 4] @ Neon

    # C += Ac * Bc
    for k in seq(0, KC):
        # Unrolled loads from Ac and Bc to registers
        neon_vld_4xf32(A_reg[0, 0:4], Ac[k, 0:4 + 0])
        neon_vld_4xf32(A_reg[1, 0:4], Ac[k, 4:4 + 4])
        neon_vld_4xf32(B_reg[0, 0:4], Bc[k, 0:4 + 0])
        neon_vld_4xf32(B_reg[1, 0:4], Bc[k, 4:4 + 4])
        neon_vld_4xf32(B_reg[2, 0:4], Bc[k, 8:4 + 8])
        # Computation with registers
        for jt in seq(0, 3):
            for it in seq(0, 2):
                for jtt in seq(0, 4):
                    neon_vfmla_4xf32_4xf32(
                        C_reg[jtt + 4 * jt, it, 0:4],
                        A_reg[it, 0:4], B_reg[0:4, jt], jtt)

    # Store C from registers as in
    # previous figures. Omitted for brevity
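Putting the pieces together, the body of the k loop in the final generated C should look roughly like the following hand-written arm_neon sketch (my own illustration of the shape of the code, not the generator's literal output; C_reg is assumed to be loaded from and stored back to C as shown earlier):

#include <arm_neon.h>

// One iteration of the k loop of the 8x12 micro-kernel:
// Ac is a packed KC x 8 panel, Bc a packed KC x 12 panel.
static inline void k_iteration(const float *Ac_k,   // &Ac[k * 8]
                               const float *Bc_k,   // &Bc[k * 12]
                               float32x4_t C_reg[12][2]) {
    // Unrolled loads: 2 vectors of A, 3 vectors of B
    float32x4_t A_reg[2], B_reg[3];
    A_reg[0] = vld1q_f32(Ac_k + 0);
    A_reg[1] = vld1q_f32(Ac_k + 4);
    B_reg[0] = vld1q_f32(Bc_k + 0);
    B_reg[1] = vld1q_f32(Bc_k + 4);
    B_reg[2] = vld1q_f32(Bc_k + 8);

    // Rank-1 update of the 8x12 tile with FMLA-by-lane
    // (lane indices written out because they must be immediates)
    for (int jt = 0; jt < 3; ++jt)
        for (int it = 0; it < 2; ++it) {
            C_reg[4 * jt + 0][it] = vfmaq_laneq_f32(C_reg[4 * jt + 0][it], A_reg[it], B_reg[jt], 0);
            C_reg[4 * jt + 1][it] = vfmaq_laneq_f32(C_reg[4 * jt + 1][it], A_reg[it], B_reg[jt], 1);
            C_reg[4 * jt + 2][it] = vfmaq_laneq_f32(C_reg[4 * jt + 2][it], A_reg[it], B_reg[jt], 2);
            C_reg[4 * jt + 3][it] = vfmaq_laneq_f32(C_reg[4 * jt + 3][it], A_reg[it], B_reg[jt], 3);
        }
}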

That is the end of the walkthrough. The number of logical registers is never taken into account, and since the generated C code uses intrinsics the authors did not fully trust it: they compiled it down to assembly and inspected the result, which (according to them) turned out to be essentially the same as BLIS's hand-written kernel.

Edge cases

When the GEMM shape is awkward, for example 12554x64x147, then after blocking, unless you pad, not every tile can use the 8x12 kernel. This is where a kernel generator shines: just changing the MR/NR parameters produces edge-case kernels that use the same schedule as the 8x12 one. They still do not account for the number of available logical registers, but they do get tiling, vectorization, and loop unrolling. The authors also mention that sometimes packing is unnecessary, for example when the sizes are very small and no blocking is needed, in which case the kernel schedule (the USER CODE) has to be adjusted accordingly.
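A quick worked example for that 12554x64x147 shape (my own arithmetic; the edge-kernel names are hypothetical):

#include <stdio.h>

// With MR x NR = 8 x 12, the trailing tiles in the m and n dimensions
// cannot use the full 8x12 kernel.
int main(void) {
    int M = 12554, N = 64, MR = 8, NR = 12;
    printf("full row tiles: %d, leftover rows: %d\n", M / MR, M % MR);  // 1569, 2
    printf("full col tiles: %d, leftover cols: %d\n", N / NR, N % NR);  // 5, 4
    // So besides uk_8x12, this shape also needs 2x12, 8x4 and 2x4
    // edge kernels (hypothetical names), which the generator can emit
    // simply by changing the MR/NR parameters.
    return 0;
}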
The authors then discuss portability and switching data types; both only require swapping in the corresponding intrinsics.

Performance evaluation

solo mode

The kernel alone is run for 5 seconds and the GFLOPS are computed. In the plots, the Exo-generated kernels show some advantage over BLIS; in the 8x12 case this is reportedly because the hand-written kernel has to handle edge cases, which introduces overhead.
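For reference, each call to an MR x NR x KC micro-kernel performs 2*MR*NR*KC floating-point operations, so GFLOPS = 2*MR*NR*KC*reps / (elapsed_seconds * 1e9). A minimal timing sketch of that calculation (my own; it uses a fixed repetition count rather than a 5-second window, and a reference triple loop stands in for the generated kernel):

#include <stdio.h>
#include <time.h>

enum { MR = 8, NR = 12, KC = 256 };

// Stand-in for the generated micro-kernel (reference triple loop).
static void uk_ref(const float *Ac, const float *Bc, float *C) {
    for (int k = 0; k < KC; ++k)
        for (int j = 0; j < NR; ++j)
            for (int i = 0; i < MR; ++i)
                C[j * MR + i] += Ac[k * MR + i] * Bc[k * NR + j];
}

int main(void) {
    static float Ac[KC * MR], Bc[KC * NR], C[NR * MR];
    struct timespec t0, t1;
    long reps = 200000;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (long r = 0; r < reps; ++r)
        uk_ref(Ac, Bc, C);
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) * 1e-9;
    double flops = 2.0 * MR * NR * KC * (double)reps;   // one FMA = 2 flops
    printf("%.2f GFLOPS\n", flops / secs / 1e9);
    return 0;
}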

Square matrices

The authors compare BLIS against the generated kernels integrated both into BLIS and into their own higher-level algorithm; performance is sometimes better and sometimes worse.

Rectangular (non-square) matrices

Again, sometimes better and sometimes worse.