@csullivan
Last active November 14, 2022 17:29

Authored-by: Eric Lunderberg

Notes summarizing discussion between @Lunderberg and @csullivan on 2022_10_25

Considerations of Pad/Crop represented separately from bijective transformations

From the previous conversation: the possibility of representing pad/crop separately from the layout transform. This would allow algebraic proofs to be done in the simpler coordinate system, before applying the layout transform.

However, not all types of padding can be represented in this manner. As an example, suppose we want to pad a buffer such that every 8th value is padding. This can be represented in one step with a padded transform, but requires three steps when the padding is introduced separately.

# With padded transforms
transform_layout(index_map = lambda i: [i//7, (i%7)%8], pad_value=0)

# With pad/crop and bijective transforms
insert_pad_crop(new_shape = [7 * ceildiv(A.shape[0], 7)])
transform_layout(index_map = lambda i: [i//7, i%7])
insert_pad_crop(new_shape = [A.shape[0], 8])

Any cancellation of the second pad/crop would need to be done after the layout transform. Therefore, we can't get away from performing algebraic proofs within the transformed layout.

While this is a somewhat contrived example, it could easily occur in practice. Suppose a conv1d with filter size 2 uses vector operations of size 8. The implementation uses a sliding window of size 8, which advances by 7 elements at a time. (Assume alignment restrictions are handled by a cache_read.) Each application of the vector operation would produce 8 values, the last of which is junk. If the output of the conv1d is then matrix multiplied by a constant matrix, the above transform could be applied to the constant matrix. This would result in a pad value (zero) at every location corresponding to a junk value, which could be used to vectorize the matrix multiplication.
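To make the junk-lane bookkeeping concrete, here is a minimal plain-Python sketch (not TVM code; the function name conv1d_8wide and the wrap-around on lane 7 are illustrative assumptions) of an 8-wide implementation whose windows advance by 7 elements:

# Sketch only: conv1d with filter width 2, computed in 8-wide vector chunks
# whose input windows advance by 7 elements at a time.
def conv1d_8wide(data, w0, w1):
    out = []
    for start in range(0, len(data) - 1, 7):        # window advances by 7
        window = data[start:start + 8]
        window = window + [0] * (8 - len(window))   # zero-fill a ragged tail
        # One "vector op": lane j needs window[j] and window[j+1].  Lane 7 has
        # no valid right neighbour inside the window, so its result is junk
        # (wrapped around here only to keep the sketch total).
        out.extend(w0 * window[j] + w1 * window[(j + 1) % 8] for j in range(8))
    return out                                      # every 8th output lane is junk

Lanes 0-6 of each chunk are valid and lane 7 is junk, so applying the [i//7, (i%7)%8] transform with pad_value=0 to the constant matrix lines each junk lane up against a zero, and the matrix multiplication can stay fully vectorized.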

@Lunderberg

Is your argument the following: hoisting the y_crop_1 TIR block into a compact representation with cropped_value = 0 would require dataflow analysis?

Essentially, yes. The difficulties involved in identifying a TIR block and hoisting out a compact representation are roughly the same as the difficulties in proving a memcpy in TIR.

But that is not my main argument for splitting layout transformation into pad, crop and pure_layout_transform primitives. The main argument for splitting is that it is hard to prove that two layout transforms are inverses of each other if there is implicit padding and cropping in the compact representation or the TIR block representation.

Hmm. I suppose I'm not seeing the difficulty in proving two layout transforms to be inverses of each other. I would see four different cases for compact representations that could be canceled out (a brute-force sketch of the composition check follows the list).

  1. A layout_transform(A) followed by a layout_transform(B). Since the layout_transform can introduce implicit padding, if either layout_transform introduces padding, the sequence of two transforms introduces padding, and is therefore not a no-op. The transformations cancel out if A(B(indices)) == indices and both transformations are bijective.

  2. An inv_layout_transform(A, pad_value=x) followed by a layout_transform(B, pad_value=y). The inv_layout_transform can crop out padding, which is then added back in by the layout_transform. The two compact representations cancel out if A is equivalent to B, and x == y.

  3. A layout_transform(A, pad_value=x) followed by an inv_layout_transform(B, pad_value=y). The layout_transform can introduce implicit padding, which is removed by the inv_layout_transform. The two compact representations cancel out if A is the same as B.

  4. An inv_layout_transform(A) followed by an inv_layout_transform(B). Since the inv_layout_transform can crop out implicit padding, if either inv_layout_transform crops out padding, the sequence of two inverse transforms changes the size of the buffer, and is therefore not a no-op. The transformations cancel out if B(A(indices)) == indices and both transformations are bijective.
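As a concrete illustration of the cancellation check in cases 1 and 4, here is a brute-force sketch in plain Python (not TVM's algebraic machinery; the helper name cancels_out and the enumeration over a small extent are assumptions made for illustration, and for bijective maps checking either composition order is equivalent):

# Sketch only: two index maps cancel if composing them returns the original
# index for every logical element, the forward map is injective, and no
# padding is introduced (the transformed shape holds exactly `extent` values).
def cancels_out(A, B, extent):
    seen = set()
    for i in range(extent):
        io, ii = A(i)                     # forward transform
        if (io, ii) in seen or B(io, ii) != i:
            return False                  # not injective, or B is not the inverse
        seen.add((io, ii))
    rows = 1 + max(io for io, _ in seen)
    cols = 1 + max(ii for _, ii in seen)
    return rows * cols == extent          # a larger transformed shape means padding

A = lambda i: (i // 7, i % 7)             # layout_transform
B = lambda io, ii: io * 7 + ii            # candidate inverse

print(cancels_out(A, B, extent=14))       # True: exact inverse, no padding
print(cancels_out(A, B, extent=16))       # False: 16 elements pad out to a 3x7 buffer

An algebraic version would replace the enumeration with simplification of the composed expression; the padding bookkeeping is the part that distinguishes the four cases above.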

@csullivan
Author

Memory layouts for the above-mentioned IndexMaps, supposing an input buffer of 16 elements.


  ┌─Physical-index-space───IndexMap:[i//7,i%7]─────────────────┐
  │                                                            │
 ┌▼─┬──┬──┬──┬──┬──┬──┬──┬──┬──┬──┬──┬──┬──┬──┬──┬──┬──┬──┬──┬─▼┐
 │00│01│02│03│04│05│06│07│08│09│10│11│12│13│14│15│16│17│18│19│20│
 └▲─┴──┴──┴──┴──┴──┴──┴──┴──┴──┴──┴──┴──┴──┴──┴─▲└──┴──┴──┴──┴──┘
  │                                             │
  └─Logical-index-space─────────────────────────┘



  ┌─────IndexMap:[i//7,i%7]─┐
  │                         │
  │      ┌──┬──┬──┬──┬──┬──┐▼─┐
  │      │00│01│02│03│04│05│06│
  │      ├──┼──┼──┼──┼──┼──┼──┤
  │      │07│08│09│10│11│12│13│
  │      └──┼──┼──┼──┼──┼──┼──┤
  └──────►14│15│16│17│18│19│20│
         └──┴──┘▲─┴──┴──┴──┴─▲┘
                │            │
                │            │
                └─pad-values─┘


  ┌─IndexMap:[i//7,(i%7)%8]─┐
  │                         │
  │      ┌──┬──┬──┬──┬──┬──┐▼─┬──┐
  │      │00│01│02│03│04│05│06│07◄─┐
  │      ├──┼──┼──┼──┼──┼──┼──┼──┐ │
  │      │08│09│10│11│12│13│14│15◄─┤
  │      └──┼──┼──┼──┼──┼──┼──┼──┐ │
  └──────►16│17│18│19│20│21│22│23◄─┤
         └──┴──┘▲─┴──┴──┴──┴─▲└─▲┘ │
                │            │  │  │
                │            │  │  │
                └─pad-values─┴──┴──┘


  ┌─Physical-index-space───IndexMap:[i//7,(i%7)%8]─────────────────────┐
  │                                                                    │
 ┌▼─┬──┬──┬──┬──┬──┬──┬──┬──┬──┬──┬──┬──┬──┬──┬──┬──┬──┬──┬──┬──┬──┬──┐▼─┐
 │00│01│02│03│04│05│06│xx│08│09│10│11│12│13│14│xx│16│17│18│19│20│21│22│xx│
 └──┴──┴──┴──┴──┴──┴──┴──┴──┴──┴──┴──┴──┴──┴──┴──┴──┴──┴──┴──┴──┴──┴──┴──┘



Note the pad value at every 8th position in the physical memory layout in the last figure above.
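The figures can be reproduced with a short plain-Python sketch (not TVM API; the variable names are illustrative) that maps a 16-element logical buffer through [i//7, (i%7)%8] and lists which physical slots end up as padding:

# Sketch only: which slots of the row-major [ceildiv(16, 7), 8] buffer hold data?
# Note (i%7)%8 == i%7 for the data itself; the %8 only widens each row to 8.
N, ROW, PADDED_ROW = 16, 7, 8
rows = -(-N // ROW)                                            # ceildiv(16, 7) == 3
occupied = {PADDED_ROW * (i // ROW) + i % ROW for i in range(N)}
padding = sorted(set(range(rows * PADDED_ROW)) - occupied)
print(padding)                                                 # [7, 15, 18, 19, 20, 21, 22, 23]

Slots 7, 15, and 23 (every 8th value) are the structural pads introduced by the %8, while slots 18 through 22 are the ragged-tail pads from rounding 16 elements up to 3 rows of 7, matching the pad-value arrows in the figures above.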

@sunggg

sunggg commented Nov 7, 2022

@csullivan thank you for the beautiful figure! 😄
One question - this example adds padding at the rightmost side of axis=1.
How does this repr add the padding at the leftmost side?

@Lunderberg

@sunggg Padding at the left side would be represented as [i//7, ( (i%7) + 1) %8]. The %8 introduces the same requirement that its left argument be padded out to be divisible by the right argument, and the expression dictates the exact mapping from pre-transformation indices to post-transformation indices.
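A quick sanity check (plain-Python sketch, not TVM API) confirms that [i//7, ((i%7)+1)%8] never writes column 0, so the pad lands at the leftmost slot of each 8-wide row:

# Sketch only: which columns of the 8-wide rows does the left-padded map hit?
N, ROW, PADDED_ROW = 16, 7, 8
occupied_cols = {((i % ROW) + 1) % PADDED_ROW for i in range(N)}
print(sorted(occupied_cols))                     # [1, 2, 3, 4, 5, 6, 7]
print(set(range(PADDED_ROW)) - occupied_cols)    # {0}: the leftmost column is padding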
