This is Felix Kuehling, long-time KFD driver architect. I started looking into the TinyGrad source code yesterday, focusing on ops_kfd.py, ops_hsa.py and driver/hsa.py, to understand how TinyGrad talks to our HW and help with the ongoing debugging effort from the top down. This analysis is based on this commit: https://github.com/tinygrad/tinygrad/tree/3de855ea50d72238deac14fc05cda2a611497778
I'm intrigued by the use of Python for low-level programming. I think I can learn something from your use of ctypes and clang2py for fast prototyping and test development. I want to share some observations based on my initial review.
ops_kfd looks pretty new, and I see many problems with it based on my long experience working on KFD. I think it's interesting, but probably not relevant for the most pressing problems at hand, so I'll cover that last.
ops_hsa uses ROCr APIs to manage GPU memory, create a user mode AQL queue for GPU kernel dispatch, async SDMA copies, and signal-based synchronization with barrier packets between the two. There is also some host-side synchronization used for lazy cleanup of reusable signals and freeing memory. I only see one potential problem so far:
- AQLQueue.blit_packets writes multiple packets, header first. This is problematic because the AQL packet processor can start reading packets with a valid header even before you update the write-index and ring the doorbell. I only see this used in HSAGraph, and I don't understand the rest of TinyGrad well enough yet to know whether this can happen in a typical ResNet run.
- Even in submit_kernel and submit_barrier, you may need a memory barrier before writing the header, to make sure the writes complete in the right order on the CPU. I don't know if Python does that implicitly, e.g. because of overheads in the interpreter.
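To make the ordering concrete, here is a minimal sketch of the body-first, header-last publishing rule. This is a hypothetical helper, not TinyGrad's actual code; the slot layout and names are illustrative:

```python
import ctypes

# Packet-type values from the HSA spec's packet-header encoding.
HSA_PACKET_TYPE_INVALID = 1
HSA_PACKET_TYPE_KERNEL_DISPATCH = 2

def write_packet(slot, body: bytes, header_word: int):
    """Publish one 64-byte AQL packet: body first, header last.

    `slot` is a ctypes buffer for one packet; the 16-bit header lives
    at offset 0 and should stay INVALID until the body is complete.
    """
    assert len(body) <= 62
    # 1) Fill in everything except the 2-byte header.
    ctypes.memmove(ctypes.addressof(slot) + 2, body, len(body))
    # 2) On a weakly ordered CPU a store fence belongs here; CPython has
    #    no portable fence primitive, so real code would need one via C.
    #    (x86 preserves store order, which may be why this works today.)
    # 3) Publish with a single 16-bit store of the header, last.
    ctypes.c_uint16.from_address(ctypes.addressof(slot)).value = header_word
```

Once the header store lands, the packet processor is free to execute the packet, so nothing after that store may still be filling in the body.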
Now my notes on ops_kfd. There is a good chance I missed something and I pick up something new every time I look at the code, so please take these with a grain of salt:
- In HWComputeQueue.submit, AQL packet headers must be written after the packet contents. You may also need a memory barrier to ensure the writes complete in the right order on the CPU. The AQL packet processor can start working on packets as soon as it sees a valid header, even before you ring the doorbell.
- Sharing device.completion_signal: this can cause race conditions when overwriting or waiting for a signal value before the previous dispatch has completed. Before reusing a signal, you need to wait for it. KFDAllocator.copyout waits for the signal, but then reuses it for multiple SDMA commands in the loop, so the wait at the end may be satisfied by a command that is not the last SDMA command. To avoid this, I'd only signal after the last SDMA command. In copyin I don't see any waiting at all before the signal is used.
- AQLAllocator.transfer seems to use the destination device for the data copy. I would expect writing to be faster than reading (easier to hide latency), so using the source device may perform better
- Is there some code I'm missing to map either the source or destination on the other GPU for AQLAllocator.transfer?
- Operations on wptr and doorbells may not be atomic: This could cause race conditions if the HW sees half-complete values. I don't know ctypes very well, so I don't know what atomicity guarantees it makes
- No virtual address alignments to optimize for huge pages: This will lead to bad TLB efficiency, more page table allocations, slower memory allocation and reduced access performance
- No suballocator for small VRAM allocations: Similar to above, if you have many small allocations, it will lead to more memory management overhead and reduced access performance
- Highest queue priority: I don't think this gains anything if all queues end up with the same priority, but it may risk other issues by starving kernel queues (relevant if you ever need interop, mostly for video processing).
- Mapping only one doorbell page per GPU: Each process has two doorbell pages per GPU. You should map both. Otherwise you may have problems if you're using more SDMA queues later that end up using some of the doorbells in the second page due to how doorbells get routed in the HW
- Queue overruns are only detected after corrupting the queues
- No fallback to shader-based copies when SDMA queues run out: There are a limited number of SDMA queues in the HW and we don't oversubscribe them at the moment because low latency is one of the big advantages of using SDMA over shader-based copies. When they run out, SDMA queue creation will fail. ROCr has a fallback to use shader-based copies for this. As long as you run a small number of processes concurrently and use a small number of SDMA queues per device, this is no problem
- Using the same BO for compute and SDMA read/write pointers: not a problem now, but be aware that the SDMA engine writes some queue usage information and internal scratch data after the RPTR.
- Circumventing ROCr breaks rocm-gdb. You won't be able to use it for debugging compute kernels
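On the huge-page point: rounding both allocation sizes and virtual-address hints up to 2 MiB (the x86-64 PMD-level huge-page size) is usually enough to let the kernel back the range with huge pages. A trivial helper, hypothetical and not TinyGrad code:

```python
HUGE_PAGE_SIZE = 2 * 1024 * 1024  # 2 MiB, x86-64 PMD-level huge page

def align_up(value: int, align: int = HUGE_PAGE_SIZE) -> int:
    # Round an allocation size or VA hint up to a huge-page boundary so
    # the kernel can use fewer, larger page-table entries (better TLB
    # efficiency, fewer page-table allocations).
    return (value + align - 1) & ~(align - 1)
```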
GTT memory is allocated as a buffer object in the kernel mode driver. This memory is not pageable in the traditional sense, though the memory manager does have its own swap mechanism. However, in the upstream kernel, GTT is limited to less than half of system memory capacity due to limitations with memory accounting and the OOM killer.
To get around that, we use pageable memory (plain mmap) for most system memory allocations, and then map them for GPU access using userptr BOs or our newer SVM API. This memory can be paged freely by the Linux kernel, so our driver needs to handle MMU notifiers when that happens, so we can keep our GPU page tables in sync. Fortunately we've optimized things to make that rare (except when NUMA balancing is on).
We use pageable memory for most system memory allocations so we get access to the full system memory capacity. We use GTT for memory that must be accessed in kernel mode, or shared between processes with DMABufs. If you don't need the full capacity, using GTT may be slightly faster (no MMU notifiers, faster to allocate and free).
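As a rough illustration of the pageable path: the allocation itself is just an anonymous mmap. The GPU-mapping step via a userptr BO or the SVM API is driver-specific and deliberately omitted here:

```python
import mmap

def alloc_pageable(size: int) -> mmap.mmap:
    # Plain anonymous mmap: pageable memory that the Linux kernel may
    # swap freely. A driver would then register it for GPU access with
    # a userptr BO or the SVM API (not shown), and handle MMU notifiers
    # to keep GPU page tables in sync when pages move.
    return mmap.mmap(-1, size)
```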
KFD_IOC_QUEUE_TYPE_COMPUTE uses PM4 packets. We use that in kfdtest for low-level testing with tiny hand-written assembly shaders. IME, doing dispatches with PM4 packets is a pain, requires low-level HW specs for writing GPU-specific dispatch registers and handling scratch memory, synchronization, cache flushes, HDP flushes etc. manually. There is lots of potential for getting things subtly wrong, or causing more hardware hangs by corrupting the wrong registers.
AQL is our "architected queuing language" that came out of the HSA initiative. It's very abstract by design to make it portable with a stable ABI, multi-producer semantics, well defined memory coherence, synchronization primitives and kernel calling conventions, which makes all of these features usable in 3rd-party runtime code such as your own. It also enables interoperability between different language runtimes. You've already seen the PM4 escape through a vendor packet that we use for icache flushes in the code-object loader. I find it unfortunate that we need that escape. Other than that, all our compute language runtimes for ROCm use AQL.
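For reference, the architected kernel-dispatch packet is a fixed 64-byte layout, which is what makes the ABI stable across vendors. The sketch below mirrors hsa_kernel_dispatch_packet_t from the HSA spec, with completion_signal shown as a raw uint64 handle for simplicity:

```python
import ctypes

class AQLDispatchPacket(ctypes.Structure):
    """64-byte AQL kernel-dispatch packet (hsa_kernel_dispatch_packet_t)."""
    _fields_ = [
        ("header", ctypes.c_uint16),                # type/barrier/fence bits; written last
        ("setup", ctypes.c_uint16),                 # number of grid dimensions
        ("workgroup_size_x", ctypes.c_uint16),
        ("workgroup_size_y", ctypes.c_uint16),
        ("workgroup_size_z", ctypes.c_uint16),
        ("reserved0", ctypes.c_uint16),
        ("grid_size_x", ctypes.c_uint32),
        ("grid_size_y", ctypes.c_uint32),
        ("grid_size_z", ctypes.c_uint32),
        ("private_segment_size", ctypes.c_uint32),  # per-work-item scratch
        ("group_segment_size", ctypes.c_uint32),    # LDS per work-group
        ("kernel_object", ctypes.c_uint64),         # code descriptor address
        ("kernarg_address", ctypes.c_uint64),       # kernel argument buffer
        ("reserved2", ctypes.c_uint64),
        ("completion_signal", ctypes.c_uint64),     # hsa_signal_t handle
    ]
```

Every AQL producer, whether ROCr or third-party runtime code, fills in this same structure, which is what makes multi-producer queues and cross-runtime interop workable.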