This is Felix Kuehling, long-time KFD driver architect. I started looking into the TinyGrad source code yesterday, focusing on ops_kfd.py, ops_hsa.py and driver/hsa.py, to understand how TinyGrad talks to our HW and help with the ongoing debugging effort from the top down. This analysis is based on this commit: https://github.com/tinygrad/tinygrad/tree/3de855ea50d72238deac14fc05cda2a611497778
I'm intrigued by the use of Python for low-level programming. I think I can learn something from your use of ctypes and clang2py for fast prototyping and test development. I want to share some observations based on my initial review.
ops_kfd looks pretty new, and I see many problems with it based on my long experience working on KFD. I think it's interesting, but probably not relevant for the most pressing problems at hand, so I'll cover that last.
ops_hsa uses ROCr APIs to manage GPU memory, create a user mode AQL queue for GPU kernel dispatch, perform async SDMA copies, and do signal-based synchronization with barrier packets between the two. There is also some host-side synchronization used for lazy cleanup of reusable signals and freeing memory. I only see a couple of potential problems so far:
- AQLQueue.blit_packets writes multiple packets, header first. This is problematic because the AQL packet processor can start reading packets with a valid header even before you update the write-index and ring the doorbell. I only see this used in HSAGraph, and I don't understand the rest of TinyGrad well enough yet to know whether this can happen in a typical ResNet run
- Even in submit_kernel and submit_barrier, you may need a memory barrier before writing the header, to make sure the writes complete in the right order on the CPU. I don't know if Python does that implicitly, e.g. because of overheads in the interpreter
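To make the ordering in the two notes above concrete, here is a hedged ctypes sketch of how a single AQL packet should be published: body first, header last as one aligned 32-bit store, doorbell only after that. The Queue layout, field names and bare header values are illustrative stand-ins, not TinyGrad's actual classes (a real AQL header also encodes fence scopes and the barrier bit in its upper bits):

```python
import ctypes

AQL_PACKET_BYTES = 64
HSA_PACKET_TYPE_INVALID = 1          # packet processor skips this slot
HSA_PACKET_TYPE_KERNEL_DISPATCH = 2

def submit_one(queue, body, header_word):
    """Write one AQL packet: body first, header last, doorbell very last."""
    slot = queue.wptr % queue.num_slots
    base = queue.ring_base + slot * AQL_PACKET_BYTES
    hdr = ctypes.cast(base, ctypes.POINTER(ctypes.c_uint32))

    # 1. Invalidate the slot so a stale header can't be consumed mid-write.
    hdr[0] = HSA_PACKET_TYPE_INVALID

    # 2. Fill in the packet body (everything after the first 4 bytes).
    ctypes.memmove(base + 4, body, AQL_PACKET_BYTES - 4)

    # 3. Publish the header LAST, as one aligned 32-bit store. In C this
    #    would need a release barrier before it; CPython's interpreter
    #    overhead probably orders the stores in practice, but that is not
    #    a documented guarantee.
    hdr[0] = header_word

    # 4. Only then bump the write index and ring the doorbell.
    queue.wptr += 1
    queue.write_index[0] = queue.wptr
    queue.doorbell[0] = queue.wptr
```

The key property is that the packet processor can poll the ring at any time: until step 3 it only ever sees an INVALID header for this slot, so steps 1-2 can never be observed half-done.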
Now my notes on ops_kfd. There is a good chance I missed something and I pick up something new every time I look at the code, so please take these with a grain of salt:
- In HWComputeQueue.submit, AQL packet headers must be written after the packet contents. You may also need a memory barrier to ensure the writes complete in the right order on the CPU. The AQL packet processor can start working on packets as soon as it sees a valid header, even before you ring the doorbell
- Sharing device.completion_signal: This can cause race conditions when overwriting or waiting for a signal value before the previous dispatch has completed. Before reusing a signal, you need to wait for it. KFDAllocator.copyout waits for the signal, but then reuses it for multiple SDMA commands in the loop. The wait at the end may get triggered by something that's not the last SDMA command. To avoid this, I'd only signal after the last SDMA command. In copyin I don't see any waiting at all before using the signal
- AQLAllocator.transfer seems to use the destination device for the data copy. I would expect writing to be faster than reading (easier to hide latency), so using the source device may perform better
- Is there some code I'm missing to map either the source or destination on the other GPU for AQLAllocator.transfer?
- Operations on wptr and doorbells may not be atomic: This could cause race conditions if the HW sees half-complete values. I don't know ctypes very well, so I don't know what atomicity guarantees it makes
- No virtual address alignment to optimize for huge pages: This will lead to bad TLB efficiency, more page table allocations, slower memory allocation and reduced access performance
- No suballocator for small VRAM allocations: Similar to above, if you have many small allocations, it will lead to more memory management overhead and reduced access performance
- Highest queue priority: I don't think this gains anything if all queues end up with the same priority, but it may risk other issues by starving kernel queues (if you ever need interop, mostly for video processing)
- Mapping only one doorbell page per GPU: Each process has two doorbell pages per GPU. You should map both. Otherwise you may have problems if you're using more SDMA queues later that end up using some of the doorbells in the second page due to how doorbells get routed in the HW
- Queue overruns are only detected after the queues have already been corrupted
- No fallback to shader-based copies when SDMA queues run out: There are a limited number of SDMA queues in the HW and we don't oversubscribe them at the moment because low latency is one of the big advantages of using SDMA over shader-based copies. When they run out, SDMA queue creation will fail. ROCr has a fallback to use shader-based copies for this. As long as you run a small number of processes concurrently and use a small number of SDMA queues per device, this is no problem
- Using the same BO for compute and SDMA read/write pointers: Not a problem now, but be aware that the SDMA engine writes some queue usage information and internal scratch data after the RPTR
- Circumventing ROCr breaks rocm-gdb. You won't be able to use it for debugging compute kernels
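The signal-reuse hazard above can be sketched as a small protocol. The Signal class here is a toy stand-in for an HSA completion signal (real code would go through the ROCr signal API or KFD events); the point is the pattern, not the API: wait for the previous user of a signal before reprogramming it, and attach it only to the last command of a batch so an intermediate copy can never satisfy the final wait:

```python
import threading

class Signal:
    """Toy stand-in for an HSA signal: a value with blocking waits."""
    def __init__(self):
        self.value = 0
        self.cond = threading.Condition()
    def store(self, v):
        with self.cond:
            self.value = v
            self.cond.notify_all()
    def decrement(self):
        with self.cond:
            self.value -= 1
            self.cond.notify_all()
    def wait_eq(self, v):
        with self.cond:
            while self.value != v:
                self.cond.wait()

def batched_copy(signal, chunks, submit_copy):
    """Copy chunks with one shared signal, avoiding the reuse race.

    submit_copy(chunk, signal_or_None) is a hypothetical hook that queues
    one SDMA copy and, if given a signal, decrements it on completion.
    """
    if not chunks:
        return
    # Wait for any previous user of this signal before reprogramming it.
    signal.wait_eq(0)
    signal.store(1)
    for i, chunk in enumerate(chunks):
        # Only the LAST command carries the completion signal, so the
        # wait below cannot be triggered by an earlier copy.
        submit_copy(chunk, signal if i == len(chunks) - 1 else None)
    signal.wait_eq(0)
```

With a signal on every copy in the loop (as in copyout today), the final wait could fire after any of the commands; signaling only on the last one makes the wait mean "the whole batch is done".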
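On the huge-page and suballocator points: one simple mitigation is to round large VRAM allocation sizes (and, where the interface allows it, virtual addresses) up to the 2 MiB large-page size so the kernel can back them with fewer, bigger page table entries. A minimal helper, with the 2 MiB constant assumed rather than queried from the driver:

```python
HUGE_PAGE_SIZE = 2 * 1024 * 1024  # 2 MiB; assumed large-page size

def align_up(value: int, alignment: int) -> int:
    """Round value up to the next multiple of alignment (a power of two)."""
    return (value + alignment - 1) & ~(alignment - 1)

def padded_alloc_size(requested: int) -> int:
    # Padding small allocations to 2 MiB would waste VRAM; those are
    # better served by a suballocator, as suggested above.
    if requested >= HUGE_PAGE_SIZE:
        return align_up(requested, HUGE_PAGE_SIZE)
    return requested
```

For example, a 3 MiB request would be padded to 4 MiB, while a 4 KiB request is left alone and should instead come out of a suballocated slab.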
Some more background about the hang you were seeing and a glimpse into the work we're doing on this issue.
ROCm uses user mode queues. Hangs in those queues are not handled by the kernel mode driver as long as the queue is preemptible. For example if you dispatch a persistent shader kernel that basically executes an infinite loop, that queue is going to "soft hang" indefinitely. But as long as the GPU scheduler firmware (MES) can preempt the queue, there is no problem. Other queues can still run, new processes can come along and create more queues and get their work executed on the GPU (assuming the persistent one isn't blocking all the wave slots).
If your kernel causes a page fault (e.g. some out-of-bounds memory access) that is handled by the kernel mode driver, and as long as the queue is still preemptible, it can just terminate your process and not affect any other process in the system. So far this is how things are supposed to work.
What you're seeing in the kernel log is the MES scheduler firmware becoming unresponsive after a queue in the CP failed to respond to a preemption request. We're looking into ways to improve the robustness of the scheduler or the driver to recover from such situations without a GPU reset if possible, by killing the wavefronts of the offending queue. If that fails, a full GPU reset is still the last resort. This will kill all the applications currently running on the GPU, but new processes should be able to use the GPU after the reset.
When you disable SDMA, you're only disabling its use in the user mode runtime. It's always needed in kernel mode for some buffer management operations. An SDMA hang detected by the kernel mode driver could also be a symptom of something else going wrong in the GPU.
We're finding and fixing some issues with the GPU reset programming sequence in our Linux driver on Navi3. We're also working on the robustness of the MES scheduler so we can recover from more situations without a full GPU reset. At the same time we're looking into understanding what's causing the hangs in the first place. We have some reproductions of such issues with TinyGrad at AMD now, so we're making progress. Getting to the bottom of that may require a bunch of low-level driver hacking and JTAG debugging of the hardware state. Our goal is to make handling of application errors as robust as possible, so that you can get back to debugging your application.