Made with my ANE-Optimizer Custom GPT.

Chat Link

The provided code applies several Apple Neural Engine (ANE) optimization principles to improve the performance of a language model, specifically the Llama model. Here's a detailed breakdown of the principles at work:

1. Proper Use of Inheritance and Initialization

  • Classes like LlamaRMSNorm, LlamaRotaryEmbedding, LlamaMLP, LlamaAttention, and LlamaDecoderLayer inherit from torch.nn.Module and initialize their components with super().__init__(), ensuring the base class is set up before any additional attributes or methods are added (a minimal sketch follows this list).
  • LlamaPreTrainedModel and LlamaModel properly initialize their base class PreTrainedModel using super().__init__(config).
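
As a minimal sketch of this pattern (using a hypothetical ToyBlock, not a class from the gist):

import torch

# Hypothetical module illustrating the initialization pattern: calling
# super().__init__() first lets torch.nn.Module register parameters and
# buffers before any submodules are assigned.
class ToyBlock(torch.nn.Module):
    def __init__(self, hidden_size):
        super().__init__()  # set up the nn.Module machinery before anything else
        self.proj = torch.nn.Linear(hidden_size, hidden_size, bias=False)

    def forward(self, x):
        return self.proj(x)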

2. Efficient Data Handling and Memory Optimization

  • Pre-allocated KV Cache: The attention mechanism uses a pre-allocated key-value (KV) cache, which avoids redundant memory allocations and copies, in line with the principle of minimizing memory copies (a rough sketch follows this list).
  • Efficient Data Formats: The code uses tensor operations and formats aligned with the hardware's capabilities, for instance working with 4D tensors and ensuring data is contiguous before reshaping.
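
As a rough illustration of the KV-cache idea (a hypothetical StaticKVCache, not the gist's exact implementation), buffers are allocated once at the maximum sequence length and each decoding step writes into a slice in place:

import torch

# Hypothetical pre-allocated KV cache: no torch.cat per step, so no
# repeated reallocations or copies during decoding.
class StaticKVCache:
    def __init__(self, batch, n_heads, max_seq_len, head_dim, dtype=torch.float16):
        self.k = torch.zeros(batch, n_heads, max_seq_len, head_dim, dtype=dtype)
        self.v = torch.zeros(batch, n_heads, max_seq_len, head_dim, dtype=dtype)

    def update(self, k_new, v_new, pos):
        # k_new / v_new: [batch, n_heads, steps, head_dim]
        end = pos + k_new.shape[2]
        self.k[:, :, pos:end] = k_new
        self.v[:, :, pos:end] = v_new
        return self.k[:, :, :end], self.v[:, :, :end]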

3. Optimized Attention Mechanism

  • Rotary Position Embedding (LlamaRotaryEmbedding): This class precomputes the sine and cosine values for positional encoding, caching them for reuse across forward passes. This reduces computational overhead and improves cache efficiency, in line with the principle of optimizing data handling for the ANE (see the code under "Specific Examples from the Code" below).
  • Application of Rotary Position Embedding (apply_rotary_pos_emb): This function applies the precomputed rotary embeddings to the query and key tensors without unnecessary data copying.

4. Layer and Model Structuring

  • LlamaMLP Class: Implements a multi-layer perceptron with a gating mechanism (sketched after this list), whose independent projections suit the ANE's parallel processing capabilities.
  • LlamaAttention and LlamaDecoderLayer Classes: These classes structure the attention mechanism and decoder layer around efficient tensor operations, performing operations in place where possible and reducing intermediate memory allocations.
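
A condensed sketch of that gated MLP (mirroring the standard Hugging Face LlamaMLP structure; the sizes are assumptions):

import torch

# Gated MLP in the Llama style: gate_proj modulates up_proj elementwise,
# then down_proj maps back to the hidden size. The three projections are
# independent matmuls, which parallel hardware can schedule efficiently.
class GatedMLP(torch.nn.Module):
    def __init__(self, hidden_size, intermediate_size):
        super().__init__()
        self.gate_proj = torch.nn.Linear(hidden_size, intermediate_size, bias=False)
        self.up_proj = torch.nn.Linear(hidden_size, intermediate_size, bias=False)
        self.down_proj = torch.nn.Linear(intermediate_size, hidden_size, bias=False)
        self.act_fn = torch.nn.SiLU()

    def forward(self, x):
        return self.down_proj(self.act_fn(self.gate_proj(x)) * self.up_proj(x))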

5. Handling Bandwidth-Bound Scenarios

  • Batch Processing and Efficient Memory Access: By structuring the attention and MLP layers to process data in batches and by minimizing reshapes and transposes, the code reduces the memory bandwidth these operations require, which is critical in bandwidth-bound scenarios on the ANE (a toy illustration follows).
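
A toy illustration (not code from the gist) of why layout changes are expensive: a transpose creates a non-contiguous view, and reshaping that view later forces a full copy, so paying for one explicit layout change up front keeps the hot path copy-free:

import torch

b, s, h, d = 1, 128, 8, 64              # assumed toy sizes
x = torch.randn(b, s, h * d)

q = x.view(b, s, h, d).transpose(1, 2)  # [b, h, s, d] view; no data moved yet
q = q.contiguous()                      # exactly one copy, paid up front

# Later attention math reuses the same buffer; matmul handles the strided
# transpose of the last two dims without another copy.
scores = torch.matmul(q, q.transpose(-2, -1)) / d ** 0.5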

6. Utilizing ANE-Friendly Operations

  • Linear Layers and Tensor Operations: The use of linear projections (nn.Linear) and efficient tensor operations (torch.einsum in the rotary embedding) aligns with the ANE's preferred operations, yielding better performance on the hardware (a quick sanity check follows).
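
For instance, the rotary embedding's torch.einsum computes the outer product of time steps and inverse frequencies, which can be sanity-checked against torch.outer (a quick check, not code from the gist):

import torch

t = torch.arange(8, dtype=torch.float32)                       # time steps
inv_freq = 1.0 / (10000.0 ** (torch.arange(0, 64, 2).float() / 64))
freqs = torch.einsum("i,j->ij", t, inv_freq)                   # [seq_len, dim // 2]
assert torch.allclose(freqs, torch.outer(t, inv_freq))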

Specific Examples from the Code:

Rotary Position Embedding

import torch

class LlamaRotaryEmbedding(torch.nn.Module):
    def __init__(self, dim, max_position_embeddings=2048, base=10000.0, device=None):
        super().__init__()
        self.dim = dim
        self.max_position_embeddings = max_position_embeddings
        self.base = base
        inv_freq = 1.0 / (self.base ** (torch.arange(0, self.dim, 2).float().to(device) / self.dim))
        self.register_buffer("inv_freq", inv_freq)
        self._set_cos_sin_cache(seq_len=max_position_embeddings, device=self.inv_freq.device, dtype=torch.get_default_dtype())

    def _set_cos_sin_cache(self, seq_len, device, dtype):
        # Precompute and cache the cos/sin tables (standard Hugging Face version).
        self.max_seq_len_cached = seq_len
        t = torch.arange(self.max_seq_len_cached, device=device, dtype=self.inv_freq.dtype)
        freqs = torch.einsum("i,j->ij", t, self.inv_freq)
        emb = torch.cat((freqs, freqs), dim=-1)
        self.register_buffer("cos_cached", emb.cos()[None, None, :, :].to(dtype), persistent=False)
        self.register_buffer("sin_cached", emb.sin()[None, None, :, :].to(dtype), persistent=False)
  • Precomputes and caches sine and cosine positional encodings, minimizing redundant computations during each forward pass.
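
For completeness, the matching forward pass (reproduced from the standard Hugging Face implementation of this vintage, since the gist excerpt omits it) only reads from the caches, regrowing them when a longer sequence arrives:

    def forward(self, x, seq_len=None):
        # Serve cos/sin from the cache; recompute only if it is too short.
        if seq_len > self.max_seq_len_cached:
            self._set_cos_sin_cache(seq_len=seq_len, device=x.device, dtype=x.dtype)
        return (
            self.cos_cached[:, :, :seq_len, ...].to(dtype=x.dtype),
            self.sin_cached[:, :, :seq_len, ...].to(dtype=x.dtype),
        )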

Applying Rotary Position Embedding

def apply_rotary_pos_emb(q, k, cos, sin, position_ids):
    # Rotates the second half of the last dimension into the first half's
    # place (with a sign flip), as rotary position embeddings require.
    def rotate_half(x):
        x1 = x[..., : x.shape[-1] // 2]
        x2 = x[..., x.shape[-1] // 2 :]
        return torch.cat((-x2, x1), dim=-1)

    cos = cos.squeeze(1).squeeze(0)  # [seq_len, dim]
    sin = sin.squeeze(1).squeeze(0)  # [seq_len, dim]
    cos = cos[position_ids].unsqueeze(1)  # [bs, 1, seq_len, dim]
    sin = sin[position_ids].unsqueeze(1)  # [bs, 1, seq_len, dim]
    # The [bs, 1, seq_len, dim] tables broadcast across all attention heads.
    q_embed = (q * cos) + (rotate_half(q) * sin)
    k_embed = (k * cos) + (rotate_half(k) * sin)
    return q_embed, k_embed
  • Efficiently applies precomputed rotary embeddings to query and key tensors, ensuring optimal performance with minimal memory overhead.
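
A quick shape check tying the two pieces together (the toy dimensions are assumptions):

import torch

bs, n_heads, seq_len, head_dim = 1, 8, 16, 64
q = torch.randn(bs, n_heads, seq_len, head_dim)
k = torch.randn(bs, n_heads, seq_len, head_dim)
position_ids = torch.arange(seq_len).unsqueeze(0)   # [bs, seq_len]

rope = LlamaRotaryEmbedding(head_dim)
cos, sin = rope(q, seq_len=seq_len)                 # each [1, 1, seq_len, dim]
q_rot, k_rot = apply_rotary_pos_emb(q, k, cos, sin, position_ids)
assert q_rot.shape == q.shape and k_rot.shape == k.shape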

Conclusion

The provided code implements several ANE optimization principles, including efficient data handling, minimizing memory copies, optimizing for hardware-specific capabilities, and handling bandwidth-bound scenarios effectively. These principles ensure that the model operates efficiently on the Apple Neural Engine, leveraging its strengths for improved performance.
