@yaoyaoding
Last active February 22, 2023 02:09

(Beta) Hidet: a dynamo backend focused on inference acceleration

With torch dynamo, we can dispatch a PyTorch model to other deep learning frameworks and compilers for acceleration. Hidet is one such deep learning compiler; it accelerates your model with a series of optimizations (e.g., subgraph fusion, graph rewriting, and kernel tuning). To use hidet, first install it via

$ pip install hidet

Then you can enable it via torch.compile(model, backend='hidet') as shown in the code snippet below:

import torch
import hidet 

# Load a pretrained PyTorch model and prepare a dummy input
model = torch.hub.load('pytorch/vision:v0.6.0', 'resnet18', pretrained=True).cuda().eval()
x = torch.rand(1, 3, 224, 224).cuda()

# Compile the model through Hidet
hidet.torch.dynamo_config.search_space(2)  # search more kernel candidates for better performance (slower compilation)
model_opt = torch.compile(model, backend='hidet')

# Run the optimized model
y = model_opt(x)

Here are some benchmarks (batch size = 1, NVIDIA RTX 3090, BERT sequence length = 128, float32 data type): [benchmark figure]
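
For context, numbers like these can be measured with a simple CUDA-event timer. The sketch below is illustrative rather than the exact script behind the figure (the warm-up and repeat counts are assumptions), and it reuses model, model_opt, and x from the snippet above:

import torch

def bench(fn, x, warmup=10, repeat=100):
    # Warm up so compilation and kernel tuning are excluded from the timing
    for _ in range(warmup):
        fn(x)
    torch.cuda.synchronize()
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    start.record()
    for _ in range(repeat):
        fn(x)
    end.record()
    torch.cuda.synchronize()
    return start.elapsed_time(end) / repeat  # average latency in milliseconds

print(f'eager: {bench(model, x):.3f} ms')
print(f'hidet: {bench(model_opt, x):.3f} ms')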

Learn more about hidet and its optimization options in the tutorial and the GitHub repository. Hidet originates from our research work on simplifying the writing of tensor programs with our proposed task-mapping programming paradigm. Please check out our paper for more details.
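
Besides search_space, the dynamo_config object exposes a few more knobs. The option names below are recalled from the hidet documentation at the time of writing and should be verified against the hidet version you install; all of them are optional:

import hidet

hidet.torch.dynamo_config.search_space(2)       # kernel tuning level: 0 (no tuning) to 2 (most extensive)
hidet.torch.dynamo_config.use_fp16(True)        # run the model in float16 (assumed option name)
hidet.torch.dynamo_config.use_cuda_graph(True)  # launch the optimized graph via a CUDA graph (assumed option name)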

@yaoyaoding (Author)

The current draft is about 3x to 4x longer than the average length of a PyTorch 1.13 feature section.

@yaoyaoding (Author)

I plan to leave it to the PyTorch people to decide whether to include the benchmark result.

@yaoyaoding (Author)

yaoyaoding commented Feb 22, 2023

My point is that you can't just rely on every MLSys folk reading every ASPLOS paper to learn about Hidet (maybe they will eventually get to it in a year, but by then it's already too late for the purpose of building a community); you need a medium like PyTorch's release blog to get as wide a coverage as possible.

That's why I also presented Hidet at TVMCon and put the Hidet paper information on the landing page of the Hidet repo. Anyway, I added a sentence mentioning the task-mapping programming paradigm at the end. But we do not have any documentation about Hidet Script to refer to yet. Thus, as I said previously, we can create a new post in the future when the Hidet Script documentation is more complete and we have a prototype showing how to extend PyTorch with new operators using Hidet Script.

@yaoyaoding (Author)

Let me know if you have any suggestions on the current version, @wangshangsam.
