How to convert a `.pt` model file to the `.onnx` format.

Convert `.pt` to `.onnx`

The export function below is used with Scaled-YOLOv4; please refer to the Scaled-YOLOv4 repository for the model definitions it depends on.
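
If you have not cloned the repository yet, a typical setup looks like this (a sketch assuming the upstream WongKinYiu/ScaledYOLOv4 repository; use your own fork or branch if that is where your weights were trained):

$ git clone https://github.com/WongKinYiu/ScaledYOLOv4.git
$ cd ScaledYOLOv4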

Before starting, note that the export script requires `onnxsim` (onnx-simplifier), so install it first:

$ pip install -q onnx-simplifier
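
You can quickly confirm the dependencies are importable before running the export (a minimal check; `onnx` is typically pulled in as a dependency of onnx-simplifier, and your version numbers will differ):

import onnx
from onnxsim import simplify  # used by the export script below
print('onnx', onnx.__version__)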

The export script (full listing below) should be placed in ./ScaledYOLOv4/models/; then, from the repository root, start the ONNX export:

$ export PYTHONPATH="$PWD" && python models/export-onnx.py \
--weights './runs/exp0_yolov4-csp-results/weights/best_yolov4-csp-results.pt'

You should see output like this:

Namespace(batch_size=1, img_size=[896, 896], weights='./runs/exp0_yolov4-csp-results/weights/best_yolov4-csp-results.pt')
Fusing layers... Model Summary: 235 layers, 5.28044e+07 parameters, 5.04494e+07 gradients

Starting ONNX export with onnx 1.8.1...
ONNX export success, saved as ./runs/exp0_yolov4-csp-results/weights/best_yolov4-csp-results.onnx

Export complete. Visualize with https://github.com/lutzroeder/netron.
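
The script also accepts `--img-size` (height width) and `--batch-size`, as defined in its argument parser. For example, to export with a different input resolution (640x640 here is only an illustration; use the size your model was trained with):

$ export PYTHONPATH="$PWD" && python models/export-onnx.py \
--weights './runs/exp0_yolov4-csp-results/weights/best_yolov4-csp-results.pt' \
--img-size 640 640 \
--batch-size 1

The full export script, models/export-onnx.py: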
import argparse

import torch
import torch.nn as nn

import models
from models.experimental import attempt_load
from utils.activations import Mish
from onnxsim import simplify

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--weights', type=str, default='./weights/yolov4-p5.pt', help='weights path')  # from yolov5/models/
    parser.add_argument('--img-size', nargs='+', type=int, default=[896, 896], help='image size')  # height, width
    parser.add_argument('--batch-size', type=int, default=1, help='batch size')
    opt = parser.parse_args()
    opt.img_size *= 2 if len(opt.img_size) == 1 else 1  # expand a single value to [size, size]
    print(opt)

    # Input: dummy image tensor used to trace the model for export
    img = torch.zeros((opt.batch_size, 3, *opt.img_size))  # image size(1,3,320,192) iDetection

    # Load PyTorch model
    model = attempt_load(opt.weights, map_location=torch.device('cpu'))  # load FP32 model

    # Update model: swap out modules that do not export cleanly to ONNX
    for k, m in model.named_modules():
        m._non_persistent_buffers_set = set()  # pytorch 1.6.0 compatibility
        if isinstance(m, models.common.Conv) and isinstance(m.act, models.common.Mish):
            m.act = Mish()  # assign activation
        if isinstance(m, models.common.BottleneckCSP) or isinstance(m, models.common.BottleneckCSP2) \
                or isinstance(m, models.common.SPPCSP):
            if isinstance(m.bn, nn.SyncBatchNorm):
                # Replace SyncBatchNorm with a plain BatchNorm2d carrying the same buffers
                bn = nn.BatchNorm2d(m.bn.num_features, eps=m.bn.eps, momentum=m.bn.momentum)
                bn.training = False
                bn._buffers = m.bn._buffers
                bn._non_persistent_buffers_set = set()
                m.bn = bn
            if isinstance(m.act, models.common.Mish):
                m.act = Mish()  # assign activation
        # if isinstance(m, models.yolo.Detect):
        #     m.forward = m.forward_export  # assign forward (optional)
    model.eval()
    model.model[-1].export = True  # set Detect() layer export=True
    # y = model(img)  # dry run

    # ONNX export
    try:
        import onnx

        print('\nStarting ONNX export with onnx %s...' % onnx.__version__)
        f = opt.weights.replace('.pt', '.onnx')  # filename
        torch.onnx.export(model, img, f, verbose=False, opset_version=12, input_names=['images'],
                          output_names=['output'])

        # Checks: simplify the exported graph and make sure it still validates
        onnx_model = onnx.load(f)  # load onnx model
        model_simp, check = simplify(onnx_model)
        assert check, "Simplified ONNX model could not be validated"
        onnx.save(model_simp, f)
        # print(onnx.helper.printable_graph(onnx_model.graph))  # print a human readable model
        print('ONNX export success, saved as %s' % f)
    except Exception as e:
        print('ONNX export failure: %s' % e)

    # Finish
    print('\nExport complete. Visualize with https://github.com/lutzroeder/netron.')
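
After the export finishes, you can sanity-check the resulting `.onnx` file outside of PyTorch. Below is a minimal sketch using onnxruntime (an extra assumption: `pip install onnxruntime`); the input name `images` and the 1x3x896x896 shape match the `torch.onnx.export` call and the default `--img-size` above:

import numpy as np
import onnxruntime as ort

# Path produced by the export command above
f = './runs/exp0_yolov4-csp-results/weights/best_yolov4-csp-results.onnx'
session = ort.InferenceSession(f, providers=['CPUExecutionProvider'])

# Dummy input matching the exported graph: name 'images', shape (batch, 3, height, width)
dummy = np.zeros((1, 3, 896, 896), dtype=np.float32)
outputs = session.run(None, {'images': dummy})

for i, out in enumerate(outputs):
    print(i, out.shape)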