@ketan4373
Forked from otmb/README.md
Created January 11, 2018 13:31
Converting an OpenPose Caffe Model to a CoreML Model

Based on the reference by melito.

Start coremltools

First, activate the conda environment where coremltools is installed:

$ export PATH="$HOME/miniconda2/bin:$PATH"
$ source activate coreml

Edit pose_deploy_linevec.prototxt

Set the input_dim values in pose_deploy_linevec.prototxt to a fixed size. 320 is used here; the input size must be a multiple of 16.

input: "image"
input_dim: 1
input_dim: 3
input_dim: 320 # This value will be defined at runtime
input_dim: 320 # This value will be defined at runtime
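If you want to use a different input size, it has to be rounded to a multiple of 16 first. A minimal helper for that (the function name is hypothetical, not part of any tool used here):

```python
def round_up_to_16(x):
    # Round x up to the nearest multiple of 16, as required for input_dim.
    return ((x + 15) // 16) * 16

print(round_up_to_16(320))  # 320 is already a multiple of 16
print(round_up_to_16(300))  # rounds up to 304
```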

Run convert.py

$ python convert.py

================= Starting Conversion from Caffe to CoreML ======================
Layer 0: Type: 'Input', Name: 'input'. Output(s): 'image'.
Ignoring batch size and retaining only the trailing 3 dimensions for conversion. 
Layer 1: Type: 'Convolution', Name: 'conv1_1'. Input(s): 'image'. Output(s): 'conv1_1'.
Layer 2: Type: 'ReLU', Name: 'relu1_1'. Input(s): 'conv1_1'. Output(s): 'conv1_1'.
Layer 3: Type: 'Convolution', Name: 'conv1_2'. Input(s): 'conv1_1'. Output(s): 'conv1_2'.
Layer 4: Type: 'ReLU', Name: 'relu1_2'. Input(s): 'conv1_2'. Output(s): 'conv1_2'.

...

Layer 181: Type: 'Concat', Name: 'concat_stage7'. Input(s): 'Mconv7_stage6_L2', 'Mconv7_stage6_L1'. Output(s): 'net_output'.

================= Summary of the conversion: ===================================
Detected input(s) and shape(s) (ignoring batch size):
'image' : 3, 320, 320

Network Input name(s): 'image'.
Network Output name(s): 'net_output'.
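For reference, the concat_stage7 layer in the log above merges the model's two branches into net_output. Assuming the standard COCO configuration (output stride 8; 19 heatmap channels from branch L2, i.e. 18 keypoints plus background, and 38 PAF channels from branch L1), the output layout can be sketched as:

```python
import numpy as np

H = W = 320 // 8  # output stride is 8, so a 320x320 input yields 40x40 maps
heatmaps = np.zeros((19, H, W))  # branch L2: 18 keypoint heatmaps + 1 background
pafs = np.zeros((38, H, W))      # branch L1: part affinity fields
net_output = np.concatenate([heatmaps, pafs], axis=0)  # concat_stage7
print(net_output.shape)  # (57, 40, 40)
```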

convert.py

import coremltools

proto_file = 'pose_deploy_linevec.prototxt'
caffe_model = 'pose_iter_440000.caffemodel'

coreml_model = coremltools.converters.caffe.convert(
    (caffe_model, proto_file),
    image_input_names='image',
    image_scale=1 / 255.0,
)
coreml_model.save('pose_coco.mlmodel')

Swift Sample Memo
import UIKit
import CoreML

class ViewController: UIViewController {

    // Class generated by Xcode from pose_coco.mlmodel
    let model = pose_coco()

    @IBOutlet weak var imageView1: UIImageView!
    @IBOutlet weak var imageView2: UIImageView!

    override func viewDidLoad() {
        super.viewDidLoad()
        coremlTest()
    }

    public func coremlTest() {
        let image = UIImage(named: "hoge.jpg")!
        // pixelBuffer(width:height:), UIImage(pixelBuffer:), and
        // MLMultiArray.image(offset:scale:) are helper extensions
        // (e.g. from CoreMLHelpers), not part of UIKit or CoreML.
        if let pixelBuffer = image.pixelBuffer(width: 320, height: 320) {
            imageView2.image = UIImage(pixelBuffer: pixelBuffer)
            if let prediction = try? model.prediction(image: pixelBuffer) {
                print(prediction.net_output)
                let p = prediction.net_output
                // Copy a single 40x40 channel out of the flattened output.
                let m = try! MLMultiArray(shape: [40, 40], dataType: .double)
                let n = 18 * m.count  // offset of channel 18 (40 * 40 values per channel)
                for i in 0..<m.count {
                    m[i] = p[i + n]
                }
                imageView1.image = m.image(offset: 0, scale: 255)
            }
        }
    }

    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
    }
}
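The channel extraction in coremlTest can be sanity-checked outside Swift. This NumPy sketch mirrors the flat-offset indexing (`let n = 18 * m.count`) used above, assuming a 57x40x40 output laid out channel-first:

```python
import numpy as np

C, H, W = 57, 40, 40  # assumed net_output layout: (channels, height, width)
p = np.arange(C * H * W, dtype=np.float64)  # stand-in for the flattened prediction
channel = 18
n = channel * H * W   # same offset as `let n = 18 * m.count` in the Swift code
m = p[n:n + H * W].reshape(H, W)

# The flat offset is equivalent to indexing the reshaped array directly.
assert np.array_equal(m, p.reshape(C, H, W)[channel])
```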