This is sample code for using Core ML models in Swift Playgrounds. This code snippet uses the MobileNetV2 image classification model and the YOLOv3 object detection model. The Core ML model files and sample projects for use in Xcode are provided by Apple. The models can classify images or detect dominant objects in a camera frame or image. You can replace these placeholder models with your own custom models to leverage the power of machine learning inside Swift Playgrounds.
For Core ML models to be used in a Playground, the compiled binary file of the model as well as a corresponding model class are needed. The procedure to obtain them is described below. The compiled binary file needs to be placed in the Resources folder of the Playground page; the model class can conveniently be placed in a Swift file in the Sources folder of the Playground page.
The Image Classification Playground page initialises a UIImage, which should be in the Resources folder of the Playground page, and a Core ML model instance of MobileNetV2. The image is then resized and converted to a CVPixelBuffer, the required input type for the Core ML model. Finally, the CVPixelBuffer is run through the Core ML model to get a prediction, and the resulting classification is printed to the console.
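The classification flow can be sketched as follows. This is a minimal version, assuming an image named "sample.jpg" in the page's Resources folder, the MobileNetV2 stub class in Sources, and the resizeImageTo/convertToBuffer helpers from the UIImage extension described below.

```swift
import UIKit
import CoreML

do {
    // Assumes "sample.jpg" is in the Resources folder of the Playground page
    guard let image = UIImage(named: "sample.jpg") else { fatalError("Image not found") }

    // Instantiate the model via the auto-generated stub class
    let model = try MobileNetV2(configuration: MLModelConfiguration())

    // MobileNetV2 expects a 224 x 224 pixel buffer as input
    if let resized = image.resizeImageTo(size: CGSize(width: 224, height: 224)),
       let buffer = resized.convertToBuffer() {
        let prediction = try model.prediction(image: buffer)
        print(prediction.classLabel)       // most likely classification
        print(prediction.classLabelProbs)  // label -> probability dictionary
    }
} catch {
    print("Prediction failed: \(error)")
}
```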
The Object Detection Playground page initialises a UIImage, which should be in the Resources folder of the Playground page, and a VNCoreMLModel that takes a Core ML model as an input parameter. For this, the Core ML model instance of YOLOv3 is used. Then, a VNCoreMLRequest is created to be used with the VNCoreMLModel. The request prints the detected objects to the console. The request itself is handled by a VNImageRequestHandler, which requires a CVPixelBuffer as input. To perform the request, the image is resized and converted to a CVPixelBuffer.
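The detection flow can be sketched like this. Again a minimal version, assuming an image named "street.jpg" in Resources, the YOLOv3 stub class in Sources, and the UIImage helpers described below.

```swift
import UIKit
import CoreML
import Vision

do {
    // Assumes "street.jpg" is in the Resources folder of the Playground page
    guard let image = UIImage(named: "street.jpg") else { fatalError("Image not found") }

    // Wrap the Core ML model for use with the Vision framework
    let coreMLModel = try YOLOv3(configuration: MLModelConfiguration()).model
    let visionModel = try VNCoreMLModel(for: coreMLModel)

    // The request's completion handler prints the detected objects
    let request = VNCoreMLRequest(model: visionModel) { request, error in
        guard let observations = request.results as? [VNRecognizedObjectObservation] else { return }
        for observation in observations {
            if let best = observation.labels.first {
                print("\(best.identifier): \(best.confidence) at \(observation.boundingBox)")
            }
        }
    }

    // YOLOv3 expects a 416 x 416 input; the handler performs the request
    if let resized = image.resizeImageTo(size: CGSize(width: 416, height: 416)),
       let buffer = resized.convertToBuffer() {
        let handler = VNImageRequestHandler(cvPixelBuffer: buffer)
        try handler.perform([request])
    }
} catch {
    print("Detection failed: \(error)")
}
```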
With this extension for UIImage, you can resize a UIImage to a provided size with the resizeImageTo function. This is useful as many machine learning models require images in specific dimensions: MobileNetV2 expects square images with a 224 x 224 resolution, while YOLOv3 expects 416 x 416. The convertToBuffer function creates a CVPixelBuffer from a UIImage, which is the data type Core ML models such as MobileNetV2 or YOLOv3 expect as an input parameter.
For a more detailed elaboration on how this extension works, check out the snippet UIImage+Resize+CVPixelBuffer.
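The two helpers can be sketched as follows. This is a condensed version of the extension described above; the exact implementation in the referenced snippet may differ in detail.

```swift
import UIKit

extension UIImage {
    // Redraw the image into a context of the target size
    func resizeImageTo(size: CGSize) -> UIImage? {
        UIGraphicsBeginImageContextWithOptions(size, false, 1.0)
        defer { UIGraphicsEndImageContext() }
        draw(in: CGRect(origin: .zero, size: size))
        return UIGraphicsGetImageFromCurrentImageContext()
    }

    // Render the image into a newly created CVPixelBuffer
    func convertToBuffer() -> CVPixelBuffer? {
        let attributes = [
            kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue,
            kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue
        ] as CFDictionary

        var pixelBuffer: CVPixelBuffer?
        let status = CVPixelBufferCreate(
            kCFAllocatorDefault, Int(size.width), Int(size.height),
            kCVPixelFormatType_32ARGB, attributes, &pixelBuffer)
        guard status == kCVReturnSuccess, let buffer = pixelBuffer else { return nil }

        CVPixelBufferLockBaseAddress(buffer, [])
        defer { CVPixelBufferUnlockBaseAddress(buffer, []) }

        guard let context = CGContext(
            data: CVPixelBufferGetBaseAddress(buffer),
            width: Int(size.width), height: Int(size.height),
            bitsPerComponent: 8,
            bytesPerRow: CVPixelBufferGetBytesPerRow(buffer),
            space: CGColorSpaceCreateDeviceRGB(),
            bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue) else { return nil }

        // Flip the coordinate system so UIKit drawing lands upright in the buffer
        UIGraphicsPushContext(context)
        context.translateBy(x: 0, y: size.height)
        context.scaleBy(x: 1, y: -1)
        draw(in: CGRect(origin: .zero, size: size))
        UIGraphicsPopContext()

        return buffer
    }
}
```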
In order to work in a Playground, the .mlmodelc compiled Core ML model binary and a corresponding Swift source file containing stub classes for the model are required.
The .mlmodelc is generated when the model (.mlmodel) is added to an Xcode project and the project is compiled and built. It can be found in ~/Library/Developer/Xcode/DerivedData, where all projects are built by default. In the corresponding project's folder it is located at /Build/Products/Debug-iphonesimulator/NameOfTheApp.app/NameOfTheMachineLearningModel.mlmodelc. You can access the .app bundle content by Control-clicking the .app file and selecting Show Package Contents. Copy the .mlmodelc file to the Resources folder of your Playground page.
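The copy step can also be done from the Terminal. The DerivedData folder name, app name and destination path below are placeholders; substitute your own project, model and Playground names.

```shell
# Placeholder paths — adjust MyProject-abcdefgh, NameOfTheApp, the model name
# and the Playground destination to match your setup.
DERIVED_DATA="$HOME/Library/Developer/Xcode/DerivedData"
APP_BUNDLE="$DERIVED_DATA/MyProject-abcdefgh/Build/Products/Debug-iphonesimulator/NameOfTheApp.app"

# Copy the compiled model into the Resources folder of the Playground page
cp -R "$APP_BUNDLE/MobileNetV2.mlmodelc" \
  "$HOME/MyPlayground.playground/Pages/ImageClassification.xcplaygroundpage/Resources/"
```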
The Swift source file can be found by accessing the Model Class when selecting the .mlmodel file in either a Playground or any other Xcode project. The source code can be added in a .swift file to the Playground's Sources folder. The Swift file should have the exact same name as the model; in this case it would be MobileNetV2.swift or YOLOv3.swift.
By default, the auto-generated Swift stub sources have internal protection levels. All classes, their properties, functions and initializers in MobileNetV2.swift and YOLOv3.swift have to be made public for Swift Playgrounds to access the source, as any source that is not part of a page needs to have public accessibility to be usable from a Playground page.
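The required change looks roughly like this. The members shown are a simplified excerpt, not the full generated stub; the point is that every declaration the Playground page touches gains the public keyword.

```swift
import CoreML

// Auto-generated stubs are internal by default, e.g.:
//
//     class MobileNetV2 {
//         let model: MLModel
//         init(model: MLModel) { ... }
//     }
//
// For the Playground page to use them, each declaration is made public:
public class MobileNetV2 {
    public let model: MLModel

    public init(model: MLModel) {
        self.model = model
    }
}
```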
The example code provided here is intended for use inside Swift Playgrounds. To use Core ML models inside Xcode projects for iPhone, iPad, Apple Watch or Mac, consider the various sample projects provided by Apple.