Last year, Apple introduced Core ML and Vision, two brand-new frameworks that shipped with iOS 11.
Core ML lets iOS developers easily use trained machine learning models in their applications, and the Vision framework lets applications detect faces, face landmarks, text, barcodes, and more. So, how do you create an ML model in Swift using Xcode?
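As a quick taste of what Vision offers, here's a minimal sketch of face detection. The function and the input image are our own illustrative placeholders, not part of this tutorial's project:

```swift
import Vision
import UIKit

// A minimal sketch of face detection with Vision (iOS 11+).
// The image you pass in is a placeholder; supply your own.
func detectFaces(in image: UIImage) {
    guard let cgImage = image.cgImage else { return }

    // The request's completion handler receives the detected faces.
    let request = VNDetectFaceRectanglesRequest { request, error in
        guard let faces = request.results as? [VNFaceObservation] else { return }
        for face in faces {
            // Bounding boxes are normalized (0...1) to the image's dimensions.
            print("Found a face at \(face.boundingBox)")
        }
    }

    // The handler performs the request on the given image.
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}
```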
Getting Started
To work through this Create ML tutorial, you need:
- a Mac running macOS 10.14 Mojave beta
- Xcode 10.x beta
The Image Classifier Model
The Data
We’ll start by building an image classifier model. We can add as many images with as many labels as we want, but for simplicity, we’ll build an image classifier that recognizes fruits like apples or bananas.
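Create ML infers each image's label from the name of the folder that contains it, so the training images should be organized into one subfolder per label. A layout like the one below works; the folder and file names here are just examples:

```
Training Data/
├── Apple/
│   ├── apple1.jpg
│   └── apple2.jpg
└── Banana/
    ├── banana1.jpg
    └── banana2.jpg
```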
Now, let’s open Xcode and click on Get Started with a Playground. When you do this, a new window opens up. This is the important part: under macOS, select the Blank template as shown below.
The Code
```swift
import CreateMLUI

// Create the image classifier builder and open it in the playground's live view.
let builder = MLImageClassifierBuilder()
builder.showInLiveView()
```
The User Interface
Run the playground and open the assistant editor to show the Live View. There, you’ll see that we need to drop images to begin! This is quite simple: take the Training Data folder and drop the entire folder into the area.
The moment you drop the folder, you’ll see the playground start training the image classifier! In the console, you’ll see how many images were processed, how long it took, and what percentage of your data has been trained!
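By the way, if you prefer code over drag-and-drop, the CreateML framework (as opposed to CreateMLUI) exposes the same training step programmatically. Here's a minimal sketch, assuming your Training Data folder lives at a placeholder path:

```swift
import CreateML
import Foundation

// A sketch of training an image classifier in code rather than in the
// live view. The path below is a placeholder; point it at your own folder.
let trainingDir = URL(fileURLWithPath: "/path/to/Training Data")

// Create ML reads one subfolder per label from the directory.
let classifier = try MLImageClassifier(
    trainingData: .labeledDirectories(at: trainingDir)
)

// classificationError is the fraction of training images labeled incorrectly.
print("Training accuracy: \(100 * (1 - classifier.trainingMetrics.classificationError))%")
```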
The Processed Result
This should take around 30 seconds (depending on your device). When everything is done processing, you should see something like this:
You’ll see a card with three labels: Training, Validation, and Evaluation. Training refers to the percentage of the training data Xcode was able to train on successfully. This should read 100%.
While training, Xcode splits the training data 80-20: after training on 80% of the data, Xcode runs the classifier on the remaining 20%. Validation refers to the percentage of those images the classifier got right. This number can vary from run to run because Xcode may not always split the data the same way. In my case, Xcode reported 88% validation. I wouldn’t worry too much about this. Evaluation is empty because we did not give the classifier any testing data. Let’s do that now!
Drop your testing images into the area, just as you did with the training data. This should happen pretty quickly. When everything is finished, your evaluation score should read 100%. This means that the classifier labeled all the test images correctly!
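If you went the programmatic route, evaluation works the same way. A short sketch, continuing from the `classifier` trained above, with a placeholder path for the test images:

```swift
// Evaluate the trained classifier on a held-out test set.
// The path is a placeholder; organize it into one subfolder per label.
let testingDir = URL(fileURLWithPath: "/path/to/Testing Data")
let evaluation = classifier.evaluation(on: .labeledDirectories(at: testingDir))

// classificationError is the fraction of test images labeled incorrectly.
print("Evaluation accuracy: \(100 * (1 - evaluation.classificationError))%")
```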
If you’re satisfied with your results, all that’s left is to save the model! Click the arrow next to the Image Classifier title. A dropdown menu appears displaying all the metadata. Change the metadata to your liking and save the model wherever you want!
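Programmatically, saving is a single `write(to:metadata:)` call. In this sketch, the metadata values and the destination path are example placeholders:

```swift
// Attach metadata and write the trained model out as a .mlmodel file.
// The metadata values and the destination path are example placeholders.
let metadata = MLModelMetadata(
    author: "Your Name",
    shortDescription: "A fruit image classifier",
    version: "1.0"
)
try classifier.write(
    to: URL(fileURLWithPath: "/path/to/FruitClassifier.mlmodel"),
    metadata: metadata
)
```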
Open the Core ML model and view the metadata. It has everything you filled out! Congratulations! You are the author of your very own image classifier model, one that’s super powerful and takes up only 17 KB!
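And to bring things back to Vision from the beginning of the article: once you add the saved .mlmodel file to an iOS project, Xcode generates a class for it, and you can run the model through Vision. In this sketch, `FruitClassifier` is a hypothetical name for that generated class; yours will match the file you saved:

```swift
import Vision
import CoreML
import UIKit

// A sketch of running the custom classifier through Vision in an iOS app.
// `FruitClassifier` is the class Xcode generates when you add the .mlmodel
// file to your project; the actual name depends on what you saved.
func classifyFruit(in image: UIImage) {
    guard let cgImage = image.cgImage,
          let model = try? VNCoreMLModel(for: FruitClassifier().model) else { return }

    let request = VNCoreMLRequest(model: model) { request, error in
        // The top result carries the predicted label and its confidence.
        guard let best = (request.results as? [VNClassificationObservation])?.first else { return }
        print("This looks like a \(best.identifier) (\(best.confidence * 100)%)")
    }

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}
```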
Thank You!!!