Last year, Apple introduced two brand-new frameworks in iOS 11: Core ML and Vision.
Core ML allows iOS developers to easily use trained models in their applications, while the Vision framework lets applications detect faces, face landmarks, text, barcodes, and more. So, how do you create an ML model in Swift using Xcode?
To work through this Create ML tutorial, you need:
We’ll first get started on building an image classifier model. We can add as many images with as many labels as we want, but for simplicity, we’ll be building an image classifier that recognizes fruits like apples or bananas.
Now, let’s open Xcode and choose Get started with a playground. When you do this, a new window opens up. This is the important part: under macOS, select the Blank template.
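The drag-and-drop trainer used in this walkthrough is launched from the playground itself via the CreateMLUI framework (available in macOS playgrounds only, not on iOS). A minimal sketch:

```swift
// macOS playground only – CreateMLUI does not exist on iOS.
import CreateMLUI

// Opens the interactive image classifier trainer in the playground's Live View.
let builder = MLImageClassifierBuilder()
builder.showInLiveView()
```

If the Live View isn’t visible, show the assistant editor so the drop area appears next to your code.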
In the Live View, you’ll see that we need to drop images to begin! This is quite simple. Take the Training Data folder, and drop the entire folder into the area.
The moment you drop the folder, you’ll see the playground start to train the image classifier! In the console, you’ll see how many images were processed, how long it took, and what percentage of your data was trained.
This should take around 30 seconds (depending on your device). When everything is done processing, you should see something like this:
You’ll see a card with three labels: Training, Validation, and Evaluation. Training refers to the percentage of training data Xcode was successfully able to train. This should read 100%.
While training, Xcode splits the training data 80-20. After training on 80% of the data, Xcode runs the classifier on the remaining 20%. Validation refers to the percentage of those held-out images the classifier got right. This can vary from run to run because Xcode may not always split the data the same way. In my case, validation was 88%, and I wouldn’t worry too much about this. Evaluation is empty because we did not give the classifier any testing data. Let’s do that now!
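If you prefer code over the drag-and-drop UI, the same training-and-validation step can also be done programmatically with the CreateML framework (again, macOS only). A sketch, where the folder path is a placeholder and each subfolder name is treated as a label:

```swift
import CreateML
import Foundation

// "Training Data" is expected to contain one subfolder per label, e.g. Apple/, Banana/.
let trainingDir = URL(fileURLWithPath: "/path/to/Training Data") // placeholder path

// CreateML performs the 80-20 split internally, just like the Live View trainer.
let classifier = try MLImageClassifier(trainingData: .labeledDirectories(at: trainingDir))

// Accuracy = 1 - classificationError, for both the training split and the held-out 20%.
print("Training accuracy:   \(1.0 - classifier.trainingMetrics.classificationError)")
print("Validation accuracy: \(1.0 - classifier.validationMetrics.classificationError)")
```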
This should happen pretty quickly. When everything is finished, your evaluation score should read 100%. This means that the classifier labeled all the images correctly!
If you’re satisfied with your results, all that’s left is saving the file! Click the arrow next to the Image Classifier title. A dropdown menu should appear displaying all the metadata. Change the metadata to whatever you’d like and save the file wherever you want.
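Evaluation and saving can be scripted too. A self-contained sketch, assuming CreateML on macOS; the paths, author name, and description below are placeholders you’d replace with your own:

```swift
import CreateML
import Foundation

let trainingDir = URL(fileURLWithPath: "/path/to/Training Data") // placeholder path
let testDir     = URL(fileURLWithPath: "/path/to/Testing Data")  // placeholder path

// Train the classifier (labels come from the subfolder names).
let classifier = try MLImageClassifier(trainingData: .labeledDirectories(at: trainingDir))

// Evaluate on separate testing images, mirroring the Evaluation step in the Live View.
let evaluation = classifier.evaluation(on: .labeledDirectories(at: testDir))
print("Evaluation accuracy: \(1.0 - evaluation.classificationError)")

// Save the model with custom metadata.
let metadata = MLModelMetadata(author: "Your Name",                   // placeholder
                               shortDescription: "Fruit image classifier",
                               version: "1.0")
try classifier.write(to: URL(fileURLWithPath: "/path/to/FruitClassifier.mlmodel"),
                     metadata: metadata)
```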
Open the Core ML model and view the metadata: it has everything you filled out! Congratulations! You are the author of your own image classifier model that’s super powerful and takes up only 17 KB!
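To use the saved model in an iOS app, drag the .mlmodel file into your project and run it through the Vision framework mentioned earlier. In this sketch, `FruitClassifier` stands in for whatever class name Xcode generates from your model file, and `cgImage` is a hypothetical input image:

```swift
import CoreML
import Vision

func classify(_ cgImage: CGImage) throws {
    // FruitClassifier is the class Xcode generates from the .mlmodel file
    // (the name is an assumption – it matches your model's file name).
    let model = try VNCoreMLModel(for: FruitClassifier().model)

    // Vision resizes and crops the input to whatever the model expects.
    let request = VNCoreMLRequest(model: model) { request, _ in
        guard let results = request.results as? [VNClassificationObservation],
              let top = results.first else { return }
        print("\(top.identifier): \(top.confidence)")
    }

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try handler.perform([request])
}
```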