Core ML
Core ML provides various machine-learning facilities. Here we implement an image-related example: after capturing a photo, we can get the predicted name of the object or product it shows.
Here is the link:
https://developer.apple.com/machine-learning/
To configure this in an app, we take the camera as an example: after an image is captured, Core ML will return a predicted name for it.
1: Add the permissions to Info.plist
To access the camera and photo library, you first need to update your Info.plist. Add two entries: Privacy – Camera Usage Description and Privacy – Photo Library Usage Description. Starting from iOS 10, you must specify the reason why your app needs to access the camera and photo library.
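If you prefer editing Info.plist as source code (right-click the file, then Open As > Source Code), these two entries correspond to the NSCameraUsageDescription and NSPhotoLibraryUsageDescription keys. The description strings below are placeholders, so write whatever explains your app's usage:

<key>NSCameraUsageDescription</key>
<string>The camera is used to take a photo of the object you want to identify.</string>
<key>NSPhotoLibraryUsageDescription</key>
<string>Photos from your library are analyzed to identify the objects they show.</string>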
2: Now go to this site:
https://developer.apple.com/machine-learning/
and download the “Inception v3” model.
3: Drag and drop the downloaded Inceptionv3.mlmodel file into your project navigator and make sure it is added to your app target. Xcode automatically generates an Inceptionv3 Swift class for the model, which we use in the next step.
4: Let’s add the model to our code. Go back to ViewController.swift. First, import the CoreML framework at the very beginning of the file, then declare a model property and initialize it in viewWillAppear:
import CoreML

var model: Inceptionv3!

override func viewWillAppear(_ animated: Bool) {
    super.viewWillAppear(animated)
    // Inceptionv3 is the class Xcode generated from the .mlmodel file.
    model = Inceptionv3()
}
5: Now create outlets for the image view and for the label where we show the messages:
@IBOutlet weak var imageView: UIImageView!
@IBOutlet weak var classifier: UILabel!
6: Now, when the button is tapped:
@IBAction func camera(_ sender: Any) {
    // Bail out if the device has no camera (e.g. the simulator).
    if !UIImagePickerController.isSourceTypeAvailable(.camera) {
        return
    }

    let cameraPicker = UIImagePickerController()
    cameraPicker.delegate = self
    cameraPicker.sourceType = .camera
    cameraPicker.allowsEditing = false
    present(cameraPicker, animated: true)
}
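Since step 1 also requested photo library permission, you may want a second button that picks an image from the library instead. The openLibrary action below is a sketch, not part of the original project; the name and its storyboard button are assumptions:

@IBAction func openLibrary(_ sender: Any) {
    // Hypothetical action: presents the photo library instead of the camera.
    let libraryPicker = UIImagePickerController()
    libraryPicker.delegate = self
    libraryPicker.sourceType = .photoLibrary
    libraryPicker.allowsEditing = false
    present(libraryPicker, animated: true)
}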
7: When an image is selected, the delegate method will be called:
extension ViewController: UIImagePickerControllerDelegate, UINavigationControllerDelegate {

    func imagePickerControllerDidCancel(_ picker: UIImagePickerController) {
        dismiss(animated: true, completion: nil)
    }

    func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [String : Any]) {
        picker.dismiss(animated: true)
        classifier.text = "Analyzing Image..."

        guard let image = info["UIImagePickerControllerOriginalImage"] as? UIImage else {
            return
        }

        // 1. Resize the picked image to 299x299, the input size Inception v3 expects.
        UIGraphicsBeginImageContextWithOptions(CGSize(width: 299, height: 299), true, 2.0)
        image.draw(in: CGRect(x: 0, y: 0, width: 299, height: 299))
        let newImage = UIGraphicsGetImageFromCurrentImageContext()!
        UIGraphicsEndImageContext()

        // 2. Convert the resized UIImage into a CVPixelBuffer, the input type the model takes.
        let attrs = [kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue,
                     kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue] as CFDictionary
        var pixelBuffer: CVPixelBuffer?
        let status = CVPixelBufferCreate(kCFAllocatorDefault,
                                         Int(newImage.size.width),
                                         Int(newImage.size.height),
                                         kCVPixelFormatType_32ARGB,
                                         attrs,
                                         &pixelBuffer)
        guard status == kCVReturnSuccess else {
            return
        }

        CVPixelBufferLockBaseAddress(pixelBuffer!, CVPixelBufferLockFlags(rawValue: 0))
        let pixelData = CVPixelBufferGetBaseAddress(pixelBuffer!)

        let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
        let context = CGContext(data: pixelData,
                                width: Int(newImage.size.width),
                                height: Int(newImage.size.height),
                                bitsPerComponent: 8,
                                bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer!),
                                space: rgbColorSpace,
                                bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue)

        // 3. Draw the image into the pixel buffer, flipping the coordinate system
        //    because Core Graphics puts the origin at the bottom-left.
        context?.translateBy(x: 0, y: newImage.size.height)
        context?.scaleBy(x: 1.0, y: -1.0)

        UIGraphicsPushContext(context!)
        newImage.draw(in: CGRect(x: 0, y: 0, width: newImage.size.width, height: newImage.size.height))
        UIGraphicsPopContext()
        CVPixelBufferUnlockBaseAddress(pixelBuffer!, CVPixelBufferLockFlags(rawValue: 0))
        imageView.image = newImage

        // 4. Ask Core ML for a prediction and show the top class label.
        guard let prediction = try? model.prediction(image: pixelBuffer!) else {
            return
        }

        classifier.text = "I think this is a \(prediction.classLabel)."
    }
}
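As a side note, Apple’s Vision framework can take care of the resizing and pixel-buffer conversion for you. The method below is an alternative sketch, not the approach this tutorial uses; it assumes the same generated Inceptionv3 class and classifier label, and would be added to ViewController:

import Vision

func classify(_ image: UIImage) {
    guard let ciImage = CIImage(image: image),
          let visionModel = try? VNCoreMLModel(for: Inceptionv3().model) else {
        return
    }

    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        // Vision returns classification observations sorted by confidence.
        guard let top = (request.results as? [VNClassificationObservation])?.first else {
            return
        }
        DispatchQueue.main.async {
            self.classifier.text = "I think this is a \(top.identifier)."
        }
    }

    let handler = VNImageRequestHandler(ciImage: ciImage)
    DispatchQueue.global(qos: .userInitiated).async {
        try? handler.perform([request])
    }
}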
8: Here, “prediction.classLabel” returns the predicted name of the image.
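If you also want a confidence value, the generated output for this model exposes a classLabelProbs dictionary ([String: Double] mapping each label to its probability). Verify the exact output names in Xcode’s model viewer, since they come from the .mlmodel file; this is a sketch under that assumption:

if let prediction = try? model.prediction(image: pixelBuffer!) {
    let label = prediction.classLabel
    // Look up the probability of the top label (assumed output name: classLabelProbs).
    let probability = prediction.classLabelProbs[label] ?? 0
    classifier.text = String(format: "I think this is a %@ (%.0f%%).", label, probability * 100)
}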