iOS Tutorials: Detecting Images with Core ML

OVERVIEW

Apple recently introduced Core ML, which allows you to load a pre-trained machine learning model and make predictions with it. Apple provides really helpful information about Machine Learning on their site, where you can also download models to use in your projects. As of now, Core ML only works with pre-trained models, so you cannot train on your own data yet. Other current limitations are that there is no on-device training (you can only use a static model) and the model is not encrypted. However, let's look forward to the improvements Apple will bring to Core ML.

Head to Apple's Machine Learning site and download Inception v3 as shown below.

When you have the file, drag it into the project. With that done, we will also design our storyboard.

  1. Drag the Inception model into the Project Navigator as highlighted on the left
  2. Embed your View Controller in a Navigation Controller as shown below
  3. Add a Label, an Image View, and a Bar Button Item to your Storyboard

Let’s begin by importing what we need and declaring the required protocols.

  1. Import CoreML, which lets you use the machine learning model
  2. Import Vision, which lets you perform machine learning requests on the image
  3. Adopt the UIImagePickerControllerDelegate protocol for taking pictures with the camera
  4. Adopt the UINavigationControllerDelegate protocol to present the camera view
  5. Create IBOutlets for the Image View and the Label
  6. Create a UIImagePickerController instance for camera usage
import UIKit
import CoreML
import Vision

class ViewController: UIViewController, UIImagePickerControllerDelegate, UINavigationControllerDelegate {

    @IBOutlet weak var imageView: UIImageView!
    let imagePicker = UIImagePickerController()
    @IBOutlet weak var descriptionLabel: UILabel!
    
    override func viewDidLoad() {
        super.viewDidLoad()
        imagePicker.delegate = self
        imagePicker.sourceType = .camera
        imagePicker.allowsEditing = false
    }

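One caveat worth noting: the camera source is unavailable in the iOS Simulator, so unconditionally setting `.camera` will crash there. A defensive sketch (this is an addition for illustration, not part of the original tutorial code) checks availability first and falls back to the photo library:

```swift
override func viewDidLoad() {
    super.viewDidLoad()
    imagePicker.delegate = self
    // Fall back to the photo library when no camera is available,
    // e.g. when running in the iOS Simulator.
    if UIImagePickerController.isSourceTypeAvailable(.camera) {
        imagePicker.sourceType = .camera
    } else {
        imagePicker.sourceType = .photoLibrary
    }
    imagePicker.allowsEditing = false
}
```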
Let’s write a function that runs the Machine Learning detection on the image. The function below does the following:

  1. We create a VNCoreMLModel container from the Inception v3 model and use it to classify the image
  2. We then display the confidence percentage and the label of the top prediction
func detect(image: CIImage) {
    guard let model = try? VNCoreMLModel(for: Inceptionv3().model) else {
        fatalError("Loading CoreML Model Failed")
    }
    let request = VNCoreMLRequest(model: model) { (request, error) in
        guard let results = request.results as? [VNClassificationObservation],
              let topResult = results.first else {
            fatalError("Model failed to process image")
        }
        let percentage = topResult.confidence * 100
        self.descriptionLabel.text = "\(percentage)% \(topResult.identifier)"
    }
    let handler = VNImageRequestHandler(ciImage: image)
    do {
        try handler.perform([request])
    } catch {
        print(error)
    }
}
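If you want to show more than the single best guess, the classification observations come back sorted by confidence, so you can take a prefix of the results. A small sketch of an alternative request handler (the three-result choice and the formatting are my own, not from the original tutorial):

```swift
let request = VNCoreMLRequest(model: model) { (request, error) in
    guard let results = request.results as? [VNClassificationObservation] else {
        fatalError("Model failed to process image")
    }
    // Observations are sorted by confidence; show the top three.
    let summary = results.prefix(3)
        .map { String(format: "%.1f%% %@", $0.confidence * 100, $0.identifier) }
        .joined(separator: "\n")
    self.descriptionLabel.text = summary
}
```

For the label to show all three lines, set its `numberOfLines` to 0 in the storyboard.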

Let’s create a function for the camera bar button item.

  1. Create an IBAction for the camera bar button and present the camera
  2. Implement the image picker delegate method to run the detect function on the captured image
@IBAction func cameraPressed(_ sender: UIBarButtonItem) {
    present(imagePicker, animated: true, completion: nil)
}


func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [String : Any]) {
    if let image = info[UIImagePickerControllerOriginalImage] as? UIImage {
        imageView.image = image
        guard let ciimage = CIImage(image: image) else {
            fatalError("Could not convert UIImage to CIImage")
        }
        detect(image: ciimage)
    }
    imagePicker.dismiss(animated: true, completion: nil)
}
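One more thing before running the app: since iOS 10, accessing the camera requires a usage description in Info.plist, or the app will crash when the picker is presented. Add an entry along these lines (the description string is just an example):

```xml
<key>NSCameraUsageDescription</key>
<string>This app uses the camera to take a photo for image classification.</string>
```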

You will get the following on your screen, showing the prediction score and label. WARNING: the predictions will not be 100% accurate, but it is definitely a good way to start exploring Machine Learning.

The code is uploaded on GitHub.

 

  • Article By:
    Founder of DaddyCoding. Studied Computer Science, Information Systems, and Information Technology at BYU-Hawaii. Currently spending most of my time researching, learning, and helping others make iOS apps.
