Introduction
These days the hype around Machine Learning is real. Everyone wants a piece of it in their product, be it a spam filter or a cookie machine. So the demand is undoubtedly high. But it's also true that not everybody can go in all guns blazing and build intelligent systems; it takes specialized knowledge to create, train, and mature one. You can follow the tutorials online, but most of them only skim the top of the bun and never tell you that the patty underneath is drier than the Sahara Desert. So the problem stands: how do people who aren't experts at building intelligent systems with AI and Machine Learning techniques make their applications and products intelligent?
Recently Apple acquired Turi and released the Turi Create module for Python. Turi Create is a blessing for people who want to make their products smart without worrying too much about the delicacies of AI. In this article, I'm going to show you how to get started with Turi Create by developing a simple image classifier application for iOS.
What we need
- Python 2.7+ (since Python 3 support isn't available for Turi Create yet)
- Xcode 9.2 (9.1 is also fine)
- macOS 10.12+
- You can also use Turi Create on Windows and Linux, but to build the iOS application you'll need Xcode, unless you're using Xamarin.
So, let’s get started.
What kind of images to classify?
First, let’s plan what kind of image classifier we’re going to develop. The popular examples around the web will tell you to build a Cats vs. Dogs classifier. Let’s make it a bit more interesting: our classifier will classify flowers.
Project Structure
Let’s create our project in the following directory structure.
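A layout along these lines works with the scripts that follow. Apart from the training/images folder and the generated files, the names are only suggestions, not the required structure:

```
FlowerClassifier/
├── training/
│   ├── images/            # the flower dataset goes here
│   ├── prepare_data.py    # builds flowers.sframe
│   ├── train_model.py     # trains and saves Flowers.model
│   └── export_coreml.py   # exports Flowers_CoreML.mlmodel
└── ios/                   # the Xcode project for the app
```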
Getting the data
They say a machine learning model is only as good as the data it’s been trained with. So how are we going to get the data? We’ll use the flower image dataset from the TensorFlow examples repository, available at
http://download.tensorflow.org/example_images/flower_photos.tgz
If you’re on a Linux distro or macOS, you can use curl to download the archive and extract it into the training/images folder.
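If you’d rather stay in Python than use curl, a small snippet like this does the same with only the standard library (this is a rough sketch for Python 2.7, which is what we're using; run it from the project root):

```python
import tarfile
import urllib

# Download the archive (about 218 MB) into the current directory
url = 'http://download.tensorflow.org/example_images/flower_photos.tgz'
urllib.urlretrieve(url, 'flower_photos.tgz')

# Extract the flower photos into training/images
with tarfile.open('flower_photos.tgz', 'r:gz') as archive:
    archive.extractall('training/images')
```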
Give it some time to download. The dataset is 218 MB.
The Dataset
We have the following kinds of images in the dataset, which will serve as our categories.
Now, on to training.
Preparing the training environment
Create a virtualenv for Python, then install Turi Create using,
```
# Create a Python virtual environment
virtualenv turi_create_env

# Activate your virtual environment
source turi_create_env/bin/activate

# Install Turi Create inside the activated environment
pip install -U turicreate
```
Alternatively, you can create an Anaconda environment and install Turi Create with pip.
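Once the install finishes, a quick way to check that everything imports correctly is to build a tiny SFrame from the Python shell (just a smoke test):

```python
import turicreate as tc

# If this prints a small three-row table, Turi Create is installed correctly
sf = tc.SFrame({'x': [1, 2, 3]})
print(sf)
```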
Training Script
First we need to load all the image data into an SFrame (Turi Create's dataframe-like structure) to use with our model later. To do that, we’ll use the following script,
```python
import turicreate as tc

# Load all images from the images folder, keeping the file path of each image
image_data = tc.image_analysis.load_images('images', with_path=True)

# The five flower categories, matching the dataset's folder names
labels = ['daisy', 'dandelion', 'roses', 'sunflowers', 'tulips']

def get_label(path, labels=labels):
    # The folder name in the path tells us which category the image belongs to
    for label in labels:
        if label in path:
            return label

# Derive a label column from each image's path
image_data['label'] = image_data['path'].apply(get_label)

# Save the SFrame for the training step
image_data.save('flowers.sframe')

# Explore the data in Turi Create's interactive visualizer
image_data.explore()
```
Let’s run the Python script. Now we have the SFrame. Note that this may take some time depending on the processing power of your computer. We should see the following output on the terminal, along with a visualization from the Turi Create visualizer.
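Before training, it's also worth checking that every category was labeled. Something along these lines should do it (a small sketch; it assumes the standard SFrame groupby/aggregate API):

```python
import turicreate as tc

# Reload the saved SFrame
image_data = tc.SFrame('flowers.sframe')

# Count how many images ended up under each label
counts = image_data.groupby('label', {'count': tc.aggregate.COUNT()})
print(counts)
```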
Next we create our model, the one that will tell us what kind of flower we’re looking at when we show our app an image.
```python
import turicreate as tc

# Load the data
data = tc.SFrame('flowers.sframe')

# Make a train-test split
train_data, test_data = data.random_split(0.8)

# Create the image classifier model
model = tc.image_classifier.create(train_data, target='label', max_iterations=1000)

# Save predictions to an SFrame (class and corresponding class-probabilities)
predictions = model.classify(test_data)

# Evaluate the model and save the results into a dictionary
results = model.evaluate(test_data)
print("Accuracy : %s" % results['accuracy'])
print("Confusion Matrix : \n%s" % results['confusion_matrix'])

# Save the model for later use in Turi Create
model.save('Flowers.model')
```
Let’s run this script and let our model train. The script also reports the accuracy of the model’s predictions on the held-out test data. We achieved an accuracy of around 89%, which isn’t bad.
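If you want to poke at the model before moving to iOS, you can load it back and classify a single image. A rough sketch (the image path below is just a placeholder for any file from the dataset):

```python
import turicreate as tc

# Load the model saved by the training script
model = tc.load_model('Flowers.model')

# Wrap one image in an SFrame using the same 'image' column name the model was trained on
# (the path is hypothetical; point it at any image from the dataset)
img = tc.Image('images/flower_photos/roses/example.jpg')
print(model.predict(tc.SFrame({'image': [img]})))
```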
Converting the trained model to a Core ML model
We’ve trained our model, but how do we use it in a mobile application? To use the trained model in an iOS application, we need to convert it to a Core ML model using the following Python script.
```python
import turicreate as tc

model = tc.load_model('Flowers.model')
model.export_coreml('Flowers_CoreML.mlmodel')
```
Now we’re ready to add it to our iOS application.
iOS Application
Let’s open up Xcode and create a Single View Application. Then drag and drop the Core ML model into the Project Navigator. Xcode will create the references and generate a Swift class for the model (Flowers_CoreML) automatically. To keep things organized, you can create a group named Models and drag the Core ML model there.
It should look like this,
Now let’s create the following interface. Alternatively, you can replace both storyboard files with the files in the zip folder accompanying this article.
Now add a new file to the project named CGImagePropertyOrientation+UIImageOrientation.swift and add the following code inside.
```swift
import UIKit
import ImageIO

extension CGImagePropertyOrientation {
    /// Converts a UIImageOrientation to the corresponding CGImagePropertyOrientation,
    /// so Vision receives the correct orientation for images taken with the camera.
    init(_ orientation: UIImageOrientation) {
        switch orientation {
        case .up: self = .up
        case .upMirrored: self = .upMirrored
        case .down: self = .down
        case .downMirrored: self = .downMirrored
        case .left: self = .left
        case .leftMirrored: self = .leftMirrored
        case .right: self = .right
        case .rightMirrored: self = .rightMirrored
        }
    }
}
```
Now edit ViewController.swift and add the following code. Note that the class is named ImageClassificationViewController, so make sure the view controller in the storyboard uses the same class.
```swift
import UIKit
import CoreML
import Vision
import ImageIO

class ImageClassificationViewController: UIViewController {

    // MARK: - IBOutlets

    @IBOutlet weak var imageView: UIImageView!
    @IBOutlet weak var cameraButton: UIBarButtonItem!
    @IBOutlet weak var classificationLabel: UILabel!

    // MARK: - Vision request

    /// The Vision request wrapping our Core ML model.
    /// Created lazily so the model is only loaded when it's first needed.
    lazy var classificationRequest: VNCoreMLRequest = {
        do {
            // Wrap the Xcode-generated Flowers_CoreML model for use with Vision
            let model = try VNCoreMLModel(for: Flowers_CoreML().model)

            let request = VNCoreMLRequest(model: model, completionHandler: { [weak self] request, error in
                self?.processClassifications(for: request, error: error)
            })
            request.imageCropAndScaleOption = .centerCrop
            return request
        } catch {
            fatalError("Failed to load Vision ML model: \(error)")
        }
    }()

    /// Runs the classification request on the given image.
    func updateClassifications(for image: UIImage) {
        classificationLabel.text = "Classifying..."

        let orientation = CGImagePropertyOrientation(image.imageOrientation)
        guard let ciImage = CIImage(image: image) else { fatalError("Unable to create \(CIImage.self) from \(image).") }

        // Perform the request off the main queue so the UI stays responsive
        DispatchQueue.global(qos: .userInitiated).async {
            let handler = VNImageRequestHandler(ciImage: ciImage, orientation: orientation)
            do {
                try handler.perform([self.classificationRequest])
            } catch {
                // Vision reports an error here if the image could not be processed
                print("Failed to perform classification.\n\(error.localizedDescription)")
            }
        }
    }

    /// Updates the UI with the results of the classification.
    func processClassifications(for request: VNRequest, error: Error?) {
        DispatchQueue.main.async {
            guard let results = request.results else {
                self.classificationLabel.text = "Unable to classify image.\n\(error!.localizedDescription)"
                return
            }
            // An image classifier model always returns VNClassificationObservations
            let classifications = results as! [VNClassificationObservation]

            if classifications.isEmpty {
                self.classificationLabel.text = "Nothing recognized."
            } else {
                // Show the top two classifications with their confidence values
                let topClassifications = classifications.prefix(2)
                let descriptions = topClassifications.map { classification in
                    return String(format: "  (%.2f) %@", classification.confidence, classification.identifier)
                }
                self.classificationLabel.text = "Classification:\n" + descriptions.joined(separator: "\n")
            }
        }
    }

    // MARK: - Picking a photo

    @IBAction func takePicture() {
        // Fall back to the photo library if no camera is available (e.g. in the simulator)
        guard UIImagePickerController.isSourceTypeAvailable(.camera) else {
            presentPhotoPicker(sourceType: .photoLibrary)
            return
        }

        let photoSourcePicker = UIAlertController()
        let takePhoto = UIAlertAction(title: "Take Photo", style: .default) { [unowned self] _ in
            self.presentPhotoPicker(sourceType: .camera)
        }
        let choosePhoto = UIAlertAction(title: "Choose Photo", style: .default) { [unowned self] _ in
            self.presentPhotoPicker(sourceType: .photoLibrary)
        }

        photoSourcePicker.addAction(takePhoto)
        photoSourcePicker.addAction(choosePhoto)
        photoSourcePicker.addAction(UIAlertAction(title: "Cancel", style: .cancel, handler: nil))

        present(photoSourcePicker, animated: true)
    }

    func presentPhotoPicker(sourceType: UIImagePickerControllerSourceType) {
        let picker = UIImagePickerController()
        picker.delegate = self
        picker.sourceType = sourceType
        present(picker, animated: true)
    }
}

extension ImageClassificationViewController: UIImagePickerControllerDelegate, UINavigationControllerDelegate {

    // MARK: - Handling the picked image

    func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [String: Any]) {
        picker.dismiss(animated: true)

        // Both the camera and the photo library always provide the original image
        let image = info[UIImagePickerControllerOriginalImage] as! UIImage
        imageView.image = image
        updateClassifications(for: image)
    }
}
```
We’re using the Vision and Core ML frameworks from the iOS SDK to work with the trained model. Remember, the model is the key component that does all the work. The app takes a picture, either from the camera or the photo library, and hands it to the model; the model processes the image and returns the result. The app wraps the image in a request, which is handled by the Vision framework and passed to the Core ML model via the Core ML framework.
Ready to run
The app is now ready to run; you can run it either in the simulator or on an actual device.
We’ll test it in the simulator. To get images, press Home on the simulator, open Safari, grab a rose and a sunflower image from Google Image Search, and put them to the test. Here are the screenshots of the results from the simulator.
Now, if you want, you can load the app onto an actual device and test it on real-life flowers. (The flowers should belong to the categories the model was trained on; otherwise you’ll get random results. That’s how intelligent systems work: they only know what you trained them for.)
Application in a nutshell
Conclusion
That was just one example of adding some intelligence to your iOS apps using machine learning. Image classification isn’t the only smart thing you can add, though. There are a lot of uses for machine learning in mobile applications, and the list is growing by the day.
References
- https://developer.apple.com/documentation/vision/classifying_images_with_vision_and_core_ml
- https://github.com/apple/turicreate