Core ML, announced at WWDC 2017, is a new set of APIs built by Apple for devices running iOS 11 or later. With Core ML, developers can incorporate machine learning models into their mobile apps and have inference accelerated through the Metal APIs. In practice, this means on-device model evaluation should be significantly faster than running an unoptimized framework such as TensorFlow or Caffe2 on the same hardware.
Notably, the WWDC keynote mentioned that Core ML will support models trained with frameworks such as Keras or Caffe, so many existing models can be converted to run on-device with the new acceleration.
Core ML supports the following key features:
– Deep neural networks
– Recurrent neural networks
– Convolutional neural networks
– Support vector machines
– Tree ensembles
– Linear models
This is all done with on-device processing, and supports iOS, macOS, watchOS, and tvOS.
Under the Core ML banner are a Model Converter, a Natural Language API, and a Vision API.
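To give a feel for how this might come together, here is a hedged sketch of running a prediction with a Core ML model in Swift. It assumes (based on what was shown at WWDC) that Xcode generates a typed Swift class from a `.mlmodel` file added to the project; the model name `FlowerClassifier` and the `pixelBuffer` input are hypothetical placeholders, not part of any real SDK.

```swift
import CoreML

// Hypothetical: Xcode generates a Swift class ("FlowerClassifier" here)
// from a .mlmodel file added to the project, with typed inputs/outputs
// matching the model's interface.
let model = FlowerClassifier()

// "pixelBuffer" stands in for a CVPixelBuffer you have already prepared
// from a camera frame or image.
if let result = try? model.prediction(image: pixelBuffer) {
    // The generated class exposes the model's outputs as properties.
    print(result.classLabel)
}
```

The appeal of this design is that the model's inputs and outputs become compile-time-checked Swift types rather than untyped dictionaries, so a mismatched input shape is caught by the compiler instead of at runtime.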
Once the APIs are made public, I’ll collect the references here for the official Core ML documentation. Follow along with me and let’s start learning!
For now, we know that the Vision API will support face tracking, face detection, landmarks, text detection, rectangle detection, barcode detection, object tracking, and image registration.
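As an example of one of those capabilities, here is a sketch of what face detection with the Vision API might look like. The type and method names below are based on the announced iOS 11 SDK and should be checked against the final documentation once it ships; `image` is assumed to be a `CGImage` you already have.

```swift
import Vision

// Build a face-detection request with a completion handler that
// receives the observations Vision found in the image.
let request = VNDetectFaceRectanglesRequest { request, error in
    guard let faces = request.results as? [VNFaceObservation] else { return }
    for face in faces {
        // boundingBox is reported in normalized image coordinates
        // (0...1 on each axis, origin at the lower left).
        print(face.boundingBox)
    }
}

// Run the request against a CGImage ("image" is assumed to exist).
let handler = VNImageRequestHandler(cgImage: image, options: [:])
try? handler.perform([request])
```

Note that `perform(_:)` runs synchronously, so in a real app you would dispatch it off the main thread.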
Did this tutorial help you?
Your support on Patreon allows me to make better tutorials more often.