Caffe2 in an iOS App Deep Learning Tutorial
At this year's F8 conference, Facebook's annual developer event, Facebook announced Caffe2, built in collaboration with Nvidia. This framework gives developers yet another tool for building deep learning networks for machine learning. But I am super pumped about this one, because it is specifically designed to run on mobile devices! So I couldn't resist digging in immediately.
I'm still learning, but I want to share my journey in working with Caffe2. So, in this tutorial I'm going to show you step by step how to take advantage of Caffe2 to start embedding deep learning capabilities into your iOS apps. Sound interesting? Thought so… let's roll 🙂
Building Caffe2 for iOS
The first step here is just getting Caffe2 built. Mostly their instructions are adequate, so I won't repeat too much of them here. You can learn how to build Caffe2 for iOS here.
The last step of their iOS install process is to run build_ios.sh, but that's about where the instructions leave off. So from here, let's take a look at the build artifacts. The core library for Caffe2 on iOS is libCaffe2_CPU.a, located inside the caffe2 folder of build_ios; the root of build_ios also contains libCAFFE2_NNPACK.a and libCAFFE2_PTHREADPOOL.a.
NNPACK is sort of like cuDNN for mobile, in that it accelerates neural network operations, but it targets multi-core desktop and mobile CPUs rather than GPUs. pthreadpool is a thread pool library that NNPACK depends on.
Create an Xcode project
Now that the library is built, I created a new iOS app project in Xcode with a single-view template. From here I drag and drop the libCaffe2_CPU.a file into my project hierarchy along with the other two libs, libCAFFE2_NNPACK.a and libCAFFE2_PTHREADPOOL.a. Select ‘Copy’ when prompted. The main library is located at caffe2/build_ios/caffe2/libCaffe2_CPU.a. This pulls a copy of the library into my project and tells Xcode I want to link against it. We need to do the same thing with protobuf, which is located in
In my case I wanted to also include OpenCV2, which has its own setup requirements. You can learn how to install OpenCV2 on their site. The main problem I ran into with OpenCV2 was figuring out that I needed to create a Prefix.h file, and then set the Prefix Header in the project settings to MyAppsName/Prefix.h. In my example I called the project DayMaker, so for me it was DayMaker/Prefix.h. Then I could put the following in the Prefix.h file so that OpenCV2 would get included before any Apple headers:
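This follows the standard pattern from OpenCV's iOS setup docs: guard the import so it only applies to C++ translation units, and pull it in before Apple's headers so OpenCV's macros (like MIN/MAX) don't collide with theirs. A typical Prefix.h looks like:

```objc
// Prefix.h — set as the "Prefix Header" in Build Settings.
// OpenCV must come before any Apple headers to avoid macro conflicts.
#ifdef __cplusplus
#import <opencv2/opencv.hpp>
#endif
```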
Include the Caffe2 headers
In order to actually use the library, we'll need to pull in the right headers. Assume you have a directory structure where your caffe2 files sit a level above your project (I cloned caffe2 into ~/Code/caffe2 and set up my project in ~/Code/DayMaker). You'll need to add the following User Header Search Path in your project settings:
You’ll also need to add the following to “Header Search Paths”
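Assuming the ~/Code layout above, the settings come out along these lines (treat the exact values as an educated guess based on that layout; adjust for wherever your checkout actually lives — the caffe2 sources hold the library headers, and build_ios holds the generated protobuf headers):

```
User Header Search Paths:   $(SRCROOT)/../caffe2
Header Search Paths:        $(SRCROOT)/../caffe2/build_ios
```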
Now you can try importing some Caffe2 C++ headers to confirm it's all working as expected. I created a new Objective-C class to wrap the Caffe2 C++ API. To follow along, create a new Objective-C class called Caffe2. Then rename the Caffe2.m file it creates to Caffe2.mm. This causes the compiler to treat the file as Objective-C++ instead of plain Objective-C, a requirement for making this all work.
Next, I added some Caffe2 headers to the .mm file. At this point this is my entire Caffe2.mm:
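As a sketch of what that stage looks like, the includes mirror the ones used by Caffe2's own predictor example (this assumes the header search paths from the previous step are in place):

```objc
// Caffe2.mm — Objective-C++, so ObjC and the Caffe2 C++ API can mix freely.
#import "Caffe2.h"

#include "caffe2/core/flags.h"
#include "caffe2/core/init.h"
#include "caffe2/core/predictor.h"
#include "caffe2/utils/proto_utils.h"

@implementation Caffe2
@end
```

If this compiles, your search paths and the Objective-C++ rename are both working.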
According to this GitHub issue, a reasonable place to start with a C++ interface to the Caffe2 library is the standalone predictor_verifier.cc app. So let's expand the Caffe2.mm file to include some of this and see if everything works on-device.
With a few tweaks we can make a class that loads up the Caffe2 environment and reads in a pair of init/predict net files. I'll pull in the files from SqueezeNet on the Model Zoo. Copy these into the project hierarchy, and we'll load them up just like any other iOS binary asset…
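A minimal sketch of that loading code follows; the method name and the _predictor ivar are my own choices, and the .pb filenames are the ones SqueezeNet ships with in the Model Zoo, so rename to match your bundle:

```objc
- (void)initCaffe {
    // The two SqueezeNet protobuf files, bundled as app resources.
    NSString *initNetPath = [[NSBundle mainBundle] pathForResource:@"init_net"
                                                            ofType:@"pb"];
    NSString *predictNetPath = [[NSBundle mainBundle] pathForResource:@"predict_net"
                                                               ofType:@"pb"];

    caffe2::NetDef initNet, predictNet;
    CAFFE_ENFORCE(ReadProtoFromFile(initNetPath.UTF8String, &initNet));
    CAFFE_ENFORCE(ReadProtoFromFile(predictNetPath.UTF8String, &predictNet));

    // The Predictor runs initNet once to populate the weights, then
    // reuses predictNet for every subsequent run() call.
    _predictor = new caffe2::Predictor(initNet, predictNet);
}
```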
Next, we can just instantiate this from the AppDelegate to test it out… (Note: you'll need to import Caffe2.h in your bridging header if you're using Swift, like me.)
For me, this produced some linker errors from clang:
Adding -force_load DayMaker/libCaffe2_CPU.a as an additional linker flag corrected this issue, but then it complained about not being able to find opencv. The DayMaker part will be your project name, or just whatever folder your libCaffe2_CPU.a file is located in. This will show up as two flags; just make sure they're in the right order so Xcode performs the right concatenation of the flags.
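For reference, the Other Linker Flags entry ends up looking like this (DayMaker being my project folder; -force_load takes the path as a separate argument, which is why Xcode shows it as two flags):

```
-force_load
DayMaker/libCaffe2_CPU.a
```

The likely reason this flag is needed: Caffe2 registers its operators via static initializers, and without -force_load the linker dead-strips those seemingly unused object files from the static library.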
Building and running the app crashes immediately with this output:
Success! I mean, it doesn't look like success just yet, but this is an error coming from Caffe2. The issue here is just that we never set anything for the input. So let's fix that by providing data from an image.
Loading up some image data
Here you can add a cat .jpg (or some similar image) to the project to work with, and load it in:
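Loading the bundled image and feeding it to the wrapper looks roughly like this (predictWithImage is the method we'll flesh out below; its exact signature here is my own sketch):

```objc
// "cat.jpg" must be added to the app target's bundle resources.
UIImage *image = [UIImage imageNamed:@"cat.jpg"];
NSString *label = [self.caffe2 predictWithImage:image];
NSLog(@"Identified: %@", label);
```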
I refactored this a bit, moving my logic out into a predictWithImage method and creating the predictor in a separate function:
The predictWithImage method uses OpenCV to get the BGR data from the image, then loads that into Caffe2 as the inputVector. Most of the work here is actually done in OpenCV with the cvtColor line…
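A simplified sketch of that method, adapted from the preprocessing in the Android example: the 227×227 input size comes from SqueezeNet, and the flat mean subtraction of 128 is an approximation (the real per-channel ImageNet means differ), so treat the details as illustrative:

```objc
- (NSString *)predictWithImage:(UIImage *)image {
    cv::Mat rgbaMat;
    UIImageToMat(image, rgbaMat);           // from opencv2/imgcodecs/ios.h

    // Caffe2's vision models expect BGR channel order at a fixed size —
    // this cvtColor call is where most of the real work happens.
    cv::Mat bgrMat;
    cv::cvtColor(rgbaMat, bgrMat, CV_RGBA2BGR);
    cv::resize(bgrMat, bgrMat, cv::Size(227, 227));

    // Repack interleaved HWC bytes into a planar NCHW float tensor,
    // subtracting an approximate mean.
    const int channels = 3, height = bgrMat.rows, width = bgrMat.cols;
    caffe2::TensorCPU input;
    input.Resize(std::vector<int>{1, channels, height, width});
    float *data = input.mutable_data<float>();
    for (int c = 0; c < channels; c++) {
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                data[(c * height + y) * width + x] =
                    (float)bgrMat.at<cv::Vec3b>(y, x)[c] - 128.0f;
            }
        }
    }

    // Run the net and take the arg-max over the class scores.
    caffe2::Predictor::TensorVector inputVec{&input}, outputVec;
    _predictor->run(inputVec, &outputVec);
    const float *scores = outputVec[0]->data<float>();
    int best = 0;
    for (int i = 1; i < outputVec[0]->size(); i++) {
        if (scores[i] > scores[best]) best = i;
    }
    return [NSString stringWithUTF8String:imagenet_classes[best]];
}
```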
The imagenet_classes are defined in a new file, classes.h. It’s just a copy from the Android example repo here.
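For context, classes.h is just a big array mapping the model's output index to a human-readable label, shaped like this excerpt (the full 1000-entry list is what you copy from the Android example repo):

```objc
// classes.h — ImageNet labels, index-aligned with the model's output.
static const char *imagenet_classes[] = {
    "tench, Tinca tinca",
    "goldfish, Carassius auratus",
    // … 997 more entries …
    "toilet tissue, toilet paper, bathroom tissue",
};
```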
Most of this logic was pulled and modified from bwasti’s github repo for the Android example.
With these changes I was able to simplify the initCaffe method as well:
So you’ll notice I’m pulling in the cat.jpg here. I used this cat pic:
The output when running on iPhone 7:
Identified: tabby, tabby cat
Hooray! It works on a device!
I'm going to keep working on this and publishing what I learn. If that sounds like something you want to follow along with, you can get new posts by email: just join my mobile development newsletter. I'll never spam you, just keep you up to date with deep learning and my own work on the topic.
Thanks for reading! Leave a comment or contact me if you have any feedback 🙂
Side-note: Compiling on Mac OS Sierra with CUDA
When compiling for Sierra as a target (not the iOS build script, but just running make) I ran into a problem in protobuf that is related to this issue. This will only be a problem if you are building against CUDA. I suppose it's somewhat unusual to do so, because most Mac computers don't have NVIDIA chips in them, but in my case I have a 2013 MBP with an NVIDIA chip that I can use CUDA with.
To resolve the problem in the hackiest way possible, I applied the changes found in that issue's pull request. Just updating protobuf to the latest version by building from source would probably also work… but this seemed faster. I opened my own copy of the file at /usr/local/Cellar/protobuf/3.2.0_1/include/google/protobuf/stubs/atomicops.h and manually commented out lines 198 through 205:
I'm not sure what the implications of this are, but it matches what was done in the official repo, so it can't do much harm. With this change I'm able to make the Caffe2 project with CUDA support enabled. In the version of protobuf used by TensorFlow, you can see this bit is removed entirely, so this seems to be the right approach until protobuf v3.2.1 is released, where it's fixed the same way.
Did this tutorial help you?
Your support on Patreon allows me to make better tutorials more often.