info
This blog post is still a work in progress. If you require further clarifications before the contents are finalized, please get in touch with me here, on LinkedIn, or Twitter.
Many data scientists (myself included) pride ourselves on training a model, watching the loss graph go down, and claiming victory when the test set accuracy reaches 99.99235%.
Why not?
This is, after all, the juiciest part of the job. “Solving” one dataset after another, it may seem like anything around you can be conquered with a simple model.fit.
That was me two years ago.
The naive version of me thought that was all there was to machine learning (ML). As long as we have a dataset, ML is the way to go.
Almost nobody talked about what happens to the model after that.
Like a painting left unseen in an artist’s studio, a machine learning model that is never deployed is a missed opportunity to enrich and enhance the lives of those it was intended to serve.
Without deployment, the model you’ve trained only benefits you.
So how do we maximize the number of people the model can serve?
Mobile devices.
It’s 2023; if you’re reading this, chances are you own a mobile device. Hands down, a model that works on mobile will reach the most people.
In this blog post, I will show you how you can make a model accessible through your mobile phone with Hugging Face and Flutter.
✅ Yes, for free.
tip
⚡ By the end of this post you will learn how to:
- Host a trained image classification model on Hugging Face Spaces.
- Expose the model through a Gradio API endpoint.
- Call the endpoint from a Flutter app on Android and iOS.
💡 NOTE: Code and data for this post are available on my GitHub repo here.
Demo on iOS - iPhone 14 Pro.
Demo on Android - Google Pixel 3 XL.
I’ve also uploaded the app to Google Playstore. Download and try it out here.
If that looks interesting, let’s start!
Making computer vision models (especially large ones) available on mobile devices sounds interesting in theory.
But in practice there are many hurdles: the model must be made small enough to fit on the device, converted into a mobile-friendly format, and optimized to run on limited compute and memory.
I know that sounds complicated. Don’t worry because we are NOT going to deal with any of that in this blog post!
Enter 👇
Hugging Face is a platform that allows users to host and share machine learning models and datasets. It’s most notable for its Transformers library for natural language processing (NLP).
Recently Hugging Face has been expanding its territory beyond NLP and venturing into computer vision.
Ross Wightman, the creator of the wildly popular PyTorch Image Models (TIMM) repo, has joined forces with them.
TIMM is an open-source computer vision repo used in research and commercial applications. It boasts close to a thousand (and counting) state-of-the-art PyTorch image models, pretrained weights, and scripts for training, validation, and inference.
TIMM joins Hugging Face.
tip
Check out the TIMM repo here.
What does it mean for you?
Now you can use any model from TIMM with Hugging Face on the platform of your choice. The Hugging Face docs show how you can do it using Python.
Spaces are one of the most popular ways to share ML applications and demos with the world.
Hardware specs here.
Hardware specs on Spaces.
Details on how I trained the model are here.
Here’s the model that I trained using Fastai and hosted on Hugging Face Spaces.
Try it out 👇
View on the Hugging Face webpage here.
Deployed using Gradio.
If we want to use any other language, we’ll need an API endpoint. All applications deployed using Gradio have an API endpoint.
View the API endpoint here
Calling the endpoint in Flutter:
import 'dart:convert';
import 'package:http/http.dart' as http;

Future<Map> classifyRiceImage(String imageBase64) async {
  final response = await http.post(
    Uri.parse(
        'https://dnth-edgenext-paddy-disease-classifie-dc60651.hf.space/run/predict'),
    headers: <String, String>{
      'Content-Type': 'application/json; charset=UTF-8',
    },
    body: jsonEncode(<String, List<String>>{
      'data': [imageBase64]
    }),
  );

  if (response.statusCode == 200) {
    // The server returned a 200 OK response:
    // decode the classification result and return it.
    final classificationResult = jsonDecode(response.body)["data"][0];
    return classificationResult;
  } else {
    // Any other status code: surface the failure to the caller.
    throw Exception('Failed to classify image.');
  }
}
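For context, here’s a minimal usage sketch of the function above. The file path is a placeholder, and the data-URI prefix reflects the base64 format Gradio image inputs typically expect; treat both as assumptions to adapt to your own app.

import 'dart:convert';
import 'dart:io';

Future<void> main() async {
  // Placeholder path; in the app, the image comes from the picker or camera.
  final bytes = await File('paddy_leaf.jpg').readAsBytes();

  // Gradio image inputs typically expect a base64-encoded data URI (assumption).
  final imageBase64 = 'data:image/jpeg;base64,${base64Encode(bytes)}';

  final result = await classifyRiceImage(imageBase64);
  print(result); // a map with the predicted label and confidence scores
}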
GitHub repo here.
Demo on iOS - iPhone 14 Pro.
Demo on Android - Google Pixel 3 XL.
Use image picker or camera.
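To wire the camera or gallery into the classifier, the image_picker package is one common option. This is a minimal sketch under that assumption; the package and its pickImage API are real, but pickAndClassify is a hypothetical helper showing the glue, not code from the repo.

import 'dart:convert';
import 'package:image_picker/image_picker.dart';

// Hypothetical helper that feeds a picked photo into classifyRiceImage above.
Future<void> pickAndClassify() async {
  // Swap ImageSource.gallery for ImageSource.camera to use the camera.
  final XFile? picked =
      await ImagePicker().pickImage(source: ImageSource.gallery);
  if (picked == null) return; // the user cancelled the picker

  final bytes = await picked.readAsBytes();
  final imageBase64 = 'data:image/jpeg;base64,${base64Encode(bytes)}';

  final result = await classifyRiceImage(imageBase64);
  print(result); // in the real app, show this in the UI instead
}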
I’ve also uploaded the app to Google Playstore. Download and try it out here.
That’s a wrap! In this post, I’ve shown you how you can start from a trained model, host it on Hugging Face Spaces, and serve its predictions to a mobile app.
I hope you’ve learned a thing or two from this blog post. If you have any questions, comments, or feedback, please leave them on the following Twitter/LinkedIn post or drop me a message.