End-to-end object detection on an Android phone, with analytics: potholes on the road
Introduction
Although object detection may not be as trendy as LLMs these days, deploying lightweight models on edge devices can be highly satisfying—especially when solving real-world problems with limited resources.
In this article, I document an end-to-end approach: fine-tuning a custom object detector on a pothole dataset and deploying it on a low-cost Android phone.
Why potholes?
We often say that roads are bumpy and full of potholes, but hardly anyone quantifies the problem. According to one report, potholes in India caused 9,423 accidents in 2017, leading to more than 3,000 deaths.
To make this use case practical, we deploy the model on an Android phone (< €200) that logs pothole detections along with GPS coordinates for later analysis. This kind of analytics pipeline can help city administrations identify high-risk regions and allocate resources more effectively (especially when combined with traffic density and other infrastructure data).
Fine-tuning the object detector
For this use case, we use PyTorch-based pretrained models due to their strong community support and maturity. Specifically, we fine-tune the YOLOv8 model from Ultralytics, which offers a very simple and intuitive interface for both training and inference.
I used a Google Colab notebook for training. The full code is included below and is also available in this GitHub repository. As you’ll see, the training pipeline is just a few lines of code—thanks to the abstraction provided by the Ultralytics team.
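To give a feel for how little code is involved, here is a sketch of the training step with the Ultralytics API. The dataset YAML path and the hyperparameter values below are placeholders for illustration, not the exact settings from the notebook:

```python
def train_pothole_detector(data_yaml: str = "pothole.yaml",
                           epochs: int = 50,
                           imgsz: int = 640):
    """Fine-tune a COCO-pretrained YOLOv8 nano model on a pothole dataset.

    `data_yaml` points to an Ultralytics dataset config listing the
    train/val image folders and the single 'pothole' class (the path
    here is a placeholder).
    """
    # Imported inside the function so the module can be loaded without
    # ultralytics installed (it is a heavy dependency).
    from ultralytics import YOLO

    model = YOLO("yolov8n.pt")  # start from pretrained nano weights
    results = model.train(data=data_yaml, epochs=epochs, imgsz=imgsz)
    return model, results
```

On Colab, calling this function on a GPU runtime produces training logs, metrics, and checkpoints under `runs/detect/` by default.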
Deployment on edge
For deployment, we use an Android mobile phone and implement the app in Kotlin. Since my familiarity with TFLite in Kotlin is limited, I started from this excellent open-source project: YOLOv8-TFLite Object Detector.
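Before the Kotlin app can run the model, the fine-tuned PyTorch checkpoint has to be converted to TFLite. With Ultralytics this is essentially a one-liner; the checkpoint path below is a placeholder:

```python
def export_to_tflite(checkpoint: str = "runs/detect/train/weights/best.pt") -> str:
    """Convert a fine-tuned YOLOv8 checkpoint into a .tflite file for Android."""
    # Imported inside the function: heavy dependency, not needed at import time.
    from ultralytics import YOLO

    model = YOLO(checkpoint)
    # Ultralytics handles the PyTorch -> TensorFlow -> TFLite chain internally
    # and returns the path of the exported file.
    return model.export(format="tflite")
```

The resulting `.tflite` file is then bundled into the app's assets and loaded by the TFLite interpreter on the phone.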
I extended the original codebase to add:
- logging of detections
- automatic snapshots
- GPS tracking for every pothole event
You can find the full project (with setup instructions) in my repository: Road Analytics
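On the analysis side, the logged detections can be aggregated into hotspots by rounding GPS coordinates so that nearby events group together. The CSV schema below (`timestamp,lat,lon,confidence`) is an assumed format for illustration, not necessarily the exact schema the app writes:

```python
import csv
from collections import Counter


def pothole_hotspots(log_path: str, precision: int = 3):
    """Count logged pothole detections per rounded GPS cell.

    Rounding lat/lon to 3 decimal places groups detections within
    roughly 100 m of each other. Returns (lat, lon) cells sorted by
    detection count, most frequent first.
    """
    counts = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            cell = (round(float(row["lat"]), precision),
                    round(float(row["lon"]), precision))
            counts[cell] += 1
    return counts.most_common()
```

A city administration could feed this ranking into a map overlay to prioritize repair crews for the worst road segments.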
After deployment, the app’s detection screen looks as follows:

[Screenshot: the app’s pothole detection screen]
What’s next?
This is a toy example built on a relatively small dataset, so there is significant room for improvement with larger, more specialized real-world datasets.
Because we deploy on a resource-constrained device, the model was quantized to improve inference speed. With more capable devices, one could afford higher-precision models or hybrid on-device + cloud processing.
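With Ultralytics, INT8 post-training quantization can be requested directly at export time. A minimal sketch follows; both paths are placeholders, and this is one way to obtain a quantized model rather than necessarily the exact command used here:

```python
def export_quantized(checkpoint: str = "runs/detect/train/weights/best.pt",
                     data_yaml: str = "pothole.yaml") -> str:
    """Export a fine-tuned YOLOv8 model as an INT8-quantized TFLite file.

    INT8 export needs a representative dataset for calibration, which
    Ultralytics reads from the dataset YAML used for training.
    """
    from ultralytics import YOLO  # lazy import: heavy dependency

    model = YOLO(checkpoint)
    return model.export(format="tflite", int8=True, data=data_yaml)
```

The quantized model trades a small amount of accuracy for a much smaller file and faster inference on the phone's CPU.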
Connect with me on LinkedIn for any questions!

