Detecting Plant Health with a YOLOv4 Model
In recent years, we’ve seen huge growth in demand for houseplants, particularly among millennials and Gen Z. Plants are thought to be a way for younger generations to fulfill a need to nurture, not to mention providing both physical and mental benefits. The plant trend has been fueled by social media, creating a generation of first-time plant owners (source).
For this project, I wanted to classify whether a plant is healthy or unhealthy using object detection. The goal is to help novice plant owners determine if their plants are healthy or need some special attention. If the model is able to accurately classify an unhealthy plant, then users can take the next step to diagnose and determine whether their plants are being over/under watered, receiving too much/too little sunlight, infested, etc.
Below, I’ll walk through the general steps I took to train a YOLOv4 model to detect plant health. For more details and code, you can navigate to my Github or see the Resources section at the bottom.
YOLOv4 Model
While I initially trained CNN models to get a sense of baseline performance, the ultimate goal was to train a YOLO (you only look once) model for the purposes of real-time object detection.
Data
Three different datasets were collected: healthy and diseased crop leaves, healthy and wilted houseplants, and images from r/plantclinic.
- This Kaggle dataset contains 88K lab images of healthy and diseased crop leaves. Crops included apple, corn, strawberry, tomato, etc.
- Any poor-quality and duplicate images were removed from the sample used for modeling.
- This Kaggle dataset contains 904 Google images of healthy and wilted houseplants.
- There was minimal cleaning needed as images were already classified and were mostly good quality.
- r/plantclinic is where Reddit users can submit images of their unhealthy plants to get diagnosed. Approximately 500 images were scraped from the subreddit, a majority of which were images of unhealthy plants.
- Each individual picture was reviewed to confirm the plants were indeed unhealthy and any low quality or ambiguous pictures were removed.
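The Reddit scrape can be sketched with PRAW (the Python Reddit API wrapper). The credentials are placeholders and the `is_image_post` helper is an illustrative assumption, not the exact script used:

```python
def is_image_post(url: str) -> bool:
    """Keep only direct image links (a simple filter for common Reddit image hosts)."""
    return url.lower().endswith((".jpg", ".jpeg", ".png"))

def scrape_plantclinic(limit: int = 500) -> list:
    # Third-party dependency: pip install praw
    import praw

    # Placeholder credentials -- register an app at reddit.com/prefs/apps
    reddit = praw.Reddit(
        client_id="YOUR_CLIENT_ID",
        client_secret="YOUR_CLIENT_SECRET",
        user_agent="plant-health-scraper",
    )
    urls = []
    for submission in reddit.subreddit("plantclinic").new(limit=limit):
        if is_image_post(submission.url):
            urls.append(submission.url)
    return urls
```

Each returned URL can then be downloaded and manually reviewed, as described above.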
Preprocessing
For the training dataset, each image was manually labeled and annotated using LabelImg. The dataset comprised 905 healthy plant images from the crop leaves and houseplant datasets and 1,028 unhealthy plant images from the crop leaves, houseplant, and Reddit datasets.
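LabelImg saves each annotation in YOLO's plain-text format: one line per bounding box, with the class index followed by the box center and size, all normalized to the image dimensions. A small sketch of the conversion from pixel coordinates (the function name is mine):

```python
def to_yolo_label(class_id, box, img_w, img_h):
    """Convert a pixel-space box (x_min, y_min, x_max, y_max) to a YOLO label line.

    YOLO expects: <class> <x_center> <y_center> <width> <height>,
    each normalized to [0, 1] by the image width/height.
    """
    x_min, y_min, x_max, y_max = box
    x_c = (x_min + x_max) / 2 / img_w
    y_c = (y_min + y_max) / 2 / img_h
    w = (x_max - x_min) / img_w
    h = (y_max - y_min) / img_h
    return f"{class_id} {x_c:.6f} {y_c:.6f} {w:.6f} {h:.6f}"

# e.g. a 200x100 box centered in a 400x400 image:
print(to_yolo_label(1, (100, 150, 300, 250), 400, 400))
# -> 1 0.500000 0.500000 0.500000 0.250000
```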
Training
Additional inputs for training a YOLO model included 1) a cfg file that contains the model configurations, 2) a names file that lists the class names, and 3) a data file that points to the data and backup folders (note: when training a YOLO model, weights are saved to the backup folder every 100 iterations).
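For reference, the names and data files are short plain-text files. A sketch of what they might contain for this two-class setup (the paths are illustrative):

```
# obj.names -- one class per line
healthy
unhealthy

# obj.data -- points the trainer at the dataset and backup folder
classes = 2
train   = data/train.txt
valid   = data/test.txt
names   = data/obj.names
backup  = /mydrive/yolov4/backup
```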
The specific model configurations used included:
- batch = 64
- subdivisions = 16
- max_batches = 6000
- steps = 4800, 5400
- classes = 2
- filters = 21
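The `filters` value above isn't arbitrary: in a YOLOv4 cfg, the convolutional layer before each YOLO layer needs `filters = (classes + 5) * 3`, since each of the 3 anchor boxes per grid cell predicts 4 box coordinates, 1 objectness score, and one score per class:

```python
def yolo_filters(num_classes: int, anchors_per_scale: int = 3) -> int:
    # Each anchor predicts: 4 box coords + 1 objectness + num_classes class scores
    return (num_classes + 5) * anchors_per_scale

print(yolo_filters(2))  # 2 classes (healthy / unhealthy) -> 21
```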
After training, the weights saved at iteration 1,000 had the highest mean Average Precision (mAP) at 84%, followed by the 3,000-iteration weights at 83%. However, mAP remained in the low eighties at each subsequent thousand-iteration checkpoint.
Model Evaluation
For evaluation, the 3,000-iteration weights were used: they give nearly the same precision as the 1,000-iteration weights while benefiting from additional training. Below are three kinds of results observed.
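Under the hood, mAP counts a detection as a true positive when its box overlaps a ground-truth box by at least an Intersection-over-Union (IoU) threshold (darknet's evaluation defaults to 0.5). A minimal IoU sketch:

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two (x_min, y_min, x_max, y_max) boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```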
High Confidence + Accuracy
Ambiguous Cases
Confusion
Conclusion & Next Steps
Based on the mAP staying relatively flat at every thousand-iteration checkpoint, the YOLO model has likely reached its upper limit in performance. This ceiling could be due to the wide variety seen in plants, where a characteristic that is unhealthy for one plant might not be for another. As seen above, for example, some plants' leaves naturally grow downward, but the model can mistake them for wilting and classify the plant as unhealthy.
For next steps, I plan to train the model on additional plant images scraped from r/plantclinic. This would help expose the model to a greater variety of plants in “real-life” settings. Ultimately, the goal is to use the model to create a mobile app where users can either use their phone cameras to detect plant health on the spot or set up a webcam and view detections on their phone while they are away.
Resources
- Notebook used to train the YOLOv4 model in Google Colab using the free GPU acceleration (adapted from The AI Guy)
- Files used to train the model are located in Gdrive (also linked in the notebook)
- To train your own YOLO model with a custom dataset, follow this tutorial on how to build and train a YOLO model and this tutorial on how to label a custom dataset.