Enhancing fruit and vegetable detection in unconstrained environment with a novel dataset

Jan 1, 2024 · Sandeep Khanna, Chiranjoy Chattopadhyay, Suman Kundu · 1 min read
Abstract
Automating the detection of fruits and vegetables using computer vision is essential for modernizing agriculture, improving efficiency, ensuring food quality, and contributing to sustainable and technologically advanced farming practices. This paper presents an end-to-end pipeline for detecting and localizing fruits and vegetables in real-world scenarios. To achieve this, a dataset named FRUVEG67 was curated that includes images of 67 classes of fruits and vegetables captured in unconstrained scenarios, with only a few manually annotated samples per class. A semi-supervised data annotation algorithm (SSDA) was developed that generates object bounding boxes to label the remaining non-annotated images. For detection, the Fruit and Vegetable Detection Network (FVDNet), an ensemble version of YOLOv8n featuring three distinct grid configurations, was proposed. In addition, an averaging approach for bounding-box prediction and a voting mechanism for class prediction were implemented. Finally, Jensen–Shannon divergence (JSD) was combined with focal loss as the overall loss function for better detection of smaller objects. Experimental results highlight the superiority of FVDNet compared to recent versions of YOLO, showcasing remarkable improvements in detection and localization performance. An impressive mean average precision (mAP) score of 0.82 across all classes was achieved. Furthermore, the efficacy of FVDNet was evaluated on open-category refrigerator images, where it demonstrates promising results.
Type
Publication
Scientia Horticulturae

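The box-level fusion described in the abstract, coordinate averaging across ensemble members together with majority voting over class labels, can be illustrated with a minimal sketch. This is not the authors' implementation: the function name `fuse_detections`, the `(box, class, score)` input format, and the tie-breaking behaviour are assumptions made for illustration only.

```python
# Hypothetical sketch of the box-averaging and class-voting fusion described in
# the abstract. Names such as `fuse_detections` and the input format are
# illustrative assumptions, not the authors' code.
from collections import Counter
from statistics import mean


def fuse_detections(detections):
    """Fuse per-model detections of the same object.

    `detections` is a list of (box, cls, score) tuples, one per ensemble
    member, where box = (x1, y1, x2, y2). The fused box is the coordinate-wise
    mean; the fused class is chosen by majority vote, with the mean score of
    the winning class attached.
    """
    # Coordinate-wise average of the predicted boxes.
    fused_box = tuple(mean(b[i] for b, _, _ in detections) for i in range(4))

    # Majority vote over the predicted class labels.
    votes = Counter(cls for _, cls, _ in detections)
    winning_cls, _ = votes.most_common(1)[0]

    # Average the confidence scores of the models that voted for the winner.
    fused_score = mean(s for _, cls, s in detections if cls == winning_cls)
    return fused_box, winning_cls, fused_score


if __name__ == "__main__":
    # Three ensemble members detecting (roughly) the same apple.
    preds = [
        ((10, 12, 50, 52), "apple", 0.91),
        ((11, 10, 49, 50), "apple", 0.88),
        ((12, 11, 51, 53), "pear", 0.40),
    ]
    print(fuse_detections(preds))
    # -> ((11.0, 11.0, 50.0, 51.67), 'apple', 0.895)
```

In this sketch, averaging the boxes smooths localization differences across the three grid configurations, while voting keeps the class decision robust to a single dissenting ensemble member.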
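Similarly, a loss combining Jensen–Shannon divergence with focal loss, as named in the abstract, can be sketched for a single sample. The weighting factor `lam`, the focusing parameter `gamma`, and the way the two terms are summed are placeholders; the paper's exact formulation may differ.

```python
# Assumed illustrative formulation of a combined JSD + focal loss.
# `lam`, `gamma`, and all function names are placeholders, not the paper's code.
import numpy as np


def jensen_shannon_divergence(p, q, eps=1e-12):
    """JSD between two discrete distributions p and q of the same shape."""
    p = np.clip(p, eps, 1.0)
    q = np.clip(q, eps, 1.0)
    m = 0.5 * (p + q)
    return 0.5 * (np.sum(p * np.log(p / m)) + np.sum(q * np.log(q / m)))


def focal_loss(p, target_idx, gamma=2.0, eps=1e-12):
    """Focal loss for one sample: -(1 - p_t)^gamma * log(p_t)."""
    p_t = np.clip(p[target_idx], eps, 1.0)
    return -((1.0 - p_t) ** gamma) * np.log(p_t)


def combined_loss(pred_probs, target_probs, target_idx, lam=1.0, gamma=2.0):
    """Focal classification term plus a JSD term between predicted and
    target class distributions; `lam` controls the trade-off."""
    return (focal_loss(pred_probs, target_idx, gamma)
            + lam * jensen_shannon_divergence(pred_probs, target_probs))


if __name__ == "__main__":
    pred = np.array([0.7, 0.2, 0.1])    # predicted class probabilities
    target = np.array([1.0, 0.0, 0.0])  # one-hot ground truth
    print(combined_loss(pred, target, target_idx=0))
```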