The fruit load of entire mango orchards was estimated well before harvest using (i) in-field machine vision on mobile platforms and (ii) WorldView-3 satellite imagery. For in-field machine vision, two imaging platforms were used: (a) a daytime system with LiDAR-based tree segmentation and multiple views per tree, and (b) a nighttime system using two images per tree. The machine vision approaches involved training neural networks on image snips from a single orchard only, followed by application to all other orchards (which varied in location and cultivar). Estimates of fruit load per tree achieved up to an R2 of 0.88 and an RMSE of 22.5 fruit/tree against harvest fruit counts per tree (n = 18 trees per orchard). For satellite imaging, a regression was established between a number of spectral indices and fruit number for a set of trees (n = 18) in each orchard (e.g., R2 = 0.57, RMSE = 22 fruit/tree), and this model was applied across all tree-associated pixels per orchard. The weighted average percentage error on packhouse counts (weighted by packhouse fruit numbers), averaged across all orchards assessed, was 6.0, 8.8 and 9.9% for the daytime machine vision, nighttime machine vision and satellite methods, respectively. Additionally, fruit sizing was achieved with an RMSE of 5 mm (on fruit length and width). These estimates are useful for harvest resource planning and marketing, and set the foundation for automated harvest.
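As a rough illustration of the per-tree machine vision count, the sketch below runs a generic object detector over the images of one tree and counts detections above a score threshold. The detector, file names and threshold are assumptions for illustration only; the paper's own network, trained on image snips from a single orchard, is not reproduced here.

```python
# Minimal sketch (assumed pipeline): count detected fruit in the images of one
# tree with a generic object detector.  A torchvision Faster R-CNN stands in
# for the paper's detector (assumed fine-tuned on fruit image snips elsewhere).
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def count_fruit(image_path, score_threshold=0.5):
    """Return the number of detections above the score threshold in one image."""
    image = to_tensor(Image.open(image_path).convert("RGB"))
    with torch.no_grad():
        detections = model([image])[0]
    return int((detections["scores"] > score_threshold).sum())

# Hypothetical per-tree count from two night-time views of the same tree,
# summed without any occlusion correction (illustrative only).
images = ["tree_001_side_a.jpg", "tree_001_side_b.jpg"]
raw_count = sum(count_fruit(p) for p in images)
print(f"Detected fruit across {len(images)} views: {raw_count}")
```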
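The satellite-based estimate can be illustrated in the same spirit: a regression is fitted between per-tree spectral index values and harvest fruit counts for the calibration trees, then applied to the tree-associated pixels (aggregated per tree) of every tree in the orchard. The regression form, file names and variable names below are assumptions; the paper does not specify the software used.

```python
# Minimal sketch (assumed implementation): regress per-tree spectral indices
# against fruit counts for calibration trees, then predict fruit load for all
# trees in the orchard from their tree-associated pixel values.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score, mean_squared_error

# Hypothetical calibration data: one row per calibration tree (n = 18),
# columns are spectral indices averaged over that tree's pixels;
# y is the harvest fruit count per tree.
X_calib = np.loadtxt("calibration_tree_indices.csv", delimiter=",")  # (18, n_indices)
y_calib = np.loadtxt("calibration_fruit_counts.csv", delimiter=",")  # (18,)

model = LinearRegression().fit(X_calib, y_calib)

# Goodness of fit on the calibration set (the abstract reports e.g.
# R2 = 0.57, RMSE = 22 fruit/tree for one orchard).
pred_calib = model.predict(X_calib)
r2 = r2_score(y_calib, pred_calib)
rmse = np.sqrt(mean_squared_error(y_calib, pred_calib))
print(f"Calibration R2 = {r2:.2f}, RMSE = {rmse:.1f} fruit/tree")

# Apply the model to per-tree index values for every segmented tree in the
# orchard and sum the predictions to estimate orchard fruit load.
X_orchard = np.loadtxt("orchard_tree_indices.csv", delimiter=",")
orchard_fruit_load = model.predict(X_orchard).sum()
print(f"Estimated orchard fruit load: {orchard_fruit_load:.0f} fruit")
```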
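The orchard-level accuracy figures quoted above are weighted averages of per-orchard percentage errors against packhouse counts, with weights given by the packhouse fruit numbers. A minimal sketch of that metric, with hypothetical input values, follows.

```python
# Minimal sketch (assumed formulation): percentage error of the estimated
# orchard fruit load against the packhouse count, averaged over orchards
# with each orchard weighted by its packhouse fruit number.
import numpy as np

def weighted_mean_percentage_error(estimated, packhouse):
    estimated = np.asarray(estimated, dtype=float)
    packhouse = np.asarray(packhouse, dtype=float)
    # Absolute percentage error per orchard, relative to the packhouse count.
    ape = np.abs(estimated - packhouse) / packhouse * 100.0
    # Weight each orchard by its packhouse fruit number.
    return np.average(ape, weights=packhouse)

# Hypothetical example with three orchards (values are illustrative only).
estimates = [41_000, 118_500, 76_200]   # machine vision / satellite estimates
packhouse = [43_000, 121_000, 80_000]   # packhouse fruit counts
print(f"Weighted mean percentage error: "
      f"{weighted_mean_percentage_error(estimates, packhouse):.1f}%")
```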
History
Start Page: 1
End Page: 6
Number of Pages: 6
Start Date: 2018-05-25
Finish Date: 2018-05-25
Location: Brisbane, Australia
Publisher: ICRA 2018 Workshop on Robotic Vision and Action in Agriculture