CQUniversity

Evaluating faster-RCNN and YOLOv3 for target detection in multi-sensor data

chapter
posted on 2024-09-16, 20:30 authored by Anwaar Ulhaq, Asim Khan, Randall Robinson
Intelligent and autonomous systems such as driverless cars must be able to navigate at any time of day or night. It is therefore vital that they can reliably detect objects in order to anticipate any situation. One way to capture such imagery is through multi-sensor data, such as FLIR (Forward-Looking Infrared) and visible cameras. Contemporary deep object detectors such as YOLOv3 (You Only Look Once) (Redmon and Farhadi, YOLOv3: An Incremental Improvement, arXiv, 2018) and Faster R-CNN (Faster Region-based Convolutional Neural Network) (Ren et al., Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks, NeurIPS, 2015) are well trained on daytime images. However, no performance evaluation is available against multi-sensor data. In this paper, we argue that diverse contextual multi-sensor data and transfer learning can optimise the performance of deep object detectors to detect objects around the clock. We explore how contextual multi-sensor data can play a pivotal role in modelling and recognising objects, especially at night. For this purpose, we propose applying contextual data fusion to the available training data before training these deep detectors. We show that such enhancement significantly increases the performance of deep-learning-based object detectors.
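The abstract does not specify how the contextual fusion is performed, so as an illustration only, here is a minimal sketch of one common baseline: weighted pixel-level blending of a visible RGB frame with a co-registered FLIR intensity frame before the fused image is passed to a detector. The function name, the fixed weight `alpha`, and the assumption of pre-aligned, equal-sized frames are all hypothetical, not the chapter's method.

```python
import numpy as np

def fuse_visible_flir(visible, flir, alpha=0.6):
    """Blend a visible RGB frame (H, W, 3) with a FLIR intensity
    frame (H, W) by broadcasting the FLIR channel to RGB and taking
    a weighted average. Assumes the frames are already co-registered.
    """
    # Replicate the single thermal channel across the three RGB channels
    flir_rgb = np.repeat(flir[..., np.newaxis], 3, axis=-1)
    # Weighted blend in float to avoid uint8 overflow, then clip back
    fused = alpha * visible.astype(np.float32) + (1.0 - alpha) * flir_rgb.astype(np.float32)
    return np.clip(fused, 0, 255).astype(np.uint8)

# Toy example: a flat grey visible frame and a hot (bright) FLIR frame
visible = np.full((4, 4, 3), 100, dtype=np.uint8)
flir = np.full((4, 4), 200, dtype=np.uint8)
fused = fuse_visible_flir(visible, flir)  # each pixel: 0.6*100 + 0.4*200 = 140
```

In practice, fusion schemes range from this kind of early (pixel-level) blending to feature-level or decision-level fusion inside the detector itself; the chapter's evaluation would determine which variant actually helps night-time detection.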

History

Editor

Rehman A

Start Page

185

End Page

193

Number of Pages

9

ISBN-13

9789811517341

Publisher

Springer

Place of Publication

Singapore

Open Access

  • No

Era Eligible

  • Yes

Chapter Number

14

Parent Title

Statistics for data science and policy analysis
