CQUniversity

Adversarial domain adaptation for action recognition around the clock

Conference contribution
Posted on 2024-05-14, 05:04, authored by Anwaar Ulhaq
With numerous potential applications in visual surveillance and nighttime driving, recognizing human action in low-light conditions remains a difficult problem in computer vision. Existing methods split this task into two distinct steps: dark-video enhancement followed by action recognition. However, isolating enhancement from recognition impedes end-to-end learning of the space-time representation needed for video action classification. This paper presents a domain adaptation-based approach that uses adversarial learning to perform cross-domain action recognition. The model is trained with supervised learning on a large amount of labelled data from the source domain (daytime action sequences), while deep domain-invariant features enable unsupervised learning on abundant unlabelled data from the target domain (nighttime action sequences). The resulting augmented model, named 3D-DiNet, can be trained using standard backpropagation with an additional layer. It achieves state-of-the-art performance on the InFAR and XD145 action datasets.
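
The abstract describes adversarial domain adaptation trained end-to-end with standard backpropagation plus one additional (domain-adversarial) layer. The sketch below is a minimal, generic PyTorch illustration of that technique using a gradient reversal layer in the DANN style; the toy 3D backbone, layer sizes, class count, and equal loss weighting are illustrative assumptions and do not reproduce the paper's 3D-DiNet architecture.

# Minimal DANN-style sketch: supervised action loss on labelled source (day) clips
# plus an adversarial domain loss on both domains, with gradients reversed so the
# shared features become domain-invariant. All sizes/hyperparameters are assumed.
import torch
import torch.nn as nn
from torch.autograd import Function


class GradReverse(Function):
    """Identity in the forward pass; multiplies gradients by -lambda in backward."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output.neg() * ctx.lambd, None


class DANN3D(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        # Toy 3D-CNN feature extractor (stand-in for a space-time backbone).
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
        )
        self.action_head = nn.Linear(16, num_classes)  # supervised on source only
        self.domain_head = nn.Linear(16, 2)            # day vs. night discriminator

    def forward(self, x, lambd=1.0):
        f = self.features(x)
        return self.action_head(f), self.domain_head(GradReverse.apply(f, lambd))


# One training step: labelled source (day) clips + unlabelled target (night) clips.
model = DANN3D()
opt = torch.optim.SGD(model.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()

src = torch.randn(2, 3, 8, 32, 32)          # (batch, channels, frames, H, W)
src_labels = torch.randint(0, 10, (2,))
tgt = torch.randn(2, 3, 8, 32, 32)

cls_src, dom_src = model(src)
_, dom_tgt = model(tgt)
loss = (ce(cls_src, src_labels)
        + ce(dom_src, torch.zeros(2, dtype=torch.long))   # source domain label = 0
        + ce(dom_tgt, torch.ones(2, dtype=torch.long)))   # target domain label = 1
opt.zero_grad()
loss.backward()
opt.step()

In this setup the domain head tries to tell day clips from night clips, while the reversed gradients push the shared features toward domain invariance, matching the cross-domain training regime described in the abstract.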

History

Start Page

279

End Page

285

Number of Pages

7

Start Date

2022-11-30

Finish Date

2022-12-02

ISBN-13

9781665456425

Location

Sydney, Australia

Publisher

IEEE

Place of Publication

Piscataway, NJ

Peer Reviewed

  • Yes

Open Access

  • No

Era Eligible

  • Yes

Name of Conference

2022 International Conference on Digital Image Computing: Techniques and Applications (DICTA)

Parent Title

Proceedings of the Digital Image Computing: Techniques and Applications (DICTA)