Introduction:
Automating the visual detection and tracking of moving objects by intelligent autonomous systems has been an active research topic in computer vision for the past few decades. The research has diverse applications, ranging from surveillance, security, and military systems to autonomous navigation, object recognition, and human-machine interaction. In this research, we aim to develop an autonomous intelligent vision system that assists autonomous navigation for unmanned aerial vehicles (UAVs) equipped with a forward-looking camera. The system automatically localizes and tracks obstacles that may be present in the path of the UAV. While flying, a UAV may collide with other UAVs, airplanes, birds, or other flying objects. It is therefore essential to identify obstacles and localize them in real time throughout their trajectories for successful autonomous navigation and collision avoidance. In this research, a fast and robust approach is proposed by integrating an adaptive object detection technique into a kernelized correlation filter (KCF) framework. The KCF tracker is automatically initialized via salient object detection and localization. An adaptive object detection strategy refines the location and boundary of the object whenever the tracking confidence value falls below a certain threshold. In addition, a reliable post-processing technique is designed to accurately localize the object from a saliency map. Extensive quantitative and qualitative experiments on challenging datasets have been performed to verify the proposed approach. The proposed approach successfully and accurately detects and tracks the salient object throughout the sequence, even when the appearance of the flying object suffers from deformations, scale variations, illumination changes, and camera instability. It greatly outperforms state-of-the-art methods in both tracking speed and accuracy.
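To make the pipeline concrete, below is a minimal Python sketch of the detect-then-track loop described above, assembled from off-the-shelf OpenCV components. The spectral-residual saliency detector, the Otsu-threshold-and-largest-contour post-processing, and the use of the tracker's boolean success flag in place of the paper's confidence measure are illustrative assumptions, not the authors' implementation.

# Minimal detect-then-track sketch (assumes opencv-contrib-python, OpenCV 4.x).
import cv2


def detect_salient_object(frame):
    """Detect the most salient region and return its bounding box (x, y, w, h)."""
    saliency = cv2.saliency.StaticSaliencySpectralResidual_create()
    ok, saliency_map = saliency.computeSaliency(frame)
    if not ok:
        return None
    # Post-processing (assumed): binarize the saliency map with Otsu's
    # threshold and take the largest connected region as the object.
    saliency_map = (saliency_map * 255).astype("uint8")
    _, mask = cv2.threshold(saliency_map, 0, 255,
                            cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    return cv2.boundingRect(max(contours, key=cv2.contourArea))


def track(video_path):
    cap = cv2.VideoCapture(video_path)
    tracker, bbox = None, None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if tracker is None:
            # Automatic initialization of the KCF tracker from the saliency detector.
            bbox = detect_salient_object(frame)
            if bbox is None:
                continue
            tracker = cv2.TrackerKCF_create()
            tracker.init(frame, bbox)
        else:
            ok, bbox = tracker.update(frame)
            # OpenCV's KCF does not expose its correlation-peak confidence,
            # so the boolean success flag stands in here for the paper's
            # "confidence below threshold" test: on failure, fall back to
            # detection and re-initialize the tracker on the next frame.
            if not ok:
                tracker = None
                continue
        x, y, w, h = [int(v) for v in bbox]
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow("tracking", frame)
        if cv2.waitKey(1) & 0xFF == 27:  # press Esc to quit
            break
    cap.release()
    cv2.destroyAllWindows()

As a usage example, track("sequence01.mp4") runs the loop on a test video; the file name is only a placeholder.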
Tracking Examples:
The videos demonstrate two tracking results on the datasets. Note the robust, adaptive behavior of the approach despite variations in scale, rotation, and illumination, partial occlusion, and camera instability.
Some Comparative Results:
Download:
Citations:
Contact Us: Computer Vision and Intelligence Systems Laboratory
©2022, All Rights Reserved. Last updated December 2022.