Development of Framework for Moving Object Detection and Tracking in Video Sequences


Abstract: Object detection and tracking is an important and challenging task in many computer vision applications and remains an active research area. Object detection involves locating an object in a frame of a video, and tracking involves locating the moving object over time. Moving object detection and tracking is difficult because of illumination changes, dynamic backgrounds, occlusion, cluttered backgrounds, the presence of shadows, camera motion, and video noise. The aim of this paper is to propose a framework for moving object detection and tracking in a video sequence. The framework detects and tracks moving objects in video sequences and plots their motion trajectories, which can be used in many applications such as people tracking, vehicle tracking, traffic monitoring, video surveillance, and robotics. We use a correlation-based approach to track the moving object in video sequences.

Keywords: Image Processing, Object Detection, Object Tracking, Motion Trajectory

I. INTRODUCTION

Tracking of moving objects is an important step in many computer vision applications such as video surveillance, sports reporting, video annotation, and traffic monitoring systems. However, under certain circumstances it is not easy to track a moving object, e.g., when both the camera motion and the object motion change dynamically. The main issues that need to be handled in tracking a moving object are initial segmentation, detection, and tracking under occlusion [3]. The challenges of moving object detection are discussed in Section II. In the literature, various techniques have been explored for moving object detection and tracking; Section III reviews these techniques. Section IV describes our technique for moving object detection and tracking. Experimental results are presented and discussed in Section V, and Section VI concludes with a discussion of our results.
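As a concrete illustration of the correlation-based tracking mentioned in the abstract, the sketch below matches an object template against each incoming frame using normalized cross-correlation and records the best-matching location as the object's trajectory. This is a minimal sketch under stated assumptions, not the authors' implementation: the use of OpenCV's matchTemplate, the initialisation of the template from a bounding box in the first frame, and the simple template update step are all illustrative choices.

```python
# Minimal sketch of correlation-based tracking via normalized cross-correlation.
# Assumptions: the object template is initialised from a bounding box in the
# first frame (e.g. obtained by detection or manual selection); OpenCV's
# matchTemplate is used as the correlation measure. Not the paper's exact method.
import cv2

def track_by_correlation(video_path, bbox):
    """Track the region bbox = (x, y, w, h) across the video and return the
    trajectory of the region's centre in each frame."""
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    if not ok:
        return []

    x, y, w, h = bbox
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    template = gray[y:y + h, x:x + w]          # object template from first frame
    trajectory = [(x + w // 2, y + h // 2)]    # object centre per frame

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Normalized cross-correlation between the template and the current
        # frame; the location of the maximum response is taken as the object's
        # new position.
        response = cv2.matchTemplate(gray, template, cv2.TM_CCOEFF_NORMED)
        _, _, _, max_loc = cv2.minMaxLoc(response)
        x, y = max_loc
        trajectory.append((x + w // 2, y + h // 2))
        # Optionally refresh the template so the tracker adapts to gradual
        # appearance changes of the object.
        template = gray[y:y + h, x:x + w]

    cap.release()
    return trajectory
```

The returned list of centre coordinates can then be plotted over the frames to obtain the motion trajectory of the tracked object, as described in the abstract.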