INTERNATIONAL JOURNAL OF LATEST TECHNOLOGY IN ENGINEERING,
MANAGEMENT & APPLIED SCIENCE (IJLTEMAS)
ISSN 2278-2540 | DOI: 10.51583/IJLTEMAS | Volume XIV, Issue XI, November 2025
Although object detection algorithms have advanced significantly in recent years, research remains heavily
focused on detecting held firearms, handguns in surveillance footage, and concealed weapons. Very few studies
address the detection of abandoned weapons, especially in outdoor natural environments where objects may be
partially covered, camouflaged, or visually degraded. Furthermore, most high-accuracy detection algorithms—
such as Faster R-CNN or Mask R-CNN—are computationally intensive and unsuitable for real-time deployment
on drones with limited processing capabilities. Lightweight architectures designed for mobile and embedded
inference offer promising alternatives but remain under-examined for abandoned-weapon detection. As
UAV adoption accelerates, the need for efficient onboard, real-time hazard detection grows increasingly
urgent.
Mobile-optimized neural networks have become increasingly prevalent due to their efficiency and reduced
computational cost. The MobileNet family, in particular, incorporates depthwise separable convolutions to
reduce latency and parameter count, enabling deployment on resource-constrained hardware. MobileNetV3,
designed through hardware-aware neural architecture search and augmented with squeeze-and-excitation
attention modules, achieves markedly better performance per watt than earlier mobile backbones [2].
Lightweight architectures combined with fast one-stage
detectors such as SSD (Single Shot MultiBox Detector) enable real-time inference on portable devices. Such
properties are crucial for UAV platforms, which demand low power consumption, compact model sizes, and
high inference speed during flight operations. Prior studies demonstrate that UAV-based detection systems
benefit greatly from mobile-optimized models, particularly for tasks requiring high throughput in dynamic
environments [3].
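The parameter savings that make the MobileNet family attractive for UAV deployment can be illustrated with a short calculation. The sketch below compares a standard convolution with its depthwise separable counterpart; the function names and layer sizes are illustrative choices for this example, not taken from the paper or from any specific MobileNet configuration.

```python
# Illustrative comparison: parameter count of a standard k x k convolution
# versus a depthwise separable convolution (depthwise filter per input
# channel followed by a 1x1 pointwise projection). Biases are omitted.

def standard_conv_params(c_in: int, c_out: int, k: int) -> int:
    """Parameters of a standard k x k convolution layer."""
    return k * k * c_in * c_out

def depthwise_separable_params(c_in: int, c_out: int, k: int) -> int:
    """Parameters of a depthwise k x k stage plus a 1x1 pointwise stage."""
    depthwise = k * k * c_in      # one k x k filter per input channel
    pointwise = c_in * c_out      # 1x1 convolution mixing channels
    return depthwise + pointwise

# Example layer: 128 input channels, 256 output channels, 3x3 kernel.
std = standard_conv_params(128, 256, 3)
sep = depthwise_separable_params(128, 256, 3)
print(std, sep, round(std / sep, 1))  # → 294912 33920 8.7
```

For this layer the separable form needs roughly one-ninth the parameters, which is the kind of reduction that makes onboard inference on a drone's embedded processor feasible.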
REVIEW OF RELATED LITERATURE
The detection of hazardous objects in outdoor environments, particularly abandoned weapons, intersects
multiple research domains, including UAV-based computer vision, lightweight deep learning architectures, and
embedded edge-AI systems. This section reviews the foundational works and recent advances relevant to the
proposed EdgeAI-Drone framework.
UAV-Based Computer Vision for Outdoor Object Detection
Unmanned Aerial Vehicles (UAVs) have progressively evolved from remote-controlled platforms to
autonomous sensing systems capable of performing complex computer vision tasks. Numerous studies highlight
the advantages of UAV-based detection for wide-area surveillance, environmental monitoring, and search-and-
rescue operations. For example, Torresan et al. demonstrated that UAVs equipped with onboard vision greatly
improve situational awareness in hazardous scenarios where ground surveillance is impractical [4]. In parallel,
enhanced resolution and multi-angle imaging allow UAVs to capture diverse object appearances, improving
recognition performance in cluttered environments. Recent advances in drone-based object detection emphasize
small-object recognition from aerial viewpoints. Research by Du et al. shows that aerial images pose unique
challenges, including varying altitudes, scale inconsistencies, and background clutter, necessitating specialized
detection approaches [5]. These findings support the viability of UAVs as platforms for real-time hazard
detection, particularly for stationary, partially occluded, or blended objects—conditions common in abandoned-
weapon scenarios.
One-Stage Object Detectors for Real-Time Performance
Modern object detection approaches are typically categorized into one-stage and two-stage detectors. One-stage
detectors such as SSD and YOLO process images in a single feed-forward pass, enabling real-time performance
with reduced computational overhead. Liu et al. introduced SSD as a fast, multi-scale detector that balances
speed and accuracy, making it suitable for embedded and mobile applications [6]. YOLO-based models have
also achieved widespread adoption in UAV detection due to their high inference speed. Redmon and Farhadi’s
series of YOLO improvements has progressively enhanced detection accuracy and robustness in dynamic
scenes [7]. While two-stage detectors like Faster R-CNN deliver superior accuracy, their high computational
cost limits deployment on low-power UAV platforms. Given the real-time constraints of aerial detection, one-
stage architectures remain the preferred choice for onboard inference. These approaches provide foundational