INTERNATIONAL JOURNAL OF LATEST TECHNOLOGY IN ENGINEERING,
MANAGEMENT & APPLIED SCIENCE (IJLTEMAS)
ISSN 2278-2540 | DOI: 10.51583/IJLTEMAS | Volume XV, Issue III, March 2026
An Extended Reality (XR)-Based Tactical Support System Integrating
Edge-AI Threat Detection and a Distributed Sensor Network for
High-Risk Operations
Palanivel D*, Chandrikha R, KiranKumar S, Vikram J
Department of Electrical Engineering, Sri Ranganathar Institute of Engineering and Technology, Coimbatore, India
*Corresponding Author
DOI: https://doi.org/10.51583/IJLTEMAS.2026.150300134
Received: 30 March 2026; Accepted: 04 April 2026; Published: 25 April 2026
ABSTRACT
Until now, most defence-oriented immersive systems relied primarily on VR and AR interfaces to visualize
mission data, maps, and environmental cues. These solutions offered useful overlays but lacked the capability to
perform real-time threat identification directly from the soldier’s visual feed. Building on these earlier
technologies, the proposed system introduces a next-generation Extended Reality (XR)–based tactical platform
that not only displays information but also performs intelligent on-ground analysis using integrated image
processing. The XR headset is equipped with a compact camera module that captures images of individuals
encountered during mission entry. These images are processed through an AI-driven facial recognition model,
enabling the system to instantly determine whether the person matches a known terrorist or high-risk suspect
stored in an encrypted database. When a match is found, the soldier receives an immediate XR alert, while the
base station simultaneously receives the captured image and identity confirmation for coordinated decision-making.
Keywords: Extended Reality (XR), Edge-AI, Facial Recognition, Threat Detection
INTRODUCTION
High-risk operational environments such as military battlefields, disaster response zones, counter-terrorism
missions, and hazardous industrial sites require rapid decision-making, accurate situational awareness, and
seamless communication among personnel. In such scenarios, the ability to gather, process, and interpret large
volumes of data in real time can significantly influence the success and safety of an operation. Traditional tactical
support systems rely heavily on manual observation, radio communication, and centralized monitoring systems.
While these approaches have served operational needs for decades, they often suffer from limitations such as
delayed information flow, restricted field visibility, and increased cognitive load on personnel. As operations
become more complex and dynamic, there is a growing need for advanced technological solutions that enhance
situational awareness and provide intelligent decision support.
Recent advancements in Extended Reality (XR), Edge Artificial Intelligence (Edge-AI), and Distributed
Sensor Networks offer powerful tools to transform how tactical operations are conducted. Extended Reality,
which encompasses Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR), enables the
integration of digital information with the physical environment. Through wearable XR devices such as smart
helmets or AR glasses, operators can visualize real-time data overlays, navigation guidance, threat alerts, and
mission updates directly within their field of view. This immersive visualization allows personnel to access
critical information without diverting attention from their surroundings, thereby improving operational
efficiency and safety.
Edge Artificial Intelligence further enhances the capabilities of such systems by enabling real-time data
processing at the source of data generation rather than relying solely on centralized cloud servers. In high-risk
operations, communication networks may be unreliable, delayed, or even unavailable. Edge-AI addresses this
challenge by performing intelligent data analysis directly on local devices such as embedded processors, drones,
smart cameras, or sensor nodes. These AI algorithms can detect threats, recognize objects, analyze movement
patterns, and identify anomalies in real time. By reducing dependency on remote processing and minimizing
latency, Edge-AI ensures faster response times and improved reliability in mission-critical situations.
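As one concrete illustration of this edge-first design, the sketch below runs a lightweight image classifier entirely on the local device with TensorFlow Lite. It is a minimal sketch, not this system's actual implementation: the model file `threat_classifier.tflite`, its quantized uint8 input, and the camera index are all assumptions.

```python
# Minimal on-device inference sketch (model file and labels are hypothetical).
import numpy as np
import cv2
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="threat_classifier.tflite")  # assumed model
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
_, height, width, _ = inp["shape"]

def classify(frame_bgr):
    """Run one inference locally; no image data leaves the device."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    rgb = cv2.resize(rgb, (width, height))
    interpreter.set_tensor(inp["index"], np.expand_dims(rgb, 0).astype(np.uint8))
    interpreter.invoke()
    scores = interpreter.get_tensor(out["index"])[0]
    return int(np.argmax(scores)), scores

cap = cv2.VideoCapture(0)          # local camera feed (index assumed)
ok, frame = cap.read()
if ok:
    label, scores = classify(frame)
    print("class", label, "score", scores[label])
```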
Another key component of modern tactical support systems is the Distributed Sensor Network (DSN). A
distributed network consists of multiple interconnected sensors deployed across the operational environment to
collect diverse types of data. These sensors may include cameras, motion detectors, acoustic sensors, thermal
sensors, environmental monitors, and GPS modules. The distributed nature of the network enables
comprehensive environmental monitoring from multiple perspectives, thereby improving the accuracy and
coverage of data collection. When integrated with Edge-AI algorithms, these sensors can autonomously analyze
the collected data and transmit only relevant insights or alerts to field operators, significantly reducing
information overload. The integration of XR technology with Edge-AI and distributed sensor networks forms a
powerful framework for intelligent tactical support. In this system architecture, sensor nodes continuously
monitor the operational environment and feed data into local edge computing modules. Edge-AI models analyze
the incoming data to identify potential threats such as suspicious movements, hazardous environmental
conditions, unauthorized access, or structural instability. Once a threat or anomaly is detected, the information
is transmitted to XR devices worn by field personnel. Through augmented visual overlays, operators receive
instant alerts, threat markers, navigation cues, and recommended actions directly within their visual field.
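One common way to realize this "transmit only relevant insights" behaviour is a publish/subscribe link in which a sensor node sends a message only when a local threshold is crossed. The sketch below uses MQTT purely as an illustration; the broker address, topic name, node identifier, and threshold are assumptions rather than details taken from this system.

```python
# Sensor-node sketch: publish an alert only on a threshold crossing.
import json
import random
import time
import paho.mqtt.client as mqtt

BROKER = "192.168.1.10"           # hypothetical base-station broker
TOPIC = "dsn/node7/alerts"        # hypothetical topic

client = mqtt.Client()
client.connect(BROKER, 1883)

def read_vibration():
    # Placeholder for a real ADC/sensor read.
    return random.random()

while True:
    level = read_vibration()
    if level > 0.8:               # assumed anomaly threshold
        client.publish(TOPIC, json.dumps({
            "node": "node7", "type": "vibration",
            "level": level, "ts": time.time(),
        }))
    time.sleep(0.5)               # only alerts, not raw data, leave the node
```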
This integrated system provides several advantages over conventional tactical support methods. First, it
significantly improves situational awareness by combining real-time sensor data with immersive visualization
technologies. Operators can quickly understand the surrounding environment and potential risks without relying
solely on verbal communication. Second, the use of Edge-AI reduces latency and ensures rapid threat detection,
which is critical in time-sensitive operations. Third, the distributed sensor network enhances coverage and
redundancy, ensuring that critical data is continuously collected even if some nodes fail or communication links
are disrupted.
Objective
The main objective of the proposed Extended Reality (XR)-Based Tactical Support System integrating
Edge-AI Threat Detection and a Distributed Sensor Network is to enhance situational awareness and
improve decision-making during high-risk operations. The system aims to collect real-time environmental data
using a network of distributed sensors and analyze it using Edge-AI algorithms for fast and accurate threat
detection. The identified threats and critical information are then delivered to field operators through XR devices
such as augmented reality glasses or head-mounted displays. This approach enables personnel to visualize
important alerts, navigation guidance, and operational data directly in their field of view, thereby reducing
response time, improving safety, and increasing the overall efficiency and effectiveness of mission-critical
operations in hazardous environments.
LITERATURE REVIEW
The rapid growth of Extended Reality (XR), Edge Artificial Intelligence (Edge-AI), Internet of Things (IoT),
and computer vision technologies has enabled the development of advanced systems for real-time monitoring
and tactical decision-making. Several recent studies have contributed to different aspects of these technologies.
Chen, L., et al. (2020)
This work focused on integrating Augmented Reality (AR) with IoT systems for smart monitoring applications.
The system enabled real-time visualization of sensor data through AR interfaces. While it improved user
interaction and situational awareness, it lacked intelligent data processing and real-time threat detection
capabilities.
Wang, T., et al. (2021)
This study proposed an AI-based surveillance system using deep learning models for object detection. The
system demonstrated high accuracy in identifying objects and activities in real time. However, the processing
was cloud-dependent, leading to higher latency and reduced efficiency in critical scenarios.
Kim, J., & Park, S. (2022)
This research explored the use of Edge-AI in embedded systems for real-time decision-making. By deploying
AI models on edge devices, the system reduced latency and improved response time. However, it was limited to
single-device operation and did not include visualization through XR technologies.
Zhang, Y., et al. (2023)
This study implemented a distributed sensor network for environmental and security monitoring. The system
used multiple sensors such as motion detectors and cameras to collect data from different locations. Although it
improved coverage and reliability, it relied on centralized processing and lacked real-time user interaction.
Li, H., et al. (2024)
This research presented an XR-based interface for industrial safety applications. The system provided real-time
alerts and guidance through AR glasses, enhancing worker safety. However, it mainly focused on visualization
and did not integrate AI-based threat detection.
Kumar, R., et al. (2024)
This work proposed an AIoT-based smart surveillance system combining IoT sensors and machine learning
algorithms. The system improved anomaly detection and automation. However, it did not include immersive XR
visualization or real-time edge processing.
Singh, P., et al. (2025)
This study introduced an edge-enabled intelligent monitoring system for high-risk environments. It demonstrated
low-latency processing and efficient communication between devices. However, the system lacked integration
with XR interfaces and multi-sensor fusion for enhanced situational awareness.
Proposed System
The system introduces an XR-enabled vision module that captures real-time images inside the structure and performs on-device face recognition to identify terrorists or unknown individuals.
A two-way wireless transceiver link enables constant communication between the soldier’s XR unit and the base station, ensuring synchronized alerts and intelligence flow in both directions (one possible message format is sketched after these points).
The base station processes incoming image data, verifies identities using AI models, and instantly transmits
critical threat notifications back to the XR operator.
Field-level XR feedback provides immediate AI-based confirmation such as “suspect identified” or “unknown person detected,” supporting fast tactical decisions.
Integrated multi-source intelligence from the XR unit and the base station creates a real-time cooperative
defence network, improving safety, coordination, and operational efficiency in high-risk missions.
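The two-way link described above can be made concrete with a small, shared message schema. The layout below is a sketch of one plausible format; the field names, the unit identifier, and the use of JSON over the Wi-Fi link are assumptions chosen for illustration.

```python
# Sketch of an alert message exchanged between the XR unit and the
# base station (field names are illustrative assumptions).
import base64
import json
import time

def make_alert(match_id, confidence, jpeg_bytes):
    """Package one detection for transmission to the base station."""
    return json.dumps({
        "source": "xr-unit-01",               # hypothetical unit ID
        "event": "face_match" if match_id else "unknown_person",
        "match_id": match_id,                 # None when no database match
        "confidence": round(confidence, 3),
        "image_b64": base64.b64encode(jpeg_bytes).decode("ascii"),
        "ts": time.time(),
    })

def handle_alert(raw):
    """Base-station side: decode and route one incoming alert."""
    msg = json.loads(raw)
    if msg["event"] == "face_match":
        print("ALERT: suspect identified:", msg["match_id"])
    else:
        print("NOTICE: unknown person detected")
```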
Fig 1: System Architecture Block Diagram.
Extended Reality (XR) Model Node (Field Unit)
This block represents the wearable device used by the operator in the field. Its primary function is to capture
real-time visual data, process it using Edge-AI, and display information through an immersive interface.
Key Components: It includes a camera and lens for image acquisition, an XR microcontroller (Wi-Fi
enabled) for central control, and an OLED display for showing alerts.
Functions: It performs image detection and evaluation locally to reduce latency and provide immediate
situational awareness.
Base Station Node (Monitoring and Alert Unit)
This block functions as the centralized control and monitoring hub that supports the field personnel. It acts as a
stationary point for environmental monitoring and emergency notification.
Key Components: It features several input sensors such as a PIR sensor for motion detection and a vibration
sensor for detecting physical disturbances. It also contains manual inputs like an emergency key.
Output Indicators: The base station provides feedback via an LCD display, an emergency status LED, and
a buzzer for audible alarms.
Both nodes are interconnected via Wi-Fi enabled microcontrollers, allowing them to synchronize alerts and intelligence in real time.
METHODOLOGY
The proposed system is designed as an integrated Extended Reality (XR)-based tactical platform combined with
Edge-AI processing and a distributed sensor network to enable real-time threat detection and situational
awareness.
Initially, the XR headset equipped with a camera module captures real-time visual data from the soldier’s field
of view. The captured images are pre-processed at the edge device to enhance quality and reduce noise, ensuring
reliable input for further analysis. These images are then passed to an embedded Edge-AI facial recognition
model, which extracts facial features and compares them with entries stored in a secure, encrypted database of
known suspects.
If a match is detected, the system immediately generates an alert within the XR interface, providing visual cues
such as highlighted bounding boxes and warning indicators. Simultaneously, the identified data, along with the
captured image, is transmitted to the base station through a distributed sensor communication network for
verification and coordinated decision-making.
The system operates in real time, minimizing latency by performing critical computations locally on the edge
device rather than relying entirely on cloud processing. Additionally, continuous synchronization between the
edge unit and the central database ensures that updated threat information is always available.
Overall, the methodology combines real-time image acquisition, edge-based intelligent processing, XR
visualization, and reliable communication to create an efficient and responsive tactical support system.
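To make this capture-detect-compare loop concrete, the sketch below uses the open-source face_recognition library as a stand-in for the unspecified on-device model. The encoding database, the identifier scheme, the match tolerance, and the alert hook are all assumptions, not the system's actual components.

```python
# Sketch of the edge-side recognition loop (library choice and
# tolerance are assumptions; the underlying model is not specified here).
import cv2
import face_recognition
import numpy as np

# Hypothetical pre-computed 128-d encodings of known suspects.
known_encodings = np.load("suspect_encodings.npy")   # assumed file, shape (N, 128)
known_ids = ["suspect_%02d" % i for i in range(len(known_encodings))]

def send_xr_alert(text):
    print("XR ALERT:", text)          # placeholder for the OLED overlay

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    for loc in face_recognition.face_locations(rgb):
        enc = face_recognition.face_encodings(rgb, [loc])[0]
        dists = face_recognition.face_distance(known_encodings, enc)
        best = int(np.argmin(dists))
        if dists[best] < 0.6:         # the library's common default tolerance
            send_xr_alert("suspect identified: " + known_ids[best])
        else:
            send_xr_alert("unknown person detected")
```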
Working Principle
The proposed system is designed to assist personnel during high-risk operations by combining Extended
Reality (XR), Edge-AI threat detection, and a distributed sensor network. From the block diagram, the
system mainly consists of two major sections:
1. Extended Reality Model Node (Field Unit)
2. Base Station Node (Monitoring and Alert Unit)
Both units communicate through a Wi-Fi enabled XR microcontroller, allowing real-time monitoring and rapid
response to potential threats.
Extended Reality (XR) Model Node – Field Unit
The XR Model Node is the primary wearable or field-deployed unit used by the operator. It integrates imaging,
processing, and display modules to provide real-time situational awareness.
Camera and Lens Module: The camera with an optical lens continuously captures real-time images or video
from the surrounding environment. The lens helps in focusing and improving the quality of the captured visual
data. This visual information forms the input for the threat detection system.
Image Detection Module: The captured images are sent to the image detection block, where Edge-AI
algorithms process the visual data. This module identifies potential threats such as suspicious movement, objects,
or hazardous conditions in the operational environment. Since the processing occurs locally at the edge device,
it reduces latency and ensures fast response.
XR (Extended Reality) Model: The XR model integrates the analyzed information with real-world visualization.
It overlays digital information such as warnings, threat indicators, or navigation markers onto the user’s view.
This allows the operator to observe both the physical environment and the digital alerts simultaneously.
Image Evaluation Module: After detection, the system evaluates the severity or importance of the detected
event. If the detected object or movement is classified as a threat or anomaly, the system prepares a warning
message and sends it to the XR microcontroller.
OLED Display: The processed information is displayed on a compact OLED display, which acts as the XR
interface. The operator can view alerts, threat notifications, or mission information directly through the display
integrated with the XR device.
XR Microcontroller (Wi-Fi Enabled): The microcontroller acts as the central control unit of the XR node. It performs the following functions (illustrated in the sketch after this list):
Receives processed data from the XR model.
Communicates with the base station via Wi-Fi.
Sends alerts or status updates to the monitoring unit.
Receives commands or control signals from the base station.
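These duties can be sketched in MicroPython on an ESP32-class board (one of the candidate parts named later in this paper). The Wi-Fi credentials, base-station address, and port below are hypothetical placeholders.

```python
# MicroPython sketch of the XR node's Wi-Fi duties (ESP32-class board
# assumed; SSID, password, address, and port are placeholders).
import json
import network
import socket
import time

wlan = network.WLAN(network.STA_IF)
wlan.active(True)
wlan.connect("FIELD_NET", "password")        # hypothetical credentials
while not wlan.isconnected():
    time.sleep(0.2)

BASE = ("192.168.4.1", 9000)                 # hypothetical base station

def send_status(payload):
    """Send one alert/status update and read any base-station command."""
    s = socket.socket()
    s.connect(BASE)
    s.send(json.dumps(payload).encode())
    cmd = s.recv(256)                        # optional control signal back
    s.close()
    return cmd

send_status({"unit": "xr-01", "event": "boot", "ts": time.time()})
```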
Power/Battery Module: The entire XR node is powered by a battery or portable power supply. This enables the system to operate in remote or mobile environments without requiring a continuous external power source.
Base Station Node – Monitoring Unit
The Base Station Node functions as a centralized monitoring and alert system that supports the field operator.
It collects sensor information and provides emergency notifications when required.
PIR Sensor: The Passive Infrared (PIR) sensor detects human motion or movement within the monitored area.
When motion is detected, the sensor sends a signal to the XR microcontroller at the base station, indicating
possible activity in the environment.
Emergency Key: The emergency key acts as a manual override or emergency trigger. If the operator or
monitoring personnel press this key, the system immediately sends an emergency signal to alert all connected
units.
Vibration Sensor: The vibration sensor detects abnormal vibrations or physical disturbances, which may
indicate structural damage, explosions, or unexpected mechanical movement in the operational environment.
XR Microcontroller (Wi-Fi Enabled): Similar to the XR node, the base station microcontroller collects data from the sensors and processes the signals. It communicates with the XR node through a wireless network and performs the following actions (sketched after this list):
Receives threat detection information from the XR node.
Processes sensor inputs from PIR and vibration sensors.
Activates alert systems when abnormal conditions are detected.
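A matching MicroPython sketch for this base-station alert logic is given below; the GPIO pin assignments for the PIR sensor, vibration sensor, emergency key, buzzer, and LED are hypothetical, not taken from the prototype.

```python
# MicroPython sketch of the base-station alert logic
# (pin numbers are illustrative assumptions).
import time
from machine import Pin

pir = Pin(14, Pin.IN)                # PIR motion sensor
vib = Pin(27, Pin.IN)                # vibration sensor
key = Pin(26, Pin.IN, Pin.PULL_UP)   # emergency key (active low)
led = Pin(2, Pin.OUT)                # emergency status LED
buzzer = Pin(4, Pin.OUT)

def raise_alarm(reason):
    led.value(1)
    buzzer.value(1)
    print("ALARM:", reason)          # placeholder for the LCD update

while True:
    if key.value() == 0:
        raise_alarm("emergency key pressed")
    elif pir.value():
        raise_alarm("motion detected")
    elif vib.value():
        raise_alarm("abnormal vibration")
    else:
        led.value(0)
        buzzer.value(0)
    time.sleep(0.1)
```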
LCD Display: The LCD display provides real-time monitoring information such as sensor status, detected
threats, and system alerts. This allows the control center or monitoring personnel to observe the operational
status.
Emergency Status LED: The LED acts as a visual indicator for emergency conditions. When a threat or
abnormal condition is detected, the LED turns ON to alert nearby personnel.
Buzzer: The buzzer generates an audible alarm whenever the system detects danger or when the emergency key
is activated. This ensures immediate awareness of the critical situation.
Overall System Operation
The complete system operates through continuous monitoring, threat detection, and alert communication.
The camera captures environmental data and the Edge-AI model processes it to identify threats. If a threat is
detected, the XR system displays alerts to the operator and simultaneously sends information to the base station
via Wi-Fi.
Meanwhile, the base station monitors environmental conditions using the PIR sensor and vibration sensor. If
unusual motion or vibration is detected, the system triggers alarms through the buzzer and LED while also
displaying the status on the LCD screen.
By combining real-time image processing, wireless communication, and multiple sensor inputs, the
proposed system provides a comprehensive safety framework for high-risk operations. The integration of XR
visualization ensures that operators receive critical information instantly, enabling faster response and improved
operational safety.
AI Model Architecture & Training
The Gap: the paper refers to an "AI-driven facial recognition model" and "Edge-AI algorithms" but does not specify the architecture (e.g., CNN, ResNet, or MobileNet).
The Fix: state the model explicitly. For an Edge-AI application, a lightweight model such as MobileNetV2 or ShuffleNet is a common choice, and the following details should be reported (an illustrative specification is sketched after this list):
Input Resolution: e.g., 224×224 pixels, the standard MobileNetV2 input size.
Dataset: specify whether a public dataset (such as LFW, Labeled Faces in the Wild) or a custom tactical dataset was used for training.
Hyperparameters: list the learning rate, optimizer (e.g., Adam), and batch size used during the training phase.
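To show what such a specification could look like, the sketch below fine-tunes a MobileNetV2 backbone in Keras with example hyperparameters (Adam, learning rate 1e-4, batch size 32). Every value here is an example of what should be reported, not the training setup actually used in this work.

```python
# Illustrative training specification (all values are examples of
# what should be reported, not the hyperparameters actually used).
import tensorflow as tf

NUM_IDENTITIES = 100              # hypothetical number of enrolled faces

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3),    # standard MobileNetV2 input size
    include_top=False,
    weights="imagenet",
)
base.trainable = False            # transfer learning: freeze the backbone

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_IDENTITIES, activation="softmax"),
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),  # example LR
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
# model.fit(train_ds, epochs=20)  # train_ds: a hypothetical dataset,
#                                 # batched at, e.g., batch_size=32
```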
Experimental Setup & Comparative Analysis
The Gap: results show 90–95% accuracy, but the "controlled lighting" and "moderate real-world conditions" are not quantified.
The Fix: define the test environment.
Quantify Lighting: use lux levels (e.g., "controlled indoor lighting at 500 lux").
Hardware Specifications: define the exact XR microcontroller used (e.g., ESP32-S3 or Raspberry Pi Zero 2 W) to justify the 1–2 second latency.
Benchmarking: create a table comparing accuracy and latency against a standard AR system that uses cloud-based processing instead of Edge-AI (a template follows below).
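A template for such a table is given below. The edge-unit row repeats only the figures already reported in this paper; the cloud-baseline cells are deliberately left as placeholders to be measured rather than filled in here.

System                        Accuracy            Detection-to-alert latency
Proposed Edge-AI XR unit      90–95%              ~1–2 s
Cloud-based AR baseline       (to be measured)    (to be measured)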
Hardware Implementation and Prototype Analysis
Fig 2: Base Station
Fig 3: XR Reality Glass
The physical realization of the proposed system consists of two distinct hardware assemblies: the Extended
Reality (XR) Field Unit and the Base Station Monitoring Node. The prototype demonstrates the seamless
integration of Edge-AI processing with immersive visualization.
A. XR Reality Glass (Field Unit)
The field-deployed unit is designed as a wearable head-mounted display (HMD) integrated with high-resolution
imaging sensors.
Vision Subsystem: A micro-camera module paired with an optical lens is mounted on the frame to provide
a continuous first-person visual feed.
Intelligent Processing: The unit houses an onboard microcontroller that executes Edge-AI algorithms for
real-time image evaluation and facial recognition.
Information Overlay: Processed data and threat alerts are projected through a compact OLED interface,
allowing the operator to view digital information without losing sight of the physical environment.
Power Management: The prototype is powered by a portable lithium-ion battery module to ensure
mobility during high-risk missions.
B. Base Station Monitoring Interface
The Base Station acts as the centralized intelligence and alert hub for the operation.
Real-Time Threat Visualization: As shown in the prototype interface, the system utilizes a graphical user
interface (GUI) to display the Edge-AI output. When a suspect is identified, the system renders a bounding
box and identity confirmation.
Data Synchronization: A Wi-Fi-enabled communication link ensures that images captured by the field unit are instantly transmitted to the station for secondary verification and database logging (a listener sketch follows this list).
Alert Mechanisms: The physical base station is equipped with an LCD panel for status monitoring, along
with a buzzer and emergency LED to provide immediate audible and visual notifications of detected
anomalies.
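The data-synchronization path described above can be sketched as a small TCP listener at the base station that receives each alert, logs the attached image, and acknowledges. The port, file naming, and single-recv framing are assumptions consistent with the earlier hypothetical message sketch.

```python
# Base-station listener sketch: receive alerts from the field unit,
# log the image, and acknowledge (port and file naming are hypothetical).
import base64
import json
import socket
import time

srv = socket.socket()
srv.bind(("0.0.0.0", 9000))       # hypothetical port
srv.listen(1)

while True:
    conn, addr = srv.accept()
    raw = conn.recv(1 << 20)      # one alert message (<= 1 MiB assumed)
    msg = json.loads(raw)
    fname = "log_%d.jpg" % int(msg["ts"])
    with open(fname, "wb") as f:  # secondary verification / database logging
        f.write(base64.b64decode(msg["image_b64"]))
    print(time.ctime(), addr[0], msg["event"])
    conn.send(b"ACK")             # control signal back to the XR unit
    conn.close()
```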
Experimental Results and System Performance
During testing, the prototype achieved a facial recognition accuracy of 90–95% under controlled lighting. The
Edge-AI architecture successfully reduced processing latency, resulting in a detection-to-alert interval of
approximately 1–2 seconds. Furthermore, the distributed sensor network maintained a data delivery reliability
of over 95%, validating the robustness of the wireless synchronization between the field unit and the base station.
RESULTS AND DISCUSSION
The proposed XR-based tactical support system integrating Edge-AI and distributed sensor networks was
evaluated through a combination of simulation and prototype-level implementation. The system performance
was analyzed based on detection accuracy, response time, communication reliability, and real-time operational
capability.
Threat Detection Performance
The AI-based facial recognition model demonstrated a high level of accuracy in identifying known individuals
from the database.
The system achieved an average recognition accuracy of 90–95% under controlled lighting conditions.
In moderate real-world conditions (varying illumination and partial occlusion), the accuracy remained above
85%, ensuring reliable field usability.
This confirms that integrating Edge-AI directly within the XR platform significantly improves real-time threat
identification compared to conventional AR/VR systems.
Real-Time Processing and Latency
The implementation of Edge-AI enabled on-device processing without relying entirely on cloud infrastructure.
The average detection and alert generation time was observed to be within 1–2 seconds, which is suitable for
high-risk tactical environments.
Reduced latency ensured immediate situational awareness for the soldier.
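A detection-to-alert interval of this kind can be measured by timestamping both ends of the pipeline, as in the brief sketch below; the detection and alert calls are stand-ins referring back to the earlier hypothetical sketches.

```python
# Timing sketch: measure the detection-to-alert interval for one frame.
import time
import cv2

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    t0 = time.perf_counter()
    # Run edge inference + XR alert here (see the earlier sketches;
    # both the model and the alert hook are hypothetical).
    latency = time.perf_counter() - t0
    print("detection-to-alert interval: %.2f s" % latency)
```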
XR Visualization and Alert System
The XR interface successfully displayed:
Real-time overlays of identified threats
Visual warning indicators (color-coded alerts)
Contextual information linked to detected individuals
This immersive visualization improved the soldier’s decision-making capability without causing cognitive
overload.
Communication with Base Station
The distributed sensor network enabled reliable data transmission between the field unit and the control center.
Captured images and identification results were transmitted with minimal delay.
The system maintained stable communication with over 95% data delivery reliability under normal network
conditions.
System Robustness
The integrated system demonstrated robustness in:
Handling multiple detection attempts
Operating under dynamic environmental conditions
Maintaining synchronization between edge devices and central database
CONCLUSION
The Extended Reality (XR)-Based Tactical Support System integrating Edge-AI Threat Detection and a
Distributed Sensor Network provides an advanced technological solution for improving safety, efficiency, and
situational awareness in high-risk operational environments. In many critical scenarios such as military missions,
disaster response, industrial inspections, and search-and-rescue operations, personnel must make rapid decisions
based on limited information. Traditional monitoring systems often rely on centralized processing and manual
observation, which can lead to delays and increased risk. The proposed system addresses these challenges by
combining XR visualization, real-time sensor monitoring, and edge-based artificial intelligence to provide
faster and more accurate threat detection.
The system utilizes a camera-based XR node to continuously capture images from the environment. These
images are processed through an Edge-AI detection mechanism, which can identify suspicious activities,
objects, or environmental threats. The detected information is evaluated and displayed through an OLED-based
XR interface, allowing the operator to view alerts and situational data directly within the device. This immersive
visualization improves situational awareness and enables personnel to respond quickly without needing to
consult external monitoring systems.
REFERENCES
1. S. Gurusubramani, M. Suresh Anand, J. JeganAmarnath, D. Sathishkumar, A. Sheela, “Augmented Reality in Military Applications,” Int. J. Eng. & Adv. Tech., vol. 9, issue 1S, Oct. 2019.
2. X. You, W. Zhang, M. Ma, C. Deng, J. Yang, “Survey on Urban Warfare Augmented Reality (UWAR),” ISPRS Int. J. Geo-Inf., 2018.
3. DEVCOM Army Research Laboratory, “Researchers help Soldiers find targets with augmented reality,” U.S. Army, Jan. 2021.
4. “Application of Augmented Reality, Mobile Devices, and Sensors for Combat Entity Quantitative Assessment Supporting Situational Awareness,” Appl. Sci., 2019.
5. “Testing and Evaluation of ULTRA-Vis Wearable AR System for Soldier Situational Awareness,” 2013.
6. S. Yuan, W. Guo, T. Hu, Y. Yang, J. Chen, R. Qian, Z. Liu, L. Xie, “STARC: See-Through-Wall Augmented Reality for Human-Robot Collaboration in Emergency Response,” preprint, 2025.
7. O. Sautenkov, S. Asfaw, Y. Yaqoot, M. A. Mustafa, A. Fedoseev, D. Trinitatova, D. Tsetserukou, “FlightAR: AR Flight Assistance Interface with Multiple Video Streams and Object Detection for Immersive Drone Control,” 2024.
8. H.-G. Kwak, S.-J. Kim, H.-T. Han, J.-H. Jeong, S.-W. Lee, “Influence of Video Dynamics on EEG-based Single-Trial Video Target Surveillance System,” 2023.