INTERNATIONAL JOURNAL OF LATEST TECHNOLOGY IN ENGINEERING,
MANAGEMENT & APPLIED SCIENCE (IJLTEMAS)
ISSN 2278-2540 | DOI: 10.51583/IJLTEMAS | Volume XIV, Issue X, October 2025
www.ijltemas.in Page 134
Uni Find: Lost and Found Objects Management System for
Campus using AI & Cloud Services
Ruchika Wadbudhe, Vaishnavi Ganesh, Achal Patil, Aditya Meshram, Aman Sayyed, Aniket Hedau
Department of Computer Science and Engineering Priyadarshini College of Engineering, Nagpur
DOI: https://doi.org/10.51583/IJLTEMAS.2025.1410000017
Abstract—The rapid expansion of university campuses has led to frequent instances of misplaced personal belongings, creating
inconvenience for students and staff. UniFind: Lost and Found Objects Management System for Campus using AI & Cloud Services
is designed to provide a smart, efficient, and scalable solution to this challenge. The system leverages artificial intelligence for
image recognition and natural language processing to accurately identify, classify, and match lost items with their rightful owners.
By integrating with cloud services, UniFind ensures secure data storage, real-time accessibility, and seamless scalability for large
user bases. The platform enables users to report lost or found items via a web or mobile interface, where AI-driven matching
algorithms automatically suggest potential matches. Notifications and dashboards streamline communication between finders and
owners, significantly reducing manual effort and delays. This project demonstrates how the fusion of AI and cloud technology can
modernize campus management processes, enhance user convenience, and promote a responsible, collaborative environment within
academic institutions.
Keywords—Lost and Found System, Campus Management, Artificial Intelligence (AI), Image Recognition, Natural Language
Processing (NLP), Cloud Computing, Object Classification, Smart Search and Matching, Real-time Notifications, Web and Mobile
Application, Scalable Architecture, Data Security, Automation in Campus Services.
I. Introduction
Object detection has become a cornerstone of computer vision, supporting applications ranging from surveillance and autonomous driving to medical imaging, industrial automation, and intelligent transportation. Despite substantial progress, persistent challenges such as low illumination, small-scale targets, object occlusion, domain variation, and dynamic environments still impede robustness and reliability. Recent research has sought to overcome these challenges through enhancement schemes, novel model architectures, multi-modal fusion, and improved tracking systems.
The rapid growth of smart campus initiatives and the widespread use of personal mobile devices have created new challenges and
opportunities for institutional services. One ongoing and often inefficient process is lost and found management. Students frequently
misplace valuable belongings, and the manual, centralized nature of traditional lost and found systems often results in slow retrieval
times and frustration.
This project introduces UniFind, a modern, AI-integrated smart campus platform aimed at transforming the recovery of lost and
found items by establishing a secure, user-friendly, and community-driven system. Built on a strong and responsive React frontend
and a secure Supabase cloud backend, UniFind uses advanced technology to simplify the matching process.
The main innovation of UniFind is its integration of Gemini AI, which automatically analyzes uploaded images to detect key item
details, such as category, color, and brand. It uses this information for advanced image-to-image and text-to-image matching. This
method greatly improves the chances of finding a match and significantly shortens the time needed for retrieval compared to manual
methods. By securely verifying users with college IDs and offering features like private notifications and secure profile
management, UniFind provides a highly efficient, trustworthy, and scalable digital solution for this important campus service.
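To make the matching step concrete, the following Python sketch scores a candidate pair using AI-extracted attributes plus keyword overlap of the free-text descriptions. The attribute names, weights, and scoring rule are illustrative assumptions, not UniFind's actual implementation:

```python
# Hypothetical sketch: score a candidate match between a lost report and a
# found report using AI-extracted attributes plus free-text keyword overlap.
# Attribute names and weights below are illustrative assumptions.

def keyword_overlap(a: str, b: str) -> float:
    """Jaccard similarity of lowercase word sets from two descriptions."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def match_score(lost: dict, found: dict) -> float:
    """Weighted score: exact attribute agreement plus description overlap."""
    weights = {"category": 0.4, "color": 0.2, "brand": 0.2}  # assumed weights
    score = sum(w for k, w in weights.items()
                if lost.get(k) and lost.get(k) == found.get(k))
    score += 0.2 * keyword_overlap(lost.get("description", ""),
                                   found.get("description", ""))
    return score

lost = {"category": "wallet", "color": "black", "brand": "generic",
        "description": "black leather wallet with student id"}
found = {"category": "wallet", "color": "black", "brand": "unknown",
         "description": "found a black wallet near the library"}
print(round(match_score(lost, found), 3))   # -> 0.636
```

In a deployed system the threshold above which two reports are surfaced as a suggested match would be tuned against real campus data.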
II. Literature Review
Real-time object detection, tracking, and monitoring framework for security surveillance systems[1]
This paper presents a real-time security surveillance system that performs well by blending traditional computer vision with deep
learning. The system starts with Background Subtraction using Approximate Median Filtering to detect moving objects. It then
employs Connected-Component Labeling for efficient tracking. A Convolutional Neural Network (CNN) is used for final
recognition and validation. The framework showed better accuracy than several leading methods on the challenging MOTChallenge datasets (MOT15, MOT16, and MOT17), demonstrating that its combined approach is effective and practical for real-time security applications.
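The approximate-median update at the core of this pipeline is simple enough to sketch directly; the toy pixel values and the threshold below are assumptions for illustration, not the paper's settings:

```python
# Illustrative sketch of background subtraction via approximate median
# filtering: the background estimate drifts one gray level toward each new
# frame, converging to the per-pixel temporal median. Plain Python lists
# stand in for image arrays; the threshold value is an assumption.

def update_background(bg, frame):
    """Move each background pixel one gray level toward the current frame."""
    return [b + 1 if f > b else b - 1 if f < b else b
            for b, f in zip(bg, frame)]

def foreground_mask(bg, frame, threshold=20):
    """Flag pixels that deviate from the background by more than threshold."""
    return [abs(f - b) > threshold for b, f in zip(bg, frame)]

bg = [100, 100, 100, 100]      # current background estimate
frame = [101, 99, 180, 100]    # new frame: pixel 2 holds a moving object
print(update_background(bg, frame))   # -> [101, 99, 101, 100]
print(foreground_mask(bg, frame))     # -> [False, False, True, False]
```

The connected-component and CNN stages described above would then operate on the foreground mask.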
Robust real-time object detection and counting system for casting foundries[2]
The paper presents DeepMachining, a robust real-time system designed for detecting and counting casting products in complex and
noisy foundry environments. Its main idea is a dual-branch structure that uses large-scale pretraining along with adaptive few-shot
fine-tuning to adjust to different product lines. One branch stays fixed to keep core features,
while the distillation branch makes adjustments. This approach effectively addressed challenges such as object overlap and changes
in the environment, resulting in an impressive average counting accuracy of 97.8%. It also greatly decreased manual labor and
mistakes in counting in industrial settings.
Two-stage vision system: Application of multi-perspective object detection network and character recognition network in
industrial product classification[3]
This research proposes a two-stage vision system for classifying industrial products. The system first locates the product code and
then recognizes the characters. The first stage uses YOLO-DualSight Fusion Network (YOLO-DSF). This is a modified version of
YOLOv8, which combines multi-perspective backbones. LSPDown captures fine details while GhostNet focuses on global features.
They are merged dynamically by a FocusFusion Module (FFM) to improve small-target detection.
In the second stage, a U-shaped Denoising Network (UDNet) cleans the detected code before classification with ResNet34. The
complete system achieved a high classification accuracy of 98.9% in field tests. This demonstrates strong performance in industrial
settings.
Small object detection (SOD) system for comprehensive construction site safety monitoring[4]
The paper presents a Small Object Detection (SOD) system that uses YOLOv5 for safety monitoring at construction sites, focusing
on the multi-scale issue. A main feature is its ability to crop high-resolution images into segments defined by the user, including an
overlap buffer. This helps to enlarge small objects, like distant workers, making them easier to detect. By running the system on
Jetson AGX Xavier edge devices, it provides real-time inference and improves small-object accuracy. The method raised the average accuracy from 0.165 to 0.642, demonstrating its effectiveness for real-time safety applications in edge computing.
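The overlap-buffered cropping can be sketched as a 1-D tiling computation (the same spans apply to rows and columns); the tile and overlap sizes below are illustrative, not the paper's configuration:

```python
# Sketch of user-defined tiling with an overlap buffer: split a long axis
# into fixed-size crops stepping by (tile - overlap), so objects on tile
# borders appear whole in at least one crop. Sizes here are assumptions.

def tile_spans(length: int, tile: int, overlap: int):
    """1-D crop intervals of width `tile`, stepping by `tile - overlap`."""
    assert 0 <= overlap < tile, "overlap must be smaller than the tile size"
    step = tile - overlap
    spans, start = [], 0
    while True:
        end = min(start + tile, length)
        spans.append((max(0, end - tile), end))  # clamp the last tile to the edge
        if end == length:
            return spans
        start += step

# A 3840-px-wide frame split into 1280-px crops with a 128-px overlap buffer:
print(tile_spans(3840, 1280, 128))
# -> [(0, 1280), (1152, 2432), (2304, 3584), (2560, 3840)]
```

Each crop is then resized for the detector, which effectively enlarges distant workers relative to the receptive field.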
Enhanced lightweight detection of small and tiny objects in high-resolution images using object tracking-based region of
interest proposal[5]
This work presents a lightweight, real-time object detection system for small objects in high-resolution video. It uses an object
tracking-based Region of Interest (ROI) proposal mechanism. A two-branch ROI Fusion module combines rough ROI estimates
from a low-resolution segmentation branch with precise ROI predictions from a Kalman Filter (SORT-like) tracking branch. This
setup lets a lightweight detector (YOLOv7 Tiny) concentrate on important areas. This method is nearly four times faster than
segmentation-only approaches. It also performs better on datasets like DroneCrowd and SeaDronesSee, showing efficiency and high
accuracy with few parameters.
Fusion-based modeling of an intelligent algorithm for enhanced object detection using a Deep Learning Approach on radar
and camera data[6]
This paper presents IAEOD-DLRCDF, an intelligent algorithm that improves object detection in autonomous driving by
combining radar and camera data. The system begins with multi-angle joint calibration to align the data. It uses YOLOv8 for
detecting individual objects, then follows with a data fusion process that merges spatial and classification information. Object
classification is handled by an Adam-optimized Bidirectional Long Short-Term Memory (BiLSTM) network. This method achieved an overall accuracy of 90.59% and, at 0.74 seconds, the shortest computational time among the compared models, showing it to be an effective and highly efficient fusion solution.
FocusTrack: Enhancing object detection and tracking for small and ambiguous objects[7]
FocusTrack is a robust one-stage Multi-Object Tracking (MOT) system that excels at tracking small, ambiguous, and occluded
objects in crowded scenes. It uses a fine-tuned YOLOv10 detector. Its main innovations include the Modified Soft Buffered IoU
(MS-BIOU) for better bounding box association and the Adaptive Similarity Enhancement (ASE) module, which takes into account
occlusion severity and motion changes to prevent identity switches. The system also incorporates Spatial-Temporal Confidence
Enhancement (STCE) and a Size-Occlusion Trajectory Smoother (SOTS). FocusTrack consistently outperformed other top trackers
on both the MOT17 and MOT20 benchmarks in key metrics like HOTA and IDF1.
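The exact MS-BIOU formulation is not reproduced here, but the underlying buffered-IoU idea, growing both boxes before measuring overlap so that small or fast-moving objects still associate with their predicted track boxes, can be sketched as follows (the buffer ratio is an assumed value):

```python
# Sketch of buffered IoU for track association. Plain IoU of two nearby
# small boxes can be zero; expanding both boxes by a ratio of their size
# restores overlap so the association survives. Ratio is an assumption.

def box_area(box):
    """Area of an axis-aligned box (x1, y1, x2, y2)."""
    return (box[2] - box[0]) * (box[3] - box[1])

def iou(a, b):
    """Standard intersection-over-union of two boxes."""
    iw = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    union = box_area(a) + box_area(b) - inter
    return inter / union if union else 0.0

def buffered_iou(a, b, ratio=0.3):
    """IoU after growing both boxes by `ratio` of their width and height."""
    def expand(box):
        bw, bh = ratio * (box[2] - box[0]), ratio * (box[3] - box[1])
        return (box[0] - bw, box[1] - bh, box[2] + bw, box[3] + bh)
    return iou(expand(a), expand(b))

det = (10, 10, 20, 20)     # small detection
track = (21, 10, 31, 20)   # predicted track box, just clear of the detection
print(iou(det, track))                      # -> 0.0, plain IoU finds nothing
print(round(buffered_iou(det, track), 3))   # -> 0.185, buffered boxes overlap
```

FocusTrack's "soft" variant additionally weights the buffer by occlusion and motion cues, which this sketch omits.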
Small object detection: A comprehensive survey on challenges, techniques and real-world applications[8]
This is a detailed review of recent progress in Small Object Detection (SOD) using deep learning. It centers on articles from 2024
to 2025. The survey clearly outlines the main challenges, including limited feature information, scale variation, and background
interference. It organizes top solutions into a clear structure. This includes techniques like optimized Feature Extraction/Fusion
(FPN, PAN), various Attention Mechanisms, and Data-centric Approaches (Super-Resolution, GANs). The paper concludes by
highlighting lightweight networks, Transformer-based models, and domain adaptation as the most promising areas for future
research.
Low-light object detection via adaptive enhancement and dynamic feature fusion[9]
This paper addresses the issue of object detection in low-light conditions, such as low brightness, low contrast, and noise, by
suggesting a two-stage solution. The first stage employs DAMFCN, a Low-Light Enhancement Network paired with a Low-Light Adaptive Module (LLAM), to capture fine details and reduce noise. The enhanced image is then processed by DarkYOLOv8, a
modified version of YOLOv8 based on an EfficientNet backbone. This version includes the Dynamic Feature Fusion Attention
Module (DFFAM) to merge features effectively and reduce blur. The system achieved top results on the Exclusive Dark dataset
with high efficiency, showing a lightweight and effective method for low-light vision tasks.
SO-YOLOv8: A novel deep learning-based approach for small object detection with YOLO beyond COCO[10]
The research presents SO-YOLOv8, an improvement of the YOLOv8x model designed for small object detection (SOD). The model
adds a Squeeze-and-Excitation (SE) block to the backbone. This block helps recalibrate important features, making the model more
sensitive to tiny objects. With optimized training methods, including improved data augmentation techniques like MixUp and Copy-
Paste, as well as multi-scale training, the model becomes significantly more robust. Tests on the PASCAL VOC 2012 dataset showed
major improvements. It achieved a Precision of 1.0 and an improved mAP@0.5 of 0.79, demonstrating its ability to reduce false positives for small-scale targets.
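The channel-recalibration idea behind the SE block can be sketched without a deep-learning framework; the two-layer excitation bottleneck of the real block is collapsed to a bare sigmoid gate here purely for illustration:

```python
import math

# Minimal sketch of Squeeze-and-Excitation channel recalibration: squeeze
# each channel to its global average, gate it through a sigmoid, and rescale
# the channel. The real SE block learns a 2-layer bottleneck for the gate;
# that is simplified away here, so this is illustrative only.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def se_recalibrate(channels):
    """channels: list of 2-D feature maps (lists of rows); returns rescaled maps."""
    out = []
    for fmap in channels:
        flat = [v for row in fmap for v in row]
        squeeze = sum(flat) / len(flat)   # global average pooling
        gate = sigmoid(squeeze)           # excitation (simplified to one gate)
        out.append([[v * gate for v in row] for row in fmap])
    return out

channels = [[[0, 0], [0, 0]], [[4, 4], [4, 4]]]   # two toy 2x2 feature maps
scaled = se_recalibrate(channels)
print(round(scaled[1][0][0], 3))   # -> 3.928, the strong channel passes almost intact
```

The learned gate lets the network amplify channels that carry small-object evidence and suppress the rest.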
Object detection in real-time video surveillance using attention based transformer-YOLOv8 model[11]
This research proposes a new framework for real-time object detection in surveillance by combining the YOLOv8 backbone with
an attention mechanism and a Transformer-based detection head. This hybrid design takes advantage of YOLOv8's lightweight
efficiency and the Transformer's capability to capture long-range dependencies and global context. It addresses common issues like
occlusion and varying lighting. The model shows excellent performance with a high Precision of 96.78%, Recall of 96.89%, and
mAP of 89.67%. Importantly, the optimized model achieved an impressive real-time inference time of 5.2 ms per frame. This
significantly surpasses traditional methods like Faster R-CNN (32.5 ms) and YOLOv3 (36.8 ms), making it well suited for resource-constrained, real-time applications such as industrial automation and autonomous systems.
An application of metadata-based image retrieval system for facility management[12]
This paper presents a metadata-based image retrieval system for organizing and retrieving photos from large facility management
photo databases. The system uses three types of metadata inferred from a mobile device: location, inferred via Bluetooth Low Energy
(BLE) beacon Received Signal Strength (RSS) using a regression algorithm; perspective, inferred from the device's Inertial
Measurement Unit (IMU); and image semantic content, inferred using a CNN-based deep learning algorithm for semantic
segmentation. Experiments demonstrated the feasibility of BLE-based indoor localization with an average Root Mean Square Error
(RMSE) of 1.103 meters using a Support Vector Machine (SVM) regressor. The framework combines Building Information
Modeling (BIM) and Geographical Information Systems (GIS) to visualize a federated information management platform. The
authors conclude that the system improves decision-making through quick information retrieval based on flexible combinations of
these three metadata types.
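The RMSE metric behind the reported 1.103-meter error is straightforward to state; the coordinates below are made-up illustrations, not the paper's data:

```python
import math

# How a localization error like the one in this study is scored: root mean
# square of Euclidean errors between predicted and ground-truth positions.
# The point lists below are invented for illustration.

def rmse(pred, true):
    """RMSE of Euclidean distances between paired 2-D points."""
    errs = [(px - tx) ** 2 + (py - ty) ** 2
            for (px, py), (tx, ty) in zip(pred, true)]
    return math.sqrt(sum(errs) / len(errs))

pred = [(1.0, 2.0), (3.5, 1.0), (0.0, 0.0)]   # positions from the regressor
true = [(1.0, 3.0), (3.0, 1.0), (1.0, 1.0)]   # surveyed ground truth
print(round(rmse(pred, true), 3))   # -> 1.041
```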
Find Mine: Find the Lost Items via Mobile App[13]
This paper proposes "FindMine," a user-friendly mobile app designed to help users quickly and remotely reclaim or report lost and
found items. The app encourages a community-focused process where users can post details of found items, including the location
and relevant keywords. Users who have lost an item can search for it by selecting an area on a map. It also lets both parties
communicate via chat to confirm ownership and arrange delivery. The system is built using Kotlin for the backend, Spring Boot
for the APIs, and PostgreSQL for the database. The main goal is to use geographic location for efficient searching and posting. This
makes the recovery process easier and removes the need for traditional, centralized lost and found services.
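The map-area search at the heart of FindMine reduces to a bounding-box filter over posted coordinates; the field names and sample posts below are illustrative assumptions:

```python
# Sketch of a map-area search: a query selects a rectangle on the map and
# returns found-item posts whose coordinates fall inside it. Field names
# and coordinates are invented for illustration.

def posts_in_area(posts, south, west, north, east):
    """Filter posts (dicts with 'lat'/'lon') to a lat/lon bounding box."""
    return [p["item"] for p in posts
            if south <= p["lat"] <= north and west <= p["lon"] <= east]

posts = [
    {"item": "umbrella", "lat": 21.096, "lon": 79.046},
    {"item": "calculator", "lat": 21.150, "lon": 79.090},
]
print(posts_in_area(posts, 21.09, 79.04, 21.10, 79.05))   # -> ['umbrella']
```

A production backend would push this filter into a spatially indexed database query rather than scanning posts in memory.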
A distributed image-retrieval method in multi-camera system of smart city based on cloud computing[14]
This paper presents a distributed image retrieval method for multi-camera systems in a smart city. It uses a cloud computing platform
and incorporates a Distributed Fault-Tolerant Processing (DFP) mechanism. The method focuses on Content-Based Image Retrieval
(CBIR) and involves three main steps: generating feature vectors using the SURF algorithm, clustering features with the K-means
algorithm through MapReduce, and constructing an inverted index. The main contribution is applying DFP and fault tolerance
methods, including redundancy strategies like erasure, regenerative, and replication codes. This approach significantly boosts the
processing rate, system reliability, and availability while handling large volumes of image and video data on low-reliability cloud
storage nodes. Experimental results show that the proposed method is scalable and effective, especially demonstrating a shorter
computational time compared to the Hadoop system when processing large data sets.
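The inverted-index step can be sketched in a few lines: after clustering assigns each image's features to visual words, the index maps each word back to its images so a query touches only images that share its words. Word ids and image names below are illustrative:

```python
from collections import defaultdict

# Sketch of an inverted index for content-based retrieval. Each image is
# represented by the set of visual-word ids its features were clustered
# into; the index maps word -> images so queries skip unrelated images.

def build_inverted_index(image_words):
    """image_words: {image_id: set of visual-word ids} -> {word: set of images}."""
    index = defaultdict(set)
    for image, words in image_words.items():
        for w in words:
            index[w].add(image)
    return index

def query(index, words):
    """Rank images by shared word count, breaking ties by image id."""
    hits = defaultdict(int)
    for w in words:
        for image in index.get(w, ()):
            hits[image] += 1
    return sorted(hits, key=lambda i: (-hits[i], i))

corpus = {"cam1_001": {3, 7, 9}, "cam2_014": {7, 11}, "cam3_002": {1, 9}}
index = build_inverted_index(corpus)
print(query(index, {7, 9}))   # -> ['cam1_001', 'cam2_014', 'cam3_002']
```

In the paper's setting, building the index is distributed across cloud nodes via MapReduce, which this single-process sketch does not show.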
A Systematic Review on Technologies and Applications in Smart Campus: A Human-Centered Case Study[15]
This paper reviews smart campus technologies and applications while including a human-centered case study to assess how well
research meets stakeholder needs. The authors categorize enabling technologies into five areas: Data Computing and Storage, IoT, Intelligent, Immersive, and Mobile technologies. They group applications into four areas: Smart Learning, Smart Living, Smart
Environment, and Smart Management. The review shows that IoT and Intelligent technologies dominate both the literature and
human-centered interest. Notably, there is a significant difference. The emphasis on Mobile technologies and Smart Environment
applications is greater in the human-centered study than in the literature. This suggests that research may not be keeping pace with
what end users care about regarding environmental awareness. The findings highlight the need for future research to focus on
stakeholder-driven issues like privacy-aware systems and solutions for energy and the environment.
Managing the Strategic Transformation of Higher Education through Artificial Intelligence[16]
This article examines the strategic use of Artificial Intelligence (AI) in "smart universities." It anticipates a shift toward institutions that use AI and quantum technologies to operate largely autonomously. The article examines potential benefits, such as
personalized learning, better accessibility, and more efficient administration. It outlines key areas of function, like AI-Driven
Admissions Office and Intelligent Tutoring. However, it also highlights the challenges and risks, including job loss for academic
and administrative staff, a decline in educational quality due to less human interaction, algorithmic bias, and issues with data
privacy and security. The study employs a systematic review approach and offers a balanced evaluation to help stakeholders make
informed decisions. It particularly notes the unique chances for Historically Black Colleges and Universities (HBCUs) to use AI
for fair access and innovation.
Security and Privacy Aspects of Cloud Computing: A Smart Campus Case Study[17]
This paper offers a detailed look at the security and privacy challenges in cloud computing, highlighting their significance through
a smart campus case study. Cloud computing provides many benefits but is also very vulnerable to security issues like data privacy,
integrity, access control, and cyber-attacks. The study reviews possible solutions, including cryptographic techniques, proper access
policies, and the use of Blockchain technology to improve security and decentralization. A survey conducted at a smart campus
revealed that most respondents (79%) strongly believed cloud-based data is at risk of security threats if not managed properly.
Additionally, 92% felt that distributed data storage is safer. The paper ends by stressing the urgent need for cloud service providers
to put effective security and privacy measures in place to ensure data confidentiality and reliability.
The review was conducted using publications from 2020 to 2025 indexed in IEEE Xplore, Scopus, Springer, and ResearchGate.
Keywords such as “lost and found system”, “object detection”, “campus management”, “cloud-based retrieval”, and “AI-based
recognition” were used. Seventeen papers were selected based on their relevance to computer vision, NLP, and smart campus applications. Each paper
was categorized according to its focus area—object detection, retrieval systems, or campus automation—to highlight technological
trends and limitations.
Comparative Analysis of Existing Approaches
Approach / Model | Key Features | Domain of Application | Limitations
YOLOv8 / Transformer-YOLO | High-speed, high-accuracy object detection | Surveillance, Traffic Monitoring | Requires large datasets; limited semantic understanding
DeepMachining | Dual-branch neural framework for counting | Industrial Manufacturing | Not adaptable for personal item classification
FocusTrack | Enhanced small-object tracking using spatial-temporal confidence | Crowded Scene Tracking | Heavy computational load
FindMine | Mobile app for lost items with location tagging | Campus / Public Use | Lacks AI-based similarity detection and automation
Cloud Retrieval (Smart City) | Distributed fault-tolerant retrieval via cloud | Urban Surveillance | High latency; no personalization for users
Identified Research Gaps
From the reviewed literature, several gaps were identified that hinder the deployment of effective lost-and-found systems in campus
environments:
1. Limited AI integration combining both image and text modalities for matching lost items.
2. Lack of real-time cloud synchronization and user notification systems.
3. Absence of secure authentication or user verification in most existing systems.
4. Scalability and privacy concerns when handling thousands of users or images.
5. No unified platform offering both automation and personalization for academic institutions.
III. Discussion
UniFind represents an evolution of AI-assisted lost-and-found systems, combining the efficiency of modern vision transformers
with the flexibility of cloud-native services. While similar frameworks exist for industrial or surveillance contexts, the focus on
educational institutions makes UniFind distinctive. The platform can enhance campus security, reduce manual workload, and
provide insights into frequently lost item categories. Ethical considerations such as image privacy and data retention are
incorporated within the Supabase framework.
IV. Conclusions
This paper reviewed major AI and cloud-based developments relevant to lost-and-found object management, highlighting
their potential and limitations. A conceptual UniFind model was proposed, offering an integrated architecture that combines Gemini
AI’s multimodal intelligence, NLP-driven description matching, and Supabase cloud services. Future work includes implementing
the model, integrating multilingual support, deploying on mobile platforms, and evaluating performance on real campus datasets.
With these advancements, UniFind can serve as a blueprint for the next generation of smart campus management systems.
V. Acknowledgment
The authors express their heartfelt gratitude to Dr. Vaishnavi Ganesh, Department of Computer Science and Engineering,
Priyadarshini College of Engineering, Nagpur, for her valuable guidance, constant encouragement, and insightful suggestions
throughout the development of this review work. The authors also extend their sincere thanks to the faculty members of the
Department for their constructive feedback and continuous support. Finally, the team appreciates the contribution of peers and
mentors whose discussions and technical insights greatly enriched the quality of this paper.
References
1. S. Abba, A. M. Bizi, J.-A. Leeb, S. Bakouri, and M. L. Crespo, "Real-time object detection, tracking, and
monitoring framework for security surveillance systems," Heliyon, vol. 10, Apr. 2024, Art. no. e34922, doi:
10.1016/j.heliyon.2024.e34922.
2. C.-W. Chou and Y.-T. Hsu, "Robust real-time object detection and counting system for casting foundries," Appl. Soft
Comput. J., vol. 176, Apr. 2025, Art. no. 113155, doi: 10.1016/j.asoc.2025.113155.
3. S. Jin, Z. Cao, and C. Yu, "Two-stage vision system: Application of multi-perspective object detection network and
character recognition network in industrial product classification," Eng. Appl. Artif. Intell., vol. 156, May 2025, Art. no.
111190, doi: 10.1016/j.engappai.2025.111190.
4. S. Kim, S. H. Hong, H. Kim, M. Lee, and S. Hwang, "Small object detection (SOD) system for comprehensive construction
site safety monitoring," Autom. Constr., vol. 156, Oct. 2023, Art. no. 105103, doi:
10.1016/j.autcon.2023.105103.
5. A. Kos, K. Majek, and D. Belter, "Enhanced lightweight detection of small and tiny objects in high-resolution images
using object tracking-based region of interest proposal," Eng. Appl. Artif. Intell., vol. 153, Apr. 2025, Art. no. 110852,
doi: 10.1016/j.engappai.2025.110852.
6. Y. Wu, "Fusion-based modeling of an intelligent algorithm for enhanced object detection using a Deep Learning
Approach on radar and camera data," Inf. Fusion, vol. 113, Aug. 2024, Art. no. 102647, doi:
10.1016/j.inffus.2024.102647.
7. S. B. J. Khan, C. Li, and P. Zhang, "FocusTrack: Enhancing object detection and tracking for small and ambiguous
objects," J. Vis. Commun. Image R., vol. 111, Aug. 2025, Art. no. 104549, doi:
10.1016/j.jvcir.2025.104549.
8. M. Nikouei et al., "Small object detection: A comprehensive survey on challenges, techniques and real-world
applications," Intell. Syst. Appl., vol. 27, Jul. 2025, Art. no. 200561, doi: 10.1016/j.iswa.2025.200561.
9. S. Fu, Q. Zhao, H. Liu, Q. Tao, and D. Liu, "Low-light object detection via adaptive enhancement and dynamic feature
fusion," Alexandria Eng. J., vol. 126, Apr. 2025, pp. 60–69, doi: 10.1016/j.aej.2025.04.047.
10. Iqra and K. J. Giri, "SO-YOLOv8: A novel deep learning-based approach for small object detection with YOLO beyond
COCO," Expert Syst. Appl., vol. 280, Apr. 2025, Art. no. 127447, doi: 10.1016/j.eswa.2025.127447.
11. D. Nimma et al., "Object detection in real-time video surveillance using attention based transformer-YOLOv8 model,"
Alexandria Eng. J., vol. 118, Jan. 2025, pp. 482–495, doi: 10.1016/j.aej.2025.01.032.
12. J. W. Ma, T. Czerniawski, and F. Leite, “An application of metadata-based image retrieval system for facility
management,” Future Generation Computer Systems, vol. 8, pp. 265–288, Mar. 2020, doi: 10.1016/j.future.2017.11.015.
13. P. Choudhary, A. Singh, A. K. Choudhary, and A. P. Srivastava, “Find Mine: Find the Lost Items via Mobile App,” in
2021 2nd International Conference on Intelligent Engineering and Management (ICIEM), 2021, pp. 491–495, doi:
10.1109/ICIEM51511.2021.9445379.
14. J. Yang, B. Jiang, and H. Song, “A distributed image-retrieval method in multi-camera system of smart city based on
cloud computing,” Future Generation Computer Systems, vol. 84, pp. 11– 26, Jul. 2018, doi: 10.1016/j.future.2017.11.015.
15. Y. Zhang, C. Yip, E. Lu, and Z. Y. Dong, “A Systematic Review on Technologies and Applications in Smart Campus:
A Human-Centered Case Study,” IEEE Access, vol. 10, pp. 16134–16149, Feb. 2022, doi:
10.1109/ACCESS.2022.3148735.
16. B. George and O. Wooden, “Managing the Strategic Transformation of Higher Education through Artificial Intelligence,”
Administrative Sciences, vol. 13, no. 9, p. 196, Aug. 2023, doi: 10.3390/admsci13090196.
17. S. H. Gill et al., “Security and Privacy Aspects of Cloud Computing: A Smart Campus Case Study,” Intelligent Automation
& Soft Computing, vol. 31, no. 1, pp. 117–128, Feb. 2022, doi: 10.32604/iasc.2022.016597.