
Improved sensitivity to subtle defects can reduce missed detections, lower dependence on manual inspection, and improve
consistency in fabric quality control. This is especially valuable in high-throughput production settings where
inspection speed is a core operational requirement. In this sense, the practical contribution of the study is not
only higher defect-detection performance but also a more suitable accuracy–speed profile for smart
manufacturing environments.
Future work
Several directions for future research emerge from this study. First, the EPSA-enhanced detector should be
evaluated across a broader range of textile materials, weave patterns, lighting conditions, and production
environments to strengthen its external validity. Second, future work should test the model on embedded or edge
hardware, since the current evidence mainly supports comparative deployability rather than full hardware-level
validation in factory-edge settings.
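As a rough illustration of what such hardware-level testing could involve, the sketch below estimates mean per-image inference latency for a PyTorch detector. It is a minimal benchmark outline under assumed settings (a 640x640 input and fixed warm-up and iteration counts), not the measurement protocol used in this study.

    import time
    import torch

    def measure_latency(model, input_size=(1, 3, 640, 640),
                        device="cpu", warmup=10, iters=100):
        # Rough mean per-image inference latency in milliseconds.
        # The input resolution and iteration counts are illustrative
        # assumptions, not this study's benchmark settings.
        model = model.to(device).eval()
        x = torch.randn(*input_size, device=device)
        with torch.no_grad():
            for _ in range(warmup):         # warm-up passes stabilize timing
                model(x)
            start = time.perf_counter()
            for _ in range(iters):
                model(x)
            elapsed = time.perf_counter() - start
        return elapsed / iters * 1000.0     # ms per image

On edge hardware, the same routine would be executed on the target device itself, with explicit GPU synchronization where applicable, so that reported frame rates reflect deployment conditions rather than workstation performance.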
Third, more detailed class-level analysis would help determine whether EPSA provides uniform gains across all
defect types or whether certain defect categories benefit more strongly from pyramid-aware attention. Fourth,
future studies could explore integrating EPSA with other lightweight enhancements, such as improved feature
fusion and adaptive convolution strategies, while keeping the model compact enough for industrial use. The present results already suggest that the strongest performance is achieved when attention, feature fusion, and geometric adaptability are coordinated within the same system.
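To make the notion of pyramid-aware attention concrete, the following sketch outlines the general structure of a pyramid split attention block in PyTorch: the channels are split into branches, each branch is convolved at a different kernel scale, and SE-style weights, normalized by a softmax across branches, let the scales compete. This is an illustrative rendering of the EPSA family of designs under assumed kernel sizes and reduction ratio; it is not the exact module evaluated here.

    import torch
    import torch.nn as nn

    class PyramidSplitAttention(nn.Module):
        # Minimal sketch of a pyramid split attention block (EPSA-style).
        # Kernel sizes, reduction ratio, and the class name are illustrative
        # assumptions, not the exact module evaluated in this study.
        def __init__(self, channels, kernel_sizes=(3, 5, 7, 9), reduction=4):
            super().__init__()
            assert channels % len(kernel_sizes) == 0
            self.split = channels // len(kernel_sizes)
            # one conv branch per scale: same channel count, growing kernels
            self.convs = nn.ModuleList(
                nn.Conv2d(self.split, self.split, k, padding=k // 2)
                for k in kernel_sizes
            )
            # lightweight SE-style channel attention for each branch
            self.se = nn.ModuleList(
                nn.Sequential(
                    nn.AdaptiveAvgPool2d(1),
                    nn.Conv2d(self.split, self.split // reduction, 1),
                    nn.ReLU(inplace=True),
                    nn.Conv2d(self.split // reduction, self.split, 1),
                )
                for _ in kernel_sizes
            )
            self.softmax = nn.Softmax(dim=1)   # branches compete across scales

        def forward(self, x):
            parts = x.chunk(len(self.convs), dim=1)
            feats = torch.stack([conv(p) for conv, p in zip(self.convs, parts)],
                                dim=1)                    # (B, S, C/S, H, W)
            attn = torch.stack([se(f) for se, f in
                                zip(self.se, feats.unbind(dim=1))],
                               dim=1)                     # (B, S, C/S, 1, 1)
            out = feats * self.softmax(attn)              # rescale each scale
            return out.flatten(1, 2)                      # back to (B, C, H, W)

Because the block preserves the input shape, mapping, for example, a (2, 64, 40, 40) tensor to the same dimensions, it can be inserted into a backbone or neck without altering downstream feature-map sizes.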
Finally, future research may investigate deploying efficiency-aware attention models within full smart inspection
platforms, including online defect monitoring, visualization interfaces, and multi-threaded industrial software
pipelines. The findings explicitly point toward this broader application direction, indicating that practical fabric defect detection research should move beyond algorithm comparison toward reliable end-to-end deployment systems.
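As a sketch of how such a multi-threaded inspection pipeline could be organized, the outline below decouples frame acquisition from detection through a bounded queue. The frame source, detector, and queue capacity are placeholders; an industrial system would substitute a camera interface, the trained model, and its own result sinks (logging, alarms, or a visualization interface).

    import queue
    import threading

    def capture_frames(frame_queue, source):
        # Producer: push frames from an acquisition source into the queue.
        for frame in source:
            frame_queue.put(frame)
        frame_queue.put(None)                  # sentinel marks end of stream

    def run_inspection(frame_queue, detector, on_result):
        # Consumer: run defect detection and hand each result to a sink.
        while True:
            frame = frame_queue.get()
            if frame is None:
                break
            on_result(detector(frame))         # e.g. log, alarm, or visualize

    if __name__ == "__main__":
        frames = queue.Queue(maxsize=8)        # bounded queue gives backpressure
        dummy_source = range(5)                # placeholder for a camera feed

        def dummy_detector(frame):             # placeholder for the trained model
            return f"frame {frame}: no defect"

        t = threading.Thread(target=capture_frames, args=(frames, dummy_source))
        t.start()
        run_inspection(frames, dummy_detector, print)
        t.join()

The bounded queue applies backpressure when detection falls behind acquisition, which is one simple way to keep memory use predictable during continuous online monitoring.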