Exploring the Transformative Role of Generative Artificial Intelligence in Creative Industries: Bridging Art and Code
Abstract
Generative Artificial Intelligence (GenAI), which refers to models capable of creating original outputs such as text, images, audio, 3D content, and code, is transforming creative industries at a rapid pace. This paper examines the influence of key generative approaches—including Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), autoregressive language models, diffusion models, and multimodal systems—on workflows in areas such as art, design, animation, music, marketing, and software development. It outlines the study’s objectives and scope, and discusses underlying architectures, operational processes, and the necessary hardware and software infrastructure. In addition, the paper explores practical applications, advantages, and major challenges, including technical limitations, ethical concerns, and economic implications. The study concludes by proposing practical strategies for individuals and organizations to adopt GenAI responsibly, ensuring that innovation is balanced with the preservation of human creativity and broader societal values.
References
I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in Advances in Neural Information Processing Systems (NeurIPS), 2014.
D. P. Kingma and M. Welling, “Auto-encoding variational Bayes,” in International Conference on Learning Representations (ICLR), 2014.
A. Vaswani et al., “Attention is all you need,” in Advances in Neural Information Processing Systems (NeurIPS), 2017.
J. Ho, A. Jain, and P. Abbeel, “Denoising diffusion probabilistic models,” in Advances in Neural Information Processing Systems (NeurIPS), 2020.
A. Radford, K. Narasimhan, T. Salimans, and I. Sutskever, “Improving language understanding by generative pre-training,” OpenAI, 2018.
A. Ramesh et al., “Zero-shot text-to-image generation,” in International Conference on Machine Learning (ICML), 2021.
A. Radford et al., “Learning transferable visual models from natural language supervision,” in International Conference on Machine Learning (ICML), 2021.
T. Karras, S. Laine, and T. Aila, “A style-based generator architecture for generative adversarial networks,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
N. Bostrom and E. Yudkowsky, “The ethics of artificial intelligence,” in The Cambridge Handbook of Artificial Intelligence, Cambridge University Press, 2014.
D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” in International Conference on Learning Representations (ICLR), 2015.
I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning. MIT Press, 2016.
M. Abadi et al., “TensorFlow: Large-scale machine learning on heterogeneous systems,” 2016.
A. Paszke et al., “PyTorch: An imperative style, high-performance deep learning library,” in Advances in Neural Information Processing Systems (NeurIPS), 2019.
J. Devlin et al., “BERT: Pre-training of deep bidirectional transformers for language understanding,” in NAACL-HLT, 2019.
T. Brown et al., “Language models are few-shot learners,” in Advances in Neural Information Processing Systems (NeurIPS), 2020.
A. Radford et al., “Language models are unsupervised multitask learners,” OpenAI, 2019.
R. Rombach et al., “High-resolution image synthesis with latent diffusion models,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2022.
P. Dhariwal and A. Nichol, “Diffusion models beat GANs on image synthesis,” in Advances in Neural Information Processing Systems (NeurIPS), 2021.
A. Nichol and P. Dhariwal, “Improved denoising diffusion probabilistic models,” in International Conference on Machine Learning (ICML), 2021.
A. van den Oord et al., “WaveNet: A generative model for raw audio,” in Speech Synthesis Workshop (SSW), 2016.
Poole et al., “On variational autoencoders and their applications,” in ICLR Workshops, 2016.
B. Mildenhall et al., “NeRF: Representing scenes as neural radiance fields for view synthesis,” in European Conference on Computer Vision (ECCV), 2020.
S. Zhao et al., “Large-scale text-to-image generation models: A survey,” ACM Computing Surveys, 2023.
Zhang et al., “Generative AI for software engineering: A systematic literature review,” IEEE Transactions on Software Engineering, 2024.
P. Samuelson, “Generative AI meets copyright,” Communications of the ACM, vol. 66, no. 9, pp. 42–45, 2023.

This work is licensed under a Creative Commons Attribution 4.0 International License.
All articles published in our journal are licensed under CC-BY 4.0, which permits authors to retain copyright of their work. This license allows for unrestricted use, sharing, and reproduction of the articles, provided that proper credit is given to the original authors and the source.