Security Tools Based on Large Language Models
Abstract
This article surveys security tools based on large language models (LLMs), systematically reviewing the core products, security strategies, and technical solutions released by leading US technology companies and research teams in 2024. In June 2024, OpenAI published both a security strategy for large language models and a user data protection plan. The former established a multi-layered protection system covering infrastructure, sensitive data and model-weight protection, and model review; the latter strengthened privacy protection by giving ChatGPT users control over how their data is used and by restricting, by default, the use of certain data for model training.