IMLMA: An Intelligent Algorithm for Model Lifecycle Management with Automated Retraining, Versioning, and Monitoring
Abstract
With the rapid adoption of artificial intelligence (AI) in domains such as power, transportation, and finance, the number of machine learning and deep learning models has grown rapidly. However, challenges such as delayed retraining, inconsistent version management, insufficient drift monitoring, and limited data security still hinder efficient and reliable model operations. To address these issues, this paper proposes the Intelligent Model Lifecycle Management Algorithm (IMLMA). The algorithm employs a dual-trigger mechanism, based on both data-volume thresholds and time intervals, to automate retraining, and applies Bayesian optimization for adaptive hyperparameter tuning to improve performance. A multi-metric replacement strategy incorporating MSE, MAE, and R² ensures that a new model replaces an existing one only when it demonstrates measurable performance gains. A versioning and traceability database supports model comparison and visualization, while real-time monitoring with stability analysis enables early warnings of latency spikes and drift. Finally, hash-based integrity checks protect both model files and datasets. Experimental validation in a power-metering operation scenario demonstrates that IMLMA reduces model-update delays, improves predictive accuracy and stability, and maintains low latency under high concurrency. This work provides a practical, reusable, and scalable solution for intelligent model lifecycle management, with broad applicability to complex systems such as smart grids.