Fortifying AI Infrastructure: Securing Code, Configuration, and Integrity in National Systems
The rapid adoption of artificial intelligence (AI) on cloud platforms, such as AWS and Azure, has introduced critical security vulnerabilities across various national sectors, including defense, healthcare, and energy. While these environments deliver scalable intelligence, they also expand the attack surface, exposing misconfigured resources, unverified code, and weak identity controls. Recent breaches, including Capital One’s AWS data exposure, Tesla’s compromised Kubernetes console, and Microsoft’s AI dataset leak, demonstrate how cloud-hosted AI pipelines can be weaponized through insecure defaults, leaked credentials, and permissive access roles. This study analyzes prominent security incidents alongside current research on cloud and AI threats to identify recurring weaknesses in configuration management, secret handling, and model integrity. The findings highlight how attackers exploit these gaps to steal data, engage in cryptojacking, and gain unauthorized access to AI models. To address these risks, the paper proposes a framework for fortifying AI infrastructure that emphasizes: (1) zero-trust identity and access management, (2) secure coding and model lifecycle practices, (3) automated configuration scanning, and (4) continuous policy enforcement. The results underscore that AI infrastructure should be treated as national critical infrastructure, warranting rigorous standards and proactive defense measures. Without systematic hardening, AI pipelines are high-value targets for cybercriminals and nation-state actors, posing a threat to public safety and national security.
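The third pillar of the proposed framework, automated configuration scanning, can be illustrated with a minimal sketch. The rule names, configuration keys, and resource definitions below are hypothetical examples, not drawn from the paper; the idea is simply to evaluate declarative resource configurations against known-misconfiguration rules (public exposure, missing encryption, wildcard permissions) before deployment:

```python
# Minimal illustration of automated configuration scanning.
# Each rule pairs an identifier with a predicate over a resource config.
# All rule IDs and config keys here are hypothetical examples.

RULES = [
    # Storage exposed to the public internet (cf. Capital One-style exposure)
    ("PUBLIC_BUCKET", lambda c: c.get("type") == "storage_bucket"
                                and c.get("public_access", False)),
    # Data at rest without encryption enabled
    ("UNENCRYPTED", lambda c: not c.get("encryption_at_rest", False)),
    # Overly permissive identity policy (wildcard actions)
    ("WILDCARD_IAM", lambda c: "*" in c.get("iam_actions", [])),
]


def scan(configs):
    """Return a list of (resource_name, rule_id) findings."""
    findings = []
    for cfg in configs:
        for rule_id, predicate in RULES:
            if predicate(cfg):
                findings.append((cfg["name"], rule_id))
    return findings


if __name__ == "__main__":
    resources = [
        {"name": "training-data", "type": "storage_bucket",
         "public_access": True, "encryption_at_rest": False,
         "iam_actions": ["s3:GetObject"]},
        {"name": "model-registry", "type": "storage_bucket",
         "public_access": False, "encryption_at_rest": True,
         "iam_actions": ["*"]},
    ]
    for name, rule in scan(resources):
        print(f"{name}: {rule}")
```

In practice this role is filled by policy-as-code tools run continuously in CI/CD, which matches the paper's fourth pillar of continuous policy enforcement; the sketch only shows the rule-evaluation core.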
Author: Ifeoma Eleweke
Paper ID: V11I5-1190
Published: October 14, 2025