Abstract

Artificial intelligence (AI) is revolutionizing industries and reshaping the global economy. However, the rapid advancement of AI technologies brings significant security and privacy challenges. Recent incidents, such as data leakage and malicious code injection, highlight vulnerabilities in AI systems and have led to severe financial losses and privacy breaches. Although existing studies have discussed specific security threats, they often lack fine granularity and cover only a limited scope. In this survey, we fill this gap by systematically categorizing and analyzing the threats and countermeasures in AI systems, spanning both the training and inference stages, encompassing centralized and distributed settings, and addressing both conventional and foundation AI models. By reviewing the existing literature, we aim to provide AI researchers and practitioners with a thorough understanding of system vulnerabilities and current countermeasures. We hope to inspire further research into robust solutions, ultimately contributing to the development of resilient AI technologies.