Bulletin of Chinese Academy of Sciences (Chinese Version)

Keywords

embodied artificial intelligence (EAI);security;governance

Document Type

Artificial Intelligence and Public Security

Abstract

Embodied artificial intelligence (EAI) is progressively being integrated into the fabric of daily life, enhancing sectors such as industrial production, healthcare, and national defense. However, the diverse hardware devices, software algorithms, and data communications that constitute these complex systems may contain vulnerabilities that attackers can exploit, posing serious threats to personal, social, and national security. This study therefore examines the security implications of EAI from the perspectives of the information domain, physical domain, and social domain, focusing on its ontological security, interaction security, and application security, and proposes a corresponding security framework. To mitigate these risks, the study further puts forward governance principles and comprehensive measures for EAI security, aiming to provide scientific guidance for effective governance in this area.

First page

429

Last Page

439

Language

Chinese

Publisher

Bulletin of Chinese Academy of Sciences

References

1 Turing A M. Computing machinery and intelligence. (2007-11-23)[2025-03-09]. https://link.springer.com/chapter/10.1007/978-1-4020-6710-5_3.

2 ISO. Robotics—Safety requirements for service robots. [2025-03-09]. https://www.iso.org/standard/83498.html.

3 Yao Y F, Duan J H, Xu K D, et al. A survey on large language model (LLM) security and privacy: The good, the bad, and the ugly. arXiv, 2023, doi: 10.1016/j.hcc.2024.100211.

4 Liu D, Yang M, Qu X, et al. A survey of attacks on large vision-language models: Resources, advances, and future trends. arXiv, 2024, doi: 10.48550/arXiv.2407.07403.

5 Zhu W, Ji X, Cheng Y, et al. TPatch: A triggered physical adversarial patch. arXiv, 2023. https://arxiv.org/html/2401.00148v1.

6 Sun Y, Huang Y, Wei X. Embodied adversarial attack: A dynamic robust physical attack in autonomous driving. arXiv, 2023, doi: 10.48550/arXiv.2312.09554.

7 Wen C C, Liang J Z, Yuan S H, et al. How secure are large language models (LLMs) for navigation in urban environments?. arXiv, 2024, doi: 10.48550/arXiv.2402.09546.

8 Liu S Y, Chen J W, Ruan S W, et al. Exploring the robustness of decision-level through adversarial attacks on LLM-based embodied models. arXiv, 2024, doi: 10.48550/arXiv.2405.19802.

9 Zhang H T, Zhu C Y, Wang X L, et al. BadRobot: Jailbreaking embodied LLMs in the physical world. arXiv, 2024, doi: 10.48550/arXiv.2407.20242.

10 Robey A, Ravichandran Z, Kumar V, et al. Jailbreaking LLM-controlled robots. arXiv, 2024, doi: 10.48550/arXiv.2410.13691.

11 Lu X, Huang Z, Li X, et al. POEX: Understanding and mitigating policy executable jailbreak attacks against embodied AI. arXiv, 2025, doi: 10.48550/arXiv.2412.16633.

12 Liu A S, Zhou Y G, Liu X L, et al. Compromising embodied agents with contextual backdoor attacks. arXiv, 2024, doi: 10.48550/arXiv.2408.02882.

13 Ji X Y, Cheng Y S, Zhang Y P, et al. Poltergeist: Acoustic adversarial machine learning against cameras and computer vision. (2021-05-24)[2025-03-09]. https://ieeexplore.ieee.org/document/9519394.

14 Jin Z Z, Ji X Y, Cheng Y S, et al. PLA-LiDAR: Physical laser attacks against LiDAR-based 3D object detection in autonomous vehicle. (2023-05-21)[2025-03-09]. https://ieeexplore.ieee.org/document/10179458.

15 Zhang G M, Yan C, Ji X Y, et al. DolphinAttack: Inaudible voice commands// Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. Dallas: ACM, 2017: 103-117.

16 Jiang Y, Ji X Y, Jiang Y C, et al. PowerRadio: Manipulate sensor measurement via power GND radiation. arXiv, 2024, doi: 10.48550/arXiv.2412.18103.

17 Wang X, Pan H, Zhang H, et al. TrojanRobot: Physical-world backdoor attacks against VLM-based robotic manipulation. arXiv, 2025, doi: 10.48550/arXiv.2411.11683.

18 Jiao R, Xie S, Yue J, et al. Exploring backdoor attacks against large language model-based decision making. arXiv, 2024, doi: 10.48550/arXiv.2405.20774.

19 Zhu Z H, Wu B Z, Zhang Z Y, et al. EARBench: Towards evaluating physical risk awareness for task planning of foundation model-based embodied AI agents. arXiv, 2024, doi: 10.48550/arXiv.2408.04449.

20 Yin S, Pang X H, Ding Y Z, et al. SafeAgentBench: A benchmark for safe task planning of embodied LLM agents. arXiv, 2024, doi: 10.48550/arXiv.2412.13178.

21 Spreitzer R, Moonsamy V, Korak T, et al. Systematic classification of side-channel attacks: A case study for mobile devices. IEEE Communications Surveys & Tutorials, 2018, 20(1): 465-488.

22 Shi J C, Wang G Y, Wang Y C. Artificial intelligence foundation model risk identification and governance model from the ESG perspective. Bulletin of Chinese Academy of Sciences, 2024, 39(11): 1845-1859. (in Chinese)

23 Ji X Y, Zhu W J, Xiao S L, et al. Sensor-based IoT data privacy protection. Nature Reviews Electrical Engineering, 2024, 1: 427-428.
