Bulletin of Chinese Academy of Sciences (Chinese Version)
Keywords
large-scale models, artificial intelligence, generative AI, large-model risks
Document Type
Policy & Management Research
Abstract
Large-scale models (large models) are not only central to technological innovation but also deeply entwined with national security, economic transformation, and social governance. This study examines the status quo of large-model development, identifies the key risks and challenges, and proposes response strategies, aiming to provide theoretical and policy insights for China's navigation of global artificial intelligence (AI) competition and its advancement of technological innovation. The research indicates that competition in the large-model market is fierce, while the industry is gradually consolidating, and that large-model competition between China and the United States has escalated into a form of geopolitical contest. From a technical perspective, the marginal returns of large-model scaling appear to be diminishing; mixture-of-experts approaches have improved model efficiency; and chain-of-thought techniques have enhanced the logical reasoning of large models. Nevertheless, large models face multiple risks, including those arising from the technology itself, from external governance issues, and from broader societal impacts, so a comprehensive systems-level approach is needed. At the foundational level, risk should be controlled through technical optimizations rooted in AI's fundamental elements. At the legal level, an innovation-incentive framework should be established that aligns responsibilities with rights and provides stable expectations for the market. At the societal level, risk regulation should be undertaken across broader social dimensions.
First Page
2005
Last Page
2015
Language
Chinese
Publisher
Bulletin of Chinese Academy of Sciences
References
1. Kaplan J, McCandlish S, Henighan T, et al. Scaling laws for neural language models. arXiv: 2001.08361, 2020.
2. Zhou L X, Schellaert W, Martínez-Plumed F, et al. Larger and more instructable language models become less reliable. Nature, 2024, 634: 61-68.
3. 司晓. 大模型发展趋势及腾讯公司自主创新实践. 中国科学院院刊, 2024, 39(9): 1631-1638. Si X. Development trends of large models and Tencent’s independent innovation practice. Bulletin of Chinese Academy of Sciences, 2024, 39(9): 1631-1638. (in Chinese)
4. Rajbhandari S, Li C, Yao Z, et al. DeepSpeed-MoE: Advancing mixture-of-experts inference and training to power next-generation AI scale. arXiv: 2201.05596, 2022.
5. Wei J, Wang X, Schuurmans D, et al. Chain-of-thought prompting elicits reasoning in large language models. arXiv: 2201.11903, 2022.
6. 程乐. “数字人本主义”视域下的通用人工智能规制鉴衡. 政法论丛, 2024, (3): 3-20. Cheng L. Appraising regulatory framework of artificial general intelligence (AGI) under digital humanism. Journal of Political Science and Law, 2024, (3): 3-20. (in Chinese)
7. Morris M R, Sohl-Dickstein J, Fiedel N, et al. Levels of AGI for operationalizing progress on the path to AGI. arXiv: 2311.02462, 2023.
8. Dahl M, Magesh V, Suzgun M, et al. Large legal fictions: Profiling legal hallucinations in large language models. Journal of Legal Analysis, 2024, 16(1): 64-93.
9. Jiang X, Tian Y, Hua F, et al. A survey on large language model hallucination via a creativity perspective. arXiv: 2402.06647, 2024.
10. Turpin M, Michael J, Perez E, et al. Language models don't always say what they think: Unfaithful explanations in chain-of-thought prompting. arXiv: 2305.04388, 2023.
11. Greenblatt R, Denison C, Wright B, et al. Alignment faking in large language models. arXiv: 2412.14093, 2024.
12. Meinke A, Schoen B, Scheurer J, et al. Frontier models are capable of in-context scheming. arXiv: 2412.04984, 2025.
13. 王艳慧. 人工智能民事主体地位的论证进路及其批判. 华东政法大学学报, 2020, (4): 86. Wang Y H. Civil subject status of artificial intelligence: The argumentation and its challenges. East China University of Political Science and Law Journal, 2020, (4): 86. (in Chinese)
14. 郑志峰. 人工智能应用责任的主体识别与归责设计. 法学评论, 2024, 42(4): 123-137. Zheng Z F. Subject identification and responsibility distribution of AI application liability. Law Review, 2024, 42(4): 123-137. (in Chinese)
15. 徐伟. 生成式人工智能侵权中因果关系认定的迷思与出路. 数字法治, 2023, (3): 129-143. Xu W. The myth and solution to the establishment of legal causation in tort by generative artificial intelligence. Digital Law, 2023, (3): 129-143. (in Chinese)
16. 王迁. 再论人工智能生成的内容在著作权法中的定性. 政法论坛, 2023, 41(4): 16-33. Wang Q. The qualitative analysis of content generated by artificial intelligence in copyright law. Tribune of Political Science and Law, 2023, 41(4): 16-33. (in Chinese)
17. 吴汉东. 人工智能生成作品的著作权法之问. 中外法学, 2020, 32(3): 653-673. Wu H D. Rethinking the copyright of works generated by artificial intelligence. Peking University Law Journal, 2020, 32(3): 653-673. (in Chinese)
18. Andorno R. The precautionary principle: A new legal standard for a technological age. Journal of International Biotechnology Law, 2004, 1(1): 11-19.
19. 史九领, 洪永淼, 刘颖. 美国《2022年芯片与科学法案》对我国相关产业的影响与对策. 中国科学院院刊, 2024, 39(2): 379-387. Shi J L, Hong Y M, Liu Y. Impact of CHIPS and Science Act of 2022 on China’s related industries and policy suggestions. Bulletin of Chinese Academy of Sciences, 2024, 39(2): 379-387. (in Chinese)
20. 洪涛, 程乐. 全国算力体系一体化建设的五大问题及治理对策. 中国科学院院刊, 2024, 39(12): 2086-2095. Hong T, Cheng L. Five key issues and governance strategies in integration of China’s national computing power. Bulletin of Chinese Academy of Sciences, 2024, 39(12): 2086-2095. (in Chinese)
21. 徐小奔. 论人工智能生成内容的著作权法平等保护. 中国法学, 2024, (1): 166-185. Xu X B. On the equal protection of copyright law for content generated by artificial intelligence. China Legal Science, 2024, (1): 166-185. (in Chinese)
Recommended Citation
CHENG, Le and XIAO, Yang (2024) "Status quo of large-scale models, risks and challenges, and recommended countermeasures," Bulletin of Chinese Academy of Sciences (Chinese Version): Vol. 40: Iss. 11, Article 11.
DOI: https://doi.org/10.3724/j.issn.1000-3045.20250304001
Available at:
https://bulletinofcas.researchcommons.org/journal/vol40/iss11/11