New AI supply chain standard brings together Ant, Tencent, Baidu and Microsoft, Google, Meta
China's Ant Group, Tencent Holdings, and Baidu have teamed up with US tech giants Microsoft, Google, and Meta Platforms to develop the world's first international standard for large language model (LLM) security for supply chains, as the need for artificial intelligence (AI) governance grows more urgent.
The companies unveiled their "Large Language Model Security Requirements for Supply Chain" on Friday with the World Digital Technology Academy (WDTA), during a side event at the Inclusion Conference on the Bund in Shanghai.
This new standard, which covers the entire life cycle of LLMs, is part of the WDTA's broader AI Safety, Trust, and Responsibility initiative. Established in Geneva in April 2023 under a United Nations framework, the WDTA aims to provide comprehensive measures for managing security risks across the supply chain, such as data leaks, model tampering and supplier non-compliance.
The standard was drafted and reviewed by experts from leading Chinese and US technology firms, along with top academic and industry institutions, including the Cloud Security Alliance Greater China Region and Nanyang Technological University in Singapore.
"International cooperation on AI-related standards has become increasingly crucial," WDTA Honorary Chairman Peter Major told the audience during the side event at the Inclusion Conference on the Bund. Photo: 2024 Inclusion Conference on the Bund alt="International cooperation on AI-related standards has become increasingly crucial," WDTA Honorary Chairman Peter Major told the audience during the side event at the Inclusion Conference on the Bund. Photo: 2024 Inclusion Conference on the Bund>
"International cooperation on AI-related standards has become increasingly crucial as artificial intelligence continues to advance and impact various sectors globally," Peter Major, Chair of the United Nations Commission on Science and Technology for Development and Honorary Chairman of the WDTA, said during a panel discussion.
The latest standard comes after two earlier generative AI (GenAI) standards, which were also the result of collaboration between Chinese and Western tech firms. The "Generative AI Application Security Testing and Validation Standard" and the "Large Language Model Security Testing Method" were published in April at a WDTA event.
"There's a lot of ambiguity and uncertainty currently around large language models and other emerging technologies, which makes it hard for institutions, companies and governments to decide what would be a meaningful standard," Lars Ruddigkeit, Microsoft's technology strategist, said in the same panel. "For me, the WDTA supply chain standard tries to bring this first road to a safe future on track."
As businesses and individuals increasingly adopt GenAI, tech companies have called for measures to keep the technology safe. OpenAI chief executive Sam Altman, upon resuming his role in November after a brief ousting, said "investing in full-stack safety efforts" would be a priority for the company.
In July 2023, China became the first country to regulate GenAI and related services by issuing rules telling service providers to uphold "core socialist values", among other requirements. Since then, Beijing has whitelisted LLMs from several tech companies - including Ant, Baidu and Tencent - for commercial use.
Some international standards and regulations on AI existed before the GenAI boom that followed the release of ChatGPT in late 2022.
In 2021, Unesco, the UN's heritage body, introduced a "Recommendation on the Ethics of AI", which has been adopted by 193 member states.
Between 2022 and 2023, the International Organization for Standardisation, a Geneva-based non-governmental group, published AI-related guidelines on system management, risk management and systems using machine learning.
In the US, California's legislature is expected to pass SB 1047, one of the country's first major frameworks for regulating AI systems, although critics argue it could impose burdens on AI innovation and research.
This article originally appeared in the South China Morning Post (SCMP), the most authoritative voice reporting on China and Asia for more than a century. For more SCMP stories, please explore the SCMP app or visit the SCMP's Facebook and Twitter pages. Copyright © 2024 South China Morning Post Publishers Ltd. All rights reserved.