Proposal on Strengthening the Security of Artificial Intelligence Applications and Promoting the Steady Development of the Digital Economy
The editor shared the second proposal of Yan Wangjia, member of the National Committee of the Chinese People’s Political Consultative Conference and CEO of Venustech Information Technology Group Co., Ltd.
Proposal 2: Proposal on Strengthening the Security of Artificial Intelligence Applications and Promoting the Steady Development of the Digital Economy
Problem and Cause Analysis
As a new economic form leading the future, the digital economy has become a new engine for China’s high-quality development. General Secretary Xi Jinping pointed out that “the digital economy is the direction of the world’s future development”. As a factor of production, data will fully demonstrate its value through artificial intelligence applications and drive the further development of the digital economy.
While vigorously developing artificial intelligence applications to support the digital economy, we must coordinate development and security as a whole, strengthen the security of artificial intelligence applications, and promote the steady advancement of the digital economy. Current artificial intelligence applications mainly face the following security problems.
1) Data security and privacy protection issues related to AI applications. Artificial intelligence applications are usually built on large amounts of data, which may include personal data and sensitive corporate data. The processes of data collection, processing, and computation inevitably raise data security and privacy protection issues.
2) Security risks of the algorithm “black box” application mode in artificial intelligence applications. At present, many artificial intelligence applications directly apply existing models without in-depth analysis of the core algorithms or attention to their working principles. In these applications, the algorithms are opaque “black boxes” that may hide unknown security risks.
3) Security of the software and hardware supply chain of artificial intelligence applications. Realizing artificial intelligence applications involves substantial engineering work across a software and hardware supply chain that runs from underlying chips to upper-level application development, and security problems may arise in every link. In addition, implementations often rely on open source code, which carries both the risk of introduced vulnerabilities or backdoors and potential intellectual property issues.
4) Emerging ethical security issues of artificial intelligence applications. These include the use of artificial intelligence for improper purposes, such as the recurring “big data price discrimination” against repeat customers; unreasonable application construction processes, such as unfair decisions caused by algorithmic bias; and problems after deployment, such as determining liability when a wrong decision causes losses.
Specific Recommendations
“Security is the prerequisite for development, and development is the guarantee of security.” To steadily promote the development of the digital economy and give full play to the role of artificial intelligence applications in unlocking the value of data as a factor of production, we must consistently implement the concept of secure development and take effective measures in all aspects to strengthen the security of artificial intelligence applications. Specific recommendations are as follows:
1) Strengthen the research on data security and privacy protection technology related to artificial intelligence applications
Artificial intelligence applications face data security risks while extracting value from data. Traditional protection technologies such as data desensitization hide sensitive information but also reduce the value of the data. Therefore, while promoting the improvement of data security laws, we should also strengthen research on cutting-edge technologies such as privacy-preserving computing and secure multi-party computation, which retain data value and support the efficient computation required to build artificial intelligence applications without revealing private or sensitive information. It is recommended to set up dedicated programs to promote industry-university-research cooperation on both fundamental and applied research.
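To make the idea behind secure multi-party computation concrete, the following minimal sketch shows additive secret sharing: two parties jointly compute the sum of their private inputs without either side (or an aggregator) seeing the other’s raw value. This is an illustrative toy, not a production protocol; the modulus, party count, and input values are placeholder assumptions.

```python
# Illustrative sketch of additive secret sharing (not a production MPC protocol).
import secrets

PRIME = 2**61 - 1  # public modulus agreed on by all parties (illustrative choice)


def share(value: int, n_parties: int) -> list[int]:
    """Split a private value into n additive shares that sum to value mod PRIME."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares


def reconstruct(shares: list[int]) -> int:
    """Recombine shares; only the aggregate result is ever revealed."""
    return sum(shares) % PRIME


# Each party secret-shares its private input (e.g., a local statistic used in training).
alice_shares = share(42, 2)  # Alice's private value
bob_shares = share(58, 2)    # Bob's private value

# Each share-holder adds the shares it received, then the partial sums are combined,
# revealing only the total (100) and never the individual inputs 42 or 58.
partials = [(alice_shares[i] + bob_shares[i]) % PRIME for i in range(2)]
print(reconstruct(partials))  # -> 100
```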
2) Strengthen the transparency requirements of artificial intelligence applications and avoid “black box” applications
New artificial intelligence models, typified by deep neural networks, are structurally complex and lack a rigorous theoretical foundation; their interpretability remains a frontier research topic. Artificial intelligence applications built on core algorithms whose principles are unclear may produce unexpected behaviors and uncontrollable security risks. Therefore, transparency requirements for artificial intelligence applications should be raised, and “black box” applications should be avoided, especially in key areas involving public and personal safety. It is recommended to formulate relevant laws and standards that clarify transparency requirements and measurement indicators for artificial intelligence applications in various fields.
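As one illustration of how transparency can be probed even for an opaque model, the sketch below uses permutation feature importance: shuffling one input feature at a time and measuring how much prediction error increases reveals which features the model actually relies on. The data and the stand-in model here are synthetic assumptions chosen only to demonstrate the technique.

```python
# Illustrative sketch of permutation feature importance on a synthetic model.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: the target depends strongly on feature 0, weakly on 1, not on 2.
X = rng.normal(size=(500, 3))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=500)

# Stand-in "model": a least-squares fit; in practice this could be any opaque predictor.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
predict = lambda data: data @ w


def mse(a, b):
    return float(np.mean((a - b) ** 2))


baseline = mse(y, predict(X))
for j in range(X.shape[1]):
    X_perm = X.copy()
    X_perm[:, j] = rng.permutation(X_perm[:, j])  # break the link between feature j and y
    # The larger the error increase, the more the model depends on feature j.
    print(f"feature {j}: importance = {mse(y, predict(X_perm)) - baseline:.3f}")
```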
3) Promote AI application supply chain security assessment and strengthen autonomous and controllable requirements
Given the current frequency of supply chain security incidents, the security of artificial intelligence application software and hardware supply chains deserves greater attention. It is therefore necessary to strengthen security assessments of the artificial intelligence application supply chain, especially for applications related to critical information infrastructure in industries vital to the national economy and people’s livelihood. On the basis of sound security assessment, requirements for independent and controllable technology should be strengthened to avoid “chokepoint” dependence on external suppliers. It is recommended to standardize the security evaluation mechanism for the artificial intelligence application supply chain, study evaluation technologies and methods, and promote research on the core technologies needed for independent control. In addition, considering that more and more artificial intelligence applications draw on open source software, it is recommended to closely monitor and strengthen assessments of open source supply chain and open source software security, and to manage risks in the open source field well.
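A minimal sketch of one basic supply-chain control implied above is verifying that a downloaded software artifact matches the checksum published by its supplier before it enters the build. The artifact name and expected digest below are hypothetical placeholders, not real values.

```python
# Illustrative sketch: verify a downloaded artifact against a published SHA-256 digest.
import hashlib
from pathlib import Path

# Placeholder: replace with the digest actually published by the (hypothetical) vendor.
EXPECTED_SHA256 = "replace-with-published-digest"


def sha256_of(path: Path) -> str:
    """Stream the file in chunks and compute its SHA-256 digest."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


artifact = Path("vendor-model-runtime-1.0.tar.gz")  # hypothetical artifact path
if sha256_of(artifact) != EXPECTED_SHA256:
    raise RuntimeError("Checksum mismatch: the artifact may have been tampered with")
print("Artifact integrity verified")
```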
4) Drive responsible artificial intelligence applications with both institutional and technological measures
The “New Generation Artificial Intelligence Governance Principles – Developing Responsible Artificial Intelligence” issued by the National New Generation Artificial Intelligence Governance Professional Committee puts forward principles for artificial intelligence development such as harmony and friendliness, fairness and justice, safety and controllability, and shared responsibility. Implementing these principles is of great significance for addressing the ethical security issues of artificial intelligence applications and for the long-term development of the digital economy. Therefore, measures must be taken on both the institutional and technical fronts to promote the development of responsible AI applications. It is recommended to promote research on quantitative evaluation methods for artificial intelligence governance norms, formulate relevant norms and systems, and promote collaborative industry-university-research efforts to build responsible artificial intelligence applications.