Persuasive technologies in China: Implications for the future of national security
Key Findings
The rapid adoption of persuasive technologies—any digital system that shapes users’ attitudes and behaviours by exploiting physiological and cognitive reactions or vulnerabilities—will challenge national security in ways that are difficult to predict. Emerging persuasive technologies such as generative artificial intelligence (AI), ambient technologies and neurotechnology interact with the human mind and body in far more intimate and subconscious ways, and at far greater speed and efficiency, than previous technologies. This gives malign actors the ability to sway users’ opinions and actions while bypassing their conscious awareness and autonomy.
Regulation is struggling to keep pace. Over the past decade, the swift development and adoption of these technologies have outpaced responses by liberal democracies, highlighting the urgent need for more proactive approaches that prioritise privacy and user autonomy. That means protecting and enhancing the ability of users to make conscious and informed decisions about how they’re interacting with technology and for what purpose.
China’s commercial sector is already a global leader in developing and using persuasive technologies. The Chinese Communist Party (CCP) tightly controls China’s private sector and mandates that Chinese companies—especially technology companies—work towards China’s national-security interests. This presents a risk that the CCP could use persuasive technologies commercially developed in China to pursue illiberal and authoritarian ends, both domestically and abroad, through such means as online influence campaigns, targeted psychological operations, transnational repression, cyber operations and enhanced military capabilities.
ASPI has identified several prominent Chinese companies whose persuasive technologies are already at work for China’s propaganda, military and public-security agencies. They include:
- Midu—a language intelligence technology company that provides generative AI tools used by Chinese Government and CCP bureaus to enhance the party-state’s control of public opinion. Those capabilities could also be used for foreign interference (see page 4).
- Suishi—a pioneer in neurotechnology that’s developing an online emotion detection and evaluation system to interpret and respond to human emotions in real time. The company is an important partner of Tianjin University’s Haihe Lab (see page 16), which has been highly acclaimed for its research with national-security applications (see page 17).
- Goertek—an electronics manufacturer that has achieved global prominence for smart wearables and virtual-reality (VR) devices. This company collaborates on military–civil integration projects with the CCP’s military and security organs and has developed a range of products with dual-use applications, such as drone-piloting training devices (see page 20).
ASPI has further identified case studies of Chinese technology companies, including Silicon Intelligence, OneSight and Mobvoi, that are leading in the development of persuasive technologies spanning generative AI, neurotechnologies and emerging ambient systems. We find that those companies have used such solutions in support of the CCP in diverse ways—including overt and attributable propaganda campaigns, disinformation campaigns targeting foreign audiences, and military–civil fusion projects.
Introduction
Persuasive technologies—or technologies with persuasive characteristics—are tools and systems designed to shape users’ decision-making, attitudes or behaviours by exploiting people’s physiological and cognitive reactions or vulnerabilities.1 Compared to technologies we presently use, persuasive technologies collect more data, analyse more deeply and generate more insights that are more intimately tailored to us as individuals.
With current consumer technologies, influence is achieved through content recommendations that reflect algorithms learning from the choices we consciously make (at least initially). At a certain point, a person’s capacity to choose then becomes constrained because of a restricted information environment that reflects and reinforces their opinions—the so-called echo-chamber effect. With persuasive technologies, influence is achieved through a more direct connection with intimate physiological and emotional reactions. That risks removing human choice from the process entirely and steering choices without an individual’s full awareness. Such technologies won’t just shape what we do: they have the potential to influence who we are.
Many countries and companies are working to harness the power of emerging technologies with persuasive characteristics, such as generative artificial intelligence (AI), wearable devices and brain–computer interfaces, but the People’s Republic of China (PRC) and its technology companies pose a unique challenge. The Chinese party-state combines a rapidly advancing tech industry with a political system and ideology that mandate companies to align with CCP objectives, driving the creation and use of persuasive technologies for political purposes (see ‘How the CCP is using persuasive technologies’, page 21). That synergy enables China to develop cutting-edge innovations while directing their application towards maintaining regime stability domestically, reshaping the international order, challenging democratic values and undermining global human-rights norms.
There’s already extensive research on how the CCP and its military are adopting technology in cognitive warfare to ‘win without fighting’—a strategy to acquire the means to shape adversaries’ psychological states and behaviours (see Appendix 2: Persuasive technologies in China’s ‘cognitive warfare’, page 29).2 Separately, academics have considered the manipulative methods of surveillance capitalism, especially on issues of addiction, child safety and privacy.3 However, there’s limited research on the intersection of those two topics; that is, attempts by the Chinese party-state to exploit commercially available emerging technologies to advance its political objectives. This report is one of the first to explore that intersection.
Chinese technology, advertising and public-relations companies have made substantial advances in harnessing such tools, from mobile push notifications and social-media algorithms to AI-generated content. Many of those companies have achieved global success. Access to the personal data of foreign users is at an all-time high, and Chinese companies are now a staple of the world’s most-downloaded mobile apps lists—something that wasn’t the case just five years ago.4 While many persuasive technologies have clear commercial purposes, their potential for political and national-security exploitation—both inside and outside China—is also profound.
This report seeks to break through the ‘Collingridge dilemma’, in which control and understanding of emerging technologies come too late to mitigate the consequences of those technologies.5 The report analyses generative AI, neurotechnologies and immersive technologies and focuses on key advances being made by PRC companies in particular. It examines the national-security implications of persuasive technologies designed and developed in China, and what that means for policymakers and regulators outside China as those technologies continue to roll out globally.
Persuasive-technology capabilities are evolving rapidly, and concepts of and approaches to regulation are struggling to keep pace. The national-security implications of technologies that are designed to drive users towards certain behaviours are becoming apparent. Democratic governments have acted slowly and reactively to those challenges over the past decade. There’s an urgent need for more fit-for-purpose, proactive and adaptive approaches to regulating persuasive technologies. Protecting user autonomy and privacy must sit at the core of those efforts. Looking forward, persuasive technologies are set to become even more sophisticated and pervasive, and the consequences of their use are increasingly difficult to detect. Accordingly, the policy recommendations set out here focus on preparing for and countering the potential malicious use of the next generation of those technologies.
26 Nov 2024