Taiwan AI Labs Releases World's First GenAI Report on Coordinated Operations in Democratic Nations, Exposing Information Manipulators' Tactics
On the 17th, Taiwan AI Labs released the "2024 Taiwan Presidential Election Online Information Manipulation Analysis Report," the world's first report to analyze election manipulation using large language models and multiple artificial intelligence models in the inaugural year of generative AI.
"In 2024, many countries held elections, and Taiwan — as the first democratic country to conduct elections that year and the pioneer in using AI technology to observe information manipulation during an election — emerged as a benchmark for the impact of foreign information operations worldwide," said the founder of Taiwan AI Labs, Ethan Tu, at the press conference.
Tu said, "We hope to share Taiwan's experiences and lessons in dealing with information manipulation, along with the threats of generative AI, with other democratic nations."
He expressed the aspiration that Taiwan legislate for AI risk assessment and a code of ethics based on the concept of "safeguarding digital rights," rather than resorting to content censorship that may compromise freedom of speech.
The press conference held by Taiwan AI Labs on the 17th drew more than 100 experts and stakeholders from Taiwan and abroad, including diplomats stationed in Taiwan, journalists and correspondents from reputable international and Taiwanese news outlets, representatives of international organizations and NGOs, officials, and researchers in relevant fields. This robust participation underscores the level of attention on Taiwan's elections and the issue of foreign information manipulation.

Infodemic: AI Tool for Analyzing and Understanding Information Manipulation by Coordinated Accounts
Since the COVID-19 pandemic, Taiwan AI Labs has been developing trustworthy and responsible AI technology for healthcare on an international scale, while establishing reconnaissance mechanisms to confront the global Infodemic with global partners.
Tu explained that the first step is to observe and understand the behavior of individual accounts in order to identify troll accounts.
Coordinated troll accounts are defined as groups of inauthentic user accounts. These may include so-called "Wumao" accounts, created by official entities to disseminate specific content; they can also be controlled by automated programs or operated by public relations companies in a non-organic, organized manner to spread specific narratives. Over the past two years, Taiwan AI Labs has utilized generative technology and large language models to comprehend billions of activities on social media. This effort has led to the discovery of over 30,000 troll groups and an understanding of the discourse disseminated by troll accounts across more than two million topics on social media. This approach helps uncover the targets and patterns of operational attacks, ultimately revealing potential underlying motives.
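As an illustration of the general idea — not Taiwan AI Labs' actual pipeline — coordinated accounts are often surfaced by measuring how much their activity footprints overlap: authentic users rarely post in exactly the same set of threads. The sketch below groups accounts whose engaged-thread sets have high Jaccard similarity; the function names, data shapes, and the 0.7 threshold are all illustrative assumptions.

```python
from itertools import combinations

def jaccard(a, b):
    """Overlap between two accounts' activity footprints (sets of thread ids)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def find_coordinated_groups(activity, threshold=0.7):
    """Group accounts whose sets of engaged threads overlap suspiciously.

    activity: dict mapping account id -> set of thread ids it posted in.
    threshold: minimum Jaccard similarity to link two accounts (illustrative).
    Returns a list of groups (sets of account ids) with more than one member.
    """
    parent = {acct: acct for acct in activity}

    def find(x):  # union-find root lookup with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    # Link every pair of accounts whose footprints overlap heavily.
    for a, b in combinations(activity, 2):
        if jaccard(activity[a], activity[b]) >= threshold:
            parent[find(a)] = find(b)

    groups = {}
    for acct in activity:
        groups.setdefault(find(acct), set()).add(acct)
    return [g for g in groups.values() if len(g) > 1]

# Toy example: three accounts hitting nearly identical threads, two organic users.
activity = {
    "troll_1": {"t1", "t2", "t3"},
    "troll_2": {"t1", "t2", "t3"},
    "troll_3": {"t1", "t2", "t3", "t4"},
    "user_a":  {"t5"},
    "user_b":  {"t2", "t6", "t7"},
}
print(find_coordinated_groups(activity))
```

Real systems additionally weigh posting times, shared phrasing, and account creation patterns; pairwise comparison as written is quadratic and would need locality-sensitive hashing or similar at social-media scale.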
With increasing global awareness of information manipulation, many interested parties across the globe have expressed interest in Taiwan AI Labs' tools, and Taiwan AI Labs has consolidated these technological capabilities into the AI platform 'Infodemic.'
‘Infodemic’ is designed to provide real-time and comprehensive data, enabling non-technical partners to understand the landscape of information manipulation both domestically and internationally. It serves as a foundation for developing strategies to enhance digital literacy. Chao-Hwei Hwang, the Chief Content Officer of Taiwan AI Labs, said that the ‘Infodemic’ has been consistently utilized in recent years to observe coordinated behavior on major Taiwanese social platforms, including Facebook, YouTube, Twitter (X), TikTok, and PTT.
Using large language models, Taiwan AI Labs identifies the targets of information manipulation attacks and the reactions in mainstream media reporting, and promptly records the information battlegrounds where troll groups participate and their potential impacts. Starting approximately two months before the elections, closed-door meetings with domestic and international experts were held weekly to exchange observations of online information manipulation. The aim is to improve public media literacy and build resilience against information manipulation.
In addition to providing ‘Infodemic’ to media and research teams, Taiwan AI Labs has introduced a secure information platform called Miin. This platform allows ordinary users to receive messages while utilizing AI to comprehensively understand various perspectives. It helps users discern whether there may be potential information manipulation.
During the conference, Ethan Tu, the founder of Taiwan AI Labs, pointed out that 'Infodemic' provides insights into the significant issues in Taiwan influenced by troll accounts over the past year. As illustrated in the figure below, each circle represents a battlefield, with the circle's size indicating the impact of the issue and the color intensity reflecting the intensity of troll groups' activities.

2024 Taiwan Presidential Election Primary Battlefields
According to data collected by 'Infodemic,' Taiwan AI Labs identified over 14,000 troll accounts participating in more than 48,000 battles from January 1, 2023, to the date the report was published. Over 270,000 news articles were involved during the period, with more than 8,000 pieces published by state-affiliated media. Additionally, 731,383 troll accounts engaged in information manipulation. Notably, approximately half of the inauthentic user accounts vanished following the conclusion of the election.

'Infodemic' Provides Insights into Information Manipulations in Taiwan Over the Past Year
From January to April 2023, the tension around election-related discussions was relatively low. The most significant event during this period was Taiwan President Tsai Ing-wen's visit to the United States. Information operations became more active in May, and the number of battles increased.
Discussions and debates over US military aid to Taiwan started to gain attention in May. US President Joe Biden expressed support for Taiwan's defense, prompting numerous coordinated accounts to stir up tensions in Taiwan-US relations and spread conspiracy theories about the United States. Discussions in June revolved around Taiwan's defense budget, with both Taiwan and the US targeted by coordinated information manipulation aiming to sway public perception. News regarding the South China Sea Working Group was also a significant battlefield.
Large and influential battlefields intensified in July as troll groups found that the issue of US military aid might not resonate with the public. Troll groups' focus shifted to issues related to people's daily lives, such as housing justice and illegal construction.
Operations began to focus on the egg shortage and food safety issues. This was considered a success for the coordinated troll network: then-Minister of Agriculture Chi-chung Chen resigned after three weeks of intensive manipulation operations and attacks over these issues.
The pre-election battlefield focused on discussions about the "blue-white alliance." (Blue refers to the Nationalist Party/Blue camp and White to the Taiwan People's Party/White camp; the two parties were discussing forming an alliance to field joint presidential candidates.) Troll groups attempted to push this alliance through via a series of attacks, but the operation failed when the blue-white alliance did not materialize. Subsequently, troll groups shifted their focus to attacking other candidates. This behavior did not align with the typical actions of domestic PR firms in Taiwan, and both sides of the political spectrum may have been unaware of it, suggesting signs of potential foreign intervention.
Leading up to the Taiwanese presidential elections in January 2024, the major events included the earthquake in Japan and the national emergency alert issued by the Ministry of National Defense. The troll groups' attacks showed a clear focus on attempting to influence the Taiwan presidential election. Approximately half of the coordinated accounts stopped their activities right after the election; operations only resumed after the visit of the U.S. high-level delegation to Taiwan.
The convenience and popularization of the internet and other technological advancements were once thought to bolster democracy and diversity in democratic countries. However, Taiwan's experience should serve as a warning to all. With more advanced generative technology and increasing foreign manipulation activity through social media, infringements on digital rights and foreign manipulation have become increasingly severe globally. This echoes the fact that digital rights and democracy indices have been declining even as technology progresses. Based on Taiwan AI Labs' observations of the 2024 Taiwan elections, there are ten takeaways that require special attention from global democratic societies.

Learning 1: Generative technology is widely employed for information manipulation, with textual misinformation being the primary battleground. Debunking rumors is no longer as effective.
Taiwan AI Labs founder Ethan Tu pointed out that with the universal recognition of fact-checking, the main challenge in the information ecosystem has shifted from fake news to misleading information. Narrative manipulation through distorting text has become a major battlefield after the participation of large language models. A significant trend is the use of massive text generation for asymmetrical propaganda operations. While the content itself may not be fraudulent, the extensive promotion of asymmetrical narratives creates a distorted perception among the public regarding political preferences and opinions.
Taking the egg shortage incident as an example, a minor shortage of eggs was exploited to mislead the public into hoarding eggs. The rapid emergence of new battlefields spreading rumors renders fact-checking far less effective.
Image generation is now used to create profile pictures for many fake accounts. While a significant number of generated smear-campaign videos were observed before the elections, their actual impact remains limited.

Learning 2: A few dominant troll groups were leading media framing
Although numerous troll groups, including Taiwan's PR companies, attempt to construct specific messages and narratives, Taiwan AI Labs' findings indicate that a few dominant coordinated networks drive much of the overall volume of operations. For example, the top two troll groups on Facebook (#61009, #61019) accounted for 45.71% of the total information manipulation volume on Facebook with just 626 accounts.

Learning 3: Main troll groups are not native to Taiwan
Taiwan AI Labs founder Ethan Tu analyzed that the dominant troll groups that orchestrated public opinions operated in both Chinese and English to manipulate narratives domestically and internationally. In addition to attacking Taiwanese political figures in Chinese, these groups used English to manipulate anti-American issues, spreading skepticism about the U.S., including criticizing US President Joe Biden and questioning U.S. foreign policies.
Considering factors such as the bilingual operations, the attacks on various political parties, and the lack of alignment with any specific political ideology, the actors behind these troll accounts do not appear to be native to Taiwan.
Their objectives involve spreading chaos, a tactic commonly seen in some authoritarian countries. The aim is to disseminate confusion in democratic countries and promote the narrative that "authoritarianism is more efficient." This type of strategy was detected in discussions on U.S. military aid, the introduction of Indian labor in Taiwan, and the Japanese disaster.
These troll accounts intensify their effort to smear and attack countries that are friendly to Taiwan, attempting to create a false perception against these countries.
As an example, the fake news about naval officers' bodies on social media platforms revealed that a significant number of accounts in Taiwan had been hacked. Taiwan AI Labs encouraged users whose accounts were compromised to report to the police. Systematic account takeovers were found on social media, and further investigation revealed the use of backdoor vulnerabilities in China-made networking equipment to implant VPNs as springboards for posting articles on social media. Given the sophistication of these operations, they are unlikely to be the work of domestic PR firms and exceed the capabilities of domestic PR organizations.

Learning 4: Mainstream collaborative groups have significant resonance with official media
As an example, Taiwan AI Labs observed that the top two troll groups on Facebook (#61009, #61019) amplified specific narratives in a non-organic way from September to December 2023. It is noteworthy that a significant amount of these narratives echoed statements by Chinese state-run media (see image below), including that "(the Taiwanese ruling party) is pushing Taiwan into a dangerous situation of war and peril," "the US disregards the lives of the Taiwanese people," and "the termination of the Economic Cooperation Framework Agreement (ECFA) is hurting Taiwan's economy." As the election approached, narratives aligning with Chinese state-owned media gradually intensified, including threats of war against Taiwan and attacks on Taiwan's education policy and economy.
Coordinated network operations have a significant impact, especially evident on the Facebook pages of Taiwan's presidential candidates. Approximately one in every six comments on these pages is linked to a troll group; on the incumbent president's page the frequency is even higher, with nearly one in every three comments coming from an inauthentic account. In the chart, the blue line represents regular users, while the red line represents troll groups. Interaction between regular users and troll accounts was initially low, but engagement increased through the activities of troll groups, as illustrated in the chart.
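The "one in six" and "one in three" figures are simply the share of comments attributed to flagged troll accounts. A minimal sketch of that metric, using hypothetical labeled data that mirrors the reported rates (the function name and sample data are illustrative, not from the report):

```python
def troll_comment_share(labels):
    """Fraction of comments attributed to flagged troll-group accounts.

    labels: iterable of booleans, True if the comment came from a
    flagged troll account. Returns a share in [0, 1].
    """
    labels = list(labels)
    return sum(labels) / len(labels) if labels else 0.0

# Hypothetical samples mirroring the reported rates: roughly one in six
# comments on candidate pages, one in three on the incumbent's page.
candidate_page = [True, False, False, False, False, False] * 100
incumbent_page = [True, False, False] * 100

print(troll_comment_share(candidate_page))  # roughly 0.167 (one in six)
print(troll_comment_share(incumbent_page))  # roughly 0.333 (one in three)
```

In practice the hard part is producing the labels — attributing each comment to a coordinated group — not computing the share.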
Taiwan AI Labs also monitored coordinated operations on the China-based TikTok platform and found that during the observation period, manipulative rhetoric was predominantly negative when referring to Taiwan, the United States, and Japan. In contrast, when coordinated groups discussed China, the content was mostly positive, emphasizing favorable perceptions such as "praising China's powerful weapons" and "supporting the reunification of Taiwan with China."
On Facebook, YouTube, and PTT, troll groups attacked all political parties without targeting a specific one. However, manipulative behavior on TikTok showed a clear political bias, overwhelmingly supporting one specific presidential candidate. Not only was the online discourse predominantly positive toward this candidate, but the online engagement volume was also concentrated on him. In contrast, the inorganic volume for the other two candidates was less than 30% of that candidate's volume, and the discourse was mainly negative.
Social platforms have begun to influence what people see and engage with in their communities, exposing users to specific topics or content. On TikTok, the exposure imbalance between positive and negative opinions is significantly more pronounced than on other Taiwanese and American social media platforms. As a result, think tanks in both the European Union and the United States have recently been advocating for the neutrality of AI algorithms, which is crucial for democratic nations.

Learning 8: Mainstream troll groups attempt to dominate discourse related to the Blue-White alliance
According to 'Infodemic's' monitoring, the top two troll groups on Facebook (#61009, #61019) made a clear and direct attempt to influence public opinion on the Blue-White Alliance (the Nationalist Party/Blue camp and the Taiwan People's Party/White camp were discussing forming an alliance to field joint presidential candidates), promoting specific candidates and political parties with the evident intention of pushing the blue-white cooperation through.
Coordinated groups often disguise their identities to create groups and communities on social media platforms, such as regional communities, alumni groups, or groups of like-minded people. These large-scale communities share information, engage with members, and provide timely responses, influencing users and seizing control of the discourse. These troll groups also infiltrate the official social media channels of various Taiwanese news media. Ethan Tu advises the public to remain vigilant when using social platforms or fan pages, to verify the authenticity of unusual comments and accounts, and to check whether their statements are truthful.
During the 2024 Taiwan presidential election, short videos played a significant role. Taking the example of the Israel-Hamas conflict, short videos related to this event appeared in large numbers on various platforms, and similar content was recreated into seemingly different videos. This led to a prevalence of certain perspectives online, causing potential cognitive imbalance among the public.
It is noteworthy that, even before the outbreak of the Israel-Hamas conflict, such short videos had already appeared on Chinese social media platforms, fabricating content about conflicts between Israel and Hamas and attracting significant views. It can therefore be inferred that a driving force was behind this cognitive manipulation, gaming engagement practices on the platforms and drawing users into spreading misinformation.
Ethan Tu, the founder of Taiwan AI Labs, pointed out that although these information operations were not successful every time, they have had an impact in Taiwan. According to Taiwan AI Labs, the information operations of the past year have influenced at least 3 million Taiwanese people, and these manipulation activities became even more conspicuous in the run-up to the election.

About AI platform 'Infodemic'
Infodemic employs artificial intelligence to uncover information manipulation, expose coordinated behaviors, identify the dissemination of malicious information, and recognize the narrative strategies used by troll account groups. Through its cognitive security scanning system, businesses can verify any information, regardless of format — a URL, image, video, or even a sentence. Simply input the data and initiate the scan, and the system will indicate whether the information is organic or is being spread by malicious actors manipulating public perception through platform algorithms.

Upon detecting troll activities, the platform exposes them and offers comprehensive data, displaying insights into malicious actors, their concealed intentions, and details of bot networks operating on the internet and social media platforms. Through aggregated information graphics, businesses can easily grasp the complex realm of information warfare and understand the impact of these issues. Additionally, the system can establish links and explore potential coordinated activities between these users and foreign entities.

About Taiwan AI Labs
Taiwan AI Labs is a privately funded institute and the first open AI research institute in Asia, specializing in trustworthy and responsible AI solutions. Since 2017 it has delivered results through open algorithms, open source code, and federated learning. It integrates Taiwan's talent, semiconductor industry strength, and comprehensive healthcare data to drive artificial intelligence development in smart healthcare, human-machine interfaces, generative AI, and other fields.
Taiwan AI Labs actively promotes the formation of Taiwan's AI industry chain, collaborating with government institutions and industries to promote "federated learning," replacing centralized data governance with shared data governance. This breaks down inter-industry data barriers caused by privacy concerns, establishing trusted AI technology that safeguards human rights and privacy, aligns with technology ethics, and has gained broad international recognition.
Under the premise of trusted AI, Taiwan AI Labs effectively combines generative AI, launching rights-respecting solutions in smart healthcare, federated learning, AI virtual anchors, future content studios, cognitive security, and more. In the field of human-machine interfaces, it develops AI music and art generation, speech recognition, semantic understanding, and other technologies, applied to the globally pressing issues of misinformation and fake news manipulation. Its observations of information manipulation have sparked fervent discussion internationally and have been published in renowned international journals.
Taiwan AI Labs' research stands at the global forefront, aiming to assemble an AI national team, integrate talent, resources, and industry chains, and establish the Taiwan Federated Learning Alliance (https://TAIFA.org), promoting public/private federation solutions worldwide.