The Malicious Use of Artificial Intelligence Was Discussed at the XIII International IT-Forum

Written by Darya MATYASHOVA, Master of International Relations, Intern at the International Center for Socio-Political Research and Consulting (dasham0708@mail.ru).

On June 7-9, 2022, the 13th International IT-Forum was held in Khanty-Mansiysk. The Forum attracted more than 5,000 participants from around the world, especially from BRICS and SCO states. A range of international conferences were held within the framework of the Forum, including the Fourth International Conference “Tangible and Intangible Impact of Information and Communication in the Digital Age”. It was organized by the Government of the Khanty-Mansi Autonomous Area – Ugra, the Ministry of Digital Development, Telecommunications and Mass Media of the Russian Federation, the Commission of the Russian Federation for UNESCO, UNESCO / the UNESCO Information for All Programme, the Russian Committee of the UNESCO Information for All Programme, and the Interregional Library Cooperation Centre. The conference was opened by the Governor of Ugra Natalia KOMAROVA, ambassadors of a number of countries, and representatives of international organizations.

One of the special conference sections was dedicated to the threats of malicious use of artificial intelligence (MUAI) and the challenges it poses to psychological security. The section was chaired by Professor Evgeny PASHENTSEV, DSc, a leading researcher at the Diplomatic Academy of the Ministry of Foreign Affairs of Russia and Coordinator of the International Research Group on Threats to International Psychological Security by Malicious Use of Artificial Intelligence (Research MUAI). In his opening speech, he stressed that, unfortunately, almost all of the AI ethics codes adopted in different countries share a common feature: they say nothing clear, or very little, about the malicious use of AI, despite the obvious quantitative and qualitative growth of this massive threat in its manifestations and consequences. According to Evgeny Pashentsev, it is of crucial importance to think about the malicious use of AI and take appropriate measures now: the penetration of AI into our lives is an everyday reality, and the threat posed by its malicious use may soon become the main one, while remaining far from adequately recognized by the majority of society. The malicious use of AI against psychological security, in turn, is in the focus of experts' attention because, at a certain stage of its qualitative development, and quite soon, AI can become an effective means of total control, and not just one more propaganda channel alongside radio, television, and the Internet.

Arvind GUPTA

Arvind GUPTA (India), Head and Co-founder of the Digital India Foundation, examined in his paper “Are Algorithms Deciding What We See, Read and Consume? And if Yes, under What Ethical Frameworks?” an important issue: the monopoly of big technology companies in critical areas such as operating systems, payments architecture, e-commerce, social media interactions, and global advertisement revenues. Because of this monopoly, these companies bear little accountability for misinformation on social networks. The speaker also provided evidence of cases where the willful design of algorithms leads to the spread of fake news. Personalized recommendations create echo chambers and confirm users' biased judgments, and this bias is managed through the use of bots. As a result, users, whatever their political differences, become vulnerable to news curated by bots. In addition to these problems, the outsourcing of critical-input manufacturing to a few countries creates supply chain security challenges for nation-states. All of this amounts to unscrupulous corporate behavior that undermines a fundamental principle of an interconnected world: content neutrality.

To overcome these problems, A. Gupta recommended a number of practical measures: open access to algorithms for researchers, with proper incentives for AI studies; harmonization of the policies regulating algorithms across the world; legislation defining the various types of information (personal, confidential, etc.); a prohibition on targeted advertising; local manufacturing of strategic industries' components; and the development of digital public goods under a privacy-by-design framework.
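
The echo-chamber dynamic Gupta describes can be reduced to a simple feedback loop: an engagement-maximizing recommender serves the content closest to a user's current view, and consuming that content reinforces the view. The toy simulation below is a deliberately crude sketch of that loop (the model, numbers, and update rule are illustrative assumptions, not the recommendation logic of any real platform):

```python
import random

random.seed(42)

# Toy model: opinions and content stances live on [-1, 1].
# An engagement-maximizing "recommender" always serves the item whose
# stance is closest to the user's current opinion, and every consumed
# item pulls that opinion slightly toward itself.

catalog = [random.uniform(-1.0, 1.0) for _ in range(500)]  # available content
opinion = 0.1                                              # user starts near neutral

served = []
for _ in range(60):
    item = min(catalog, key=lambda s: abs(s - opinion))  # most "engaging" pick
    catalog.remove(item)                                 # serve each item only once
    served.append(item)
    opinion += 0.15 * (item - opinion)                   # consumption reinforces the view

print(f"catalog stance range: {min(catalog):+.2f} .. {max(catalog):+.2f}")
print(f"served stance range:  {min(served):+.2f} .. {max(served):+.2f}")
print(f"final opinion: {opinion:+.2f}")
```

Even this crude loop reproduces the effect: the stances served stay clustered tightly around the user's prior view, so disconfirming content is simply never shown.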

Evgeny PASHENTSEV

Evgeny PASHENTSEV stressed in his paper “Malicious Use of AI in Agenda-Setting: Trends and Prospects in the Context of Rising Global Crisis” that numerous positive aspects of the use of AI in society in general, and in public communications in particular, are undeniable. However, due to the growing sociopolitical and economic contradictions in modern society, the development of geopolitical rivalries and international tensions, it can be assumed that the large-scale malicious use of AI through agenda setting is already taking place at national and global levels in the form of disinformation campaigns. At the same time, no government or transnational corporation will take responsibility for this. As in traditional forms of propaganda, these entities blame only their opponents, and do not publicly admit that they actively resort to propaganda themselves. New threats to agenda-setting and political stability are arising from the advantages of offensive and defensive psychological operations using AI. These advantages are increasingly associated with quantitative and qualitative departures from the traditional mechanisms of producing, delivering, and managing information; new possibilities for having psychological impacts on people; and the waging of psychological warfare. In particular, these advantages may include: (1) the volume of information that can be generated, (2) the speed at which information can be generated and distributed, (3) the believability of information, (4) the strength of the intellectual and emotional impacts that can be created, (5) the analytical data-processing capabilities that are available, (6) the use of predictive analytics resources based on AI, (7) the methods of persuasion that can be used, and (8) new capabilities for integration in the decision-making process. Based on a qualitative and rather approximate assessment of the data available from primary and secondary open access sources, Evgeny Pashentsev draws the preliminary conclusion that advantages 1 and 2 have already been achieved, whereas advantages 3–8 are in the developmental stage at the operational level.

The use of AI in radio, film, television, and advertising is growing rapidly and manifests itself in a variety of forms. For example, researchers at Lancaster University and the University of California found that synthetic faces were rated on average 7.7% more trustworthy than real faces, a statistically significant difference (Suleiman, 2022)[1]. Due to the crisis in the world economy, the degradation of democratic institutions in many countries, and increasingly acute geopolitical rivalries, MUAI through agenda-setting at the national and global levels is growing. The redistribution of material resources in favor of the super-rich over the years of the coronavirus pandemic not only increases socio-political tensions in society, but also creates additional opportunities and incentives for the growth of MUAI. The growth of billionaires' combined fortunes worldwide from 8 to 13 trillion dollars in the crisis year of 2020 (Dolan, Wang & Peterson-Withorn, 2021)[2], against the background of a record economic decline for recent decades, hundreds of millions of newly unemployed, and the growth, according to the UN, of the number of hungry people in the world from 690 million in 2019 (Kretchmer, 2020)[3] to 811 million in 2020 (World Health Organization, 2021)[4], does not contribute to solving these and other acute problems of our time. Of the ten largest personal fortunes in the world, six represent Amazon (1), Microsoft (2), Google (2), and Facebook (1) (Forbes, 2021)[5]. At the end of 2019, the combined market capitalization of the five Big Tech companies (Alphabet, Amazon, Apple, Microsoft, and Facebook, all of which are built on advances in AI technologies) was $4.9 trillion; by the end of 2020 it had reached $7.5 trillion, a gain of 52% in a single crisis year. Together with the rising technological giant Tesla Inc., these six companies were collectively worth almost $11 trillion in 2021 (Statista, 2021)[6].
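
As a quick back-of-the-envelope check (not an additional data point), the rounded market-cap figures cited above are mutually consistent:

\[
\$4.9\ \text{trillion} \times 1.52 \approx \$7.45\ \text{trillion} \approx \$7.5\ \text{trillion}.
\]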

In the near future, antidemocratic regimes will focus the entire set of AI technologies associated with agenda-setting on keeping their populations under control. Such regimes in countries with large military and economic potential are then able to concentrate on psychological aggression against other nations, thereby turning agenda-setting into an important element of hybrid warfare. It should be borne in mind that the relative cheapness and ease of transferring AI software, as well as the involvement of AI specialists in criminal activities, allow psychological operations through AI to be carried out by relatively small groups of people, which can destabilize the situation at the national or even global level. This, however, only underlines the importance of the skillful use of AI technologies by socially oriented forces, not only at the level of public administration but also through various structures of civil society, in order to neutralize threats to the psychological security of society. The paper of Evgeny Pashentsev was prepared with the financial support of the RFBR and the VASS, project No. 21-514-92001 “Malicious Use of Artificial Intelligence and Challenges to Psychological Security in Northeast Asia”.

Darya BAZARKINA

Darya BAZARKINA, a leading researcher at the Institute of Europe of the Russian Academy of Sciences and a member of Research MUAI, presented the paper “Artificial Intelligence in Terrorists’ Hands: Ways of Influencing Public Consciousness”, in which she outlined current and future threats posed by the use of AI by terrorist organizations and individual terrorists. She noted that communication remains one of the main aspects of terrorist activity. Terrorist propaganda, recruitment, and fundraising not only take place in the digital arena but also involve a wide range of sophisticated technologies: new encryption tools, cryptocurrencies, operations on the darknet, etc. At the same time, more and more crimes are committed with the help of social engineering tools (psychological manipulation intended to induce a person to perform certain actions or share confidential information). Given the importance for terrorists of influencing public consciousness, as well as the convergence of terrorism and cybercrime, terrorist organizations and lone-wolf terrorists can actively use the mechanisms of social engineering in their psychological operations. This threat to the psychological security of society (and in some cases, its physical security) is already a reality. It may become even more acute due to the development and spread of AI technologies, which, if used maliciously, can facilitate the tasks of social engineering even for criminals without special technical knowledge.

Since 2015, the so-called Islamic State (IS) has used bots to exchange instructions and coordinate terrorist attacks. IS and Al-Qaeda use Telegram bots to provide access to content archives. Terrorists use bots not only for agenda-setting but also to coordinate active and potential fighters and, as a result, to expand their audience among active users of existing AI products. Open advertisements show that, for the purposes of social engineering, terrorists would like to attract not only users but also AI developers to their ranks. Terrorist propaganda in the EU countries is currently aimed at encouraging individuals to commit terrorist attacks in their places of residence, and the suggested methods have included drones. As their combat power declines, terrorists move from direct armed clashes to attacks in which the perpetrator is removed from the target. Theoretically, even the use of modern robotics by terrorists, primarily unmanned aerial vehicles, carries an element of social engineering. Some research suggests that people who use autonomous technology may experience a decline in their capacity for decisions involving moral choice, self-control, or empathy. In the case of a terrorist organization, this may amount to the deliberate removal of personal responsibility from the person who commits the terrorist act. At the same time, according to Darya Bazarkina, social engineering in the narrow sense (psychological manipulation in order to obtain passwords and other confidential data) is also used by terrorist groups.

Peter MANTELLO, a researcher from Italy, noted in his paper “Automating Extremism: Mapping the Affective Roles of Artificial Agents in Online Radicalization” that mass campaigns of political indoctrination were once the province of the state. Today, however, low-cost or even free, easy-to-create bots have lowered the entry bar for violent extremist organizations, allowing them to build affective bonds with social media users in order to mobilize them to act radically and violently. According to Peter Mantello, one of the most rapidly growing areas of AI development is interactive software deployed on social networks to replace or enhance human “efforts” for various purposes. These computational agents, known as “web robots”, “social bots”, or “chatbots”, perform a variety of functions, such as automatically generating messages, advocating ideas, and acting as followers of users or as surrogate agents. Conversational software applications are now augmenting and replacing human efforts across an ever-expanding array of fields: advertising, finance, mental health counseling, dating, and wellness. AI can be trained on the materials of interactions in social networks, news articles, and other text content, allowing it to “understand” the context of a situation, its history, and its ethics in order to interact in a more human-like manner.

Peter MANTELLO

P. Mantello noted that such achievements allow artificial agents to read the emotional state of users and react in an appropriately “emotional” way, increasing the anthropomorphic attractiveness of AI. Like other AI applications, social media bots were promising tools for achieving the public good. But their malicious use by non-state and state actors is already a reality. Hostile states and non-state actors use social bots to increase the speed and scale of the spread of disinformation on the Internet, create fake accounts on social networks, collect the personal data of unsuspecting users, impersonate people's friends and associates, and manipulate political communications. In the hands of militant extremist organizations, AI-based bots are rapidly replacing human propagandists, recruiters, and troll armies. Such bots are now also used to recruit neo-Nazis in Germany and white supremacists in the United States. Organizations like Al-Qaeda and IS have the most extensive experience and the widest presence on social networks and thus represent the most dangerous and disturbing example of this trend on a global scale. The researcher also emphasized the growing dependence of modern warfare on intelligent machines and their special value for parties with fewer resources or fewer traditional weapons in an asymmetric conflict.
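
Defensive research typically starts from the opposite side of the problem Mantello describes: estimating how bot-like an account's behavior is. The sketch below is a toy first-pass heuristic of the kind used in introductory bot-detection exercises (all field names, thresholds, and weights are invented for illustration; production systems rely on machine learning over far richer signals):

```python
from dataclasses import dataclass

@dataclass
class Account:
    posts_per_day: float    # average posting rate
    account_age_days: int   # time since registration
    duplicate_ratio: float  # share of posts that are near-duplicates (0..1)
    followers: int
    following: int

def bot_score(a: Account) -> float:
    """Crude 0..1 bot-likelihood score from simple behavioral signals."""
    score = 0.0
    if a.posts_per_day > 50:        # inhumanly high posting rate
        score += 0.35
    if a.account_age_days < 30:     # freshly created account
        score += 0.20
    if a.duplicate_ratio > 0.6:     # mostly copy-pasted content
        score += 0.30
    if a.following > 0 and a.followers / a.following < 0.01:
        score += 0.15               # mass-follow amplification pattern
    return min(score, 1.0)

suspect = Account(posts_per_day=120, account_age_days=12,
                  duplicate_ratio=0.8, followers=15, following=4000)
print(f"bot likelihood: {bot_score(suspect):.2f}")  # prints 1.00
```

The point of even so crude a filter is the asymmetry it exposes: the behavioral signatures that make bots cheap to run at scale (high volume, duplication, mass-following) are exactly the signals defenders can score.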

Anna BYCHKOVA

Anna BYCHKOVA, head of the Scientific Research Department of the Irkutsk Institute (branch) of the All-Russian State University of Justice, in her paper “International Legal Aspects of Countering Threats to Psychological Security Evoked by the Malicious Use of Artificial Intelligence” argued for the importance of the transition from moral regulation of AI to legal regulation: the time for protecting psychological security by appealing to the norms of ethics has passed, and the time of legal norms is coming. Legal norms, however, inherit from ethical ones. Awareness of the risks of using AI has led to the development of ethical standards at the level of individual countries. Thus, the Russian “Code of Ethics in the Field of AI” is an act of a recommendatory nature: it contains ethical norms, establishes general principles and standards of behavior for actors in the field of AI, and operates with such phrases as “AI actors should treat responsibly…”, “AI actors should not allow…”, “AI actors are recommended…”, etc. The Chinese approach is fundamentally different: ethics in the field of AI is not an independent subject of consideration but forms a triad together with the legislation and regulation of AI. It defines the main goal as the creation of an AI that people can trust (Trustworthy AI). Such an AI has five characteristics: a) it is reliable and manageable; b) its decisions are transparent and understandable; c) its data is protected; d) its responsibility is clearly regulated; e) its actions are fair and tolerant towards all communities. The planning, development, and implementation of practices ensuring each of the five characteristics are carried out at the corporate level, while a system of standards and controls for all practices is being developed at the industry level (a joint responsibility of industry associations and the state).

In connection with the last point, Anna Bychkova pointed to the formation of quasi-legal norms by Big Tech platforms, which have become, in effect, “states within a state”. Given the scale of Big Tech's influence on society as a whole, there is a real need for an agreement between the representatives of these digital platforms and the state, which is obliged to defend its interests and protect the rights of its citizens. At the same time, while some countries, including Russia, only timidly urge those involved in AI to behave ethically, China is forming a comprehensive system of ethical, regulatory, and legislative practices that make it possible to trust AI systems.

Sergey SEBEKIN

Sergey SEBEKIN, senior lecturer at Irkutsk State University, pointed out in his paper “Malicious Use of Artificial Intelligence to Undermine Psychological Security as an Act of Aggression: Issues of Classification in International Law” that existing “traditional” international law is not yet adapted to classifying the malicious use of AI as an act of aggression, since it was formed in the “pre-information” industrial era, when conventional weapons were the decisive factor in achieving military-political objectives and influencing the strategic situation. At the same time, the need for such a legal classification has already matured.

The phenomenon of malicious psychological impact through AI is “dual”. On the one hand, there is the nature of the impact, which is psychological, affecting people's thoughts and consciousness, and which has been exploited for several hundred years; on the other hand, there is the instrument of influence, AI, whose full application can be expected in the future. Neither of these components of this complex impact fully falls within the scope of international law as far as qualification as an act of aggression is concerned. According to Sergey Sebekin, the main criterion by which the malicious use of AI to undermine psychological security could be qualified as an act of aggression is the effects produced and the consequences of such influence. Thus, resolving the question of how to qualify the use of AI for the purposes of psychological destabilization requires the search for effects equivalent to an act of aggression, which in this case can be expressed through socio-political, economic, and physical consequences.

Vitali ROMANOVSKI, Chief Adviser of the Belarusian Institute of Strategic Research (Belarus), presented the paper “Malicious Use of Artificial Intelligence for Disinformation and Agenda-Setting: The Case of African States”. He pointed out that digital disinformation and agenda formation are becoming an increasingly common feature of Africa's domestic political landscape. For example, in their report “Industrialized Disinformation: 2020 Global Inventory of Organised Social Media Manipulation”, Oxford University researchers found evidence that social media were used to spread computational propaganda and disinformation about politics in 81 countries worldwide. Among these countries are Tunisia, Libya, Egypt, Sudan, Ghana, Nigeria, Ethiopia, Kenya, Angola, Zimbabwe, and South Africa.

Moreover, according to National Democratic Institute data, from 1 January 2020 to 31 July 2021 African states held 32 different elections. Various intergovernmental and non-governmental organizations used the term “fake news” in their reports on the respective election campaigns. Among them are the Final Report 2020 of the European Union Election Observation Mission to Burkina Faso; the Final Report 2020 of the European Union Election Observation Mission to Ghana; the Central African Republic Report of the Secretary-General S/2021/571, United Nations; and “Digital Voter Manipulation: A situational analysis of how online spaces were used as a manipulative tool during Uganda's 2021 General Election” by the African Institute for Investigative Journalism and the Konrad Adenauer Stiftung, among others.

Vitali ROMANOVSKI

According to Vitali Romanovski, in view of the growing evidence of politically motivated manipulation of the media in several African states, it is reasonable to assume that AI-based deepfake technologies are likely to be used more and more often to set the agenda. This assumption is supported by the 2022 Europol report “Law Enforcement and the Challenge of Deepfakes”, which states that the technology can contribute to various types of criminal activity, including spreading disinformation, manipulating public opinion, and supporting the narratives of extremist or terrorist groups. National governments should develop more consistent policies to counter disinformation; for example, they could consider creating a specialized interdepartmental state structure for this purpose. Its most important task would be to promptly inform domestic and external audiences about registered cases of disinformation.

Pierre-Emmanuel THOMANN

The topic of the malicious use of AI was also considered by speakers in other sections. Pierre-Emmanuel THOMANN, president of the international association “Eurocontinent”, Professor at Jean Moulin University Lyon III (France), and a Research MUAI member, presented the paper “Artificial Intelligence and Europe: Between Ethics and Great Power Geopolitics”. He argued that the systemic nature and effect of the strategic malicious use of AI would be made possible by an increase in actors' room for maneuver in space and time. Great powers that can implement AI-enhanced strategies leading to supremacy in multiple spatial dimensions (the land, maritime, air, cybernetic, space, and cognitive domains) and to gains in time, with their anticipation capacity favored by predictive analysis, could overthrow the international order or make its stabilization impossible. Studies devoted to the geopolitical implications of the strategic malicious use of AI and its implications for the EU and international psychological security (IPS) are currently lacking. Analysis of the risks of malicious use of AI in international relations tends to focus on threats to democracy and the use that non-democratic regimes can make of it. The link between AI and international relations, and the possible consequences the former might have in systemic terms (i.e., the mutation of the geopolitical configuration), is awaiting investigation.

The strategic malicious use of AI is likely to have a decisive effect on the evolution of the geopolitical configuration, leading to a reinforcement of hierarchies and inequalities and possibly a new America-China bipolarity. Pierre-Emmanuel Thomann also analyzed the European Union's perspective on these processes. The EU recognizes that the United States and China will dominate AI and digitalization in the international geopolitical arena in the coming years. Until 2020, the EU's main focus regarding AI and digitalization was on their ethical, normative, and economic aspects in the context of regulating the EU common market, and this is reflected in its main communication strategy. This is in line with the EU's promotion of ‘multilateralism’ as an international doctrine in its Global Strategy for the Foreign and Security Policy of the EU, known as the EU Global Strategy (EUGS) and intended to foster international cooperation at the European and global levels.

The “Strategic Compass for Security and Defence”, elaborated by the European External Action Service (EEAS), was published in the context of the conflict in Ukraine in March 2022 (EEAS 2022). The document takes into account the new security environment and updates earlier strategic documents. The EU has acknowledged the return of power politics in a contested multipolar world. It has emphasized that it needs to prepare for fast-emerging challenges because its strategic competitors are actively undermining its secure access to the maritime, air, cyber, and space domains. In the cyber domain, the EU wants to develop and make intensive use of new technologies, notably quantum computing, AI, and big data, to achieve comparative advantage (e.g., in terms of cyber-responsive operations and information superiority). It also needs to maintain its excellence in ensuring autonomous EU decision-making, including decision-making based on geospatial data. However, the EU has not changed its doctrinal position on multilateralism and still refuses to accept the multipolar model. It promotes strategic autonomy, but it considers itself complementary to NATO, with the US as its main strategic partner. It is therefore aligned de facto with the unipolar objectives of the US and will continue to cooperate in areas like respective security and defense initiatives, disarmament and non-proliferation, the impact of emerging and disruptive technologies, climate change and defense, cyber defense, military mobility, countering hybrid threats including foreign information manipulation and interference, crisis management, and relationships with strategic competitors (principally Russia and China). The EU does not consider the systemic implications of strategic MUAI for the geopolitical configuration, or how it might acquire greater autonomy, especially from the US (on which it is very dependent); nor does it advocate a more stable and balanced international system. Here again, the EU's close alliance with the US risks geopolitical fragmentation into antagonistic blocs at a global level.

A better balance is needed to avoid an escalation and uncontrolled spiral of geopolitical rivalries without limits in time and space. On a global scale, there will be no international order and no common rules and norms for the development of AI to fight MUAI without an acceptable geopolitical order involving the Great Powers. There must be an acceptance of an ethical and human-centered order, a new multilateral configuration that offers a model for global AI cooperation. Only then will MUAI and threats to IPS be contained. Otherwise, strategic MUAI in the context of Great Power rivalry and its threat to IPS will open up a Pandora’s box of world conquest by a new entity. This may lead to total and permanent war and place humanity in danger.

If the EU wants to rebuild its digital sovereignty, it will have to redouble its efforts and investments. International cooperation based on inclusiveness, respect, and reciprocity will be better achieved with a stronger geopolitical balance on AI between global actors such as the US, China, Russia, and the EU member states, and also between smaller states. The EU should therefore place more emphasis on questions of geopolitical balance and data sovereignty to counter threats to IPS from MUAI. It should also focus more on the various consequences it could face from strategic MUAI, such as the implications for the EU of cognitive warfare and of the development of geospatial intelligence (GEOINT) beyond tactical MUAI. In parallel to the steps taken by the EU, strong bilateral or smaller coalitions should be created for cooperation outside the EU framework, between voluntary actors who would agree to pool the necessary resources and skills in order to ensure their independence and their future digital sovereignty, and to avoid being sucked into the US-China confrontation. The EU should refrain from aligning itself with the new and exclusive alliances that may emerge from the increasing confrontation between the US, China, and Russia, and should instead promote strategic autonomy and sovereignty, and cooperation on an inclusive basis.

Marius VACARELU

Marius VACARELU, Professor at the National School of Political Science and Management (Romania) and a member of Research MUAI, proceeded in his paper “Global vs. Regional Political Competitions under Artificial Intelligence Development” from an understanding of political competition as a natural situation and a source of technological progress and economic development. At the same time, certain standards of political competition exist for leaders as well as for economies, armies, and education systems. Marius Vacarelu raised the question of whether AI developments today can offset the costs of political competition (that is, reduce military and economic costs) and whether AI will become the most important tool of such competition in the future. The speaker pointed out that the superiority of a particular region in the field of AI can lead to new alliances pursuing global goals and, possibly, to wars, for which it is necessary to set a limit on the permissibility of the use of force.

The papers presented indicate the need for further interdisciplinary study of the threats to psychological security caused by the malicious use of AI. Comprehensive solutions combining legal, political, economic, technological, psychological, and educational measures are already needed to fight these threats. The problem of the malicious use of AI aroused keen interest among representatives of the academic community and political circles, not only in Russia but also abroad. All participants in the discussion made practical recommendations that can be used in the preparation of the final document of the conference.



[1] Suleiman, E. (2022). Deepfake: a New Study Found That People Trust “AI” Fake Faces More Than Real Ones. Retrieved 12 June 2022, from https://reclaimthefacts.com/en/2022/03/25/deepfake-a-new-study-found-that-people-trust-ai-fake-faces-more-than-real-ones/

[2] Dolan, K., Wang, J., & Peterson-Withorn, C. (2021). The Forbes World’s Billionaires list. Retrieved 5 November 2021, from https://www.forbes.com/billionaires/

[3] Kretchmer, H. (2020). Global hunger fell for decades, but it’s rising again. Retrieved 5 November 2021, from https://www.weforum.org/agenda/2020/07/global-hunger-rising-food-agriculture-organization-report/

[4] World Health Organization. (2021). UN report: Pandemic year marked by spike in world hunger. Retrieved 5 November 2021, from https://www.who.int/news/item/12-07-2021-un-report-pandemic-year-marked-by-spike-in-world-hunger

[5] Forbes. (2021). The World’s Real-Time Billionaires. Retrieved 28 November 2021, from https://www.forbes.com/real-time-billionaires/#1d7a52b83d78

[6] Statista. (2021). S&P 500: largest companies by market cap 2021. Retrieved 28 November 2021, from https://www.statista.com/statistics/1181188/sandp500-largest-companies-market-cap/