Experts on the Malicious Use of Artificial Intelligence and Challenges to International Psychological Security


13 February 2022, by Pierre-Emmanuel Thomann

Review of the report by Ekaterina Mikhalevich, PhD student at St. Petersburg State University

Artificial intelligence (AI) technologies, despite their high significance for social development, raise threats to national and international psychological security (IPS) to a new level. The use of AI to destabilize economies, political situations, and international relations through targeted, high-tech psychological impact on the consciousness of citizens poses a growing danger. Economic difficulties, social contradictions, and political conflicts against the background of the ongoing coronavirus pandemic create an objective basis for the malicious use of AI (MUAI) in Northeast Asia. The region has not developed its own security system that would cover all of its countries and meet their interests. Negative information and psychological impacts associated with various factors of national and international development increasingly affect the system of interstate relations in Northeast Asia. The pace of development of AI technologies in the countries of the region, especially in China, Japan, and South Korea, has recently increased sharply. However progressive these achievements are, they also create new challenges to psychological security in the region that require a timely response from state and non-state, national and international structures and institutions.

The report “Experts on the Malicious Use of Artificial Intelligence and Challenges to International Psychological Security” [1], published by the International Center for Social and Political Studies and Consulting, is a result of the implementation of the research project titled “Malicious Use of Artificial Intelligence and Challenges to Psychological Security in Northeast Asia,” funded by the Russian Foundation for Basic Research (RFBR) and the Vietnam Academy of Social Sciences (VASS), project number 21-514-92001.

The author of the report is Professor Evgeny Pashentsev, a Leading Researcher at the Diplomatic Academy of the Ministry of Foreign Affairs of the Russian Federation and the Director of the International Center for Social and Political Studies and Consulting (Moscow). He is a coordinator of the International Research Group on Threats to International Psychological Security through Malicious Use of Artificial Intelligence (Research MUAI).

This report presents the responses of 19 experts from 10 countries to questions regarding the threats to psychological security posed by MUAI. The pool of specialists in international relations, AI, geopolitics, political communication, law, cybersecurity engineering, psychology, and security sciences had the opportunity to express their expert opinion on three groups of issues:

  • general threats to IPS caused by MUAI from now until 2030;
  • threats to IPS caused by MUAI in the experts’ countries of residence;
  • the situation regarding MUAI and IPS in Northeast Asia. This region was chosen because AI technologies there are at a fairly high level, which also makes their malicious use easier.

It should be noted that, due to the current international situation, 4 of the 19 experts chose not to publish their answers as part of the report.

Experts were asked to choose the option reflecting their view of how much MUAI increases the level of threat to IPS today. About 10% of the experts believe that such an influence is “only slight”, 56% selected the “noticeably” option, and 22% believe the influence is “strong” [p. 13]. The option “not at all” was not selected by any expert. Most experts believe that the situation will worsen by 2030: 53% answered that MUAI will “strongly” increase the threats to IPS, and 47% answered “noticeably”. The options “only slightly” and “not at all” were not selected by any expert [p. 14].

Based on the results of the survey, the most relevant threat is a targeted, complex impact on the psyche of large target groups, aimed at obtaining specific political dividends through the psychological management of target audiences. The manipulation of the information agenda, the use of AI technologies to disrupt governance processes, and the malicious use of emotional AI leave a wide field for discussion, although many experts drew attention to the particular danger of a qualitative rise in the level of manipulation of public consciousness with the help of AI [pp. 9-13].

Most experts agreed on the need for international cooperation to successfully counter MUAI. At the same time, a number of experts stressed that such cooperation is difficult to achieve at the global level due to geopolitical confrontation [pp. 19-20, 22]. Some of them think that it can be implemented within the framework of international associations, for example with the active participation of Russia in BRICS and the SCO [pp. 20-21].

According to the experts, the threats of MUAI in Northeast Asia are determined by factors such as the conflict situation in the region, the high level of AI development, and the influence of external actors (primarily the United States) that conduct AI-enabled information campaigns against China [pp. 27-29].

In conclusion, the experts analyze the level of public awareness in the countries of Northeast Asia of the existing threats to IPS and try to assess how ready the state bodies of the region are to respond to and counter threats of this kind. Expert opinion is divided on this issue: some believe that the public in the leading countries is quite well aware of the threats, while others believe that the level of public awareness is extremely low [pp. 32-33]. In my opinion, the reason is that the experts applied different criteria when answering this question. If we speak of the general awareness of society in Northeast Asia (taking into account education, information, and economic opportunities), then the level of awareness is indeed not high enough. Nevertheless, at the state level the problem of MUAI is recognized as a challenge to national security, and real measures are already being taken to counter this threat.

Evgeny Pashentsev concludes that, among the threats to IPS, the experts did not mention one form of MUAI: the targeted distortion of information about AI itself, associated with the formation of inflated expectations. He points to the real danger of a financial bubble forming (as a rule, a bubble arises when rush demand for a product drives its price sharply upward, which in turn fuels further demand and may end in financial ruin). The extremely rapid growth of Big Tech is inextricably linked with AI technologies, and this is fraught with financial collapse in the not-too-distant future.

According to the results of the expert survey, this report demonstrates limited but real opportunities for international cooperation in a new, very important, and yet extremely problematic area of interdisciplinary research that is taking its first steps: the MUAI and IPS. The high degree of readiness of specialists to take part in the survey and, in general, the comprehensiveness and professional competence of the answers received are highly encouraging. During the survey, experts expressed coinciding, significantly different, and even mutually exclusive points of view, which is understandable given the novelty and particularity of the issues being discussed.

The author of the report would like to believe that this survey is just a prologue for future joint international research in the field of the MUAI and IPS. Such research will not only be designed to solve important scientific problems; its principal practical task will be to help ensure the psychological security of society. People must have a clear systemic understanding of the surrounding reality to make conscious choices in their lives. AI is a means to take away this choice in the interests of antisocial actors, but, to a greater extent, it is also a tool for the protection and self-development of the individual and society as a whole.

In conclusion, it should be noted that this report once again proves that the rapid development of AI is accompanied by an increasing number of threats. The report shows that, in order to effectively counter the threat of MUAI, it is necessary to introduce technical, political, and legal measures within the framework of a socially oriented transformation of the social system. In addition, it is necessary to continue the search for scientifically based solutions to strengthen national defenses at the state level, as well as to deepen international cooperation in this area within the framework of international organizations. This report represents a tangible contribution to the understanding of the MUAI problem as a threat to IPS from the point of view of specialists from various countries and fields of science.

  1. Pashentsev, E. (2021). Experts on the Malicious Use of Artificial Intelligence and Challenges to International Psychological Security. Moscow: International Center for Social and Political Studies and Consulting, December 2021. http://globalstratcom.ru/wp-content/uploads/2017/11/MUAI-and-IPS-Report.pdf

Full report: http://globalstratcom.ru/wp-content/uploads/2017/11/MUAI-and-IPS-Report.pdf

Article: https://russiancouncil.ru/en/analytics-and-comments/columns/cybercolumn/experts-comment-on-ai-malicious-use-and-challenges-to-international-psychological-security/

Experts on the Malicious Use of Artificial Intelligence and Challenges to International Psychological Security

Report by Evgeny Pashentsev
Published by the International Center for Social and Political Studies and Consulting
Moscow: LLC «SAM Polygraphist», December 2021. 62 pp.
ISBN 978-5-00166-528-1
Funding: The reported study was funded by RFBR and VASS, project number 21-514-92001.

The present publication is a result of the implementation of the research project titled “Malicious Use of Artificial Intelligence and Challenges to Psychological Security in Northeast Asia,” funded by the Russian Foundation for Basic Research (RFBR) and the Vietnam Academy of Social Sciences (VASS). The responses received from a targeted survey of nineteen experts from ten countries and their subsequent analysis aim to highlight the range of the most serious threats to international psychological security (IPS) through malicious use of artificial intelligence (MUAI) and to determine how dangerous these threats are, which measures should be used to neutralize them, and what the prospects for international cooperation in this area are. This publication also attempts to determine whether MUAI will increase the level of threat to IPS by 2030.

The publication pays special attention to the situation in Northeast Asia (NEA), where the practice of MUAI is based on a combination of a high level of development of AI technologies in leading countries and a complex of acute disagreements in the region.


Evgeny N. PASHENTSEV
Prof. Evgeny Pashentsev is a Leading Researcher at the Diplomatic Academy of the Ministry of Foreign Affairs of the Russian Federation and the Director of the International Center for Social and Political Studies and Consulting (Moscow). He is a coordinator of the European-Russian Communication Management Network (EU-RU-CM Network), the Russian-Latin American Strategic Studies Association, and the International Research Group on Threats to International Psychological Security through Malicious Use of Artificial Intelligence (Research MUAI). He is a partner of the European Association for Viewers Interests in Brussels, a member of the International Advisory Board of Comunicar (Spain), and a member of the Editorial Board of the Journal of Political Marketing (USA). Prof. Pashentsev has authored, co-authored, and edited 37 books and more than 200 academic articles published in Russian, English, Spanish, Portuguese, Italian, Serbian, Vietnamese, and Bulgarian. He has presented his papers at more than 180 international conferences and seminars in 24 countries over the last 10 years. The areas of his current research include strategic communication and the malicious use of AI.

Contents
Introduction by Evgeny Pashentsev 5
Questions and Answers by the Experts 9

  1. What threats to psychological security caused by the malicious use of
    artificial intelligence do you consider the most relevant for the modern world? Why? 9
  2. How much does the malicious use of artificial intelligence increase the level
    of threat to international psychological security today? 13
  3. How much will the malicious use of artificial intelligence increase
    the level of threat to international psychological security by 2030? 14
  4. What measures (political, legal, technical or other) do you consider
    to be important to neutralize the threat to international psychological security
    caused by the malicious use of artificial intelligence? 14
  5. How important is international cooperation in successfully countering
    the malicious use of artificial intelligence? On what international platforms (and why)
    is this cooperation the most effective? What are the existing obstacles
    to this cooperation? 19
  6. Which of the threats to international psychological security caused
    by the malicious use of artificial intelligence do you consider the most relevant
    for your country? 22
  7. Are any measures (political, legal, technical or other) being taken in your
    country to overcome threats to psychological security caused by the malicious use
    of artificial intelligence? What are these measures? 24
  8. Which of the threats to international psychological security caused
    by the malicious use of artificial intelligence do you consider the most relevant
    for Northeast Asia? 26
  9. How much does the malicious use of artificial intelligence increase
    the level of threat to international psychological security in Northeast Asia today? 29
  10. How much will the malicious use of artificial intelligence increase the level
    of threat to international psychological security in Northeast Asia by 2030? 30
  11. In which countries of Northeast Asia (no more than three) have the threats
    to international psychological security caused by the malicious use
    of artificial intelligence reached the highest level? Why? 30
  12. How well is the public in Northeast Asia aware of the threats
    to international psychological security caused by the malicious use
    of artificial intelligence? 32
  13. How do you assess the degree of readiness of state bodies of the countries
    of Northeast Asia to counter threats to international psychological security
    caused by the malicious use of artificial intelligence? 33
The Questionnaire for Experts “Malicious Use of Artificial Intelligence and Challenges to Psychological Security” 36
Expert Review by Evgeny Pashentsev 38
About the Experts 53

Introduction

Evgeny Pashentsev

Artificial intelligence (AI) technologies, despite their high significance for social development, raise threats to international psychological security (IPS) to a new level. There is a growing danger of AI being used to destabilize economies, political situations, and international relations through targeted, high-tech psychological impacts on people’s consciousness. Meanwhile, crisis phenomena are rapidly increasing in frequency, number, and severity worldwide.

There is no need to explain here why, in 2020, the Doomsday Clock was set to 100 seconds to midnight for the first time in history and remains unchanged in 2021 (Defcon Level Warning System, 2021). Nor is there a need to explain why the UN Secretary General is serving as a megaphone for scientists, warning bluntly that failure to slow global warming will lead to more costly disasters and more human suffering in the years ahead (Dennis, 2021). And there is no need to explain why the growth in the world’s billionaires’ fortunes from 8 to 13 trillion dollars in the crisis year of 2020 (Dolan, Wang & Peterson-Withorn, 2021)—against the backdrop of record economic decline in recent decades, hundreds of millions of newly unemployed people, and, according to the UN, the growth in the number of hungry people in the world from 690 million in 2019 (Kretchmer, 2020) to 811 million in 2020 (World Health Organization, 2021)—does not contribute to solving these and other acute problems of our time.

Economic problems, the degradation of democratic institutions, social polarization, and internal political and interstate conflicts against the backdrop of the ongoing COVID-19 pandemic, all under conditions of rapid AI development, create extremely favorable ground for the malicious use of AI (MUAI). MUAI is an intentional antisocial action, whether in explicit or implicit form. Antisocial circles (from individual criminals and criminal organizations to corrupt elements in government, financial and commercial structures, the media, terrorists, and neo-fascists) are already increasingly taking advantage of this situation, which is favorable to their purposes.

The manipulation of the public consciousness is especially destructive in historical moments of crisis. The inhumanity of fascism became apparent after the death of 50 million in the flames of the Second World War. However, the technology of manipulating the public consciousness, with the appropriate funding from certain corporate structures, ensured Hitler’s victory in the Reichstag elections in 1933—a distant year, but highly instructive for those alive today. It is hardly by accident that, today, the governments and parliamentarians of the USA, China, Russia, India, EU countries, and other states and associations to varying degrees and in different ways show growing concern about the threat of high-tech disinformation on the Internet and the role of leading media platforms that actively use AI technologies. The question is clear: can humanity collectively find a way out of an increasingly difficult situation with a quantitatively and, increasingly, qualitatively higher level of manipulation of the public consciousness?

In 2019, evidence of organized social media manipulation campaigns was found. These took place in 70 countries, up from 48 countries in 2018 and 28 countries in 2017 (CloudFlare, 2020). In each country, at least one political party or government agency had used social media to shape public attitudes domestically (Ibidem). Bots today have convincingly authentic online profiles and advanced conversational skills, and can appear to be legitimate users embedded in human networks. Some automated accounts are also partially managed by humans, using profiles known as “cyborgs” or “sock puppets” (Samuels & Akhtar, 2019).

The problem of the relationship between MUAI and IPS was first systematically raised by the author in a speech at a round table at the Ministry of Foreign Affairs of Russia in November 2018 (ICSPSC, 2018; Pashentsev, 2018). The topic was then developed in several publications, of both single authorship and co-authorship with colleagues (Averkin, Bazarkina, Pantserev & Pashentsev, 2019; Bazarkina, Dam, Pashentsev, Phan & Matiashova, 2021; Bazarkina & Pashentsev, 2019 and 2020; Pashentsev, 2019a, b and c; Pashentsev, 2020a and b; Pashentsev, 2021; Pashentsev & Bazarkina, 2021). The author considers it necessary, especially in modern international circumstances and taking into account the topic of this study, to focus on threats to IPS through MUAI, which, in real life, is in constant feedback and a position of mutual influence with the psychological security (PS) problem at the individual, group, and national levels.

New threats to agenda-setting and political stability are arising from the advantages of offensive and defensive psychological operations using AI. These advantages are increasingly associated with quantitative and qualitative departures from the traditional mechanisms of producing, delivering, and managing information; new possibilities for having psychological impacts on people; and the waging of psychological warfare. In particular, these advantages may include: (1) the volume of information that can be generated, (2) the speed at which information can be generated and distributed, (3) the believability of information, (4) the strength of the intellectual and emotional impacts that can be created, (5) the analytical data-processing capabilities that are available, (6) the use of predictive analytics resources based on AI, (7) the methods of persuasion that can be used, and (8) new capabilities for integration in the decision-making process. Based on a qualitative and rather approximate assessment of the data available from primary and secondary open access sources, the author draws the preliminary conclusion that advantages 1 and 2 have already been achieved, whereas advantages 3–8 are in the developmental stage at the operational level (Pashentsev, 2021, p. 143).

It should be noted that MUAI threats are growing in Northeast Asia (NEA). The region has not developed its own security system that would cover all countries of the region and serve all their interests. Negative psychological impacts associated with various aspects of national and international development are increasingly affecting the sociopolitical situation and interstate relations in NEA. Recently, the pace of development of AI technologies there, especially in China, Japan, and South Korea, has sharply increased. However progressive these achievements are, they also pose new challenges to IPS in the region that require a timely response from state and non-state, national and international structures and institutions.

Meanwhile, the current systemic analysis of the MUAI and IPS problem within the framework of international cooperation leaves much to be desired and is fragmentary within the framework of MUAI research—not counting the efforts of the international group of specialists founded in 2019 to study the threats to IPS through MUAI, the Research MUAI group. The members of this group have published over 40 articles in indexed international academic journals on the topic of this study (Pashentsev, 2019c).

The above circumstances prompted the author to conduct a targeted expert survey as part of the implementation of the research project “Malicious Use of Artificial Intelligence and Challenges to Psychological Security in Northeast Asia,” funded by the Russian Foundation for Basic Research (RFBR) and the Vietnam Academy of Social Sciences (VASS). The assessments given by nineteen experts from ten countries[1] obtained as a result of the expert survey and their subsequent analysis aim to highlight the most serious threats to IPS through MUAI and determine how dangerous these threats are to society, which measures should be taken to neutralize them, and what the prospects for international cooperation in NEA are. This survey attempts to determine whether MUAI will increase the level of threat to IPS by 2030. The experts paid special attention to the situation in NEA, where the practice of MUAI is based on a combination of a high level of development of AI technologies in leading countries and a complex of acute disagreements in the region.

The structure of this publication is designed in such a way that the reader can first get acquainted with the experts’ answers to the questions posed, and then with their analysis.

The author expresses his gratitude to the RFBR, which has made this research possible; the experts, who have devoted their valuable time to preparing answers to the questionnaire; and the author’s colleagues in the research project “Malicious Use of Artificial Intelligence and Challenges to Psychological Security in Northeast Asia”: Prof. Darya Bazarkina, leading researcher at the Institute of Europe of the Russian Academy of Sciences (Moscow); Dr. Viet Dam, lecturer at HSE University (Moscow); Yuri Kolotaev and Ekaterina Mikhalevich, doctoral students at Saint Petersburg State University; and master’s degree student Darya Matiashova, for their help in forming an international knowledge base of experts on the topic of the survey. The author is also grateful to his colleagues at the International Center for Social and Political Studies and Consulting (ICSPSC), which allowed this publication to come to fruition.

November 29, 2021

References

Averkin, A., Bazarkina, D., Pantserev, K., & Pashentsev, E. (2019). Artificial Intelligence in the Context of Psychological Security: Theoretical and Practical Implications. Proceedings of the 2019 Conference of the International Fuzzy Systems Association and the European Society for Fuzzy Logic and Technology (EUSFLAT 2019), 1, 101-107. doi: 10.2991/eusflat-19.2019.16

Bazarkina, D., Dam, V., Pashentsev, E., Phan, K., & Matiashova, D. (2021). The Political Situation in Northeast Asia and Threats of Malicious Use of Artificial Intelligence: Challenges to Psychological Security. Sotsialno-Gumanitarniye Znaniya (Social and Humanitarian Knowledge), 4, 212-234. doi: 10.34823/SGZ.2021.4.51655

Bazarkina, D., & Pashentsev, E. (2019). Artificial Intelligence and New Threats to International Psychological Security. Russia in Global Affairs, 17(1). doi: 10.31278/1810-6374-2019-17-1-147-170

Bazarkina, D., & Pashentsev, E. (2020). Malicious Use of Artificial Intelligence. New Psychological Security Risks in BRICS Countries. Russia in Global Affairs, 18(4), 154-177. doi: 10.31278/1810-6374-2020-18-4-154-177

CloudFlare. (2020). How to Manage Good Bots. Good Bots vs. Bad Bots. Retrieved 5 November 2021, from https://www.cloudflare.com/learning/bots/how-to-manage-good-bots/

Defcon Level Warning System. (2021). Current Doomsday Clock Official Time Today. Retrieved 5 November 2021, from https://www.defconlevel.com/doomsday-clock.php#:~:text=January%2023%2C%202020%20to%202021%20%28Current%29%3A%20Doomsday%20Clock,2021.%20Click%20the%20change%20reasons%20to%20see%20why

Dennis, B. (2021). The U.N. chief’s relentless, frustrating pursuit to bring the world together on climate change. Retrieved 5 November 2021, from https://www.washingtonpost.com/climate-environment/2021/10/25/antonio-guterres-climate-change/

Dolan, K., Wang, J., & Peterson-Withorn, C. (2021). The Forbes World’s Billionaires list. Retrieved 5 November 2021, from https://www.forbes.com/billionaires/

ICSPSC. (2018). Prof. Evgeny Pashentsev spoke on Artificial Intelligence and Issues of National and International Psychological Security at the round table at the Ministry of Foreign Affairs of the Russian Federation. Retrieved 14 November 2021, from https://www.academia.edu/37933317

Kretchmer, H. (2020). Global hunger fell for decades, but it’s rising again. Retrieved 5 November 2021, from https://www.weforum.org/agenda/2020/07/global-hunger-rising-food-agriculture-organization-report/

Pashentsev, E. (2018). Artificial Intelligence and Issues of National and International Psychological Security. Retrieved 14 November 2021, from https://www.alainet.org/en/articulo/196926

Pashentsev, E. (2019a). Destabilization of Unstable Dynamic Social Equilibriums through High-Tech Strategic Psychological Warfare. In N. Van der Waag-Cowling & L. Leenen (eds.), Proceedings of the 14th International Conference on Cyber Warfare and Security ICCWS 2019 Hosted By Stellenbosch University and the CSIR, South Africa, 28 February – 1 March 2019 (pp. 322–328). Reading, UK: Academic Conferences and Publishing International Limited.

Pashentsev, E. (2019b). Malicious Use of Artificial Intelligence: Challenging International Psychological Security. In P. Griffiths & M. Nowshade Kabir (eds.), Proceedings of the European Conference on the Impact of AI and Robotics, 31 October – 1 November 2019 at EM-Normandie Business School, Oxford (pp. 238–245). Reading, UK: Academic Conferences and Publishing International Limited.

Pashentsev, E. (2019c). The Work of an International Group of Experts on Threats for International Psychological Security (IPS) by Malicious Use of Artificial Intelligence (MUAI). Retrieved 5 November 2021, from http://globalstratcom.ru/wp-content/uploads/2019/10/Новость-2-АНГЛ.pdf

Pashentsev, E. (2020a). AI and Terrorist Threats: The New Dimension for Strategic Psychological Warfare. In D. Bazarkina, E. Pashentsev & G. Simons (eds.), Terrorism and Advanced Technologies in Psychological Warfare: New Risks, New Opportunities to Counter the Terrorist Threat (1st ed., pp. 83–115). New York: Nova Science Publishers.

Pashentsev, E. (2020b). Malicious Use of Deepfakes and Political Stability. Abstracts of Papers Presented at the European Conference on the Impact of Artificial Intelligence and Robotics ECIAIR 2020, 82.

Pashentsev, E. (2021). The Malicious Use of Artificial Intelligence through Agenda Setting: Challenges to Political Stability. In F. Matos (ed.), Proceedings of the 3rd European Conference on the Impact of Artificial Intelligence and Robotics ECIAIR 2021. A Virtual Conference Hosted by ISCTE Business School, Instituto Universitário de Lisboa, Portugal. 18–19 November 2021 (1st ed., pp. 138-144). Reading, UK: Academic Conferences International Limited.

Pashentsev, E., & Bazarkina, D. (2021). The Malicious Use of Artificial Intelligence against Government and Political Institutions in the Psychological Area. In D. Bielicki (ed.), Regulating Artificial Intelligence in Industry (1st ed., pp. 36–52). London and New York: Routledge.

Samuels, E., & Akhtar, M. (2019). Are ‘Bots’ Manipulating the 2020 Conversation? Here’s What’s Changed Since 2016. Retrieved 5 November 2021, from https://www.washingtonpost.com/politics/2019/11/20/are-bots-manipulating-conversation-heres-whats-changed-since/#comments-wrapper

World Health Organization. (2021). UN report: Pandemic year marked by spike in world hunger. Retrieved 5 November 2021, from https://www.who.int/news/item/12-07-2021-un-report-pandemic-year-marked-by-spike-in-world-hunger

Questions and Answers by the Experts

  1. What threats to psychological security caused by the malicious use of artificial intelligence do you consider the most relevant for the modern world? Why?

Vian Bakir and Andrew McStay

A potential and growing threat to the psychological security of individuals, groups, and nations is the use of AI to data-mine the body for the purposes of profiling: this adds a new, and more invasive, layer of surveillance to the already existing surveillance of our sentiments and emotions online. Termed “emotional AI,” this is a weak form of AI in that it is designed to gauge and react to human emotions through text, voice, computer vision, and biometric sensing (a minimal sketch of the simplest, text-based form follows the list below). It is a potential threat because of its fast adoption across the world; its methodological problems and the drive toward ever-increasing data surveillance (e.g., of users’ lives in broader contexts) to fix these problems; context-specific public acceptance or rejection of this technology; regulation that is unprepared for its rollout, but that is starting to recognize its dangers in the wrong hands; and already-evidenced examples of worst-case scenarios in dominant forms of emotional AI.

a) The emotional AI sector is growing fast across the world, and has been rolled out across multiple domains in society including transport, education, border control, health, workplaces, insurance, communication, and entertainment. Social media is already a dominant form of emotional AI (as it is designed to deploy AI to be maximally engaging to users, thereby making emotional expression viral on social media), but there are many other emergent forms.

b) It promises much (to be able to read the emotional state of humans via a variety of inputs), but, to date, the methodology underpinning the commercial deployment of the technology is weak, making its accuracy claims suspect. Nevertheless, this does not appear to be slowing its rollout across the world.

c) The emotional AI industry is aware of its methodological weaknesses; thus, industry leaders such as Microsoft are turning to methods that make greater use of social contextual awareness and surveil even more data. The methodological fix, then, is likely to lead to intensified surveillance.

d) National surveys conducted by the Emotional AI Lab show that the UK public is not against deployment of emotional AI in certain contexts (e.g., for greater personalization and safety in cars) but rejects it in areas where there can be abuse of power (e.g., for the purposes of microtargeting in political campaigns or the surveillance of workers in the workplace).

e) Regulation is not yet ready for the increasing use of emotional AI or for distinguishing services of potentially pro-social value from less desirable applications. Nonetheless, the EU (which has the world’s strictest privacy protections) published its draft AI regulations in 2021 to form the basis of its legislative program over the next few years, wherein the use of AI for manipulative purposes is banned, and emotional AI is seen as being of either ‘limited’ or ‘high’ risk.

f) Worst-case scenarios of the malicious use of emotional AI as it applies to psychological security include the use of dominant forms of emotional AI (social media) to spread false, emotive, divisive narratives at moments of importance to the civic body, such as elections, referenda, and communal and religious gatherings where people from different social strata mix. Scholarship has already documented such activities across the world.
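To make the notion of text-based emotional AI concrete, here is a minimal, illustrative sketch of its simplest form: lexicon-based emotion scoring. The lexicon, emotion categories, and scoring rule below are invented for demonstration and do not represent any vendor's system; commercial deployments use trained classifiers over far richer signals, which is exactly where the methodological weaknesses discussed above arise.

```python
# Minimal sketch of lexicon-based emotion scoring, the simplest form of
# text-based "emotional AI". The lexicon below is a toy example; real
# systems use trained classifiers over far richer features.
from collections import Counter

EMOTION_LEXICON = {
    "angry": "anger", "furious": "anger", "outrage": "anger",
    "afraid": "fear", "scared": "fear", "threat": "fear",
    "happy": "joy", "glad": "joy", "hope": "joy",
    "sad": "sadness", "grief": "sadness", "loss": "sadness",
}

def score_emotions(text: str) -> Counter:
    """Count emotion-bearing words in a text and tally them by category."""
    tokens = [t.strip(".,!?;:").lower() for t in text.split()]
    return Counter(EMOTION_LEXICON[t] for t in tokens if t in EMOTION_LEXICON)

if __name__ == "__main__":
    post = "I am furious about this outrage, and scared of what comes next!"
    print(score_emotions(post))  # Counter({'anger': 2, 'fear': 1})
```

Even this toy version shows why accuracy claims are fragile: word counting cannot see negation, irony, or context, so richer (and more surveillance-hungry) inputs are pursued to compensate.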

Raynel Batista Tellez

a) Automation of social engineering advertisement practices: Users’ online personal information is used to automatically generate custom malicious websites, emails, and links they would be likely to click on, convinced by chatbots that users may trust as another “real” person, even in a video chat. (A small defensive sketch follows this list.)

b) Robot users or fake people: Some forms of robots, such as drones or chatbots for customer service, imitate human behavior and record massive amounts of data beyond human limitations, simulating natural human language and behaviors through cognitive automation techniques.

c) Automation of hacking: AI is used to improve target selection and prioritization for hacking purposes, evade detection, and creatively respond to changes in the target’s behavior. Autonomous software has long been able to exploit vulnerabilities in systems, but more sophisticated AI hacking tools may exhibit much better performance both compared to what has historically been possible and, recently, compared to humans.

d) Fake news reports using deepfake technology: Highly realistic videos are made of state leaders appearing to make inappropriate comments they never actually made, but that viewers usually trust.

e) Automating influence campaigns: AI-enabled analysis of social networks is leveraged to identify key influencers, who can then be approached with malicious offers or targeted with disinformation.

f) Automated disinformation campaigns: Individuals are targeted in swing districts with personalized messages to affect their voting behavior.
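As a defensive counterpart to the automation threats listed above, one long-standing heuristic for spotting automated accounts is the unnatural regularity of their activity. The sketch below is illustrative only: the feature (inter-post interval regularity) and the 0.3 threshold are assumptions for demonstration, not a production bot detector.

```python
# Illustrative heuristic: automated accounts often post at suspiciously
# regular intervals, while human posting times are bursty and irregular.
# The 0.3 threshold is an assumption for demonstration only.
from statistics import mean, stdev

def interval_regularity(post_times: list[float]) -> float:
    """Coefficient of variation of inter-post gaps (low = machine-like)."""
    gaps = [b - a for a, b in zip(post_times, post_times[1:])]
    return stdev(gaps) / mean(gaps)

def looks_automated(post_times: list[float], threshold: float = 0.3) -> bool:
    return interval_regularity(post_times) < threshold

if __name__ == "__main__":
    bot = [0, 600, 1200, 1805, 2400, 3000]    # posts roughly every 10 minutes
    human = [0, 45, 2000, 2060, 9000, 25000]  # bursty, irregular activity
    print(looks_automated(bot), looks_automated(human))  # True False
```

Real detectors combine many such signals (content similarity, network structure, account age), precisely because any single heuristic is easy for "cyborg" accounts to evade.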

Robert Borkowski

I consider shifting the mood of the masses to be the greatest threat. New artificial intelligence tools allow much more effective, and invisible, manipulation of social moods and attitudes, which carries a significant threat in political life. I also do not rule out the possibility that new generations of terrorists will reach for the MUAI arsenal.

Anna Bychkova

a) The aggravation of the problem of dependence upon users’ information for the purpose of improving artificial intelligence algorithms that adapt to users’ interests.

b) The ability of a well-trained artificial intelligence algorithm to generate content that is perceived as having been created by a person and evokes certain emotions.

c) The disunity of users on social networks due to the “echo chamber” and the “information bubble” (“filter bubble”).

The producers of information are at the same time its consumers; they receive physiological and emotional pleasure from seeking a response to their content, expressed in the number of views, likes, and comments. Artificial intelligence algorithms are configured to promote the most radical views, because it is such content that receives the greatest response on social networks. Algorithms like Generative Pre-trained Transformer 2 (GPT-2) that automatically write emotionally colored comments on a given topic, which can be perceived as having been written by a person (cf. Expressive Text to Speech, IBM Watson Tone Analyzer), can be used to pursue political, economic, and extremist goals. Together, this provokes the polarization of public opinion, creates obstacles to reaching compromises, and, as a result, can be used to create an atmosphere of hatred and hostility.
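The feedback loop behind the "filter bubble" can be made concrete with a toy simulation (all constants are invented for illustration): a recommender that maximizes engagement serves each user content slightly more extreme than the user's current view, and an initially near-neutral population drifts to the poles.

```python
# Toy simulation of the "filter bubble" feedback loop: an engagement-
# maximizing recommender serves each user content slightly more extreme
# than the user's current opinion, and opinions polarize over time.
# Opinions live on [-1, 1]; all constants are illustrative assumptions.
import random

def recommend(opinion: float, pull: float = 0.1) -> float:
    """Serve content a bit more extreme than the user's view (it engages best)."""
    target = opinion + pull * (1 if opinion >= 0 else -1)
    return max(-1.0, min(1.0, target))

def step(opinions: list[float], adoption: float = 0.5) -> list[float]:
    """Each user shifts partway toward the content they were served."""
    return [o + adoption * (recommend(o) - o) for o in opinions]

if __name__ == "__main__":
    random.seed(1)
    opinions = [random.uniform(-0.2, 0.2) for _ in range(1000)]  # near-neutral start
    for _ in range(50):
        opinions = step(opinions)
    # After repeated exposure, almost everyone sits near -1 or +1.
    print(f"mean |opinion| after 50 steps: {sum(map(abs, opinions))/1000:.2f}")
```

The point of the sketch is that no actor needs to intend polarization: optimizing engagement alone is enough to push a near-neutral population toward the extremes.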

Matthew Crosston

I do not agree entirely with the question asked as is.

Svetlana S. Gorokhova

The malicious use of AI technologies can cover the entire range of threats aimed at infringing upon the psychological security of an individual, different social groups, and society as a whole. Such an impact can affect emotional perceptions of surrounding reality and deform cognitive perceptions of information, even if the individual or group is aware of the intentional distortion. In addition, under the influence of these processes, the deformation of human consciousness as a whole is quite likely. Moreover, the vector of deformational changes will directly depend upon the goal of the attackers, and may encompass the entire spectrum of social deviation, starting with the development of religious (sectarian) fanaticism, and ending with the formation of extremist views. These become the most significant threats in connection with the emerging opportunities for the malicious use of AI technologies for selfish and criminal purposes.

Nguyen Quoc Hung

The use of AI for criminal and terrorist activities can be considered the most dangerous threat. The rapid progress in the field of AI increases the risk of this technology being applied to carry out automated attacks by criminals. The malicious use of AI poses threats to digital and political security, allowing perpetrators to carry out large-scale and potentially lethal attacks. The cost of conducting attacks may be lower when AI is used to perform tasks that would otherwise require human participation.

Pavel Karasev

The use of information and communications technologies (ICT) by some leading countries as a means of achieving foreign and domestic policy objectives is a confirmed fact. In general, an increase in the use of ICTs for malicious information-based influence is a threat in itself. Technologies for the preparation and distribution of content are constantly improving, for example in terms of information targeting, user profile analysis, “fake news,” and the employment of opinion leaders to replicate this news. Skillfully crafted and carefully targeted information can have great effects on the opinions and perceptions of any population. The use of artificial intelligence (AI) for these tasks (namely the analysis of big data and the creation of texts) allows for a near-instant response and adaptation to the changes in the current situation and subtle, effective and, more importantly, routine manipulations of social behavior. The operations are carried out according to tailored scenarios and at the pace necessary for the earliest possible introduction of the given narratives into the minds of opinion leaders, denying the targeted side a prompt reaction to the information attack. This threat is the most relevant for the modern world, and its implications call for thorough consideration and study.

Alexander Raikov

Invulnerable and latent information attacks are the most dangerous threats in the modern psychological climate. An artificial intelligence (AI) system can target the psychological aspects of citizens’ lives. For example, the medical care system can seem unfair in certain regions of a country. This can be linked to various psychological factors that figure in this field, for example the influence of local personalities. If a region has a major medical center with a charismatic chairperson of medicine who has trained generations of residents to practice in a particular way, and those residents mostly end up working within a certain radius of the center, then people will see the effect of the chairperson in their area. People are affected by the situation in which they live. And when people from a neighboring region understand that they do not have such a medical center with a charismatic chairperson, psychological stress may arise, increasing the likelihood of protest. A special AI system can reveal such a situation by analyzing big data and generating special information about the unfair inequality of regions for malicious dissemination in the “unfairly offended” region.
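The analytic step this scenario presupposes is not exotic. A hedged illustration (the regions, metric values, and z-score threshold are toy assumptions) of how easily "unfairly offended" regions could be surfaced from open statistics:

```python
# Illustrative sketch: flag regions whose service metric deviates sharply
# from the national mean. Data and the z-score threshold are toy values;
# the point is how little analysis such "grievance targeting" requires.
from statistics import mean, stdev

doctors_per_10k = {           # hypothetical regional health-service metric
    "Region A": 52, "Region B": 48, "Region C": 50,
    "Region D": 21, "Region E": 49, "Region F": 51,
}

def flag_outliers(metric: dict[str, float], z_cut: float = 1.5) -> list[str]:
    """Return regions sitting far below the national mean (z-score test)."""
    mu, sigma = mean(metric.values()), stdev(metric.values())
    return [r for r, v in metric.items() if (v - mu) / sigma < -z_cut]

print(flag_outliers(doctors_per_10k))  # ['Region D']
```

The same trivial computation serves defenders: monitoring which regional grievances are statistically plausible helps anticipate where manufactured "unfairness" narratives will land hardest.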

Marina Reshetnikova

The urgency of addressing psychological security problems has consistently increased over the past decade. The use of artificial intelligence (AI) has played a primary role in this, with its targeted, high-tech impact on the consciousness of citizens. The development of mathematical support for various types of programs, such as deepfake software or mobile applications like Jinri Toutiao, has only intensified the negative impact on IPS in general. An essential role in the spread of the malicious use of AI is played by the creation of various groups of chatbots under state patronage. The tasks of these groups are very different—from destabilizing the situation within a particular group of civilians to influencing the decisions of international organizations.

Last but not least is the role that state structures play in the growth of problems for IPS. Not long ago, the meme about “Big Brother” watching everyone was more of a joke, but during the COVID-19 period the situation concerning “Big Brother” technologies—a specific set of modern AI-based technologies capable of following any mass of people arbitrarily, broadly, and penetratingly—significantly escalated. Under the guise of slogans about care and the population’s health, governments often solve their internal political problems. Moreover, this has been noticed not only in countries under authoritarian regimes, but also in those with liberal governments. There is no need to look far for examples. There was a scandal in the southern provinces of Italy, where face scanners were installed in shopping centers under the guise of being temperature-measuring devices. Another scandal surrounded the Israeli company NSO, whose Pegasus software could be used to spy on citizens. Attention has been drawn to the fact that this software is available in the public domain: anyone can, completely legally, under contract with NSO, purchase it. At the center of this scandal was the murder in Istanbul of the well-known Saudi opposition journalist Jamal Khashoggi, whom the Saudi intelligence services had been monitoring with the help of the Israeli Pegasus software. This case caused a massive international scandal, and, under public pressure, the Israeli Ministry of Defense raised the question of revoking the NSO corporation’s license to further develop this software.

Vitali Romanovski

In my opinion, accelerated digitalization and the digital divide are among the most relevant threats. The pandemic has underscored the need for the flawless, uninterrupted operation of data processing infrastructure for the effective functioning of social and economic systems. The overall dependence of public services, business processes, and personal well-being on digital infrastructures and their security has become more apparent. As a result, individuals’ and communities’ physical surroundings and psychological safety have become more sensitive and vulnerable to malicious cyber- and cognitive attacks. At the same time, growing digitalization and digital interdependence have highlighted the cyber vulnerabilities of regional economies. The digital divide has raised the issue of unequal distribution of profits between digital “haves” and “have-nots.” Competition for technological dominance has gained momentum and exacerbated military–political contradictions between states and socioeconomic contradictions within societies. Malicious actors will likely continue to capitalize on these trends. It is increasingly feasible that AI technologies will appear in their toolbox shortly.

Sergey A. Sebekin

The possibilities of using AI for malicious purposes will increase in the future. There have already been cases of computer programs managing to convince a person that they are talking with a real person. The question is when advanced narrow AI will be able not only to convince a person of its reality, but also to make that person act at the pleasure of whoever is using the AI for malicious activities. This has become more important now that AI is actively used for psychological brainwashing and social engineering, and it will most likely be used for antisocial purposes on a wider scale in the near future.

Pierre-Emmanuel Thomann

The most relevant threat for the future is the geopolitical one; that is, the use of artificial intelligence to impose a unipolar world and prevent the emergence of a more multipolar one. The malicious use of AI to enhance state-sponsored acts of terrorism is also a relevant threat.

Marius Vacarelu

The main problems can be described by the Latin expression bellum omnium contra omnes, the war of all against all. It refers to the increased desire to confront other people, putting all other things aside.

  2. How much does the malicious use of artificial intelligence increase the level of threat to international psychological security today?
Responses (18 experts: Bakir V. and McStay A., Batista Tellez R., Borkowski R., Bychkova A., Crosston M., Gorokhova S., Hung N. Q., Karasev P., Raikov A., Reshetnikova M., Romanovski V., Sebekin S., Thomann P.-E., Vacarelu M., Expert from Belgium, Expert from Russia, Expert from Vietnam #2, Expert from Vietnam #3): Strongly – 5; Noticeably – 9; Only slightly – 4; Not at all – 0.

Vian Bakir and Andrew McStay

On the basis of the question specifying “today,” we suggest the answer “noticeably.” Biometric forms of emotional AI are still emergent and being trialed by governments across the world. However, in geographic areas that have few limitations on the state’s surveillance of populations, the trials are already raising human rights concerns, with some calling for such technologies to be totally banned. Scholars have also documented efforts by international and national actors to use social media (themselves a form of emotional AI) to sow emotive, false, and divisive narratives among populations during elections and referenda, honing and targeting these to specific audiences (e.g., this has been documented in Brazil, the USA, Spain, the UK, and Nigeria). However, their real-world influence (i.e., whether these efforts have actually tipped elections) has not yet been proven (and may never be, given the complexity of disentangling what makes someone vote a certain way, or vote at all).

  3. How much will the malicious use of artificial intelligence increase the level of threat to international psychological security by 2030?
Responses (the same 18 experts): Strongly – 9; Noticeably – 9; Only slightly – 0; Not at all – 0.

Vian Bakir and Andrew McStay

The malicious use of dominant forms of emotional AI (e.g., using social media to spread divisive and deceptive narratives) will only increase in the very many areas of the world that have minimal regulations on social media, low levels of digital literacy, and pre-existing social tensions that can be stoked. Even if people are resistant to being influenced by such profiling systems, the attempts to influence people will spread. Mere awareness of these attempts could be enough to erode trust in the democratic process and the legitimacy of electoral results. For instance, President Trump’s repeated false claims of voter fraud led to an attempt by disaffected Trump supporters in the USA to overturn the election of Joe Biden in the Capitol Hill attempted insurrection of 2021.

  4. What measures (political, legal, technical or other) do you consider to be important to neutralize the threat to international psychological security caused by the malicious use of artificial intelligence?

Vian Bakir and Andrew McStay

Technical means are important. For instance, we need to be able to automatically detect false and hateful content online in order to assess it, flag it as problematic (e.g., having users make their own decisions), or remove it. (Whether these content moderation decisions should be performed by the technology platforms or the government has no easy answer, as both could lead to undue censorship and abuse of platform or governmental power.) Automation is necessary because of the sheer scale and speed of the circulation of hateful, deceptive information. However, humans must always be in the loop when such decisions are made due to the nuanced nature of deceptive and hateful content and the importance of the human right of freedom of speech.
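A minimal sketch of the human-in-the-loop arrangement described above, assuming an automated classifier that only triages at scale while humans make the consequential calls (the scoring function, thresholds, and marker phrases are placeholders, not a real hate or deception classifier):

```python
# Sketch of a human-in-the-loop moderation pipeline: automation handles
# scale, humans make removal decisions. The scoring function is a
# placeholder; real systems use trained classifiers.
from dataclasses import dataclass

@dataclass
class Decision:
    post_id: int
    action: str  # "allow", "flag", or "escalate_to_human"

def harm_score(text: str) -> float:
    """Placeholder for a trained hate/deception classifier (0..1)."""
    toy_markers = ("fraudulent", "traitors", "fake result")
    return min(1.0, sum(m in text.lower() for m in toy_markers) / 2)

def triage(post_id: int, text: str) -> Decision:
    score = harm_score(text)
    if score < 0.3:                       # clearly benign: allow at scale
        return Decision(post_id, "allow")
    if score < 0.7:                       # uncertain: label for users, keep up
        return Decision(post_id, "flag")
    return Decision(post_id, "escalate_to_human")  # never auto-remove

print(triage(1, "Lovely weather today"))
print(triage(2, "The fake result proves they are traitors, it was fraudulent"))
```

The design choice worth noting is that the automated path can only allow or flag; anything severe enough to warrant removal is routed to a human reviewer, reflecting the speech-rights concern raised above.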

Legal means are important. For instance, the EU draft of AI regulations (2021) usefully considers the levels of risk that various AI technologies may pose, and presents a scale of prohibited, high risk, and low risk activities. Each of these risk categories imposes specific obligations on those developing and deploying AI in society (rather than placing the burden on citizens to first prove harm). This more precautionary approach seems like a good start. For instance, it proposes that prohibited AI include that which “deploys subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behavior in a manner that causes or is likely to cause that person or another person physical or psychological harm” (Title II Article 5). Arguably, this could include the use of social media to spread divisive and false narratives to swing an election (e.g., voter suppression strategies for people of color in the USA). Also prohibited is “the placing on the market, putting into service or use of an AI system that exploits any of the vulnerabilities of a specific group of persons due to their age, physical or mental disability, in order to materially distort the behavior of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm” (Title II Article 5). This could cover the targeting of people on social media according to their psychological vulnerabilities (e.g., propensity to believe conspiracy theories). Title IV also imposes transparency obligations on “AI systems” that are used to detect emotions or determine association with (social) categories based on biometric data: they “must take account of the specific risks of manipulation they pose”.

Education is also important. People’s digital literacy needs to increase for them to understand what profiling is, how it may be achieved, and to what ends it may be used. For instance, the UK’s 2016 “Brexit” referendum saw “dark ads” (online advertisements only seen by targeted recipients) being discussed in public for the first time, but three years later, by the time of the UK’s 2019 General Election, many were still unaware of these techniques. YouGov survey research commissioned by the non-governmental organization Open Rights Group showed that although 54% of the UK population was aware of how political parties target or tailor adverts based on the analysis of their personal data (political microtargeting), almost a third (31%) were not very aware or not aware at all. Only 44% of the national sample were very or fairly aware of “dark ads,” with a similar figure (41%) being not very aware or not aware at all (Open Rights Group 2020). That awareness remains relatively low after several years of public discourse on this issue is alarming: it shows that a significant proportion of the electorate is unaware of how parties may attempt to manipulate them. Greater transparency of AI systems would be a positive first step, but educators and communicators (e.g., journalists and filmmakers) are needed to relay to audiences the social significance of these systems and the perils of the lack of transparency.

Raynel Batista Tellez

First, effective international cooperation must be promoted: MUAI elevates threats to IPS to a qualitatively new level, which requires an adequate assessment of, and reaction from, society. Comprehension of the new, cross-cultural threats of MUAI leads to the formulation of large-scale strategies to protect sovereignty; enforces regional roles for building consensus, engagement, and international collaboration; and forces experts to acknowledge the ethical implications of surveillance, persuasion, and physical target identification for regional equilibrium.

A cross-cultural approach to MUAI supports the idea of cultural competency as a mechanism of social influence and frames the distribution of power from the perspective of security as a sociocultural phenomenon. AI is highly likely to have different social impacts on the regional geopolitical balance, depending on people’s cultural settings as shaped by customs, values, and behaviors.

“Cultural knowledge” refers to individuals knowing about some cultural characteristics, history, values, beliefs, and behaviors of other ethnic or cultural groups.

A strategy to neutralize the threat to international psychological security caused by the malicious use of AI must also include a cross-cultural approach, defining three levels of interaction:

a) Cultural awareness is the stage of understanding other groups. It involves being open to the idea of changing cultural attitudes.

b) Cultural sensitivity is knowing that differences exist between cultures, but not assigning values to those differences (better or worse, right or wrong). Clashes can easily occur at this level, especially if a custom or belief in question goes against the idea of multiculturalism. Internal conflict (intrapersonal, interpersonal, and organizational) is likely to occur at times over this issue. Conflict will not always be easy to manage, but it can be made easier if everyone is mindful of the organizational goals.

c) Cultural competence brings together the previous stages—and adds operational effectiveness. A culturally competent organization has the capacity to bring into its system many different behaviors, attitudes, and policies and work effectively in cross-cultural settings to produce better outcomes.

Robert Borkowski

Much depends on the sound and honest policy of state authorities on counteracting MUAI threats, although counteracting, for example, the spread of fake news is extremely difficult. Moreover, politicians must have a strong will to take an honest approach to counteracting this threat and have a real understanding of the threat, which, unfortunately, is not visible in many countries. Social education and the development of society’s awareness of threats and rational attitudes are very important.

Anna Bychkova

Today, we are witnessing the desire of both the state and the technocratic sector to replace the human factor in the search for, neutralization, and removal of certain content with technologies based on machine learning. Artificial intelligence becomes both regulator and judge: the program trains on the “big data” it is provided according to programmed rules, so that later it is the program that decides the “fate of the content”: delete, redo, or leave unchanged. Thus, technologies in fact not only determine which content will be in demand by users now and in the near future, but also represent an analogue of the “state regulator,” which can, based on the “conclusions” of artificial intelligence, determine the future fate of media resources. A comprehensive approach is needed, based on the analysis of existing threats and forecasts of the main trends: a combination of political, legal, and technological measures, as well as measures in the field of educational policy aimed at improving media literacy and preventing information dependence.

Matthew Crosston

Other: educating people to understand how much the malicious use of AI capitalizes on a fundamental failure of people to discern information and analyze context, rather than confirming MUAI as a true explicit threat on par with kinetic weapons.

Svetlana S. Gorokhova

At our current stage of historical development, we cannot and should not repeat the mistakes that humanity made in less enlightened eras. Therefore, even at these early stages of the widespread introduction of new technologies into our lives, it is necessary to lay down appropriate norms in the legal field that not only provide the possibility of imposing retrospective responsibility on perpetrators, but also consider the prospects of establishing historical (prospective) responsibility for those engaged in the development and implementation of potentially dangerous and fundamentally new technologies that were simply impossible before. This prospective responsibility can, first of all, be expressed in the inclusion, in the relevant normative legal acts, of rules, duties, and prohibitions aimed at the general prevention of possible harm that may in the future be caused to citizens in their interaction with newly introduced technologies equipped with AI.

Nguyen Quoc Hung

a) Effectively combating the actions of hostile forces and criminals who violate information security.

b) Concentrating resources to create and gradually develop the information technology industry, especially the information security (cybersecurity) industry in Vietnam.

Pavel Karasev

One priority should be the creation of a monitoring system and the timely identification of signs of information influence operations. Taking into account the fact that the cognitive capabilities of any one individual are insufficient for analyzing and comprehending a huge volume of information in the global media sphere, it would be necessary to use AI technologies to develop this system. Another major task should be ensuring the capacity to provide a timely response to signs of upcoming information and political operations, including the refutation of fake news. It is important to realize that the challenge of countering information operations cannot be solved by only technical or legal means. Alternative narratives are needed, and their creation requires the convergence of disciplines from different branches of science – humanitarian, social, technical, and natural. To build accurate models that can form the foundation for machine learning, it is necessary to translate the current achievements of psychology, sociology, political science, and other humanities into the language of mathematics and computer programs.
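
As a hedged illustration of one signal such a monitoring system might compute, the sketch below (an assumption for this review, not a design from the report) flags clusters of near-duplicate posts spread across many distinct accounts, one crude indicator of a possible coordinated information operation among the many that a production system would combine:

```python
# Hypothetical sketch of one monitoring signal: bursts of near-duplicate
# posts across many accounts as a possible sign of a coordinated
# information operation. Thresholds and data layout are assumptions.
import re

def shingles(text, n=3):
    """Word n-grams used as a cheap near-duplicate fingerprint."""
    words = re.sub(r"[^\w\s]", "", text.lower()).split()
    return frozenset(tuple(words[i:i + n])
                     for i in range(max(len(words) - n + 1, 1)))

def jaccard(a, b):
    """Set overlap in [0, 1]; 1.0 means identical fingerprints."""
    return len(a & b) / len(a | b) if a | b else 0.0

def detect_bursts(posts, sim=0.7, min_accounts=3):
    """Greedily cluster near-duplicate texts, then keep only clusters
    spread across at least min_accounts distinct accounts."""
    clusters = []
    for post in posts:
        fp = shingles(post["text"])
        for cluster in clusters:
            if jaccard(fp, shingles(cluster[0]["text"])) >= sim:
                cluster.append(post)
                break
        else:
            clusters.append([post])
    return [c for c in clusters
            if len({p["account"] for p in c}) >= min_accounts]

if __name__ == "__main__":
    posts = [
        {"account": "a1", "text": "The election was rigged, share this now"},
        {"account": "a2", "text": "the election was rigged share this now!"},
        {"account": "a3", "text": "The election was rigged share this NOW"},
        {"account": "a4", "text": "Nice weather in the park today"},
    ]
    for group in detect_bursts(posts):
        accounts = {p["account"] for p in group}
        print(f"possible coordinated burst across {len(accounts)} accounts")
```

A production system would fuse many such heuristics with trained models, in line with Karasev’s point that findings from psychology and sociology must first be translated into the language of mathematics before machine learning can use them.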

Alexander Raikov

Political international collaboration on this topic is very important. I think that a special agreement has to be created and approved; the disparity in ethical codes can be analyzed and reconciled to make this agreement possible. Special security technologies must be used to neutralize the threat to international psychological security, and the new results of scientific studies in the field of hybrid, strong, general, and super AI must be taken into account while creating the methods and tools for this neutralization. In a modern, multi-level economic system, the variety of approaches, management models, and feedback loops means that the corresponding management systems respond uniquely to changing conditions and factors of development and security. We cannot manage security if we do not have information about events, or the knowledge, including implicit and hidden knowledge, that allows us to analyze and interpret events and make adequate decisions. AI systems can detect such events and can therefore be used to maliciously manipulate feedback, causing irreparable economic and psychological damage to countries. Ironically, AI systems are also the only technological measure for neutralizing the threat to international psychological security caused by the malicious use of AI.

Marina Reshetnikova

In the current situation, political, legal, technical, or other actions directed toward IPS can hardly be expected from government agencies. The post-COVID-19 situation plays an important role in this: the fight against the pandemic is today undoubtedly the most important task facing the governments of almost all countries worldwide. However, this raises the question of whether they are using the pandemic to solve their domestic political problems, and here again we return to the controversy over the spread of “Big Brother” technologies. The only way to neutralize the threats posed by the malicious use of AI is to create international public organizations and associations dedicated to monitoring and controlling the use of such technologies.

Vitali Romanovski

The active role of national governments, interstate cooperation, and private sector involvement are essential to developing strategies to counter the employment of AI technologies by malicious actors. It is important to enhance interagency collaboration and information exchange on the application of AI and other digital technologies in the national security sphere. Governmental entities should also develop policies to increase the population’s resilience to offensive cognitive operations by other states and non-state actors.

Sergey A. Sebekin

Since AI has been used to exert psychological influence, it would be logical to assume that the weak element is the person, not the technologies that people use and through which it is possible to influence others. It is important that people be taught a high level of critical thinking so that they are less gullible and less likely to succumb to various AI-based provocations.

Pierre-Emmanuel Thomann

The promotion of a more multipolar world, with strong international cooperation platforms at different levels (local, regional, global), would help to neutralize the threat to international psychological security.

Marius Vacarelu

A strong education in ethics, though we must admit that complete neutralization is impossible.

  1. How important is international cooperation in successfully countering the malicious use of artificial intelligence? On what international platforms (and why) is this cooperation the most effective? What are the existing obstacles to this cooperation?

Vian Bakir and Andrew McStay

International cooperation between all stakeholders is very important but is probably not sufficient, and it requires the support of strong regulation at supra-national levels. As a case in point, the Code of Practice on Disinformation was signed by the dominant social media platforms between 2018 and 2020 and set out a wide range of commitments. These include transparency in political advertising; demonetization of purveyors of disinformation; closure of fake accounts; the empowerment of users to report disinformation and to understand why they have been targeted by an advertisement; the empowerment of researchers by providing data; and the prioritization of authentic, accurate, and authoritative information to users, while not preventing access to otherwise lawful content or messages solely because they are thought to be “false.” However, disinformation remains prevalent online, and as a result the EU may be moving towards a more assertive co-regulatory approach in its forthcoming Digital Services Act.

An international framework for tackling the malicious use of AI is needed, otherwise states are likely to impose their own solutions, which may well contravene important human rights such as freedom of expression. For instance, coercive responses of many governments seeking to tackle online disinformation have included arrests, Internet shutdowns, and legislation on fake news that stifles dissenting views.

Raynel Batista Tellez

Cultural competency gives international relations and the distribution of power the capacity to promote cooperation among actors and to create a sense of belonging and identity. The concept of power is central to international relations: power is the production, in and through social relations, of effects that shape the capacities of actors to determine their circumstances and fate. However, the failure to develop alternative conceptualizations of power limits the ability of international relations scholars to understand how global outcomes are produced and how actors are differentially enabled and constrained in determining their fates. If technology and culture are seen together as a circle of influence or circle of sustainability, then cross-cultural competency influences the global distribution of power. Digital technologies modify the space, time, relationships, and types of communication that coexist with the other areas inherent in a culture, and perceptions and understandings of AI are likely to be profoundly shaped by local cultural and social contexts.

Robert Borkowski

The greatest value in interpersonal contacts and in international relations is the exchange of ideas: comparing one’s own situation with that of others and sharing experiences, thanks to which one can learn from others and their mistakes. It would be beneficial for the international community if a platform for the exchange of ideas were organized by scientific centers in the form of an international congress on MUAI, together with the publication of a global MUAI report. Perhaps this should be done under the auspices of the United Nations. It should also contribute to the development of appropriate regulations in international law. International cooperation in this area has not developed for three reasons. First, the risks of MUAI are underestimated. Second, international relations are dominated by particularisms, and until something spectacular and unfortunate happens, counteracting MUAI will be downplayed. Third, some states use MUAI themselves, so they will not be interested in more stringent initiatives against it.

Anna Bychkova

When speaking about the role of international cooperation, it is necessary to determine its subjects. When talking about interstate cooperation, it is important to understand that the technological giants engaged in the development of artificial intelligence algorithms are quasi-states: they adopt generally binding rules of behavior for all users (legal norms of sorts), the violation of which incurs a penalty implemented by the algorithms (and not always objectively). Given that the spread of IT is cross-border in nature, it is logical that states should unite to develop standards, principles, and restrictions aimed, for example, at protecting universal human rights. Such a platform could be the UN Human Rights Council. The obstacles to such cooperation may include the lobbying efforts of IT corporations, which sponsor a huge number of NGOs that promote their interests in disguise, as well as the need to protect the sovereignty of individual states, whose leaders may see threats in such an association.

Matthew Crosston

International cooperation is almost irrelevant in countering MUAI, as it operates at a sub-level far below where international laws, sanctions, and countermeasures could successfully operate.

Svetlana S. Gorokhova

It is difficult to overestimate the importance of international cooperation in successfully countering the malicious use of artificial intelligence. I believe that it would be most effective to use interaction at the highest level: the state level. However, it should be borne in mind that there are serious obstacles to such cooperation, caused primarily by the fact that the analysis of the state policy directions of the most technically developed countries clearly illustrates their intention to participate in and win the global race of achievements in the field of artificial intelligence. Any race involves, at best, competition, and, at worst, rivalry or even hostility. Of course, this is a significant obstacle to fruitful cooperation.

Pavel Karasev

International cooperation on countering psychological influence is necessary, not only due to the characteristics of the ICT environment (its transboundary nature, globality, and anonymity) but also out of the necessity to develop common approaches, especially taking into account the possible use of AI technologies. In addition, today, significant disagreements remain between individual states and groups of countries even on more general issues of security in the global information space. This makes broad international cooperation on these issues unfeasible. Work at the regional level is more effective. For example, the platforms of the Shanghai Cooperation Organization, BRICS, and Collective Security Treaty Organization have proven themselves to be effective in countering information security threats – at the regular meetings and summits of these associations, information security issues are discussed and a common point of view is developed regarding countering current threats emanating from the ICT environment, including the malicious use of AI technologies.

Alexander Raikov

International cooperation in countering the malicious use of artificial intelligence is crucial. However, meetings that take the typical format of allowing everyone to express their thoughts will not help. Typical meetings are divergent in nature. They generate many ideas, but they do not create synergies. What is needed is a specialized intellectual platform that will ensure the stable and purposeful convergence of the discussion process toward a strong result that will provide adequate opposition.

Marina Reshetnikova

It is the development of international cooperation that, perhaps, will ensure successful opposition to the malicious use of AI. It is of note that the legendary Edward Snowden, who until recently was living somewhere in the vastness of Russia, has joined this confrontation. In his opinion, the expansion of AI to the extent of violating constitutional rights has gone too far and warrants opposition. It is hard to disagree with him. The first step that needs to be taken is to exert public pressure on government agencies to expel structures like NSO from the AI market, as has been successfully done in Israel.

The main threat to IPS caused by the malicious use of AI is the violation of constitutional rights, namely personal inviolability. This violation gives rise to a global psychosomatic disorder, leading to the destabilization of the world order. The formation of permanent physiological disturbance provokes a feeling of personal insecurity. Furthermore, this can cause not only internal political and ethnic unrest but also large-scale man-made disasters. In this situation, the salvation of humanity depends on the development of protective technologies. It is through them that people will technologically confront “Big Brother.” These technologies will have to allow people to constantly simulate and model an “alternative personality” with different parameters of geolocation, appearance, and so on. This is the only way to resist actors like NSO and their offspring, such as Pegasus.
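
As a purely hypothetical sketch of what the “alternative personality” technology Reshetnikova describes might involve at its simplest, the snippet below emits randomized decoy profile parameters (geolocation, client fingerprint, locale); every field name and value range here is invented for illustration and is not a design from the report.

```python
# Purely hypothetical sketch of the "alternative personality" idea:
# emitting randomized decoy profile parameters so that trackers cannot
# pin down the real user. Every field and value range here is invented.
import random
from dataclasses import dataclass

@dataclass
class DecoyPersona:
    latitude: float    # fake reported geolocation
    longitude: float
    user_agent: str    # fake client fingerprint
    locale: str

USER_AGENTS = ["Mozilla/5.0 (X11; Linux x86_64)",
               "Mozilla/5.0 (Windows NT 10.0)"]
LOCALES = ["en-US", "de-DE", "ja-JP", "pt-BR"]

def new_decoy(rng):
    """Sample a fresh, plausible-but-fake set of observable parameters."""
    return DecoyPersona(
        latitude=round(rng.uniform(-60.0, 70.0), 4),
        longitude=round(rng.uniform(-180.0, 180.0), 4),
        user_agent=rng.choice(USER_AGENTS),
        locale=rng.choice(LOCALES),
    )

if __name__ == "__main__":
    rng = random.Random(42)  # seeded only to make the demo reproducible
    for _ in range(3):
        print(new_decoy(rng))
```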

Vitali Romanovski

International cooperation is of utmost importance in successfully countering MUAI. The establishment of appropriate international norms and standards relating to the application of emerging technologies, such as AI, will eventually have to be discussed at the level of the UN system’s actors, processes, and activities. Among these are the Secretary-General’s High-Level Panel on Digital Cooperation, the Open-Ended Working Group, and the Group of Governmental Experts. In addition, there is relevant ongoing research at the level of the United Nations Institute for Disarmament Research, the United Nations Interregional Crime and Justice Research Institute, and the United Nations Office on Drugs and Crime. However, multilateral institutions’ bureaucratic inertia and the growing distrust between the global powers are among the key obstacles to such cooperation.

Sergey A. Sebekin

International cooperation is necessary to solve any problem that is more or less global or interstate in nature – the same applies to the malicious use of AI for psychological influence. In the future, it will be important to create interstate commissions on the malicious use of AI, various kinds of advisory mechanisms and hotlines. In the near future, it will be important to start considering the issue of MUAI for psychological influence on such platforms as the Shanghai Cooperation Organization, BRICS, and the United Nations. However, the escalating geopolitical confrontations and domestic political antagonisms in different countries cast doubt upon such a favorable development of events for society.

Pierre-Emmanuel Thomann

International cooperation to counter MUAI remains very limited as geopolitical rivalry between great powers is increasing. Ad hoc coalitions might be more successful than large international organizations.

Marius Vacarelu

The main obstacles are geopolitical interests and internal political competition. Because for many politicians “the ends justify the means”, international cooperation will exist only between countries that do not compete for the same territories, resources or geopolitical positions.

[1] Fifteen experts from Belarus, Cuba, France, Poland, Romania, Russia, the United Kingdom, the USA, and Vietnam have agreed to have their answers published.