Malicious Use of Artificial Intelligence and Challenges for BRICS Psychological Security at the International Forum “Russia and Ibero-America in a Turbulent World: History and Modernity”

23 October 2023, by Pierre-Emmanuel Thomann

By Ekaterina Mikhalevich, Chief Specialist of PJSC Gazprom Neft

12 October 2023

Ekaterina_mikhalevich@mail.ru

On October 5, within the framework of the VI International Forum “Russia and Ibero-America in a Turbulent World: History and Modernity” at St. Petersburg State University, two sessions of the Panel “Malicious Use of Artificial Intelligence and Challenges for BRICS Psychological Security” were held under the chairmanship of Professor Evgeny N. Pashentsev.

Picture 1. Speech by Evgeny N. Pashentsev

In his paper “Malicious Use of Artificial Intelligence: Challenges to Psychological Security in Brazil,” Evgeny N. Pashentsev, DSc, Professor, Leading Researcher at the Center for Digital Studies and Chief Researcher at the Center for Global Studies and International Relations of the Institute of Contemporary International Studies at the Diplomatic Academy of the Russian MFA, and professor at St. Petersburg State University, noted that Brazil is actively developing its AI industry, with a growing number of research centers and specialists in the field. However, according to a new report by the cybersecurity firm Trend Micro, Brazil is the second most vulnerable country in the world to cyberattacks, second only to the United States in the number of blocked threats in the first half of 2023. A Kaspersky report likewise confirms that Brazil leads Latin America in the number of cyber threats. In February 2022, Brazil was included in the Spamhaus list of countries with the largest number of detected spambots; most of these bots are used for spam, phishing and other malicious activities. Analysts attribute the abundance of bots in Brazil’s digital space to technical, political and socio-economic factors.

Prof. Pashentsev paid significant attention to the risks of using deepfakes, chatbots, emotional AI and other AI-based technologies to undermine the psychological security of Brazil. At the same time, he drew attention to the fact that Brazil’s National Cybersecurity Strategy, adopted in 2020, sets out a plan to strengthen the country’s cybersecurity for 2020-2023 but does not provide a detailed assessment of how AI-based technologies are changing the level of threats to psychological security. Although Brazil intends to include AI and disinformation on the agenda of the next meeting of the G20 Working Group on the Digital Economy in 2024, Bill No. 2338 of 2023, currently under consideration in the country’s Senate and intended to regulate AI systems in Brazil, establishes the rights of people affected by the use of these systems and provides penalties for violations, yet does not even mention the threats of malicious use of AI in the psychological sphere. Meanwhile, according to Prof. Pashentsev, Brazil’s acute socio-economic problems, the activation of right-wing circles within the country, U.S. dissatisfaction with the independent course of Lula da Silva, and the very likely further aggravation of the international situation will lead to a sharp increase in psychological operations against the Brazilian government, with AI technologies playing a growing role at all stages of their planning and implementation.

Picture 2. Speech by Ekaterina A. Mikhalevich

Ekaterina Mikhalevich, Chief Specialist of PJSC Gazprom Neft, in her paper “China’s Initiatives in Countering MUAI in the Psychological Security Sphere,” said that for the Government of the People’s Republic of China, ensuring psychological security and socio-political stability is an important component of maintaining the country’s overall security architecture. Studying the PRC’s regulatory framework for controlling the use of deep synthesis technologies, coupled with its experience in countering the threats of malicious use of AI through a social credit system and the censoring of content distributed via social networks, is important for experts from the BRICS countries, given the high degree of uncertainty in the economic and socio-political development of the modern world. The main feature of the Chinese approach to protection against malicious use of AI is its accelerated, anticipatory character. As follows from official documents, all new technological developments must take into account the prevention of malicious use, thereby ensuring, among other things, psychological security. According to the Chinese government, the priority measures for countering the risks of malicious use of AI should be the development of national and international regulatory frameworks governing AI, the creation of a system of social ethics by training citizens in information literacy, and the implementation of public monitoring by establishing a system of social trust. Learning from China’s experience in addressing the risks of malicious use of AI could be beneficial for all BRICS members.

Picture 3. Speech by Darya Yu. Bazarkina

In her paper “Malicious Use of Artificial Intelligence: Risks to Psychological Security in South Africa,” Darya Yu. Bazarkina, DSc, Leading Researcher at the Department of European Integration Research, Institute of Europe of the Russian Academy of Sciences, examined the current level of AI development in South Africa, the existing and potential threats to psychological security caused by the malicious use of AI in the country, and the initiatives to counter them. With the development of the country’s AI sector, a number of new threats to psychological security from the malicious use of AI have emerged: social destabilization associated with the replacement of people by AI in production, cyberbullying, the use of deepfakes in political confrontation, and others. Darya Yu. Bazarkina notes that most of the existing risks and threats are anthropogenic in nature: it is people themselves who put weak AI to malicious use. In response to these threats and risks, the South African government is developing national legislation, and public initiatives aimed at combating the malicious use of AI are also emerging in the country. South Africa’s experience in this area is valuable for developing general principles and mechanisms for countering “smart” AI-based psychological operations in the BRICS countries.

Picture 4. Speech by Marius Vacarelu (online)

Marius Vacarelu, PhD, Lecturer and Counselor to the Vice-Rector at the National School of Political and Administrative Studies (Bucharest), in his paper “Malicious Use of Artificial Intelligence against BRICS: A Necessity?” noted that geopolitical competition appears to be a logical and natural continuation of the development of technologies affecting human consciousness and intelligence. In the current century, the struggle for global supremacy is being waged with high-tech, AI-based tools, and the fact that the right to vote has become universal forces all countries to look for solutions to ensure the psychological security of their citizens. In this context, the malicious use of AI becomes natural in geopolitical rivalries, even though official discourse prohibits such methods. In his paper, Marius Vacarelu focused on the prospects for the use of AI in the competition between the BRICS countries and the collective West.

Picture 5. Speech by Aleksandr Raikov (online)

Aleksandr Raikov, Chief Scientist at the Jinan Institute of Supercomputing Technology (China), in his paper “Countering the Malicious Use of Artificial General Intelligence by Increasing Its Ability to Explain,” noted that both the danger of malicious use of AI and the possibility of countering such use will increase with the development of the computing base and AI technologies. This development includes increasing the power of computers, creating new parallel computing methods, and improving AI’s explanatory capabilities (explainable AI). The latter determine the level of confidence in the conclusions and recommendations of AI, which may be true or false. AI has yet to learn how to solve complex interdisciplinary problems in order to synthesise explanations; however, its capabilities are increasing thanks to the development of generative AI and large language models and the growth of computing power.

The development of elements of strong and general artificial intelligence is already underway. However, generative AI based on a logical-statistical approach has many limitations, including weak accounting for personalisation and the cognitive semantics of AI models, the limited length of tokens, and the neglect of the atomic structure of natural neurons when creating artificial neural networks; accounting for the latter would introduce non-local quantum and relativistic effects into semantic data processing. The performance growth of supercomputers is limited by the difficulties of further reducing the size of transistors and by the enormous energy costs of computation. The digital representation of natural analogue signals narrows their spectra and, as a result, increases calculation time when higher accuracy is required. Eliminating such limitations requires new scientific research in optical computing and the synthesis of original optical devices and photonic materials. These studies require international and interdisciplinary scientific collaboration and unique convergent technology. We are working out the elements of such convergent technology and optical analogue computing in the processes of collective strategic planning, as well as in modelling complex social and engineering tasks on supercomputers.

Picture 6. Speech by Vitali Romanovski (online)

Vitali Romanovski, Researcher at the Belarusian State University (Minsk), in his paper “Priority Areas of International Cooperation in the Field of Ensuring International Psychological Security of the BRICS Member-States: Aspect of Malicious Use of Artificial Intelligence,” emphasized that the increased attention of the BRICS countries to countering the malicious use of AI is a natural response to the degradation of the system of international relations, which is characterized by violations of public security, the adoption of deliberately unfavorable decisions and international treaties, and the deterioration of interstate relations, all of which create socio-political tensions. The use of AI-based technologies, according to Vitali Romanovski, is one of the determining factors driving the growth of psychological tensions. The ongoing regionalization of the cyber sphere and the rise of psychological security threats highlight the growing significance of international cooperation within multilateral associations such as BRICS in tackling these challenges and developing effective counteraction strategies.

Picture 7. Speech by Pierre-Emmanuel Thomann

Pierre-Emmanuel Thomann, PhD, lecturer in geopolitics at EMlyon Business School (Lyon, France), the Catholic Institute of Vendée (ICES, La Roche-sur-Yon, France) and the Institute of Social, Economic and Political Sciences (ISSEP, Lyon, France), and President of Eurocontinent (Brussels, Belgium), in his paper “MUAI and Cognitive War in Context of Great Power Geopolitical Rivalry: The Danger of Polarization between Multilateral Alliances like BRICS, SCO, EU, NATO instead of Cooperation,” noted that in the context of multipolarity, threats from the malicious use of AI, especially “cognitive warfare,” could have a profound and lasting impact on the international system by creating new, unbalanced geopolitical hierarchies. This could lead to total and permanent war and put all of humanity in danger. At the same time, the multilateral system of international organizations such as the UN, the OSCE and NATO, which emerged after the Second World War, is increasingly being challenged because it is based on an old spatial order that no longer exists, while new multilateral organizations reflecting the new order, such as BRICS and the SCO, are gaining strength. Dr. Thomann asked whether the polarization of competing blocs of states and multilateral organizations can be avoided by promoting multilateral cooperation in the field of psychological security, based on an ethical and people-oriented order combined with a better geopolitical balance of power. It is clear that by promoting new models of global cooperation in the field of AI between “old” and “new” organizations and alliances, many threats of malicious use of AI can be prevented and cognitive warfare avoided.

Picture 8. Speech by Sergey A. Sebekin (online)

Sergey A. Sebekin, PhD in History, Senior Lecturer at the Department of Political Science, History and Regional Studies, Irkutsk State University, and Expert of the Institute of Contemporary International Studies of the Diplomatic Academy of the Ministry of Foreign Affairs of the Russian Federation, in his paper “Challenges to Democracy in the Age of Artificial Intelligence: How Information Influence Through AI Can Influence Political Processes (on the Example of the BRICS Countries),” showed how AI technologies make it possible to influence political processes in the BRICS countries by shaping the information agenda, managing political preferences and steering the electoral behavior of target groups. Such influence is fraught with serious consequences: large-scale disorientation of the population, erosion of the foundations of civil society, and the undermining of the democratic process itself. The rapid implementation of AI systems in the BRICS member countries, the socio-political contradictions within these countries, and pressure from hostile geopolitical actors create favorable opportunities for psychological influence making active use of AI technologies. Dr. Sebekin notes that among the predicted features of AI-enabled psychological influence are the search for “pain points” in the political processes of particular states, attacks on the reputation of political figures, the imposition of false ideas about the political situation, and the undermining of trust in political institutions.

Conclusion

The two sessions of the panel were attended by established and young researchers, including postgraduates and students from Brazil, Belarus, China, France, Russia, Romania and Turkey. The contributors of the eight papers answered numerous questions from the audience. Summing up the two sessions, it should be noted that the rapid development of AI is accompanied by a growing threat of its use for malicious purposes. AI technologies are already being used by antisocial actors who pose a threat to international psychological security. The threat of malicious use of AI is multi-layered; countering it effectively therefore requires technical, political and legal measures at the international level, as part of a socially oriented transformation of the system in accordance with the opportunities and risks of the 21st century.