Researchers from Six Countries Discussed the Challenges for International Psychological Security in the Context of the Use of Artificial Intelligence
On 12 November 2020, a panel discussion “Artificial Intelligence and International Psychological Security: Theoretical and Practical Implications” was held at St. Petersburg State University as part of the international conference “Strategic Communications in Business and Politics” (STRATCOM-2020). The discussion was moderated by Konstantin Pantserev, DSc in Political Sciences, Professor at St. Petersburg State University, and Evgeny Pashentsev, DSc in History, Professor, Leading Researcher at the Diplomatic Academy of the Ministry of Foreign Affairs of the Russian Federation, Senior Researcher at St. Petersburg State University, and coordinator of the International Research Group on Threats for International Psychological Security by Malicious Use of Artificial Intelligence (Research MUAI).
The event was broadcast on the Zoom platform, so anyone could listen to the participants discuss pressing issues related to AI. Twelve papers were presented at the two panel sessions (including papers by six speakers online via Zoom). The total number of participants, including those who took part in the discussion, was about 40. Members of the Research MUAI group from Russia, Romania and France actively participated in the panel, and researchers from Belarus, China and India also took part in its work. The Russian universities and research centers represented included St. Petersburg State University, Moscow State University, the Diplomatic Academy of the Ministry of Foreign Affairs of Russia, the Russian Presidential Academy of National Economy and Public Administration (RANEPA), Moscow State Institute of International Relations (MGIMO University), the International Center for Social-Political Studies and Consulting, and Volga State Technological University (Yoshkar-Ola).
The first paper, “Artificial Intelligence in the Context of the Ensuring of the International Psychological Security,” was presented at the panel session by Prof. Konstantin Pantserev. The development of advanced technologies is considered an indispensable condition for any country seeking global leadership in the contemporary world. Special attention is paid to technologies based on artificial intelligence (AI), whose capabilities are growing at unprecedented speed. Nowadays AI algorithms are widely used in intelligent machine translation systems, medical diagnostics, electronic commerce, online education, intelligent transport systems and even the production of news and information. All of the world’s leading search engines now offer voice assistants, which have significantly simplified and accelerated the search for relevant information.
Domestic developers of AI-based technological solutions are very often supported by the governments of their countries of origin. According to official data, more than 30 countries have now elaborated national strategies and roadmaps in the field of AI. But it is becoming evident that all the technological novelties meant to simplify our lives can be used maliciously in the future. The rapid growth of our dependence on hybrid intelligent computer systems thus makes us extremely vulnerable to malicious actors who use AI-based technologies either to satisfy their personal demands or, far worse, to damage a country’s critical infrastructure. This last point poses a serious challenge to international psychological security.
Konstantin Pantserev underlined the following key threats of the malicious use of advanced technologies:
- The loss of control over autonomous weapons. Autonomous armament systems may one day be programmed to kill. All leading nations now pay increased attention to the development of various intelligent armament systems, so we can predict that in the near future the nuclear arms race will be replaced by a race to develop military hybrid intelligent systems. But we still cannot predict what will happen if a nation loses control over such intelligent systems, or if such systems fall into the hands of terrorists.
- Social manipulative practices. Social media, with the aid of AI algorithms, are now very effective at targeted marketing. But the same technology can be used maliciously: AI algorithms with access to the personal data of millions of people, and knowing their needs, strengths and weaknesses, can deliver targeted propaganda and manipulative practices aimed at specific individuals.
- Invasion of privacy. This threat has already become a reality. A malicious actor can now track and analyze every step of an online user, as well as when he or she performs daily tasks. Cameras are almost everywhere, and facial recognition algorithms identify us with ease.
- Mistakes of operators. The human element will remain crucial even with the appearance of smart machines that can learn by themselves. AI is valuable to us primarily because of its high performance and efficiency. However, if we do not clearly define a task for an AI system, its optimal execution can have dangerous consequences.
- Lack of data. As is well known, all AI algorithms are based on the processing of data and information: the more data uploaded into the system, the more accurate the result. But if there is not enough data to achieve a proper result, or if the data is poisoned, for example by terrorists, the whole AI system can malfunction, with unpredictable consequences for people.
Finally, Konstantin Pantserev pointed out one more threat that should be considered the most significant of all: our increased dependence on advanced technologies, which have already penetrated the everyday life of every person and become responsible for the functioning of numerous applications and even critical infrastructure. It is evident that such technologies will be valuable to plotters and terrorists of any type.
Moreover, by using the method of human-image synthesis known as deepfake technology, such plotters can make any person of their choice (politicians, businesspeople, or other well-known figures) appear to say and do things he or she never said or did. This threat has already become a reality. Since December 2017, a number of fake pornographic video clips featuring well-known Hollywood actresses such as Gal Gadot, Chloë Moretz, Jessica Alba and Scarlett Johansson have appeared on the Web. Of course, such films do not threaten international psychological security, but they do represent a real threat to personal psychological security, because nobody wants to be the protagonist of such fake films. What is much worse, fake video clips featuring well-known politicians (e.g. Vladimir Putin, Donald Trump, Barack Obama) have already appeared. And there is no algorithm that can detect deepfakes with 100% accuracy. Besides, deepfakes themselves keep developing, and each successful detection helps improve them. This is the first side of the problem. The second is that no law currently regulates the creation and distribution of deepfakes.
Konstantin Pantserev concluded that, in elaborating an appropriate legislative base aimed at stopping the further distribution of deepfakes, we may face serious problems, because any unjustified prohibition on the creation and distribution of a fake video can be interpreted as a violation of the basic principle of freedom of speech and expression. The first task of the state, therefore, is to distinguish at the legislative level the malicious use of deepfakes aimed at creating toxic content from satire, creativity and self-expression. Until this conundrum is solved, one should not expect a law regulating the creation and distribution of deepfakes. This means that in the near future a great number of highly realistic, difficult-to-detect fake video clips may appear, capable of destabilizing the international system and threatening global psychological security.
Prof. Evgeny Pashentsev presented his paper “Artificial General Intelligence and Superintelligence: Threats to International Psychological Security in the Process of Working on Their Creation” at the plenary session and then continued the topic at the first session of the panel.
Artificial Narrow Intelligence (ANI) is the only form of artificial intelligence that humanity has achieved so far. This is AI that is good at performing a single task, such as playing chess or Go, or making purchase suggestions, sales predictions and weather forecasts. The possible creation of human-equivalent Artificial General Intelligence (AGI) in the current century, to which many AI experts pay attention, and the consequently almost inevitable and very rapid arrival of Artificial Superintelligence (ASI) and a potential singularity, will inevitably introduce fundamentally different realities. Some aspects of the creation of AGI seem important in the context of the topic of the current panel.
AGI promises great prospects: to become the financial El Dorado of the twenty-first century and to answer many questions considered today a matter of the very distant future (from personal immortality to flight to other stars). There are attempts to speculate on people’s natural expectations by presenting the creation of AGI as almost a fait accompli. This explains, on the one hand, the continuing skepticism of some researchers about the possibility of creating AGI and, on the other, panicked tabloid publications about the end of the world and the seizure of power by ASI almost tomorrow. Over time, financial bubbles built on investment expectations in this new, promising area are also possible.
Unlike hypothetical aliens, AGI will be an intelligence with historical, scientific, philosophical and cultural roots in modern human civilization. It will be an intelligence that develops faster and further than any past human generation, yet it will have its origin in us, based on a deep understanding of human civilization in all its contradictory development and achievements. We do not consider our distant ancestors, who lived 2,000 years ago, to be animals, but in many circumstances they would have taken us for gods. It is another matter that this intelligence may not want to put up with certain manifestations of modern human society that are dangerous to humans and the entire planet, such as the threat of world war, environmental pollution and other growing problems.
On the other hand, an infant is only an individual, not a fully formed personality. Personality forms in the process of socialization: each individual’s mind forms not by itself, but through a long assimilation of the achievements of human civilization in theory and practice, rationally and emotionally. A child will not arrive at the human mind on his or her own; the sad fate of young children who, by a twist of fate, were raised among animals confirms this quite convincingly. Thus the human mind is to a certain (if not decisive) extent “artificial,” and AGI could become, in a sense, more “human” than any people.
Threats from AGI to the psychological security of human society arise even now, before it has been created. Among them Evgeny Pashentsev mentioned:
- The threat of people losing their sense of meaning in life, the real threat of unemployment because of ANI and especially AGI, and other consequences of the rapid implementation of AI constitute a real threat of destabilization of public consciousness long before the actual appearance of AGI.
- The explosive nature of psychological destabilization under rapid AGI progress.
- The rivalry of great powers and various other state and non-state actors with selfish interests, in an increasingly socially and politically polarised world, accompanied by sharp psychological warfare. The image of AGI will inevitably be used by the opposing sides in this confrontation for a variety of purposes: for example, to intimidate people (“AGI is better than us,” “we will have no place on earth”) and thus to provoke dissatisfaction with technological progress in a particular country, etc.
- Exaggeration and understatement of the possibility and significance of creating AGI in the current century. The latter is more dangerous, because it disorients people and leaves them unprepared for possible drastic changes in their fate.
- We can think of the nature of AGI as the possible emergence of an integrated mind with its own will and feelings. Its birth and initial development, however, will take place in a human environment, based on human information and knowledge. Importantly, AGI will be a product not of humanity in general, but of specific people. It is also possible that AGI will first appear in a laboratory controlled by antisocial, reactionary or militaristic circles. In addition, if the environment often deforms people (of varying intelligence), why would this not apply to general AI? It is another matter if we get an integrated, powerful intellectual potential capable of solving problems only on human instructions. Then we are dealing simply with a more powerful machine, and the pros and cons of its use will depend on the people running it. Perhaps the second will precede the first. Let us see.
These are only some of the obvious points that keep us from bowing our heads in mystical horror under the ruthless guillotine of the singularity. Moreover, today everything still depends on people who, alas, are divided and, for the most part, do not think about a strategy for the development of society. The progressive development of humanity, with the elimination of acute social contradictions and inequality and with the development of a person’s physical and creative abilities in symbiosis with improving artificial intelligence, will allow us to move to a qualitatively new phase of human society.
Darya Bazarkina, DSc in Political Sciences, Professor at the Russian Presidential Academy of National Economy and Public Administration and Senior Researcher at St. Petersburg State University, presented the paper “Appeals to the Topic of Artificial Intelligence in Terrorist Propaganda: the Target Audience and Ways to Influence It (on the Example of the Magazine ‘Kybernetiq’).”
As she stressed, Kybernetiq magazine can be considered an example of how terrorist propaganda adapts to changing social, political, military and technical realities. The magazine first appeared four years ago (the first issue was published on December 28, 2015, the second and third in November 2016 and December 2017, respectively). The authors of Kybernetiq set themselves the task of publishing fictional stories in each issue of the magazine, where “the main characters are diverse, but the story always takes place in the same places,” which is worth discussing separately. Although, according to the authors themselves, it is “just a fiction,” its purpose is emphasized: it should “convey motivation and ideas” to Muslims around the world. For example, in the novel “Unity,” the evil of the story (personified by Iran) is a state ruled by artificial intelligence. To strengthen the trust of citizens and its own influence, “AI Ayatollah Khomeini II” uses idols with hidden mechanisms inside that spray medical nanoparticles, neutralizing the effects of the weapons of mass destruction that the AI itself used earlier. The most contemporary and predicted future technologies (up to a mention of the technological singularity, reduced here to an attempt to achieve immortality) become attributes of a society built precisely on blind faith and manipulation. From the point of view of psychological security, it is interesting that Kybernetiq includes references to materials about deepfake technology and the unsupervised image-to-image translation results obtained by specialists from NVIDIA.
Thus, reactionary political actors fully understand that effective propaganda must take the real situation into account. Kybernetiq is a good example of the adaptation of such propaganda to a new round of technical progress, involving new types of weapons and requiring a certain rationality in the implementer’s thinking (of course, only at the tactical level). The political component of the novel “Unity” is presented in a more attractive, artistic form than the direct calls for blind obedience to the leader demanded by the so-called ‘Islamic State’ (IS). This is because the magazine’s propaganda content is aimed at recruiting a more technically educated part of society that is not receptive to traditional terrorist propaganda.
Based on all the above, in addition to the magazine’s obvious target audience (terrorist fighters seeking to hide their digital footprint), Darya Bazarkina distinguished the following “risk groups”:
- Young people with an existing passion for technology (as well as for the cyberpunk subgenre in sci-fi literature and art). Recruiters can begin a “hunt” for part of this group, especially people engaged in writing their own programs, modifying computer games, etc., for example students of computer science faculties and departments, and, separately for propaganda purposes, young people who study design or are fond of it.
- ICT professionals. These are likely to be much rarer cases, since for established professionals salary will be a decisive factor, while terrorists usually use propaganda to reduce the cost of recruitment. A separate category may consist of persons who already sympathize with terrorist organizations in real life and seek to strengthen them technically.
- A wide audience of people interested in political issues, science fiction literature and advanced technologies (some of these people can unwittingly become new conduits of terrorist propaganda).
The use of images of advanced technologies in terrorist propaganda creates new risks. However, by studying them carefully, having (in most cases) much greater technical capabilities, and cooperating with each other, state authorities, intelligence agencies, public institutions and the private sector can largely anticipate the effects of terrorist propaganda and at least prevent recruitment. As more people use the darknet, technical experts face the task of developing tools for monitoring the hidden part of the Internet (ideally, predictive analysis tools that can work on the darknet). While the technological equipment of terrorist organizations is still often overestimated, forecasting the threats associated with increasingly widespread advanced technologies, in a world where the eradication of terrorism remains far off, is a matter of international security. Today, more than ever before, we have no right to prepare to fight the last war.
Marius Vacarelu, PhD, Professor at the National School of Political and Administrative Studies, Bucharest, Romania, presented the paper “Deep-Fake as ‘Permanent Warfare’.”
He underlined that recent decades have given birth – among other things – to the concept of “permanent political competition,” which contemporary realities have transformed into a second concept: the “permanent electoral campaign.” Both involve huge costs for parties and politicians. The most important cost, however, is not the financial one – although in concrete terms it is the highest – but the psychological cost. For a politician with a weak personality, a permanent campaign becomes impossible to sustain, because the mental cost of daily stress is very high.
Citizens insist that a politician behave impeccably while also being a good specialist in administrative and economic issues. From daily observation, we can see that these normal requirements prove difficult for the political class to meet. In reality, the absence of any educational standards for entering politics gives access to many people unable to meet these standards, with major consequences for both their psyches and their careers.
Needing to adhere daily to the highest standards of politics, the politician finds himself a prisoner of party financial support. The permanent election campaign costs enormously, and those without money have a hard time reaching the top of the political hierarchy. At the same time, this relationship between politics and money has led the electorate to cynicism, people instinctively knowing that money can boost careers in one direction or another.
The advent of the Internet and its development have produced an ocean of information. The main effect of this ocean is the psychological anesthesia of the electorate, which needs more than ever the real force of messages and politicians. This anesthesia of the electorate is not national but transnational, and its effects are equally international. The methods used in the contemporary press are immediately copied in every country, and the leveling effects on human psychology are obvious: drinking from the same “ocean water,” people receive a single set of information, and consumers are not always able to distinguish manipulation from truth.
Into this competition between psychological-informational anesthesia and politicians’ need to be perfect step AI and, above all, manipulation through it. AI has become ever more present, and in this informational leveling its role will increase, because it has two characteristics very useful to those engaged in the political game. First, its data storage capacity is useful for training politicians and for providing appropriate messages to different types of voters. Second, AI is constant in its presence and efforts, which makes it potentially tireless in any discussion: the human mind fatigues at some point, a limitation AI does not share.
Among the methods that AI handlers can use, one of the most interesting is the creation of deep-fakes, which are becoming better and more common. As with any instrument in its infancy, there are still errors in their production, but their impact is quite strong, precisely because of this novelty.
Deep-fakes can be considered today as a way to break through citizens’ psychological anesthesia. The cost of using AI per unit of product is decreasing, which makes deep-fakes more and more present. The main problem is that one day – sooner than we think – even the use of deep-fakes will become commonplace, and it will be necessary to re-create politics and political education.
For this reason, some questions arise:
– How quickly will deep-fake processes become widespread in use?
– How quickly will the human mind get used to deep-fake products?
– What criteria will future generations use to distinguish between truth and falsehood?
– When deep-fake procedures become trivialized, what will have to be invented to reach the minds of voters?
– How will their minds react to those transformations?
The future will give us these answers, but in the face of today’s informational anesthesia – not yet fully set in – AI can be useful for discussing the great problem of this century: will the psyche of people in every technologically developed country withstand all forms of election campaign? The question is very important, because the consequence of a negative answer would be not only generalized depression, but also the loss of confidence in any form of social organization, which would ultimately call into question governments and even political regimes.
Darya O. Matiashova, BSc (International Relations), Master’s student at St. Petersburg State University, in her paper “Strategies of AI Promotion in German and Indian Economies: Comparison and Prospects for the World Policy System,” analyzed trends and problems of strategic planning in the field of AI. She stressed that introducing new technologies, which are potential drivers of modernization, into traditional sectors of the economy implies formulating the norms and principles according to which the “updated” sectors will function. For states, these norms become a resource of “normative power” – the ability to expand political influence by promoting and consolidating their principles and standards in the institutions of other actors. The concept of normative power is traditionally used in discourses concerning juridical norms or functioning bureaucratic institutions, but its usage can theoretically be extended to the economic sphere.
Artificial intelligence is a technology that can increase the productivity of all sectors of the economy, which is an incentive for its active implementation. At the same time, potential threats of both a socio-economic and a technological nature create a special need for principles regulating the use of AI. The successful development of such principles at the state level can bring both economic and political dividends. This is especially important for rising powers, which generally have to rely on non-power tools, one of which is normative power. Developing principles for applying AI in the economy will strengthen their normative power and stimulate the development of the information economy.
India and Germany are salient examples of modern rising powers. Both states have extensive normative-power potential – India owing to the attractiveness of its economic model for the global South (an example of a country of the global South that has successfully integrated into the world economy), Germany owing to the normative-power potential formed within the EU. For both, it is important to stimulate the development of the information economy in light of competition with the leaders in this industry, the United States and China. Proper planning, as reflected in national AI strategies, will contribute to this promotion. If India and Germany become world leaders in the field of AI, the principles set out in their strategies can be borrowed by other countries that wish to repeat their success. In this context, the question arises of the potential for conflict in promoting their norms, standards and principles for applying AI in the economy. This conflict potential will increase in the case of vast normative divergences and will diminish if the two share common principles for applying AI to the economy.
In the case of India and Germany, the convergence factors are: the social orientation of AI as a tool (both countries declare “using AI for the common good” as their strategic goal), taking into account these countries’ levels of economic and social development; a focus on smart cities and the transport sector as among the main drivers of AI implementation in the economy; a focus on using AI to promptly inform the population about emergencies, to operate complex monitoring systems, and to eliminate the consequences of man-made and natural disasters; and, finally, a view of AI implementation in the economy as the result of a multi-stakeholder dialogue between research centers, the state, the financial sector and industry. There are also factors such as the underestimation of AI threats and general optimism about implementing AI in sectors with high security and health risks (such as medicine, transport and rescue services); these, however, can be regarded as convergence factors only in the long term, and only in the case of joint cooperation to prevent the underestimated threats.
The divergence factors have tactical (while for India the dialogue is primarily technocratic, the German strategy involves a broader review of the legal framework and the introduction of public and private audit), subjective (whereas Germany is focused on creating common research institutes and industrial clusters with the EU, India plans to create them mainly with American and British TNCs and universities) and strategic (whereas India’s ambitions center on overcoming existing economic problems and offering effective solutions to the developing world, Germany’s goals are to make German AI a global quality mark and European AI a global industry leader) dimensions. The latter can provoke intensive technological competition, which could spill over into a “competition of standards” and even a trade war. The identity basis for this competition, in turn, can be rooted in the perception of the promotion of Western technologies as “rival” and “neocolonial,” on the one hand, and in the desire to prop up weakening economic growth in the West through demand in the East, on the other.
Thus, as Darya Matiashova concluded, the German and Indian principles of implementing AI in the economy share a social orientation and a focus on the transport, security and education sectors, which creates space for cooperation and dialogue. Both states focus on the same fields, where common or close industrial standards can be developed. What is more, the shared value of guaranteeing “soft” security can be a factor of rapprochement. However, different views on the role of non-state institutions in shaping norms and standards, different vectors of cooperation, and Germany’s global ambitions create the potential for diverging standards at the level of individual industries and, as a result, make conflicts over these standards possible.
Dmitrii Rushchin, PhD in History, Associate Professor at St. Petersburg State University, pointed out in his paper “Problems and Prospects of Artificial Intelligence Development in Russia” that much attention has been paid to artificial intelligence (AI) technologies in Russia in recent years. Russian President Vladimir Putin said the following during the All-Russian open lesson on September 1, 2017: “Artificial intelligence is the future, not only for Russia, but for all humankind. It comes with enormous opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world. And we would very much not want this monopoly to be concentrated in someone’s specific hands, so if we become leaders in this area, we will share this know-how with the entire world, the same way we share our nuclear technologies today.” However, the actual conditions and development potential of artificial intelligence in Russia are rather modest and lag behind those of most well-developed countries of the world. For a very long time, Russia did not have a strategy for developing technologies in the field of artificial intelligence. The situation changed significantly in October 2019, when the “National Strategy for the Development of Artificial Intelligence for the Period up to 2030” was approved.
The National Strategy for the Development of AI sets two reference points for the development of artificial intelligence in Russia: 2024 and 2030. By the first date, the country is expected to significantly improve its position in this area; by 2030 it is to eliminate the lag behind well-developed countries and achieve world leadership in certain areas related to artificial intelligence. At the same time, according to the document, the key priorities for the development of artificial intelligence in Russia are correlated with the national goals and strategic objectives of the development of the Russian Federation for the period up to 2024, known as the “May Decrees” of Vladimir Putin of 2018.
In general, Russia’s National Strategy for the development of artificial intelligence differs significantly in its goals and objectives from the national strategies of other countries: it has a clear anti-sanction connotation and implies a leading and guiding role for the state. It downplays the crucial role of private investment and private business in the successful development of breakthrough technologies, relying instead on large state and near-state structures, both in drawing up plans for the development of artificial intelligence in the country and in implementing those plans.
Despite some relatively developed areas, Russia is currently very far from leading in the field of artificial intelligence. It is quite difficult to determine Russia’s place in the development of artificial intelligence technologies since existing international ratings use different methodologies. Russia is not included in many of them because, until the fall of 2019, the state did not have an official strategy for the development of artificial intelligence.
To understand the level of development of artificial intelligence technologies in Russia, one should look at selected indicators of the technological development associated with it. For example, in terms of the number of supercomputers in the top 500 most powerful in the world, Russia is currently in 15th place in the global ranking, with only three such supercomputers at its disposal. For comparison, there are 228 of them in China, 117 in the US, and 29 in Japan.
Another important indicator is the number of scientific publications on artificial intelligence. According to the Scientific Journal Rankings, Russia is ranked 31st in this indicator.
In the field of computer science, which includes machine learning, 16 Russian universities were among the 684 best universities according to the World University Rankings 2019. However, only two Russian universities, ITMO in St. Petersburg and Lomonosov Moscow State University, were in the top 100.
One of the key indicators of the development of artificial intelligence in a particular country is the number of startups engaged in the development of these technologies. According to TRACXN company, there are currently 168 startups in the field of artificial intelligence in Russia. For comparison, TRACXN counted 6,903 artificial intelligence-related startups in the United States and 1,013 in China. TRACXN is an analytics firm co-founded by Sequoia and Accel alumni to track data from startups.
In part, the small number of startups in the field of artificial intelligence is due to the fact that in Russia this industry is dominated by already established companies, a feature that distinguishes it from most other well-developed technological powers. According to the “Map of Artificial Intelligence of Russia” project, a total of 400 companies in Russia are engaged in development in the field of artificial intelligence.
As for the use of artificial intelligence and its level of development in the military sphere, it should be noted that due to the secrecy of data, international ratings do not take into account military developments. Meanwhile, for many decades, it was military science and the defense industry that were the locomotives of the country’s technological development.
The country’s leadership has made the development of artificial intelligence a high priority, which is reflected in funding of industry projects that is significant by Russian standards. However, Russia can hardly claim a strong global position in the field of artificial intelligence at the existing level of funding for R&D (research and development): as a percentage of GDP, it is two times lower than in France or Singapore, three times lower than in Finland or Japan, and four times lower than in South Korea or Israel.
Currently, Russia still lags behind the leading technological powers (primarily the United States and China), and there are some negative fundamental factors, such as the small volume of the venture capital investment market. A more likely scenario is the successful development of certain areas of application of artificial intelligence technologies, where local Russian leadership is possible.
On the international stage, Russia opposes a ban on Lethal Autonomous Weapons Systems (LAWS), the so-called “killer robots”, and on the military use of artificial intelligence. At the same time, the country participates in a dialogue with other states and players, and supports the development of clear universal rules and ethical standards.
Thus, the development of artificial intelligence is becoming an important priority for Russia’s national development. However, the country has little chance of catching up with the leaders (the US, China, Japan and the UK). Given political will and sufficient funding, Russia can become a major player in the field of artificial intelligence and achieve leadership in certain areas, concluded Dmitrii Rushchin.
Konstantin Golubev, PhD in History, Associate Professor at the St. Petersburg State University, presented the paper “Oppositional News Channels on YouTube: Threat to Information Security”. He highlighted public control over the national information infrastructure as the most important strategic objective for the establishment. A loss (partial or complete) of such control would create gaps in the production of hegemonic discourse within a single national-linguistic community, hindering its ability to monopolise the news agenda or to determine the framing of the most important events. Over the past few years in Russia, the development of information and communication technologies and their growing mass outreach, on the one hand, and the decline of trust in state-run media among strategically important audiences, on the other, have created a situation in which those audiences have unhindered access to a once marginal discourse. The latter is exerting a significant influence on those audiences’ perceptions of reality, in effect turning from a marginal discourse into one that effectively opposes the official discourse. This situation is quite unprecedented for Russia.
Much of the recently published international academic literature on journalism, production, circulation and consumption of news, is devoted to digital hybrid ecosystems, such as Twitter and Facebook, which, according to most authors, now play the role of “gatekeepers” of news. Traditional ways of conveying to audiences what in the past was determined by professional journalists as important or worthy to be called “news” are no longer working. Modern algorithms applied by digital hybrid ecosystems allow users to determine what is presented to them and how it is presented. Moreover, after being published by professional journalists, news is often re-contextualised or re-interpreted by influencers who treat news content from different angles, with their own comments, criticisms, etc. Thus, the concepts of priming and setting a news agenda, as well as of controlling media discourse by a limited number of professionals are no longer relevant. The news diet of large segments of society is determined rather by the number of “likes” and views by other users, as well as by individual behaviour and preferences of a particular consumer.
Russia, in this respect, is no exception, although with some peculiarities. Indeed, the phenomenon of Russian oppositional news channels on YouTube is rather unique. On the one hand, Russia is one of the countries where people enjoy access to affordable and fast Internet on par with Western democracies that have independent media; on the other hand, unlike in the West, the mainstream media in Russia are under strict state control. These two circumstances are conducive to the rapid development of a popular initiative of citizen journalism that channels oppositional news video content via YouTube.
In recent years, the phenomenon of “citizen journalism” has been occasionally brought up in Western discourse, mainly in the context of the “democratic deficit” inherent in traditional journalism, for example in countries such as the United States and Great Britain, where professional journalists rarely bother to put uncomfortable questions to the powerful, especially at the local level. The attitude of the professional community to citizen journalism is ambiguous. In the West, it is somewhat condescending. Thus, CNN and BBC, when using content produced by citizens in their reports, usually emphasise for audiences that the footage is unprofessional and cannot be verified. In Russia, the situation is quite different. Citizen journalists are quite professional, for they are often former members of news institutions who, for political reasons, have found themselves outside their organisations and are forced to continue their careers as private bloggers. Therefore, it is hardly possible to label them as lacking professionalism, and so the official media simply ignore them, never mention them by name, and deny them access to television studios. Still, they are able to reach broad audiences, often by far exceeding those of the state media.
A characteristic feature of the social news genre is its “positionality”, i.e. explicitly positioning oneself “within issues and stories” from an evaluative point of view, instead of adopting a neutral, abstract position “above the fray” as is traditionally dictated by the norms of journalism objectivity. In conditions of deep fragmentation of the audience into smaller segments, such a manner of presenting information is more readily shared, especially if such views resonate with the inner beliefs of a particular segment of the channel’s subscribers.
Researchers commonly note such “pitfalls” of alternative media as an absence of professionalism, pointing out possible “financial and organizational instability, low technological resources, know-how and fragmented audiences”. In the case of Russia, given the relatively high popularity of alternative citizen media, their financial position is quite stable, while the quality of their technical equipment is constantly improving. Since the number of views of their content can exceed hundreds of thousands over a relatively short duration, the revenues from advertising alone can be quite impressive, not to mention donations from subscribers and sympathisers, as well as other sources of income, such as organisations (or states) “with an interest in intervening in public discourse for political, cultural, religious or financial reasons”. Thus, the excessive rigidity of the media system in Russia has created fertile ground for the success of alternative citizen media that pose a considerable threat to information security and political stability of the current regime, concluded Konstantin Golubev.
Dr Pierre-Emmanuel Thomann, professor in geopolitics at Lyon University 3 Jean Moulin and ISSEP Lyon (France), and President of Eurocontinent (Belgium), presented his paper “Artificial Intelligence, Digitalization and Global Geopolitical Competition: What Role for Europe for a More Cooperative Agenda to Counter the Malicious Use of Artificial Intelligence”.
The world is facing increasing geopolitical fragmentation, with a multiplication of actors, a widening power gap between states and the reshaping of previous geopolitical hierarchies. Moreover, geopolitical confrontation is more and more the theatre of hybrid warfare, including psychological warfare. In this context, digitalization combined with the emergence of artificial intelligence (AI), used as a geopolitical weapon through the destabilization of international psychological security (IPS), might help determine the international order of the coming century, accelerating the dynamics of previous cycles in which technology and power mutually reinforced each other. It will transform some paradigms of geopolitics through new relationships between territories, spatio-temporal dimensions and immateriality.
Not only can the malicious use of artificial intelligence (MUAI) at the tactical level by terrorist groups or states have powerful effects in a conflict for geopolitical influence, but more “neutral” aspects of AI-powered digitalization will also create new geopolitical hierarchies if they are used in a way that reinforces a monopoly by one or several states. Both have the potential to destabilize the system of international relations.
From the European point of view, it is widely acknowledged that the US and China will dominate AI and digitalization in the international geopolitical arena in the years to come.
The European Union’s main focus regarding AI has so far been on ethical and economic aspects, and this is reflected in its main communication strategy.
In February 2020, the European Commission published a White Paper on AI and a report on the safety and liability aspects of AI. These documents do not address the development and use of AI for military purposes.
Regarding the risks of MUAI, the EU White Paper focuses mainly on the question of the safety and liability of AI products that will circulate on the EU internal market. It merely mentions that “AI tools can provide an opportunity for better protecting EU citizens from crime and acts of terrorism. Such tools could, for example, help identify online terrorist propaganda, discover suspicious transactions in the sales of dangerous products, identify dangerous hidden objects or illicit substances or products, offer assistance to citizens in emergencies and help guide first responders.”
The European strategy for data, which accompanies the White Paper, also aims to enable Europe to become the most attractive, secure and dynamic data-agile economy in the world. But the EU does not promote a European “GAFA” or EU search engines, which leaves Europeans highly dependent on the United States.
This is in line with the EU’s promotion of “multilateralism” as an international doctrine, which is supposed to foster international cooperation at the European and global levels. But is this enough to deal with MUAI and the threats to international psychological security (IPS) in a context of great power rivalry?
Behind the EU’s main communication strategy as an “ethical actor”, the perceptions and strategies of individual member states differ greatly. France, for example, would like to build strategic alliances to avoid “cyber-vassalization”, while Germany focuses more on the economic aspects.
The new EU commissioner Thierry Breton (France) has stressed that he will defend digital sovereignty and the use of digital technology in the EU so that it can compete in the international race to exploit data collected from communication technologies. Facing the risk of deepening geopolitical imbalances due to unequal access to AI and data collection, is international cooperation possible for a more balanced distribution of AI research results through common international platforms?
In 2019, France was also the first European state to publish a military AI strategy, as defense had been designated a priority AI sector for industrial policy in the French 2018 national AI strategy. France’s approach to AI includes a strong geopolitical dimension, as it wants France and Europe to avoid becoming dependent on the United States and China.
In Germany, by contrast, the military, security and geopolitical dimensions of AI are not included in the 2018 national AI strategy. The German approach to military AI focuses instead on arms control and disarmament debates.
This difference between the French and German approaches is problematic for a strong common EU strategy, as well as for common military-industrial projects such as the new Franco-German Future Combat Air System fighter jet, which is supposed to include AI technology.
How can EU member states, with all their diversity, contribute to international cooperation to counter MUAI and protect IPS together with other global actors such as the US, China and Russia, as well as secondary actors? Perhaps international cooperation based on inclusiveness, respect and reciprocity could be better achieved through a better geopolitical balance in AI and digitalization between global actors such as the US, China, Russia and EU member states.
The lively discussion prompted by the papers showed the novelty and importance of the problems that the implementation of artificial intelligence poses for psychological security. The panel participants agreed to cooperate in joint events and publications.
Kaleria Kramar MA (ISCPSC), Oleg Sarychev (ISCPSC)