Accelerating Actions and Promoting Digital Wellness (DW) in the context of Artificial Intelligence (AI): Conference Report


Two Day Online International Conference
on
Accelerating Actions and Promoting Digital Wellness (DW) in the context of Artificial Intelligence (AI)

organized by
India Centre of Excellence for Information Ethics (ICEIE), Centre for Digital Learning, Training and Resources (CDLTR), University of Hyderabad (India)
Information Ethics Network @ Future Africa, University of Pretoria (South Africa)
Russian National IFAP Committee, Interregional Library Cooperation Centre (Russian Federation)
International Center for Information Ethics (ICIE)
UNESCO Chair on Language Policies for Multilingualism, Federal University of Santa Catarina (Brazil)
Indian National Commission for Cooperation with UNESCO, Dept. of Education (New Delhi)

under the Auspices of UNESCO
Intergovernmental Information for All Programme (IFAP)
supported by Federation of Indian Chambers of Commerce and Industry (FICCI)
March 24-25, 2021
http://cdltr.uohyd.ac.in/international-conference-on-ai-dw/



Summary Report


Day One, 24 March, 2021

Opening Address

·         Prof. Appa Rao Podile, Vice Chancellor, University of Hyderabad:


Inaugurating the conference, he mentioned that the e-Learning Centre of the University of Hyderabad had taken several initiatives, including those on information ethics and digital wellness.

Prof. Appa Rao Podile, Vice-Chancellor, UoH, and Prof. J. Prabhakar Rao, Chairperson


Prof. Appa Rao Podile said that the digital wellness website was launched in 2019 and that the digital wellness content would be launched during the conference.


Such an initiative is the first of its kind by a higher education institution (HEI) in India. While appreciating the collaboration with the international partners and with industry, he expressed the hope that the conference would deliberate on various aspects of AI ethics and their link with DW.

 

·         Prof. J. Prabhakar Rao, Chairperson of the Conference:

At the outset, on behalf of the Organising Committee, he welcomed all the participants. He highlighted the aims and objectives of the conference, focussing on its uniqueness: this may be the first conference that emphasises the link between AI ethics and digital wellness.




Emerging technologies like AI are not advancing development but are increasing divides. Access to information and knowledge is therefore of the utmost importance. This meeting builds on work that has already been taking place, with substantive meetings and research. He acknowledged the relevance of the SDGs: quality education (SDG 4), peace, justice and strong institutions (SDG 16), and revitalising the global partnership for sustainable development (SDG 17).


The design, consumption and control of AI need to be monitored to ensure quality. We cannot have a knowledge society without quality control of content. There is an invitation to look at history (the World Wars, etc.), and especially at the societal upheavals due to the pandemic, to understand how technology is being consumed. We also experience a reality in which people do not consider how technology is impacting their lives. The social implications of technology are not much in focus, and he hoped that the deliberations of the conference would contribute to this.

 

·         Mr. A. Murali Krishna Reddy, Co-chairman, FICCI:

Highlighting the role of FICCI in forging collaborations with academia, he stressed the need for a policy on AI and reiterated FICCI's support in developing such a policy. He was delighted that FICCI was a part of this conference.


Keynote Address: Analysing Health Discourse

Geoffrey Rockwell, University of Alberta (Canada)

Chairperson: Prof. Rafael Capurro (Germany)

An observatory of people, by people, is recommended as a way to manage the impact of communications technology on wellbeing. Mental health is central to digital wellness, and digital fatigue has a considerable impact on it. Citizen science is a promising path to develop such an observatory, though its practice varies from discipline to discipline. This is why it is important not to preach but to develop an observatory where people can play with interactives: don’t tell people what to believe; let them explore the data and hope that they will begin to reflect on it. We should invest more in citizen science, and also in deliberative spaces that improve the dialogue between citizens and science institutions. Of course, there are many challenges to doing so in this moment of partisan-political instrumentalization of science.


How should universities step back from politicization and return to being objective spaces of enquiry? We can probably never be totally without politics, but we can be open and transparent. We can say: “this is what we gathered; now you can observe the data for yourself”.


There is also a geopoliticization of Covid-19 in the context of great-power rivalry; disinformation is used even by officials, will become more powerful with AI trolls, and increases citizens’ distrust. Since politicization and geopoliticization cannot be avoided, the balance of power, access to a variety of information sources, and the AI literacy and education of citizens are key.


Session 1: AI, DW and Indigenous Societies

Chairperson: Evgeny Kuzmin (Russia)

The world is changing rapidly. Our digital world is flooded with useful and reliable materials, but also with false information. We should try to be positive, but a huge number of ignorant people, or people with suspicious intentions, can circulate content and impose it on the public without obstacle. ICTs will be used to spread fake news, manipulate behaviour and conduct cyber and information wars. Cybercrimes have become a daily occurrence, as indicated by Konstantin Pantserev. How, then, do we ensure digital wellbeing for all people, and how can AI be used to advance the wellbeing of all people?

·         Gilvan Müller de Oliveira, UNESCO Chair on Language Policies for Multilingualism, UFSC (Brazil), AI for building digital well-being in multilingual and multimodal digital literacy: strategies in an academic context

·         R. Siva Prasad and J. Prabhakar Rao, University of Hyderabad (India), Role of AI and DW in the Wellbeing of Indigenous Communities

The impact of AI in education is vividly noticeable in pedagogical, informational, communicative and administrative processes. Digital media are making traditional literacy practices obsolete. Though the COVID-19 pandemic has enormously enhanced the demand for digital literacy, its reach to a large part of the population is yet to be achieved. Moreover, remote learning and teaching have created discomfort for both students and teachers in using little-known tools. Appropriate policies are needed to reap the benefits of digital transformation; this will enable the promotion of multimodality and multilingualism.


The social and cultural ethos of indigenous and marginalized societies should be considered while designing AI systems, and the communities need to be involved in this process. The indigenous knowledge of the communities needs to be incorporated into the governance of AI technologies for realizing inclusive societies and for fulfilling the sustainable development goals. Technology is part of the sociocultural system; it is right to consider that AI is not just technical, but a techno-social system. In terms of AI and indigenous societies, we should focus on the social-structural implications of AI to achieve inclusive societies. We should also attempt to avoid the colonization of minds (indigenous peoples, the diversity of nations, cultures and languages) by “AI-engineered digital empires”, which is a key challenge of this century.


A discussion arose on “data fetishism”: the saying “Data is God” might qualify this era. Consider that data is necessary but not sufficient, and that “data” itself has to be problematised: what data, which categories, who, what, why, how, and so on. Of course, we need data-based evidence to contribute to political decision-making. An inherent concern with data fetishism is that we cannot treat the critical thinking of the humanities and social sciences as a minor science or a pseudoscience. It is important to recognise the value of humanities theories for thinking about society, the world, social inequalities, etc. We cannot assume that science is just evidence based on data; critical thinking is required to counter data-driven fetishism. This is why education is central to this discussion.


Session 2: AI, DW and Online Education

Chairperson: Gilvan Müller de Oliveira (Brazil)

·         Francis Ssekitto, Makerere University (Uganda), Staying safe while teaching and learning online in Library and Information Science training schools in Uganda: The case of Makerere University

·         Manas Ranjan Panigrahi and Shiffon Chatterjee, Commonwealth Educational Media Centre for Asia (CEMCA) ( India), Artificial Intelligence Integration in Online Learning: Experiences from a CEMCA MOOC


Issues of wellness and safety in online spaces have to be incorporated into the culture of staff and student orientation. “Unplanned e-learning” brought about by the pandemic is a source of many ethical problems in academic institutions. In Kenya, for instance, lecturers now carry more responsibility, including using their home spaces and other facilities without institutional support, and experiences of seeing students struggle to access learning without any psychological support mechanisms are rife. How can one address the problem of access to online learning and the problems faced by students from disadvantaged sections? Pertaining to connectivity, for example, some have negotiated zero-rating of online learning spaces.



In many people’s experience, recorded classes have affected teacher-student relationships. In France, for example, the lack of opportunity to ‘meet’ and interact with other students, which is essential for ‘lateral’ learning and for socialisation into learning, is a big issue with psychological implications. Social meetings should be integrated into online learning to facilitate informal exchange and ‘time out’; class time should integrate ‘social’ time, ‘meeting’ others, etc.

There is agreement on the need to create some “social spaces” within lessons. Some do it at the beginning of the lesson as students log in; at other times, parts of lessons are used to ‘evaluate’ previous lessons, and students are able to share their personal experiences.


Session 3: Promoting awareness of and developing tools for Digital Wellness

Chairperson: Atul Negi (India)

·         Ramesh Anumukonda, Chief Gamer and Founder A Plus Associates (India), Responsible use of Gamification In the age of “AI + ML + AR + VR”

·         Mr. Anil Rachamalla, Internet Ethics and Digital Wellbeing Founder – End Now Foundation (India), Responsible use of Artificial Intelligence based emerging technologies for Digital Wellness & Digital Dilemma

How can gamification of learning help disadvantaged students, particularly those who are visually impaired? If the data collected is largely from people’s consumer choices and behaviour, how would disadvantaged sections be represented in the modelling? On the question of whether gaming is a class-related phenomenon, the response was that play is a natural way to be, without boundaries; it transcends boundaries as well. Games are easy to learn and hard to master, but players can overcome challenges, which makes for experiential learning.



The promotion of responsible use of Artificial Intelligence must be ensured, especially in relation to the biases and threats to human dignity and people’s awareness of privacy-related aspects in the cyber world. We should consider continuing to develop AI responsible matrix tools and digital wellness tools.


Day Two, 25 March 2021

Session 4: Ethical Implications of AI

Chairperson: Coetzee Bester (South Africa)

·         Fatima Roumate, Professor of international public law, Mohammed V University, President of the International Institute of Scientific Research (Morocco), Ethics on AI and technological sovereignty

·         Atul Negi, University of Hyderabad (India), AI for Social Good – A Faustian Bargain


In the era of AI, new reforms are needed at different levels, considering the new identity of local and international societies and the emergence of new players, especially transnational corporations that have invested in AI more than some states. Therefore, sovereignty in AI is key. There is a need for government, corporations and institutes of education to work together. States must protect individual human rights, for that is not the duty of international corporations, which are profit-driven: companies are motivated by profit and demotivated by taxes. The aim is a call for action and new strategies suitable to the new world order announced by COVID-19 and the massive use of AI. This new world order urges international society to rethink international public law and international institutions, and to enhance the ethical framework insofar as it pertains to AI and other emerging technologies.


Other recommendations include 1) Fair Data Gathering Certification: All usage of AI in digital marketing etc. must certify that the data gathered has satisfied the requirements of data being given voluntarily and with the knowledge of those giving data. And 2) Certification of Developers of AI Tools and Products: Those who are building AI Tools and Software for use by the general public must certify that they know and understand the biases in data and ethical issues in the usage of their products by uninitiated or unaware users.

Session 5: Community Empowerment & Information Ethics

Chairperson: Vasuki Belavadi (India)

·         Thaiane Oliveira and Aline Paes, Fluminense Federal University (Brazil), Artificial Intelligence and Media and Information Literacy to tackle scientific disinformation

·         Hellen Amunga, Department of Library and Information Science, Faculty of Arts, University of Nairobi (Kenya), The need to promote a culture of peace and non-violence electioneering in Kenya through Social Media and AI


We should acknowledge the current epistemic crises, which result in distrust of epistemic institutions. This also leads to political and civil disputes, which disrupt our belief systems and complicate an already complex digital environment. To address this, two recommendations are made. 1) Human-centred approach: put the human being at the centre of the process; create methods that help people to have the independence, knowledge and insight to better judge the information they will accept as true; and create methods that help the process of communication and discussion between people. 2) Network-centred approach: recognise that individual initiatives and activities have a local and limited reach, which underscores the importance of networks and partnerships among civil society and democratic institutions, media, legislatures, academia, education, governments and NGOs.

The use of Social Media, Artificial Intelligence and any other technological innovations for peacebuilding in communities should be complementary to other efforts: socio-economic, legal and political. Information Ethics permeates all these variables; and there is a need for stakeholders to not only come up with the best mechanisms of using these innovations for peacebuilding but also to partner in ensuring a level of inherent responsibility towards community empowerment and peace in their development.

We should invest in systems to recognise and reward good actors over bad actors. Finally, forgery detection and authenticity infrastructure can help, but traditional journalism skills and fact-checking are currently of utmost importance, essentially underscoring the importance of MIL capabilities (which include the requisite skills, knowledge and attitudes).

Session 6: Round Table – Malicious Use of Artificial Intelligence: Challenging International Psychological Security (with the academic support of the International Research Group of Specialists on the Threats to International Psychological Security through the Malicious Use of Artificial Intelligence (Research MUAI))

Chairpersons: Evgeny Pashentsev and Darya Bazarkina (Russia)

·         Evgeny Pashentsev, Diplomatic Academy of the Ministry of Foreign Affairs, Russian Federation, Malicious Use of Artificial intelligence through Agenda-Setting: the Risks Are Rising

·         Marius Vacarelu, National School of Political and Administrative Studies, (Romania), Morality and AI tools in Political Campaigns,


 

·         Konstantin A. Pantserev, Saint-Petersburg State University (Russia), The Existing Practice of Malicious Use of AI in Sub-Saharan Africa

·         Pierre-Emmanuel Thomann, Eurocontinent-Brussels-Belgium, Lyon 3 Jean Moulin University (France), EU Main Policies and Paradigms Regarding AI, and Its Ability to Anticipate New Geopolitical Challenges through the MUAI and its Threat to IPS

·         Darya Bazarkina, Russian Presidential Academy of National Economy and Public Administration (Russia), MUAI and Terrorist Communication: Future Threats

Local, international and transnational cooperation are key themes in the dialogues pertaining to MUAI and IPS. “Wisdom belongs to everyone”, and that is why we must share best practices. International cooperation based on inclusiveness, respect and reciprocity will be better achieved with a better geopolitical balance regarding AI between global actors such as the US, China, Russia and EU member states. The EU should therefore focus more on questions of data sovereignty, geopolitical balance and policies to counter threats to international psychological security (IPS) from the malicious use of artificial intelligence (MUAI). As has been established, there are existing practices of MUAI in Sub-Saharan Africa and elsewhere; hence it is truly a global challenge.

The panellists suggested that there is a need to “support the development of national and international grant projects on the social consequences of the use of AI, including in the field of preventing threats to international psychological security associated with the malicious use of AI”. Serious problems require serious research, and this merits no less serious consideration.


Valedictory Address: International Instruments for AI and Ethics – convergences and divergences

Yves Poullet, IFAP Vice chairman in charge of the Info ethics WG (Belgium)

Chairperson: Prof. J. Prabhakar Rao (India)


Among more than 60 documents on AI ethics, four significant intergovernmental documents stand out: 1) the OECD Council of Ministers Recommendation on AI (2019); 2) the UNESCO Preliminary Report on the Draft Recommendation on the Ethics of Artificial Intelligence, adopted by the Group of Experts in September 2020 (to be discussed in April); 3) the Council of Europe CAHAI “Feasibility study on a legal framework for the creation, development and application of AI based on Council of Europe standards” (December 2020); and 4) the EU Parliament Resolution of 20 October 2020 containing recommendations to the Commission on a framework for the ethical aspects of artificial intelligence, robotics and related technologies. Ethics refers to acting pragmatically for the welfare of humans; codes of ethics address a number of principles of action and minimum standards to be followed. UNESCO suggests a strong international legal framework. According to CAHAI, soft-law instruments and self-regulation initiatives can nevertheless play an important role in complementing mandatory governance, especially where the interests of the different actors are more aligned and where no substantive risk of negative effects on human rights, democracy and the rule of law is present. It is to be noted that the EU Parliament has rightly demarcated between high-risk AI systems and other AI systems while formulating regulatory mechanisms.

In this regard, the risk assessment recommended by UNESCO includes impact on human rights, the rights of vulnerable groups, labour laws, the environment and ecosystems, while the mandatory risk assessment proposed by CAHAI emphasises tailoring mitigating measures to these risks. To ensure effective implementation of AI ethics, it is suggested to inform the public and conduct open public discussion to create trust in AI systems; this will ensure a participatory approach and the involvement of different stakeholders. Further, governments should adopt a regulatory framework that sets out a procedure for public authorities, in particular, to carry out impact assessments of AI systems in order to anticipate impacts, mitigate risks, avoid adverse consequences, facilitate citizen participation and address societal challenges. An independent administrative authority needs to be constituted by the Member States as a supervisory body.


Concluding Remarks:


The two-day conference concluded on the note that the promotion of Digital Wellness in AI development and implementation requires a deeper and more nuanced understanding of factors, such as culture, social justice and responsibility, law, inclusion, access, progress, and general human wellbeing. This conference considered potential solutions to the development of an equitable AI-based culture. The IFAP priorities, such as Information Ethics and Information Literacy, take on a new meaning in a “new-normal COVID-19 era”, and these should be central in the discussions pertaining to the design of emerging technologies.


Way forward:

The conference has proposed the following:

1) to translate the Hyderabad Declaration on AI and DW into different languages for its dissemination; 2) to form an international consortium on AI and DW; and 3) to organise an international conference on the IFAP Priority Areas on 07–09 September 2021.