Challenge 7: Preventing inequalities

The objective of this challenge is to pay further attention to the ways in which Artificial Intelligence (AI) technologies can produce positive effects by reducing existing social, economic and cultural differences. There are several areas in which the integration of AI solutions could reduce social inequalities [1]. These include:

  • education and training;
  • knowledge and guarantee of individual rights;
  • health and disability, understood as support for situations of hardship.

Intelligent learning-support systems could make a significant contribution in the school sector. There is a long tradition of using computers for these purposes, from Computer Assisted Instruction (CAI) systems to Intelligent Tutoring Systems (ITS). An ITS always contains a student model, understood as a knowledge base in which the student's characteristics and knowledge are explicitly represented. Such a system plays a supporting role: it complements traditional teaching and helps to fill the learning gaps of students with cognitive difficulties.
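
By way of illustration, here is a minimal sketch of what such a student model might look like, assuming a simple "overlay" representation in which an estimated mastery level is kept for each skill; the class, its fields and the update rule are hypothetical, not taken from any specific ITS:

    from dataclasses import dataclass, field

    @dataclass
    class StudentModel:
        """Illustrative overlay student model: an explicit representation
        of the learner's estimated mastery of each skill (hypothetical)."""
        student_id: str
        mastery: dict = field(default_factory=dict)  # skill -> estimate in [0, 1]

        def update(self, skill, correct, rate=0.2):
            # Nudge the estimate toward 1.0 after a correct answer,
            # toward 0.0 after an incorrect one (simple exponential update).
            current = self.mastery.get(skill, 0.5)
            target = 1.0 if correct else 0.0
            self.mastery[skill] = current + rate * (target - current)

        def weakest_skills(self, n=3):
            # Candidates for remedial exercises: the lowest-mastery skills.
            return sorted(self.mastery, key=self.mastery.get)[:n]

    model = StudentModel("student-042")
    model.update("fractions", correct=False)
    model.update("reading", correct=True)
    print(model.weakest_skills(1))  # -> ['fractions']

A tutoring system would consult something like weakest_skills to select remedial exercises, which is how it can target individual learning gaps rather than delivering the same material to everyone.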

Another point of intervention in the school sector is the reduction of the linguistic gap. Adequately designed real-time translation services could help close the gap created by new waves of migration, thus offering valuable support for study [2]. Artificial Intelligence technologies could also play a decisive role in the battle against functional illiteracy [3].
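
As an illustration of how accessible such translation services have become, the sketch below loads an open neural machine translation model through the Hugging Face transformers library; the library and the Helsinki-NLP/opus-mt-en-it checkpoint are assumptions of this example, not systems named in the text:

    from transformers import pipeline

    # Load an open English-to-Italian model (the Helsinki-NLP/opus-mt-*
    # checkpoints cover many language pairs; this choice is illustrative).
    translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-it")

    result = translator("The lesson starts at nine o'clock in room twelve.")
    print(result[0]["translation_text"])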

Furthermore, AI could be applied to overcome the limits imposed by the need for specialist knowledge to carry out certain activities. AI systems could broaden access to information and to knowledge of individual rights, and could facilitate the ways in which those rights are exercised by people who live in difficult conditions and lack such knowledge, thus contributing to reducing discrimination.

This is a very important area of work, one that requires appropriate awareness-raising and cultural promotion.

As for the disability sector, some interesting solutions stand out because they can guarantee easier and more usable access to services, thus improving individuals' quality of life. This is the case, for example, of integrated speech synthesizers for visually impaired persons, which could be combined with automatic editing programs able to recall previous communications and propose draft texts, or of experiments involving people suffering from degenerative diseases, such as ALS, which provide communication systems that complete and facilitate the communication process.

Considering the types of AI solutions already known, digital assistants could fill gaps across various categories: for example, thanks to AI, conditions such as dyslexia could be monitored and addressed through digital assistants that perform some of the functions of a speech therapist or psychologist.

The challenge of inequalities should also be tackled from the perspective of preventing existing inequalities from increasing. There are two levels of potential discrimination: one involving access to and use of AI technologies, and one induced by the AI systems themselves, based on race, gender and other social factors.

Action is therefore needed to ensure access to AI tools and solutions, as well as awareness in their use, so that the benefits of these technologies do not accrue only to certain categories. We must avoid thinking that AI is a value in itself, especially if its use is not accompanied by appropriate interventions aimed at reducing the possibility of creating further gaps. It must also be avoided that AI technologies themselves lead to inequalities.

A Public Administration committed to the paradigm of social responsibility should not create situations in which the most advanced contact channels, which are also the simplest and guarantee the greatest accessibility of services, become the exclusive preserve of those who, by culture, inclination, social background or technological endowment, are more predisposed to such uses [4].

It is necessary that the Public Administration carefully manages the development of AI solutions in order to guarantee that:

  • they are inclusive, accessible, transparent and comply with legal requirements;
  • they do not have discriminatory profiles;
  • they are free from bias.

In recent times, one of the most active research areas in the field of AI has been the study of bias [5], both from a more formal statistical point of view and from a broader legal and regulatory perspective. In a positive scenario, AI systems can be used to augment and improve human judgement and to reduce our conscious or unconscious biases. However, the data, algorithms and other design choices that shape AI systems can reflect and amplify the cultural assumptions of a given historical moment and, consequently, its inequalities.

Consequently, biases become the basis for decisions, favouring some scenarios over others and creating disparities and an uneven distribution of opportunities [6].
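
To give a concrete sense of the "formal statistical point of view" mentioned above, the sketch below computes one simple fairness indicator, the demographic parity gap (the difference in positive-decision rates between groups); both the toy data and the choice of metric are assumptions of this example, not a method prescribed in the text:

    from collections import defaultdict

    def demographic_parity_gap(decisions):
        """decisions: iterable of (group, positive_outcome) pairs.
        Returns the gap between the highest and lowest positive rate."""
        totals = defaultdict(int)
        positives = defaultdict(int)
        for group, positive in decisions:
            totals[group] += 1
            positives[group] += int(positive)
        rates = {g: positives[g] / totals[g] for g in totals}
        return max(rates.values()) - min(rates.values()), rates

    # Illustrative data: (group, automated decision) pairs.
    sample = [("A", True), ("A", True), ("A", False),
              ("B", True), ("B", False), ("B", False)]
    gap, rates = demographic_parity_gap(sample)
    print(rates)                 # {'A': 0.666..., 'B': 0.333...}
    print(f"gap = {gap:.2f}")    # a large gap flags a potential disparity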

However, it is necessary to expand bias research and mitigation strategies beyond a strictly technical approach of this kind. Biases, by their nature, are structural, long-term distortions that require deep interdisciplinary research to be confronted. Addressing and resolving the critical issues linked to bias therefore necessarily requires interdisciplinary collaboration and methods of listening that cut across different disciplines [7].

This is where the most important battle in the prevention of inequalities is fought. This is the context in which the Public Administration has the task of intervening, steering the development of AI solutions in full awareness of their enormous potential for promoting wider equity and reducing the gaps that exist in our community.

CASES OF BIAS

Some cases of bias have recently figured prominently:

  • A case of unconscious bias/discrimination is, for example, the percentage of male personnel developing AI services compared to the female percentage (ref. Global Gender Gap Report 2017, WEF: https://assets.weforum.org/editor/AYpJgsnL2_I9pUhBQ7HII-erCJSEZ9dsC4eVn5Ydfck.png).
  • Software used in United States courts to predict which individuals are more likely than others to become "future criminals" has shown bias/prejudice against people of colour (https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing).
  • The extensive use of NLP techniques is rapidly showing how strongly the vocabularies of the most spoken languages are affected by gender bias (https://www.technologyreview.com/s/602025/how-vector-space-mathematics-reveals-the-hidden-sexism-in-language/).
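
The last of these cases can be probed directly. A minimal sketch, assuming the gensim library and its downloadable glove-wiki-gigaword-50 word vectors (choices of this example, not of the article cited above), shows how gender associations surface in embedding arithmetic:

    import gensim.downloader as api

    # Small pretrained embedding from the gensim-data repository
    # (the model choice is an assumption of this sketch).
    vectors = api.load("glove-wiki-gigaword-50")

    # The classic analogy probe: "doctor" - "man" + "woman" = ?
    # Stereotyped associations in the training corpus surface here.
    print(vectors.most_similar(positive=["doctor", "woman"],
                               negative=["man"], topn=3))

    # How strongly do occupation words lean toward "he" versus "she"?
    for job in ["engineer", "nurse", "programmer", "teacher"]:
        lean = vectors.similarity(job, "he") - vectors.similarity(job, "she")
        print(f"{job:>10}: leans {'he' if lean > 0 else 'she'} ({lean:+.3f})")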

Footnotes

[1] Ref. the "Technology Challenge".
[2] The use of artificial intelligence in the service of machine translation is now widespread (for example, Google Translate and DeepL).
[3] Ref. https://www.compareyourcountry.org/pisa/country/ITA?lg=en.
[4] Ref. Art. 8 of the Digital Administration Code (Legislative Decree no. 82/2005).
[5] Ref. the "Ethical Challenge".
[6] Episodes of this kind have occurred in many cases: in rating algorithms, in the assignment of gig-economy jobs and, in general, in algorithmically mediated work.
[7] Ref. AI Now, "Expand AI bias research and mitigation strategies beyond a narrowly technical approach", 2017, p. 2.