Authors:
(1) Muneera Bano;
(2) Didar Zowghi;
(3) Vincenzo Gervasi;
(4) Rifat Shams.
Table of Links
Abstract, Impact Statement, and Introduction
Defining Diversity and Inclusion in AI
Conclusion and Future Work and References
Abstract: As Artificial Intelligence (AI) permeates many aspects of society, it brings numerous advantages while at the same time raising ethical concerns and potential risks, such as perpetuating inequalities through biased or discriminatory decision-making. To develop AI systems that cater for the needs of diverse users and uphold ethical values, it is essential to consider and integrate diversity and inclusion (D&I) principles throughout AI development and deployment. Requirements engineering (RE) is a fundamental process in developing software systems by eliciting and specifying relevant needs from diverse stakeholders. This research aims to address the lack of research and practice on how to elicit and capture D&I requirements for AI systems. We have conducted comprehensive data collection and synthesis from a literature review to extract requirements themes related to D&I in AI. We have proposed a tailored user story template to capture D&I requirements and conducted focus group exercises to use the themes and user story template in writing D&I requirements for two example AI systems. Additionally, we have investigated the capability of our solution by generating synthetic D&I requirements captured in user stories with the help of a Large Language Model.
Impact Statement: As AI systems become increasingly prevalent in everyday life, ensuring they respect and reflect the diversity of society is of paramount importance. Failing to address this issue can lead to AI solutions that perpetuate societal biases, inadvertently sidelining certain groups and entrenching existing inequalities. Our research proposes a mechanism for AI developers engaged in responsible AI engineering to seamlessly integrate diversity and inclusion principles during system development, ensuring AI decisions and functionalities uphold ethical standards. Our proposal has the potential to directly influence further research on how AI systems are conceived, designed, and developed, ensuring that they are inclusive of the needs of diverse users. On a social level, users can be more confident that advancements in AI technology will not come at the cost of marginalisation and potential discrimination.
Index Terms: Diversity, Inclusion, Requirements, Artificial Intelligence.
I. INTRODUCTION
The pervasive role of Artificial Intelligence (AI) in social interactions, from generating and recommending content to processing images and voices, brings numerous benefits but also necessitates addressing ethical implications and risks, such as ensuring equitable and non-discriminatory decision-making and preventing the amplification of existing inequalities and biases [1]. Diversity and inclusion (D&I) in AI involves considering differences and underrepresented perspectives in AI development and deployment while addressing potential biases and promoting equitable outcomes for all concerned stakeholders [1]. Incorporating D&I principles in AI can enable technology to better respond to the needs of diverse users while upholding the ethical values of fairness, transparency, and accountability [2].
Requirements engineering (RE) is widely acknowledged as an essential part of software development in general, and of developing AI systems in particular. RE includes the identification, analysis, and specification of stakeholder needs, ideally captured in a consistent and precise manner. By focusing on users' and stakeholders' needs, RE aims to contribute to user satisfaction and successful system adoption [3]. Applying traditional RE practices to AI systems presents new challenges [4], as these methods need to evolve to address the requirements of AI systems, including those related to data and ethics [5, 6].
To develop ethical and trustworthy AI systems, it is recommended to embed D&I principles throughout the entire development and deployment lifecycle [7]. Overlooking D&I aspects can result in issues related to fairness, trust, bias, and transparency, potentially leading to digital redlining, discrimination, and algorithmic oppression [8]. We posit that RE processes and practices can be tailored and adopted to identify and analyse AI risks and to navigate trade-offs and conflicts that may arise from neglecting D&I principles. For instance, RE can facilitate decision-making in scenarios where maximising inclusion may reduce performance or efficiency, or where data transparency about members of under-represented groups could compromise their privacy. By acknowledging diverse users and emphasising inclusive system development, RE can aid in balancing conflicting objectives and promote the development of ethical AI systems [9].
While a number of guidelines on ethical AI development exist [10] (e.g., addressing bias [11], fairness [12], transparency [13], and explainable and responsible AI [14]), the published literature shows a scarcity of research on D&I in AI, and to the best of our knowledge little has been published on the topic from an RE perspective. Our research aims to fill this gap and to explore the operationalisation of D&I in AI guidelines within the RE process.
Our research methodology encompasses three stages: 1) data collection and analysis from the published literature on D&I in AI to extract relevant themes, 2) proposing a tailored user story template, and 3) focus group exercises to explore the use of the extracted themes and user story template in specifying D&I requirements for AI systems. Furthermore, given that involving many stakeholders with diverse attributes in requirements elicitation is challenging and time-consuming, we decided to explore the utility of Large Language Models in generating user stories from the D&I in AI themes. After each focus group exercise, we used GPT-4 to generate D&I user stories. We aimed to examine how closely the user stories produced by human analysts and by GPT-4 are aligned in terms of the diversity attributes covered and the themes addressed.
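As a rough illustration of this last step only, the sketch below shows how D&I user stories might be drafted from the extracted themes with an LLM. This is a minimal sketch under our own assumptions, not the authors' actual prompt or tooling: the theme list, prompt wording, function name, and model identifier are illustrative, and it assumes the OpenAI Python client with an API key available in the environment.

```python
# Minimal sketch (our assumption, not the paper's actual procedure) of prompting
# an LLM to draft D&I user stories from themes extracted in the literature review.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Illustrative subset of D&I themes; the paper identifies 23 themes in total.
THEMES = ["accessibility", "language and cultural background", "gender identity"]

def generate_di_user_stories(system_description: str, themes: list[str]) -> str:
    """Ask the model for one D&I user story per theme for a given AI system."""
    prompt = (
        "You are a requirements analyst. For an AI system described as: "
        f"'{system_description}', write one user story per theme in the form "
        "'As a <user with a diversity attribute>, I want <goal>, so that <benefit>'. "
        f"Themes: {', '.join(themes)}."
    )
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative model identifier
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(generate_di_user_stories("an AI-based job candidate screening tool", THEMES))
```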
The major contributions of our work are:
• Addressing the research gap regarding the elicitation of D&I requirements for AI systems.
• Identifying 23 unique D&I in AI themes from a comprehensive literature review.
• Introducing a tailored user story template for capturing D&I requirements for AI systems (a purely illustrative sketch follows this list).
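For illustration only, the snippet below shows one possible shape such a D&I-aware user story could take, extending the classic "As a ..., I want ..., so that ..." form with an explicit diversity attribute. The field names and example values are our own assumptions; the template actually proposed in this paper is presented in the later sections.

```python
# Illustrative only: one possible rendering of a D&I-aware user story.
# Field names and example values are assumptions, not the paper's template.
from dataclasses import dataclass

@dataclass
class DIUserStory:
    user_role: str            # e.g. "job applicant"
    diversity_attribute: str  # e.g. "who is a non-native English speaker"
    goal: str                 # e.g. "have my application assessed on skills alone"
    inclusion_outcome: str    # e.g. "my language background does not bias the screening"

    def render(self) -> str:
        return (f"As a {self.user_role} {self.diversity_attribute}, "
                f"I want to {self.goal}, so that {self.inclusion_outcome}.")

print(DIUserStory("job applicant", "who is a non-native English speaker",
                  "have my application assessed on skills alone",
                  "my language background does not bias the screening").render())
```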
The structure of this paper is as follows: Section II establishes a foundation by defining diversity and inclusion within the AI ecosystem. Section III presents the research motivation underpinning our work. Section IV details our research methodology, while Section V offers a summary of the results. Section VI delves into a discussion of the findings. Lastly, Section VII concludes the paper and proposes future research directions.
This paper is available on arxiv under CC 4.0 license.