
DEI should be at the center of AI tools deployed in Africa’s healthcare systems

By Frank Ssekamwa

DEI stands for Diversity, Equity, and Inclusion, a framework that underpins the representation and participation of underrepresented communities. African health systems are characterized by several structural and systemic biases that lead to the marginalization and discrimination of underrepresented communities, including women and young adolescents, sexual minorities, and persons living with disabilities. The introduction of Artificial Intelligence (AI) has been seen as an opportunity to address these biases. However, there are also concerns that AI might entrench existing biases and lead to more discriminatory health outcomes. Considering DEI in the use of AI therefore presents an opportunity both to address existing discriminatory practices within healthcare systems and to mitigate potential biases in AI tools.

African healthcare systems currently lack DEI owing to prejudices based on religion, age, sex, gender, culture, and socioeconomic status, among others. One example is the negative attitude of healthcare professionals towards patients with mental health challenges or early pregnancies. It is also evident in the stigmatization and ill-treatment that HIV patients suffer in health facilities and communities. Studies on adolescents and Sexual and Reproductive Health Rights (SRHR) reveal that female healthcare workers often display hostile attitudes towards adolescents trying to access SRH services and products. Gender minorities are yet another group that faces marginalization and discrimination in accessing healthcare in Africa.

Such discrimination is legitimized by colonial-era legal, policy, and institutional frameworks that entrench anti-abortion and anti-LGBTQ positions, restrict SRH services for adolescents, and bar patients with health challenges related to these classes and orientations from accessing healthcare. As a result, data collection and reporting cannot capture certain diseases, conditions, orientations, and services offered. Current AI training programs are consequently utilizing biased data, leading to biased and discriminatory AI decision-making.

AI tools are only as good as the data that trains them; as an elementary principle, therefore, AI datasets must be of high quality, accurate, reliable, and unbiased. In practice, however, most datasets are composed of what is perceived as ‘normal’ and ‘legal’ data, which is not necessarily ethical. Training AI with such data will lead to biased algorithms, a costly human oversight.
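The effect of skewed training data can be illustrated with a minimal, entirely hypothetical sketch: a single decision threshold fitted on data dominated by one group performs well for that group and poorly for an underrepresented group whose condition presents differently. The groups, scores, and numbers below are invented for illustration only.

```python
import random
random.seed(0)

def sample(group, n):
    # Hypothetical 1-D "symptom score": illness shifts the score by the
    # same amount, but the minority group's baseline score is higher
    # (e.g. the condition presents differently in that population).
    base = 0.0 if group == "majority" else 3.0
    return [(base + random.gauss(0, 1) + (2.0 if ill else 0.0), ill)
            for ill in [random.random() < 0.5 for _ in range(n)]]

# Training data heavily skewed toward the majority group.
train = sample("majority", 500) + sample("minority", 10)

# Fit one decision threshold on the pooled data (midpoint of class means).
ill_scores  = [x for x, ill in train if ill]
well_scores = [x for x, ill in train if not ill]
threshold = (sum(ill_scores) / len(ill_scores)
             + sum(well_scores) / len(well_scores)) / 2

def accuracy(data):
    # Predict "ill" whenever the score exceeds the learned threshold.
    return sum((x > threshold) == ill for x, ill in data) / len(data)

print("majority accuracy:", accuracy(sample("majority", 1000)))
print("minority accuracy:", accuracy(sample("minority", 1000)))
```

Because the threshold is learned almost entirely from majority-group records, healthy minority patients sit above it and are systematically misclassified as ill, which is the kind of silent failure the paragraph above warns about.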

Why DEI?

With DEI in AI tools, African health systems have a chance to prune the conscious and subconscious biases inherent in our healthcare systems. Without it, underlying structural and systemic biases will be inherited and exacerbated by AI tools. Countries have made several commitments to adopting technologies in a sustainable manner that respects fundamental human rights, including the rights to health, equal treatment, and freedom from discrimination. DEI provides an opportunity to develop and deploy technologies that align not only with human rights principles but also with the values, principles (such as Ubuntu), and socio-cultural contexts that shape African societies.

In Africa, AI in healthcare raises concerns about data bias, lack of representation, cultural insensitivity, unequal access, privacy, digital colonialism, and lack of transparency and accountability. In particular, these failings lead to inappropriate or offensive healthcare, biased diagnoses, and inequitable access due to language barriers in AI tools, while the lack of representation in medical datasets produces inaccurate or ineffective health outcomes for marginalized groups and perpetuates digital divides in healthcare access. Thus, there is a need to define what Africa understands by DEI to inform AI initiatives. Underrepresented and marginalized communities should participate in every step of AI development. Most of the leading AI companies are based in the Global North, and it might be hard for them to include the voices of these communities on the continent. But if it is for us, it must conform with our norms. The focus should be on the present and the future rather than on rewriting history: there is no right to manipulate historical facts such as colonialism and imperialism. For instance, there is no point in depicting a Black woman Pope, who never existed in history, in the name of DEI.
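One concrete, minimal way to make "lack of representation in medical datasets" auditable is to compare each group's share of a dataset against its share of the population it serves before any training begins. The groups, counts, and the 80% threshold below are hypothetical placeholders, not figures from the article.

```python
# Hypothetical audit: compare each group's share of a medical dataset
# with its share of the served population, and flag underrepresentation.
# Groups may overlap, so dataset shares need not sum to 1.
dataset_counts = {"women": 1200, "men": 2800,
                  "adolescents": 150, "persons_with_disabilities": 50}
population_share = {"women": 0.50, "men": 0.50,
                    "adolescents": 0.20, "persons_with_disabilities": 0.15}

total = sum(dataset_counts.values())

def representation_gaps(threshold=0.8):
    """Flag groups whose dataset share falls below `threshold` times
    their population share (an assumed, adjustable cut-off)."""
    flags = {}
    for group, count in dataset_counts.items():
        share = count / total
        flags[group] = share < threshold * population_share[group]
    return flags

print(representation_gaps())
```

A check like this, run before training, would surface exactly the kind of gaps the paragraph describes and give communities a concrete number to contest during co-creation.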

AI trends in Europe and the US have revealed several subjective interpretations attributed to DEI globally. For instance, the recent depictions by Google’s Gemini of “Black Nazis” and a “woman Pope” were a misinterpretation of DEI. Unfortunately, the majority of AI tools are developed outside Africa, in these regions, and there is a danger that AI will not only fail to align with African values but actively oppose them.

How it should be done

DEI by design: AI developers should adopt the concept of “DEI by design” throughout AI development and deployment. This requires AI companies to have diverse, balanced, and inclusive teams in which the various African diversities are represented, depending on the intended use of the AI tool. These teams should be present from data collection through AI training to deployment. The local contexts of the communities where the AI is likely to be deployed, such as social settings, economic conditions, and language diversity, should also be a priority.

Research and community engagement: AI developers should obtain accurate data for machine training. Research should therefore be conducted to build datasets that are accurate and devoid of biases. There should be community engagement at all levels, from data collection and AI training to regulation and deployment. Developers must ensure co-creation of AI models with communities to guarantee equity and inclusion of the various African diversities. Research should also be conducted to define what African communities understand by DEI and to find middle ground on the same.

Regulation: AI should be regulated at the national, regional, and global levels based on applicable human rights standards; DEI should be a legal obligation rather than a mere social responsibility. Relevant institutions should promulgate AI regulatory frameworks that require mandatory adoption of DEI within AI tools used in the health sector and within those that may impact healthcare outcomes, require constant periodic reporting, and establish special enforcement authorities ensuring prompt disclosure, adequate compensation, and other redress against responsible actors in cases of non-compliance or breach. There should also be meaningful African representation and decision-making in high-level organs such as the UN Advisory Body on AI, to ensure representation of African cultures, values, and diversities at all levels.

Adopting Afrocentric Approaches

According to M. Adebara et al., each language encodes knowledge about a people, their traditions, wisdom, and environment, as well as how they interact with the sum of the concepts in their own culture. Therefore, deploying AI tools built on Afrocentric approaches, such as multilingual pretrained language models (mPLMs) trained on Africa’s 2,000 languages, will help ensure inclusive and equitable AI-based decisions.

AI impact assessment: Impact assessments of AI should be a legal requirement, as a precautionary measure to gauge the effects of an AI tool. A risk-based approach should be adopted in the development and deployment of AI tools in the health sector. Treating health applications of AI as high-risk is paramount, since inherent bias and discriminatory AI-based decisions against African communities are likely possibilities.

Capacity enhancement: There should be a concerted effort to train the existing and future healthcare workforce on the practicalities of DEI in their work.

Advocacy for AI rights: Strategic advocacy for DEI within AI tools is key if this concept is to manifest practically rather than theoretically – the concept of patients’ AI rights. Patients must be informed of the extent to which an AI tool is used in making their healthcare decisions. Patients or users who are unhappy with AI decisions should have the option of seeking a human medical practitioner, since AI has the potential to become a threat to human existence.


Many underrepresented communities in Africa currently face structural and systemic bias, prejudice, discrimination, and marginalization within the healthcare system due to various factors. This ultimately affects the datasets used for training AI tools and could exacerbate these inherent structural and systemic discriminations. DEI brings a huge opportunity for African communities to reduce these biases and enhance human rights accountability, ensuring the sustainable deployment of AI and the enjoyment of the highest attainable standard of health and other human rights by all persons, including underrepresented communities. Therefore, DEI should be at the core of AI integration in Africa’s healthcare systems, in both the public and private sectors, and in organizations of all sizes.
