To shine a long-overdue spotlight on AI-focused women academics and professionals, TechCrunch has initiated a series of interviews that highlight the remarkable contributions of women to the AI revolution. These interviews are published throughout the year as the AI boom escalates, aiming to underscore important but often overlooked work. More profiles can be read here.
Chinasa T. Okolo is a fellow at the Brookings Institution within the Center for Technology Innovation's Governance Studies program. Prior to this, she was part of the ethics and social impact committee that contributed to Nigeria's National Artificial Intelligence Strategy. She has also advised multiple organizations on AI policy and ethics, including the African Union Development Agency and the Quebec Artificial Intelligence Institute. Recently, she earned her Ph.D. in computer science from Cornell University, where her research focused on the impact of AI on the Global South.
How did you initially become involved in AI? What drew you to the field?
My transition into AI stemmed from recognizing the potential of computational techniques to advance biomedical research and democratize healthcare access for marginalized groups. In my senior year at Pomona College, I began working with a professor specializing in human-computer interaction, which introduced me to the issues of bias within AI. During my Ph.D., I focused on understanding how these biases affect populations in the Global South, who make up a majority of the world's population but are often excluded from AI development.
Which achievements in AI are you most proud of?
I am particularly proud of my involvement with the African Union (AU) in developing the AU-AI Continental Strategy for Africa. This strategy aims to aid AU member states in the responsible adoption, development, and governance of AI. Drafted over a year and a half, it was released in late February 2024 and is now open for feedback, with formal adoption by AU member states slated for early 2025.
As a first-generation Nigerian-American who grew up in Kansas City, MO, and only traveled abroad during my undergraduate studies, I have always sought to focus my career on Africa. Participating in such impactful work early in my career fuels my enthusiasm to engage in similar initiatives that promote inclusive global AI governance.
How do you manage challenges in the male-dominated tech and AI industries?
Building a community of individuals who share my values has been vital in navigating the male-dominated tech and AI industries.
I have had the opportunity to witness significant progress in responsible AI, led by Black women scholars such as Timnit Gebru, Safiya Noble, Abeba Birhane, Ruha Benjamin, Joy Buolamwini, and Deb Raji. Connecting with many of these leaders over recent years has inspired me to persist in my work and highlighted the importance of challenging the status quo to make a meaningful impact.
What advice do you have for women aspiring to enter the AI field?
Do not be deterred by a non-technical background. AI is a multi-faceted field that requires insights from various domains. My research has been profoundly influenced by contributions from sociologists, anthropologists, cognitive scientists, philosophers, and other experts from the humanities and social sciences.
What are some of the most critical issues facing AI as it progresses?
One key issue is improving the equitable representation of non-Western cultures in leading language and multimodal models. Currently, most AI models are trained in English and rely on data that predominantly reflects Western contexts, thus excluding valuable perspectives from the majority of the global population.
Additionally, the race to develop larger AI models will exacerbate the depletion of natural resources and magnify climate change impacts, issues that disproportionately affect countries in the Global South.
What should AI users be mindful of?
Many publicly touted AI tools and systems exaggerate their capabilities and are not always effective. Tasks that people aim to solve with AI might be better addressed with simpler algorithms or basic automation.
Furthermore, generative AI has the potential to amplify the harmful biases seen in earlier AI tools, leading to negative outcomes for vulnerable communities. Educating individuals on AI’s limitations can foster more responsible use of these technologies. Enhancing AI and data literacy among the general public will be crucial as AI becomes increasingly integrated into society.
What is the best approach to responsibly developing AI?
The responsible development of AI requires a critical evaluation of both intended and unintended use cases. Developers of AI systems must reject their use in harmful scenarios such as warfare and policing and should seek external advice to determine the appropriateness of AI for other applications. Given that AI often exacerbates existing social inequalities, it is essential for developers and researchers to be mindful in creating and curating datasets for training AI models.
How can investors advocate for responsible AI?
There is widespread concern that the surge in venture capitalist interest in AI has led to the proliferation of dubious AI products, often referred to as “AI snake oil.” I concur with this view and believe that investors, alongside academics, civil society stakeholders, and industry leaders, must champion responsible AI development. As an angel investor myself, I have encountered numerous questionable AI tools. Investors should invest in AI expertise for evaluating companies and insist on external audits of tools showcased in pitch decks.
Anything else you would like to add?
The current “AI summer” has resulted in a proliferation of self-proclaimed “AI experts” who often distract from critical discussions on the real risks and harms of AI, providing misleading information about the capabilities of AI technologies. I urge individuals interested in learning about AI to be discerning of these voices and seek reputable sources for accurate information.