The Ethics and Economics of AI: Lessons for Universities

By tackling the economic and ethical dimensions of AI, African leaders offer valuable insights for universities.

By: Adi Gaskell

While the precise impact of generative AI remains shrouded in a degree of mystery, what seems more certain is that it will continue to shape higher education. The debate is about ensuring that this influence is positive. For instance, the International Monetary Fund recently highlighted AI’s potential to exacerbate existing inequalities. For higher education, this translates into a pressing need to ensure AI enhances access to learning, fosters equitable outcomes and prepares students for a dramatically shifting job market.

During the 2024 IESE Alumni Conference, Rita Babihuga-Nsanze, chief economist at the Africa Finance Corporation, explored how AI could reshape global economies and local ecosystems. Her insights offered valuable lessons for higher education leaders navigating an era of rapid technological and societal change.

Lessons from Africa: Building Technological Sovereignty

Babihuga-Nsanze explained that there is generally a high degree of optimism around AI as a potential catalyst for economic growth across Africa. She highlighted similar transformations from mobile technology, which served as the first modern infrastructure in many places, allowing citizens to tap into the modern economy in various ways.

For instance, the African Union’s Agenda 2063 outlines the continent’s future aims. The strategy envisions Africa as a global powerhouse, though concerns exist around its apparent external focus rather than cultivating local AI communities. These communities are crucial for achieving technological sovereignty.

Universities, too, often rely on third-party edtech platforms. A better approach would be to foster in-house expertise through AI-focused research and development, allowing universities to ensure their technology solutions align with their specific values and priorities.

Overcoming the Deficit Model

If AI is to fulfill its transformational potential, it requires overcoming the so-called “deficit model.” This model focuses on what’s missing rather than what’s present, often leading to external solutions being imposed. William Easterly likens this dynamic to the “White Man’s Burden.”

The deficit model offers a cautionary tale for higher education. Universities have an opportunity to lead in ethical AI adoption by creating tailored systems that reflect their unique missions and serve their diverse student populations. However, this mission might falter if they rely solely on external solutions.

Empowering local stakeholders to devise and scale up proven solutions ensures that traditionally marginalized and underrepresented groups are included in AI policymaking. This inclusion underpins fair data governance, treating data as a shared digital commons.

Trust and Transparency in AI Systems

AI is increasingly deployed across society, from driver-assistance tools in cars to virtual assistants on our devices, and it is a growing presence in universities. For these tools to gain widespread acceptance, users must inherently trust them to be reliable. Ariadna Font Llitjós, CEO of generative AI company Alinia, emphasizes that the ability to control AI is central to fostering trust.

A recent INSEAD study shows that when AI feels autonomous, trust among users erodes. Previous generations of AI were rule-based, giving users a high degree of control. Modern AI, however, operates on deep learning, making it less predictable. If this trust is violated, users often feel a profound sense of betrayal, comparable to a betrayal in human relationships.

To address this, developers must make AI transparent and reliable while managing user expectations. In some cases, lowering the perceived autonomy of AI systems can be beneficial, particularly in fields like medicine, where overreliance on technology poses risks.

Universities as Ethical AI Leaders

For Font Llitjós, aligning AI with human values and societal norms is essential. She advocates that controlling AI should be considered a new human right. In universities, AI tools used for admissions, grading or resource allocation must be transparent and manageable by staff. Guardrails can help ensure that technology reflects institutional values and societal standards, preventing unethical behavior.

Ethics boards provide an excellent first step toward aligning AI development and deployment with university values. Embedding ethics into AI teaching and research ensures that graduates are both tech-savvy and ethically conscious.

Preparing Students for the AI Economy

AI is automating routine jobs and reshaping global supply chains. Universities must adapt by equipping students with skills that machines cannot replicate, such as critical thinking, creativity, and ethical reasoning. Partnerships with industries can also provide students with experiential learning opportunities, helping them understand AI’s impact across various sectors.

Babihuga-Nsanze underscores that AI’s power must be wielded as a tool of empowerment rather than division. Inclusive and proactive approaches allow universities to bridge gaps in provision and support for students while offering personalized learning pathways that enhance access for underserved populations.

Shaping Global AI Norms

Universities are uniquely positioned to influence global AI norms. Through interdisciplinary research, public engagement, and collaborations with policymakers, they can ensure AI serves society’s best interests. By prioritizing ethics over convenience, universities uphold their mission to advance knowledge and opportunity responsibly.

The Path Forward

As with many technologies, AI has the potential to perpetuate societal inequities or serve as a force for good. Universities must critically engage with AI to ensure its applications promote equitable and inclusive futures while amplifying diverse voices and visions. The lessons from Africa and beyond provide a compelling roadmap for higher education leaders ready to lead in this transformative era.

Adi Gaskell

Contributor

Adi Gaskell currently advises the European Institute of Innovation & Technology and is a researcher on the future of work for the University of East Anglia. Previously, he was a futurist for the sustainability innovation group Katerva and mentored startups through Startup Bootcamp. He is a recognized thought leader on the future of work and has written for Forbes, the BBC, the Financial Times, and the Huffington Post, as well as for companies such as HCL, Salesforce, Adobe, Amazon, and Alcatel-Lucent. When not absorbed in the tech world, Adi loves to cycle and get out to the mountains of Europe whenever possible.

