Experts have called for the localisation of artificial intelligence (AI) models, stronger regulatory frameworks, and closer collaboration between academia, industry, and government to accelerate responsible AI adoption in Nigeria.
The call was made at a postdoctoral fellowship panel session convened by the New Thoughts Media Support Foundation (NTMSF), which brought together leading researchers and practitioners to examine the country’s evolving AI landscape.
Dr. Aderonke Lawal highlighted the rapid but largely superficial adoption of AI across Nigeria. She noted that while Nigerians are enthusiastic users of AI tools in education, business, and creative industries, many lack a deep understanding of how these technologies work.
“Nigerians are very enthusiastic about embracing technology,” she said. “But what we’re seeing is a surface-level adoption. People use these tools without fully understanding their inner workings, which poses potential risks.”
She identified key barriers such as unreliable infrastructure, limited AI literacy, underrepresentation of local languages in global AI models, and the absence of clear regulatory guidelines.
Dr. Kayode Odeyemi emphasised that Nigeria’s AI journey is marked by a mismatch between enthusiastic usage and the absence of localised solutions.
“We are using these tools, but without localising them to our cultural and linguistic contexts,” he said. “To bridge this gap, we must build models that reflect our norms, languages, and cultural logic.”
He argued that localisation would help address issues such as language bias and misclassification, which often occur when foreign-trained AI models encounter Nigerian linguistic and cultural nuances.
Dr. Odeyemi also stressed the importance of developing coherent national policies to align the efforts of different innovation centres and institutions.
Dr. Lawal, whose research focuses on AI-driven misinformation, warned that AI-generated content is blurring the line between fact and fiction, complicating efforts to combat fake news in a country already grappling with low media trust and widespread illiteracy.
She proposed three strategies: building a verification-first culture in newsrooms, improving AI literacy among journalists and media practitioners, and promoting transparency and traceability in AI use, including clear disclosure when AI tools are involved in content creation.
“Seeing is no longer believing,” she cautioned. “We need systems that make the media accountable and ensure accuracy over speed.”
Looking ahead, Dr. Lawal identified stronger collaboration between academia and industry as the single biggest opportunity for Nigeria’s AI ecosystem.
“Many academics work in isolation, and industry players often develop solutions without leveraging academic research,” she said. “If we can build innovation pathways through universities and bring all stakeholders—including policymakers and legal experts—to the table, we can move beyond consumption to creating our own AI models.”
Dr. Jibril Abdullahi, who sent in his remarks virtually, outlined a four-pillar framework for effective AI development in Nigeria. The framework focuses on:
data application tailored to local contexts; infrastructure and enabling environments; rigorous evaluation and ethical standards; and inclusive adaptation through stakeholder feedback.
He emphasised the role of government, academia, industry, and civil society in ensuring that AI solutions address real societal problems.
“Despite the presence of AI in Nigeria, there’s still a lot of foundational work to be done,” Dr. Abdullahi noted. “Collaboration and policy alignment are essential.”