The National Information Technology Development Agency (NITDA) has issued a high-priority cybersecurity advisory, warning Nigerian professionals and organisations of “serious vulnerabilities” discovered in the latest versions of OpenAI’s artificial intelligence models, specifically GPT-4 and GPT-5.
In a statement released via its official channels, the agency revealed that these flaws could allow malicious actors to manipulate AI outputs and gain unauthorised access to sensitive user data.
The agency’s technical analysis highlighted seven key security gaps that leave users exposed. According to NITDA, the most pressing threat involves indirect prompt injection. This technique allows attackers to hide malicious instructions within everyday digital content, such as social media comments, blog posts, or shortened URLs.
When a user asks the AI to summarise a webpage or browse the internet, the model may unknowingly execute these hidden commands.
This can lead to data exfiltration, where private information is secretly sent to external servers, or memory poisoning, a sophisticated exploit where the AI’s memory is altered over time to cause persistent bias.
Furthermore, attackers may use markdown rendering bugs to bypass safety filters, tricking the AI into generating restricted or dangerous content.
While OpenAI has acknowledged several of these issues and implemented initial patches, NITDA cautioned that Large Language Models still fundamentally struggle to distinguish between legitimate user prompts and cleverly disguised malicious code.
The agency noted that the risk is particularly high for attacks where a user can be compromised simply by having the AI process a malicious search result without any further action required from the human operator.
To mitigate these risks, NITDA is urging the Nigerian tech community to adopt a cautious approach when interacting with AI tools.
The agency recommends that users always verify AI-generated outputs, especially when they involve technical or sensitive data. Organisations are also advised to stay alert to suspicious online content and monitor AI behaviour for any signs of unauthorised actions or data leaks.
NITDA reaffirmed its commitment to monitoring the evolving AI landscape to protect Nigeria’s digital economy from emerging cyber threats.



