by Tobechukwu Ndunagu, Lead Counsel
Introduction
In early 2026, a new social-media “viral trend” swept platforms from Instagram to LinkedIn: users began uploading personal photos and prompts to artificial intelligence (notably OpenAI’s ChatGPT) to generate custom caricatures that reflect their personality, careers, interests, and even seemingly intimate biographical details. Although many participants describe it as a playful or ego-boosting exercise, cybersecurity and privacy experts are raising serious alarms.1
At first glance, these caricatures are amusing digital portraits intended to be exaggerated, colorful reinterpretations of self. But in a moment when AI systems integrate user data and algorithmically reshape it into expressive outputs, what feels like playful participation is, in reality, a consequential transfer of personal data, specifically biometric identity. As experts observe, this seemingly harmless engagement may carry long-term exposures to fraud, identity theft, deepfake exploitation, and future misuse of personal data.2
Cybersecurity Risks: Beyond Fun and Filters
1. Data Ingestion: What You Give, What They Keep
When users upload images or provide prompts that include personal details, the AI service ingests that content into its processing pipeline and, in many cases, its data storage. These images contain biometric data — such as facial features, proportions, and physical identifiers that cannot meaningfully be “taken back” once shared.3
Moreover, in the absence of transparent, enforceable deletion guarantees, uploaded content may persist in unknown stores for an undefined period. Even if a platform claims limited retention, the reality is that cloud-based systems and training back-ends can retain data far beyond user expectations.4
2. The Fraudster’s Gold: Combining Images and Metadata
Identity theft and account takeover threats are common concerns in cybersecurity. In the hands of bad actors, a high-quality AI-generated caricature paired with contextual data about a user’s job, location, hobbies, or social circle becomes a rich “attack surface” for social engineering. Fraudsters could impersonate a target, spoof their voice or image, or craft highly convincing phishing narratives that rely on intimate personal detail.5
These trends effectively encourage users to aggregate disparate pieces of personal information and make them public (a practice that would have once been labeled reckless in the cyber risk community).6
3. Normalization of Oversharing With AI
Privacy researchers warn that the caricature craze may have a more insidious effect than the individual image itself. By normalizing the sharing of personal photos and information with AI systems, the trend erodes users’ instinctive caution about data flows, effectively conditioning people to trade privacy for novelty. This behavioral shift may carry broader repercussions as AI adoption deepens across sectors.7
4. Biometric and Identity Risk
Photos contain more than appearances. Embedded metadata (such as time stamps, location tags, and camera details) and nuanced facial features can be used to correlate digital identities across platforms. In combination with AI’s data aggregation capabilities, this raises the risk of biometric profiling, automated identification, and persistent tracking at a scale previously seen only in intelligence or commercial surveillance contexts.8
5. Deepfake and Content Manipulation Risks
While the caricature trend itself is artistic, it sits squarely within the emerging landscape of generative content tools capable of producing sophisticated deepfakes. The technology underpinning caricature generation (large language and image models) is the same technology that fuels deepfake creation (a class of threat that already poses serious risks to personal reputations, corporate security, and democratic institutions worldwide).
Legal, Ethical, and Regulatory Considerations
Nigeria’s approach under the Nigeria Data Protection Act, 2023 (NDPA) aligns closely with global privacy architecture.9 Like the European Union’s General Data Protection Regulation (GDPR), the NDPA classifies biometric data used for unique identification as sensitive personal data, subject to heightened safeguards including explicit consent, purpose limitation, data minimization, and strict storage controls.10 Across the United States, while there is no single federal privacy statute equivalent to the GDPR, state-level laws similarly recognize the unique risks associated with biometric identifiers and impose consent, notice, and accountability obligations. The convergence is notable: facial data is no longer treated as casual information. It is regulated, risk-weighted data, and its processing carries legal consequences across jurisdictions.
The AI caricature trend may look harmless. But under the NDPA, uploading your face is not merely social participation. It is the processing of sensitive personal data. Section 65 of the NDPA affirms this position by defining sensitive personal data to include:
Biometric data for the purpose of uniquely identifying a natural person.
Facial images, particularly those capable of identification, fall within this category. Therefore, it is no longer ordinary data. It attracts heightened protection, and stricter conditions apply. Furthermore, under Section 30 of the NDPA, Sensitive Personal Data (including biometric data) cannot be processed unless specific conditions apply.
Processing is permitted only where:
- the data subject has given explicit consent;
- it is necessary for employment law obligations;
- it is necessary to protect vital interests;
- it relates to data manifestly made public by the data subject; or
- it is necessary for legal claims, public interest, health, etc.
This section imposes stricter requirements than ordinary personal data processing.
Because biometric data is sensitive, consent must be:
- freely given,
- specific,
- informed,
- unambiguous, and
- explicit.
This becomes particularly relevant when AI platforms collect facial images.
For AI Platforms, Core Data Protection Principles Still Apply
The core principles set out in Section 24 of the NDPA—lawfulness, fairness and transparency; purpose limitation; data minimization; accuracy; storage limitation; and integrity and confidentiality—are not abstract ideals. They apply directly (and with greater sensitivity) to AI platforms that collect, analyze, store or otherwise process personal data. These obligations closely mirror the foundational architecture of the GDPR, which similarly binds technology providers, including generative AI services, to process personal data for specific, legitimate purposes and to do so proportionately and securely. In the United States, although privacy regulation remains sectoral and state-driven, comprehensive statutes such as the California Consumer Privacy Act (CCPA)11 and biometric-specific regimes like the Illinois Biometric Information Privacy Act (BIPA)12 impose parallel transparency, consent and security obligations on companies deploying data-driven technologies.
Therefore, AI platforms are not operating in a regulatory vacuum. Across jurisdictions, modern privacy frameworks demand restraint, proportionality and accountability, particularly where biometric data is processed at scale. Yet many AI platforms still rely on broad, ambiguous terms of service that permit extensive use of uploaded content for training, improvement, and third-party sharing.13
For users, clicking “share” or submitting an image often means giving more than you realize: platforms may retain, analyze, and repurpose that data unless specific opt-out mechanisms are exercised. Without robust regulatory guardrails or enforceable user controls, individuals remain vulnerable to misuse far beyond the original, ephemeral social-media post.
Practical Advice for Users
Professionals and everyday users alike should consider these precautions:
- Limit Sensitive Uploads: Avoid supplying real, high-resolution face images or personal identifying details.
- Review Privacy Settings: Where possible, disable features that store your data or use it for training.
- Use Temporary or Anonymous Prompts: Avoid including core identifying details in prompts.
- Think Long Term: What feels like harmless participation today may become reputational or security exposure tomorrow. After sensitive information is transmitted into cloud-based systems, practical control over its deletion diminishes significantly.
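The first precaution above — limiting what a photo discloses — can be partly automated before an image ever leaves your device. As a minimal illustrative sketch (not a substitute for a vetted metadata-scrubbing tool), the following standard-library Python function walks the marker segments of a JPEG file and drops any EXIF (APP1) segment, which is where timestamps, GPS coordinates, and camera details typically live:

```python
def strip_exif(jpeg: bytes) -> bytes:
    """Return a copy of a JPEG byte stream with EXIF (APP1) segments removed.

    Walks the marker segments that precede the entropy-coded scan data;
    any APP1 segment whose payload begins with b"Exif" is dropped, and
    every other segment is kept unchanged.
    """
    if jpeg[:2] != b"\xff\xd8":
        raise ValueError("not a JPEG (missing SOI marker)")
    out = bytearray(b"\xff\xd8")
    i = 2
    while i < len(jpeg):
        if jpeg[i] != 0xFF:
            raise ValueError("corrupt marker stream")
        marker = jpeg[i + 1]
        if marker == 0xDA:           # SOS: compressed image data follows
            out += jpeg[i:]          # copy the remainder verbatim
            break
        # Segment length is big-endian and includes its own two bytes.
        length = int.from_bytes(jpeg[i + 2:i + 4], "big")
        segment = jpeg[i:i + 2 + length]
        is_exif = marker == 0xE1 and segment[4:8] == b"Exif"
        if not is_exif:
            out += segment
        i += 2 + length
    return bytes(out)
```

Note the limits of this approach: it removes embedded metadata, but the pixel data itself — including the facial geometry that makes an image biometric data — is untouched, which is why metadata stripping complements rather than replaces the other precautions.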
Conclusion
For years, privacy advocates urged caution in sharing personal data online. Yet AI tools are reframing data disclosure as creative participation. The more intimate the prompt, the better the result. The higher the image resolution, the sharper the output. The incentive structure rewards oversharing. Every uploaded selfie contains biometric identifiers: facial geometry, skin texture, distinguishing features. These are not passwords. They cannot be reset. When paired with the rich personal prompts many users include (job titles, locations, hobbies, educational background), the result is a neatly packaged identity profile. In the wrong hands, such data becomes a gift to fraudsters.
The world already battles sophisticated social engineering schemes. Fraudsters no longer rely solely on crude phishing emails; they exploit personal familiarity. A high-quality AI-enhanced likeness, combined with scraped professional details, can enable impersonation scams, synthetic identity fraud, or convincing account recovery attempts. The technology that creates playful portraits today is built on the same generative infrastructure capable of producing realistic manipulated media tomorrow.
Social media conditions us to believe trends are fleeting. Today’s caricature will be forgotten next month. But data is not so ephemeral. Even where AI providers state that uploads are not used for model training by default, users rarely interrogate retention periods, third-party processors, cross-border transfers, or security architecture. Screenshots circulate beyond their original platforms. Images can be scraped, cached, downloaded, and repurposed. A caricature meant for amusement can be extracted from its context and redeployed maliciously. And unlike a stolen credit card, a compromised face cannot be canceled.
For policymakers, the caricature craze underscores an urgent need for clarity around biometric data governance, algorithmic accountability, and cross-border enforcement cooperation. Regulators, including the Nigeria Data Protection Commission (NDPC), will increasingly confront questions about AI data collection and consent validity.
For corporations, especially financial institutions and fintech platforms operating in the rapidly digitizing economy, the proliferation of high-quality AI imagery heightens identity verification challenges. Know-Your-Customer systems built around facial recognition must evolve in a world where synthetic faces and stylized likenesses are ubiquitous.
For individuals, the assessment is direct: short-term participation may create long-term exposure. Generative AI offers extraordinary creative and economic opportunities. But as entrepreneurs, developers, and artists leverage these tools in transformative ways, digital maturity requires proportional caution.
End Notes
1. ChatGPT’s AI caricature social media trend could be a gift to fraudsters, experts warn https://www.euronews.com/next/2026/02/14/chatgpts-ai-caricature-social-media-trend-could-be-a-gift-to-fraudsters-experts-warn
2. AI caricatures go viral, but data privacy risks are not to be ignored https://www.bitdefender.com/en-us/blog/hotforsecurity/chatgpt-caricatures-trend
3. Digital Rorschach Test: Why are we obsessed with how AI “sees” us? https://cybernews.com/ai-news/chatgpt-caricature-ai/
4. Privacy warning as ChatGPT caricature trend sweeps social media https://en.roya.tv/articles/18762
5. ChatGPT’s AI caricature social media trend could be a gift to fraudsters, experts warn https://www.euronews.com/next/2026/02/14/chatgpts-ai-caricature-social-media-trend-could-be-a-gift-to-fraudsters-experts-warn
6. Privacy warning as ChatGPT caricature trend sweeps social media https://en.roya.tv/articles/18762
7. AI caricatures go viral, but data privacy risks are not to be ignored https://www.bitdefender.com/en-us/blog/hotforsecurity/chatgpt-caricatures-trend
8. This Viral AI Caricature Trend Is Taking over the World and Your Privacy https://beebom.com/viral-ai-caricature-trend-taking-over-your-privacy/
9. Nigeria Data Protection Act, 2023 (Act No. 37 of 2023)
10. The General Data Protection Regulation (Regulation (EU) 2016/679)
11. California Consumer Privacy Act of 2018, Cal. Civ. Code §§ 1798.100–1798.199
12. Illinois Biometric Information Privacy Act, 740 ILCS 14/1 (2008)
13. Privacy warning as ChatGPT caricature trend sweeps social media https://en.roya.tv/articles/18762