The Rise of AI Impersonations and Executive Fraud Risks
- Apr 24
- 6 min read
Threat actors have increased their deployment of fraudulent online investment scams by creating artificial intelligence (AI)-generated social media impersonations of high-profile investment professionals and successful executives. This type of impersonation fraud follows a multi-pronged approach.
Operating across the Meta ecosystem of Instagram, Facebook, and, most commonly, WhatsApp, threat actors create accounts impersonating well-known, high-net-worth investment professionals and place social media advertisements that leverage the name, likeness, and reputation of the impersonated executives to attract victim “investors.” Promises of incredible financial returns, paired with the fear that the investment group has a limited membership and timeframe to join, create strong emotional pressure for victims to engage.
Once hooked, victims are redirected to encrypted messaging channels such as WhatsApp, where “team members” of the impersonated executive (e.g., assistants, directors, chief investment officers) pressure them into paying initial membership fees or investing in sham stocks that purportedly generate incredible returns. AI-generated content supports this engagement phase, keeping victims enthralled by the illusion of receiving guidance directly from a prestigious investor. Throughout the process, fraudulent “team member” actors repeatedly request additional funds to maintain group membership. By the time a victim realizes the extent of the scheme, the money is unrecoverable.
Executive impersonations have reached a historic high, accounting for 54% of observed impersonation cases and surpassing even brand impersonations in sophistication and believability. Insite continues to observe rising financial fraud leveraging AI technologies across social media and encrypted platforms. Malefactors continue to deploy new tactics and subterfuges to keep the social media impersonations active, including linking accounts, posting seemingly authentic pictures, and changing usernames or display names to stay one step ahead of the host network. These masquerades pose severe risks to the impersonated organizations and their executives, including reputational damage, unsolicited communications, and emotionally volatile victims.
These impersonation scams show no signs of abating; if anything, they are becoming more prevalent. On March 18, the U.S. Securities and Exchange Commission published the following post on X addressing the influx of impersonations.
Investor Alert: Investors should never rely solely on information from group chats when making investment decisions. Be wary of unsolicited investment advice from unfamiliar individuals, as this is a common starting point for scams.
Emerging AI Impersonation Patterns
A recent review of impersonation trends among numerous C-level executives in the asset management sector shows striking similarities in profile construction, language, content, and posting habits. While it is possible that individual scammers are using shared templates, scripts, or AI-generated content, similarities across each profile suggest that a single fraud ring or an automated impersonation platform may be responsible.
Key Similarities Identified
Formulaic bios stating exact ages and career histories featuring a decades-long background in investment.
Frequently overlapping content, including near-identical stock recommendation lists, matching disclaimers, and repeated opening lines such as:
“People ask: ‘Why don’t you charge? I’ve made enough...’”
“My monthly income is approximately…”
Duplicate visuals, including recycled handwritten trading notes and charts.
Synchronized posting rhythms, including daily stock lists and multi-post content dumps at the same time of day.
These patterns are consistent with scalable automated impersonations, meaning the volume of fraudulent accounts could rise quickly and extensively. Perpetrators also historically pivot and cycle whom they are targeting, meaning new executives may face a wave of impersonations without forewarning.
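The overlapping bios and repeated opening lines described above are exactly the kind of signal a simple text-similarity check can surface at scale. A minimal sketch using only the Python standard library (the account IDs and bio strings are hypothetical illustration data, not real accounts):

```python
from difflib import SequenceMatcher
from itertools import combinations

# Hypothetical scraped profile bios; in practice this data would come
# from a social media monitoring pipeline.
bios = {
    "acct_1001": "People ask: 'Why don't you charge? I've made enough...'",
    "acct_2044": "People ask: 'Why dont you charge? I have made enough...'",
    "acct_3310": "Award-winning chef sharing daily recipes and kitchen tips.",
}

def similarity(a: str, b: str) -> float:
    """Ratio in [0, 1]; 1.0 means the two texts are identical."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def flag_near_duplicates(profiles: dict[str, str], threshold: float = 0.8):
    """Return account pairs whose bios exceed the similarity threshold."""
    flagged = []
    for (id_a, bio_a), (id_b, bio_b) in combinations(profiles.items(), 2):
        score = similarity(bio_a, bio_b)
        if score >= threshold:
            flagged.append((id_a, id_b, round(score, 2)))
    return flagged

print(flag_near_duplicates(bios))
```

A production system would pair a check like this with the other signals noted above, such as synchronized posting times and recycled images, before escalating accounts for takedown.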
Observed impersonation breakdown:
- Executive impersonations: 54.24%
- Brand impersonations: 43.09%
- Employee impersonations: 2.67%
The Impersonation Process
Perpetrators of impersonation scams use a multi-pronged approach that begins on social media. They steal passwords and seize existing accounts, using AI technology to repurpose the accounts to present a realistic profile of a prominent and successful executive. By leveraging existing pages, the bad actors circumvent social media protocols that would have flagged the account as fraudulent.
Using an existing account also increases the credibility of the bogus profile, thanks to the seized account’s historical record. Keeping the username and the display name distinct from one another further helps the profile slip past the host social network’s automated impersonation detection.
Then, via the impersonated account, the perpetrators generate advertisements leveraging the name or likeness of a key executive to attract victim “investors.” They pay for the social media platform’s ad placements, gaining greater visibility than a run-of-the-mill scam. This can cascade: on Meta, the use of Advantage+ allows ads to circulate across both Facebook and Instagram at a high rate due to the interoperability of the platforms and the use of AI targeting.
When victims engage with the ad or the impersonated account (e.g., by clicking on a link or direct messaging the perpetrators), they are redirected to messaging apps with end-to-end encryption, like WhatsApp or Signal. In this space, the bad actors offer victims trading advice, bitcoin, stocks, or other inducements that involve trading something “of value” for real cash or cash equivalent, resulting in the theft of the victim’s money.
The average social media user may fall victim to these scams for a few reasons. First, the perpetrators leverage the executive’s bona fides to commit the fraud. Additionally, the accounts appear more legitimate because they run as paid ads on Facebook, Instagram, LinkedIn, and X (formerly Twitter). Finally, because perpetrators use AI-generated images and even audiovisual content, obvious red flags are less prevalent.
AI-generated impersonations and subsequent scams persist due to the severe decrease in resources that social media networks like Meta deploy for detection and removal of stolen or fraudulent accounts: there has been a greater than 90% decrease in headcount across Meta’s Trust and Safety Teams over the past several years.
Therefore, these platforms have been unable to remove impersonation accounts swiftly, even when required to do so by their own terms and conditions.
Impacts
Because AI-generated impersonations are sophisticated and believable, victims may become emotionally volatile, especially if they suffer severe financial losses. Some may even seek to confront the executives they believe defrauded them.
For example, in 2020 a cryptocurrency executive received violent threats after scammers used AI impersonation tactics to steal thousands of dollars from a victim. Believing the impersonated CEO was responsible, the victim threatened the executive and his family.
Financial and reputational risks are equally alarming. In Hong Kong, a finance worker fell victim to an AI-generated deepfake video, causing a $25 million loss. Additionally, brand reputations can be severely damaged by fake content attributed to executives. A well-known example occurred when a Twitter impersonation of Eli Lilly announced insulin was free, resulting in a 4.37% stock drop.
Recommendations
The realm of AI-generated impersonations presents a rapidly evolving landscape of security risks for organizations and their executives. As AI technology continues to advance, perpetrators will continue to create more sophisticated and convincing scams.
To mitigate the escalating risks posed by AI-generated impersonations, Insite recommends a multi-layered security strategy:
Deploy Fraudulent Impersonation Mitigation Services
Fight fire with fire: Monitor digital footprints using a platform that leverages artificial intelligence to quickly identify impersonations and expeditiously remove them. Insite offers these capabilities for clients daily. Please contact us to learn more about our capabilities.
Domain Monitoring
Identify and remediate typosquatting or malicious redirection tactics that bolster deepfakes.
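Typosquatting monitoring typically starts by enumerating lookalike variants of a brand’s real domain and checking whether any have been registered. A minimal sketch of candidate generation (the domain `example-capital.com` is purely hypothetical; a real pipeline would feed the candidates into WHOIS/DNS lookups):

```python
# Generate common typosquat candidates for a brand domain so that a
# monitoring job can check whether any have been registered.
def typosquat_candidates(domain: str) -> set[str]:
    name, _, tld = domain.rpartition(".")
    candidates = set()
    # Character omission, e.g. examle-capital.com
    for i in range(len(name)):
        candidates.add(name[:i] + name[i + 1:] + "." + tld)
    # Adjacent-character transposition, e.g. exmaple-capital.com
    for i in range(len(name) - 1):
        swapped = name[:i] + name[i + 1] + name[i] + name[i + 2:]
        candidates.add(swapped + "." + tld)
    # Character duplication, e.g. exxample-capital.com
    for i in range(len(name)):
        candidates.add(name[:i] + name[i] + name[i:] + "." + tld)
    # Common alternate TLDs
    for alt in ("net", "org", "co", "info"):
        candidates.add(name + "." + alt)
    candidates.discard(domain)  # never flag the legitimate domain itself
    return candidates

candidates = typosquat_candidates("example-capital.com")
```

Commercial brand-protection services apply far richer mutation sets (homoglyphs, keyboard-adjacency swaps, internationalized domain tricks), but the principle is the same: enumerate, resolve, and alert on newly registered lookalikes.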
Remove Online Personally Identifiable Information
Reduce the personal data available for fraudsters to construct realistic impersonations. Insite’s Personal Information Removal (PIR) program helps clients eliminate exposed PII from data-broker sites.
Publish a Clear Public Warning Statement
Post a conspicuous notice on your website and your professional social media accounts clarifying that executives and employees will not contact individuals via social media or encrypted apps.
Verify Your Accounts; Use Authenticated Content
For your brand and key executives, set up and verify social media accounts with the host providers to fast-track takedown requests of impersonations. Implement commercial social media management tools integrated with your own company’s identity and access management tools (e.g., Okta, Microsoft) and standards to prevent account takeover.
For visual content, enable digital watermarking, verification markers, chain-of-custody logging, and content attribution tools such as YouTube Content ID or Adobe Content Authenticity Initiative. These tools either prevent the dissemination of copyrighted material on certain platforms, or they make it easier to detect and remove AI-generated impersonations.
Follow Best Practices for Unwanted Solicitations
a. Scrutinize unexpected emails; do not reply, open attachments, or click links.
b. Document suspicious phone calls and texts; provide no personal information.
c. Do not open suspicious physical mail; notify Corporate Services immediately.
d. Insite offers assessment and mitigation services for our clients who receive concerning/threatening communications.
Strengthen Personal Security
Use robust cybersecurity tools, including antivirus software, VPNs, and webcam protection, for home networks, laptops, and smartphones.
Educate Employees
Share details of impersonation schemes internally and outline clear reporting pathways for suspicious activity. Use a multi-person approval process and, where possible, a “secret code” verification system for larger transactions to reduce the likelihood of a financial scam succeeding.


