Watch out for AI-generated 'TikDocs' who exploit the public's trust in the medical profession to drive sales of sketchy supplements
25 Apr 2025 • 3 min. read

Once confined to research labs, generative AI is now accessible to anyone – including those with ill intentions, who use AI tools not to spark creativity, but to fuel deception instead. Deepfake technology, which can craft remarkably lifelike videos, images and audio, is increasingly becoming a go-to not only for celebrity impersonation stunts or efforts to sway public opinion, but also for identity theft and all manner of scams.
On social media platforms like TikTok and Instagram, the reach of deepfakes, together with their potential for harm, can be especially staggering. ESET researchers in Latin America recently came across a campaign on TikTok and Instagram where AI-generated avatars posed as gynecologists, dietitians and other health professionals to promote supplements and wellness products. These videos, often highly polished and persuasive, disguise sales pitches as medical advice, duping the unwary into making questionable, and potentially outright harmful, purchases.
Anatomy of a deception
Each video follows a similar script: a talking avatar, often tucked into a corner of the screen, delivers health or beauty tips. Dispensed with an air of authority, the advice leans heavily on "natural" remedies, nudging viewers toward specific products for sale. By cloaking their pitches in the guise of expert recommendations, these deepfakes exploit trust in the medical profession to drive sales, a tactic that's as unethical as it is effective.

In one case, the "doctor" touts a "natural extract" as a superior alternative to Ozempic, the drug celebrated for aiding weight loss. The video promises dramatic results and directs you to an Amazon page, where the product is described as "relaxation drops" or "anti-swelling aids", with no connection to the hyped benefits.
Other videos raise the stakes further, pushing unapproved drugs or fake cures for serious illnesses, sometimes even hijacking the likeness of real, well-known doctors.

AI to the “rescue”
The videos are created with legitimate AI tools that let anyone submit short footage and transform it into a polished avatar. While this is a boon for influencers looking to scale their output, the same technology can be co-opted for misleading claims and deception – in other words, what might pass as a marketing gimmick quickly morphs into a mechanism for spreading falsehoods.
We spotted more than 20 TikTok and Instagram accounts using deepfake doctors to push their products. One, posing as a gynecologist with 13 years of experience under her belt, was traced directly to the app's avatar library. While such misuse violates the terms and conditions of common AI tools, it also highlights how easily they can be weaponized.

Ultimately, this may not be "just" about worthless supplements. The consequences can be far more dire, as these deepfakes can erode confidence in online health advice, promote harmful "remedies" and delay proper treatment.
Keeping fake doctors at bay
As AI becomes more accessible, spotting these fakes gets trickier, posing a broader challenge even for tech-savvy people. That said, here are a few signs that can help you spot deepfake videos:
- mismatched lip movements that don't sync with the audio, or facial expressions that feel stiff and unnatural,
- visual glitches, such as blurred edges or sudden lighting shifts, also often betray the fakery,
- a robotic or overly polished voice is another red flag,
- also, check the account itself: new profiles with few followers or no history raise suspicion,
- watch out for hyperbolic claims, such as "miracle cures", "guaranteed results" and "doctors hate this trick", especially if they lack credible sources,
- always verify claims with trusted medical resources, avoid sharing suspect videos, and report misleading content to the platform.
As AI tools continue to advance, distinguishing between authentic and fabricated content will become harder rather than easier. This threat underscores the importance of developing both technological safeguards and our collective digital literacy to help protect us from misinformation and scams that could harm our health and financial well-being.