Tagged: deep fakes

  • Geebo 9:00 am on November 11, 2025
    Tags: celebrity, deep fakes

    The Rise of Celebrity Deepfake Scams 

    By Greg Collier

    Picture this: you’re scrolling through TikTok or Instagram and suddenly see your favorite celebrity share a video endorsement. The voice, the smile, even the familiar expressions all feel authentic. Maybe it’s an investment opportunity, a charitable donation, or a new product launch.

    It feels real—but it isn’t.

    Recently, a woman in Southern California believed she was speaking directly with actor Steve Burton from General Hospital. Through a series of video and voice messages, she was convinced they were in a relationship. By the time the truth surfaced, she had lost over $430,000, including money from selling her home.

    In another case, influencer Molly-Mae Hague had to warn her followers after a realistic video appeared online promoting a perfume she never endorsed. Supermodel Gisele Bündchen’s image was also used in a fake Instagram campaign that netted scammers millions of dollars before being taken down.

    These aren’t isolated incidents. Deepfake technology is rapidly becoming one of the most dangerous new tools in online fraud.

    What’s Happening:

    Scammers have learned to use publicly available photos and videos to create realistic AI-generated likenesses of celebrities. Once they have enough material, they can digitally clone a person’s face and voice with startling accuracy.

    Here’s how the schemes often unfold:

    • They create a convincing video or audio clip using AI trained on interviews, social media clips, and public footage.
    • The fake content is shared through social platforms, private messages, or even live video streams.
    • Victims are told to invest in a product, send donations, or even begin a “personal relationship” with the celebrity.
    • Once trust is established, the scammer asks for money, crypto transfers, or sensitive information.
    • The real celebrity often has no idea their name and likeness are being used until it goes viral.

    Actress Helen Mirren recently issued a public warning after her image was used to promote a fake charity campaign. Each of these examples shows how scammers manipulate trust in famous faces to create a false sense of connection and urgency.

    Why It Works:

    Celebrity scams are powerful because they mix emotional appeal with technological realism.

    Fans already feel connected to public figures. When a message sounds and looks exactly like someone they admire, skepticism fades. Add a personal touch like “I wanted to reach out to you” or “You’ve been selected for a private offer,” and even cautious people can fall for it.

    Modern AI has also become so sophisticated that voice clones capture tone, pacing, and personality. Even professionals who work with these tools admit they sometimes can’t tell the difference.

    Finally, these scams thrive on emotion—whether that’s excitement, admiration, or loneliness. Victims of romantic deepfake scams often describe feeling special or chosen, which makes it harder to question what’s happening.

    Red Flags:

    Be cautious if you notice any of the following:

    • A “celebrity” contacts you directly through DMs or messaging apps like WhatsApp or Telegram.
    • The conversation quickly moves off the platform where it started.
    • The message includes links to unknown websites or online stores.
    • You’re asked for money, cryptocurrency, or gift cards.
    • The product or cause doesn’t appear on the celebrity’s verified social pages.
    • Something feels slightly “off”—the background, speech pattern, or body language doesn’t quite match.

    Quick tip: If a celebrity asks you to act—send money, buy something, or share personal information—pause and verify through their official accounts or press releases. Real endorsements rarely happen in private messages.

    How to Protect Yourself:

    1. Check official channels. Always verify through the celebrity’s verified social media accounts or website before engaging.
    2. Don’t share personal details. Never send money, ID photos, or banking information in private messages.
    3. Be skeptical of “exclusive” offers. If it sounds like you’re being personally chosen, it’s probably a scam.
    4. Use secure payment methods. Credit cards offer protection that crypto and wire transfers do not.
    5. Talk about it. Share these risks with family members who might be more vulnerable to emotional manipulation.
    6. Report impersonations. Use the “report” feature on social platforms and file a complaint with the Federal Trade Commission at ReportFraud.ftc.gov.

    If you’re a brand or public figure, consider setting up automated alerts for your name and image. This makes it easier to spot and remove fake content before it spreads widely.
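
    A lightweight way to approximate such alerts is to poll a news or search feed for your own name. The sketch below is a minimal illustration, assuming Python with the third-party feedparser package installed; the watched name is a placeholder, and a real deployment would use a dedicated brand-monitoring service with image matching.

        # Minimal sketch: poll Google News RSS for new mentions of a watched name.
        # Assumes the third-party "feedparser" package (pip install feedparser).
        import time
        import urllib.parse

        import feedparser

        WATCHED_NAME = "Jane Example"  # placeholder name to monitor
        seen_links = set()

        def check_mentions(name):
            query = urllib.parse.quote(f'"{name}"')
            feed = feedparser.parse(f"https://news.google.com/rss/search?q={query}")
            for entry in feed.entries:
                if entry.link not in seen_links:
                    seen_links.add(entry.link)
                    print(f"[alert] {entry.title} -> {entry.link}")

        while True:
            check_mentions(WATCHED_NAME)
            time.sleep(3600)  # re-check hourly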

    What to Do if You’re Targeted:

    • Stop responding immediately and save all evidence such as screenshots or messages.
    • Contact your bank or payment service to flag suspicious transfers.
    • File a report with the FTC or your local consumer protection office.
    • Monitor your financial accounts for unusual charges.
    • Let others know. Sharing your experience can prevent someone else from becoming the next victim.

    Final Thoughts:

    The rise of AI-generated celebrity content is changing what we can trust online. It’s no longer enough to recognize a familiar face or voice. Today, anyone with a laptop and access to AI tools can create a realistic imitation capable of fooling millions.

    Before you act on a celebrity endorsement or message, take a step back and check the source. Verification only takes a few minutes—and it can save you thousands of dollars.

    Awareness, not fear, is our best defense.

  • Geebo 8:00 am on October 22, 2025
    Tags: deep fakes, political donations

    Deepfake Donors: When Political Voices Are Fake

    By Greg Collier

    You get a text from your “preferred political candidate.” It asks for a small donation of ten dollars “to fight misinformation” or “protect election integrity.” The link looks official. The voice message attached even sounds authentically passionate, familiar, and persuasive.

    But it isn’t real. And neither is the person behind it.

    This fall, investigators from the U.S. Treasury and U.K. authorities announced their largest-ever takedown of cybercriminal networks responsible for billions in losses tied to fraudulent campaigns, fake fundraising, and AI-generated political deepfakes. The operation targeted transnational organized criminal groups based largely in Southeast Asia, including the notorious Prince Group TCO, a dominant player in Cambodia’s scam economy responsible for billions in illicit financial transactions. U.S. losses to online investment scams alone topped $16.6 billion, with over $10 billion lost to scam operations based in Southeast Asia just last year.

    These scams are blurring the line between digital activism and manipulation right when citizens are most vulnerable: election season.

    What’s Going On:

    Scammers are exploiting voters’ trust in political communication, blending voice cloning, AI video, and fraudulent donation sites to extract money and personal data.

    Here’s how it works:

    • A deepfake video or voicemail mimics a real candidate, complete with campaign slogans and “urgent” donation requests.
    • The links lead to fraudulent websites where victims enter credit card details.
    • Some schemes even collect personal voter data that is later sold or used for identity theft.

    In the 2024 New Hampshire primary, voice-cloned robocalls impersonating national figures were caught attempting to sway voters, a precursor to the tactics now being scaled globally in 2025.

    Why It’s Effective:

    These scams thrive because people trust familiarity, especially voices, faces, and causes they care about. The timing, emotional tone, and recognizable slogans create a powerful illusion of legitimacy.

    Modern AI makes it nearly impossible for the average person to distinguish a deepfake from reality, especially when wrapped in high-stakes messaging about public service, patriotism, or “protecting democracy.” Add in social pressure, and even cautious donors lower their guard.

    Red Flags:

    Before contributing or sharing campaign links, pause and check for these telltale signs:

    • Donation requests that come through texts, WhatsApp, or unknown numbers.
    • Voices or videos that sound slightly “off”: mismatched mouth movements, odd pauses, or inconsistent lighting.
    • Links that end in unusual extensions (like “.co” or “.support”) rather than official candidate domains (see the sketch after the quick tip below).
    • Payment requests through Venmo, CashApp, Zelle, or crypto.
    • No clear disclosure or FEC registration details at the bottom of the website.

    Quick tip: Official campaigns in the U.S. are required to display Federal Election Commission (FEC) registration and disclaimers. If that’s missing, it’s a huge red flag.
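
    To illustrate the domain red flag above: before clicking a donation link, its host can be compared against the campaign’s official domain taken from verified pages. This is a minimal sketch in Python using only the standard library; the official domain shown is a placeholder.

        # Minimal sketch: flag donation links whose host doesn't match the
        # campaign's official domain. Standard library only.
        from urllib.parse import urlparse

        OFFICIAL_DOMAIN = "examplecampaign.org"  # placeholder: take from verified pages

        def is_official_link(url: str) -> bool:
            host = urlparse(url).hostname or ""
            # Accept the official domain itself and its subdomains, nothing else.
            return host == OFFICIAL_DOMAIN or host.endswith("." + OFFICIAL_DOMAIN)

        for link in (
            "https://donate.examplecampaign.org/give",
            "https://examplecampaign.co/give",           # lookalike TLD
            "https://examplecampaign.org.support/give",  # lookalike suffix
        ):
            print(("ok" if is_official_link(link) else "SUSPICIOUS") + ": " + link)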

    What You Can Do:

    • Verify before donating. Go directly to the official campaign site; don’t use links from texts or emails.
    • Treat urgency as a warning. Real campaigns rarely need “immediate wire transfers.”
    • Listen for tells. Deepfakes often have slightly distorted sounds or mechanical echoes.
    • Cross-check messages. If you get a surprising call or voicemail, compare it with the candidate’s latest verified posts.
    • Report and share. Submit suspicious calls or videos to reportfraud.ftc.gov or your state election board.

    Platforms including Google, Meta, and YouTube are now launching active detection systems and educational tools to flag deepfake political content before it spreads.

    If You’ve Been Targeted:

    • Report donations made to fake campaigns immediately to your bank or credit card provider.
    • File a complaint through the FTC and local election authorities.
    • Freeze credit if personal or voter identity data were shared.
    • Publicize responsibly. Sharing examples with the right context can warn others, but avoid amplifying active scams.

    Final Thoughts:

    Deepfakes are no longer a distant concern; they’re reshaping political communication in real time. What makes this wave dangerous isn’t just money loss; it’s trust erosion.

    The recent takedown of the Prince Group’s transnational criminal networks by U.S. and U.K. authorities, which included sanctions on key individuals and the cutoff of millions in illicit financial flows, underscores the global scale of this problem. These coordinated actions disrupted the infrastructure enabling massive fraud campaigns and provided a much-needed deterrent to criminals using AI-based scams during critical democratic processes.

    Staying safe now means applying the same critical awareness you’d use for phishing to the content you see and hear. Don’t assume your eyes or ears tell the full story.

    Think you spotted a fake campaign video or suspicious fundraising call? Don’t scroll past it; report it, discuss it, and share this guide. The more people who know what to look for, the fewer fall for it.

  • Geebo 8:00 am on October 16, 2024
    Tags: deep fakes

    How AI is Fueling a New Wave of Online Scams

    By Greg Collier

    With the rise of artificial intelligence (AI), the internet has become a more treacherous landscape for unsuspecting users. Once, the adage “seeing is believing” held weight. Today, however, scammers can create highly realistic images and videos that deceive even the most cautious among us. The enhanced development of AI has made it easier for fraudsters to craft convincing scenarios that prey on emotions, tricking people into parting with their money or personal information.

    One common tactic involves generating images of distressed animals or children. These fabricated images often accompany stories of emergencies or tragedies, urging people to click links to donate or provide personal details. The emotional weight of these images makes them highly effective, triggering a quick, compassionate response. Unfortunately, the results are predictable: stolen personal information or exposure to harmful malware. Social media users must be on high alert, as the Better Business Bureau warns against clicking unfamiliar links, especially when encountering images meant to elicit an emotional reaction.

    Identifying AI-generated content has become a key skill in avoiding these scams. When encountering images, it’s essential to look for subtle signs that something isn’t right. AI-generated images often exhibit flaws that betray their synthetic nature. Zooming in on these images can reveal strange details such as blurring around certain elements, disproportionate body parts, or even extra fingers on hands. Other giveaways include glossy, airbrushed textures and unnatural lighting. These telltale signs, though subtle, can help distinguish AI-generated images from genuine ones.

    The same principles apply to videos. Deepfake technology allows scammers to create videos that feature manipulated versions of public figures or loved ones in fabricated scenarios. Unnatural body language, strange shadows, and choppy audio can all indicate that the video isn’t real.

    One particularly concerning trend involves scammers using AI to create fake emergency scenarios. A family member might receive a video call or a voice message that appears to be from a loved one in distress, asking for money or help. But even though the voice and face may seem familiar, the message is an illusion, generated by AI to exploit trust and fear. The sophistication of this technology makes these scams harder to detect, but the key is context. Urgency, emotional manipulation, and unexpected requests for money are red flags. It’s always important to verify the authenticity of the situation by contacting the person directly through trusted methods.

    Reverse image searches can be useful for confirming whether a photo has been used elsewhere on the web. By doing this, users can trace images back to their original sources and determine whether they’ve been manipulated. Similarly, checking whether a story has been reported by credible news outlets can help discern the truth. If an image or video seems too shocking or unbelievable and hasn’t been covered by mainstream media, it’s likely fake.
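
    When a suspected original is already on hand, a perceptual hash offers a quick local complement to a web-based reverse image search. The following is a minimal sketch, assuming Python with the third-party Pillow and imagehash packages installed; the file names are placeholders. A small hash distance suggests the two files are essentially the same picture, while a large one suggests a different or heavily altered image.

        # Minimal sketch: compare two images with a perceptual hash.
        # Assumes the third-party "Pillow" and "imagehash" packages
        # (pip install Pillow imagehash).
        from PIL import Image
        import imagehash

        # Placeholder file names for illustration.
        original = imagehash.phash(Image.open("known_original.jpg"))
        suspect = imagehash.phash(Image.open("suspect_post.jpg"))

        distance = original - suspect  # Hamming distance between the hashes
        if distance <= 5:  # common rule of thumb for a near-duplicate
            print(f"Likely the same image (distance {distance}).")
        else:
            print(f"Images differ significantly (distance {distance}).")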

    As AI technology continues to evolve, scammers will only refine their methods. The challenge of spotting fakes will become more difficult, and even sophisticated consumers may find themselves second-guessing what they see. Being suspicious and fact-checking are more important than ever. By recognizing the tactics scammers use and understanding how to spot AI-generated content, internet users can better protect themselves in this new digital landscape.

     
  • Geebo 8:00 am on September 11, 2024
    Tags: deep fakes

    Crypto Scammers Exploit Apple’s iPhone 16 Event 

    By Greg Collier

    On September 9, as Apple enthusiasts eagerly tuned in for the launch of the iPhone 16 during the ‘Glowtime’ event, scammers took advantage of the hype by launching an elaborate crypto scam. Using deepfake technology, the scammers created videos that impersonated Apple CEO Tim Cook, promoting fraudulent cryptocurrency giveaways and investment schemes. These videos, which were posted on YouTube, lured unsuspecting viewers into participating in crypto transactions by flashing QR codes on the screen. Viewers were asked to send their cryptocurrency to fake websites that closely resembled Apple’s official site.

    This isn’t the first time scammers have deployed deepfake technology to impersonate prominent figures. Earlier this year, similar tactics were used to imitate Elon Musk, spreading false crypto giveaways.

    The deepfake videos on YouTube managed to garner thousands of views before being taken down, but not before several people flagged them on platforms like X (formerly Twitter). Social media users expressed concern over the growing misuse of advanced technologies like artificial intelligence (AI) for fraudulent purposes.

    This event serves as a chilling reminder of the increasing sophistication of cryptocurrency scams. With cryptocurrencies’ volatile nature and the difficulty of tracing transactions, hackers have found fertile ground for fraud. The FBI has warned that crypto scammers are using ever more advanced techniques, with members of the crypto community losing over $5.6 billion to such scams last year.

    It’s important to remember that celebrity endorsements of cryptocurrency schemes are usually fake. Scammers often exploit the likeness and voices of well-known figures, like Tim Cook or Elon Musk, to create a false sense of trust and credibility. These endorsements are rarely, if ever, legitimate. Instead, they are sophisticated traps designed to manipulate and deceive people into investing in fraudulent schemes. When it comes to crypto, always exercise caution and verify information through trusted sources before making any transactions. If something seems too good to be true, it probably is.

     
  • Geebo 8:00 am on May 22, 2023
    Tags: deep fakes

    AI scams aren’t limited to just voice

    By Greg Collier

    AI voice spoofing scams are on the rise and have really grabbed our attention recently. Again, this is when scammers take a sample of someone’s voice from online and run it through an AI program to make the voice say whatever they want. We see it mostly used in phone scams, where the scammers need the victim to believe they’re talking to a loved one. With the advent of AI-generated voices, scammers have gone back into their bag of tricks to make an older scam even more convincing: the deepfake video.

    A deepfake video is a manipulated or synthesized video created using artificial intelligence. The AI is used to manipulate or replace the appearance and actions of individuals in existing footage, making it appear as though someone said or did something they never actually said or did. Making the voice in a deepfake sound convincing used to require far more voice sampling than it does today; now, bad actors need only a few seconds of someone’s voice to produce a convincing clone.

    Recently, a man in Louisiana received a video that appeared to come from his brother-in-law. The video was received over Messenger, and the man’s brother-in-law said in the video that he needed $250 and couldn’t explain why, just that he was in trouble. The message also contained a link to a payment app account where the man could send the $250. The video disappeared from the message, but the link remained.

    Unfortunately for the scammers, they had sent their message to a police sergeant, who knew this was a scam. He called his brother-in-law, who was in no immediate danger.

    If you receive a phone call or instant message from a loved one asking for money, always verify their story before sending any funds. Even if it appears that it’s your loved one contacting you, verify the story. With advances in technology, you can’t believe your eyes or ears in situations like these.

     
  • Geebo 8:00 am on September 9, 2019
    Tags: deep fakes

    Protect yourself against deepfake fraud

    Last week, it was revealed that a German energy company doing business in the UK was conned out of more than $240,000. The scammers were using a form of deepfake technology that mimicked the voice of the company’s CEO. A director of the company was instructed by the phony CEO, over both phone and email, to wire payment to an account in Hungary to avoid a late payment fine. Reports say that the director could not distinguish between the AI-assisted deepfake and the CEO’s actual voice, so the money was wired without question. The plot might never have been uncovered if not for the scammers’ greed.

    The scammers tried getting the director to wire more funds to another account. This time, the director felt something was off and called the CEO himself. While he was on the phone with the real CEO, the scammers called again, still posing as the CEO. Unfortunately, by then it was too late to do anything about the original payment. The funds had been scattered across the globe after being wired to the initial account, and no suspects have been named as of yet.

    [youtube https://www.youtube.com/watch?v=S1-HV031Fps]

    The Better Business Bureau (BBB) has some good news and bad news about deepfake audio. The bad news is that the technology is advancing at such a rapid pace that it may only be a matter of time before scammers need just a minute of you on the phone to capture enough of your voice to make a deepfake of you. The good news is that companies can fight deepfakes by instilling a culture of security. The BBB suggests confirming transactions like these by calling the person who supposedly requested the transaction directly.

     
  • Geebo 8:00 am on July 10, 2019
    Tags: deep fakes, defense contractor

    These tech scams are frightening!

    This week’s set of scams are incredibly troubling. Technology has advanced to a point where scams have become harder to spot. Not to mention that some of the tactics used by these scammers are like something out of a movie.

    The first scam is a little convoluted for something that doesn’t bring the scammers much in return. If you’re not familiar with Google Voice, it’s a service that provides you with a free supplementary phone number. Scammers are using Google Voice to hijack personal phone numbers that have been shared online. For example, if you’ve posted your phone number in a classified ad, the scammers may attempt to hijack that number. The scammers won’t be able to take any money from you, but they could use your number for criminal activity. If your number has been hijacked in one of these scams, this article has instructions on how to get your number back. Unfortunately, the steps won’t be that easy.

    The next scam, while rare, is very disconcerting. Security firm Symantec says it has found a handful of scams in which scammers used deepfake audio of business executives to trick employees into transferring money to them. Deepfakes are AI-generated video or audio that can be hard to tell from the real thing. We’ve previously discussed the potential harm that deepfakes could cause here. Generating these deepfakes can cost thousands of dollars, but they could end up costing businesses untold losses in the future.

    [youtube https://www.youtube.com/watch?v=VnFC-s2nOtI]

    Our last scam for today is the most alarming. According to news site Quartz, a US military defense contractor was taken for $3 million in top-secret equipment by international con artists. All the scammers had to do was use an email address that looked similar to official military domains. This is basically the same phishing scam used to try to steal your banking information, except a company with a high government security clearance fell for it to the tune of $3 million. Thankfully, the scammers were apprehended after federal investigators tracked them down through the mailing address they used, which they had claimed was a military installation. Disturbingly, neither the Quartz article nor the legal documents Quartz obtained state whether the sensitive equipment was ever recovered.

     
  • Geebo 8:00 am on May 28, 2019
    Tags: deep fakes

    Could Deep Fakes ruin our world?

    Recently, a video of Speaker of the House Nancy Pelosi was distributed on social media that made it look and sound like she was slurring her speech. While the video was determined to be fake, it was put out by someone who supposedly did not support Pelosi’s politics. However, with politics being what it is in this country today, there were people who believed it to be real. While this particular video was made using simple editing tricks on actual footage, it raises the question of what will happen when deepfakes become more prevalent in our society and media.

    For those of you unfamiliar with the term, a deepfake is made by taking a single image or video and, using an AI-assisted program, turning the original into just about anything the fabricator wants. For example, if you wanted to make it look like a beloved celebrity said they enjoyed stealing candy from babies, you probably could. Now take that same process and imagine it being used against candidates running for President. Deepfakes could potentially be used to make it look like any candidate was saying or doing something completely detrimental to their campaign.

    [youtube https://www.youtube.com/watch?v=gLoI9hAX9dw]

    Deepfakes could become so commonplace that we wouldn’t be able to tell what was real and what wasn’t. If someone were to commit a heinous act caught on video, all they would need to say is that the video was a deepfake, and scores of people would believe them. Thankfully, the technology is not yet at the point where a deepfake is indistinguishable from the real thing, but it may only be a matter of time before it is. When technologies are used by bad actors, it usually takes law enforcement and government some time to catch up and design the tools needed to fight them.

     