Tagged: AI

  • Geebo 9:00 am on February 12, 2025 Permalink | Reply
    Tags: AI

    AI Scam Calls: When Voices Lie 

    By Greg Collier

    A terrifying new scam is targeting families across Georgia and beyond, leaving parents in a state of panic. It starts with a phone call, an urgent plea from a loved one, their voice unmistakable, filled with fear. But law enforcement is issuing a warning: it’s all a hoax.

    One Georgia father experienced this horror firsthand. The call came unexpectedly, his son’s voice screaming, “Dad!” Before he could even process what was happening, the voice on the other end was begging for help, claiming to be in serious trouble. The panic set in immediately; his son’s voice, tone, and mannerisms were all perfect. There was no reason to doubt it.

    As the conversation continued, the situation became more sinister. When he began to question what was happening, the person on the other end turned aggressive, making terrifying threats. They claimed they would harm him, break into his home, and even kill his family. In those moments, fear and confusion took over, making it nearly impossible to think logically.

    It wasn’t until he managed to confirm that his son was safe that the awful truth became clear: he had been scammed. Though no money was lost, the emotional impact was lasting. Even after the call ended, he found himself on edge, constantly aware of his surroundings, shaken by the experience.

    Law enforcement officials confirm that cases like this are becoming more common. Scammers are now using advanced artificial intelligence to replicate voices with chilling accuracy. All they need is a small voice sample, often taken from social media or public videos, and they can create a near-perfect imitation of a loved one.

    What makes these scams even more dangerous is how difficult they are to trace. Investigators say that tracking down the criminals is nearly impossible due to their use of spoofed phone numbers and encrypted communication methods. Despite this, authorities are urging people to take precautions.

    One of the best ways to protect yourself is to have a secret code word with family members, something only they would know. If you receive a distressing call, try reaching out to the person in question through another method before reacting. Police also advise against sharing too much personal information online, as scammers often piece together details from social media to make their stories more convincing.

    This type of fraud preys on emotions, aiming to create fear so victims act before thinking critically. Staying cautious and prepared is the best defense against these increasingly sophisticated scams.

     
  • Geebo 9:00 am on February 3, 2025 Permalink | Reply
    Tags: AI, Golden Eagle

    AI Deepfake Scam Uses Celebrities to Defraud 

    By Greg Collier

    The rise of artificial intelligence has brought remarkable advancements, but it has also given scammers a powerful tool to deceive unsuspecting victims. One recent case illustrates how fraudsters used AI-generated videos to impersonate prominent figures, including the sitting U.S. president, the CEO of a major bank, and tech mogul Elon Musk. The scheme revolved around an alleged investment opportunity known as the “Golden Eagles Project,” which falsely promised financial prosperity to those willing to purchase collectible coins.

    Victims were lured in with AI-generated videos that appeared to feature well-known public figures endorsing the scheme. These deepfake-style videos claimed that purchasing a $59 “golden eagle” coin would yield an astronomical return of over $100,000. To make the scam seem even more legitimate, the videos falsely stated that major banks and businesses were participating, allowing people to trade the coins for cash or high-value assets like Tesla cars or stock.

    Despite the seemingly legitimate nature of the endorsements, victims who fell for the scam soon realized the painful truth. The coins were virtually worthless. Even a detailed analysis by precious metal experts confirmed that the items contained no real gold or silver, making them valueless beyond their novelty appeal. One victim, a military veteran, invested thousands of dollars into the scam, believing he was on the path to becoming a millionaire. Instead, he found himself left with nothing but frustration and regret.

    The scam plays on a tactic that has become increasingly common, exploiting public trust in celebrities and high-profile figures. With AI-generated content becoming more convincing, fraudsters have seized the opportunity to create fake videos that appear legitimate to the average viewer. These scams thrive in online spaces where misinformation spreads rapidly, particularly on social media sites where content can circulate without much oversight.

    Beyond the financial losses suffered by individuals, this case also raises broader ethical concerns about the responsibilities of high-profile figures in preventing their likenesses from being misused. While the real individuals behind these fake endorsements had no connection to the scheme, their widely recognized images and voices were weaponized against vulnerable consumers. The damage caused by AI-generated fraud highlights the need for increased digital literacy, as well as stronger regulations around AI-manipulated media.

    Another critical aspect of this scam is the implication that a sitting U.S. president was personally endorsing an investment opportunity. This alone should have been a red flag, as federal law is supposed to prohibit a president from conducting personal business while in office. The position carries enormous influence, and rules exist to prevent any potential conflicts of interest that might arise from commercial endorsements. The idea that a government leader would actively promote a coin-based financial opportunity should have raised immediate skepticism. However, fraudsters took advantage of the public’s trust, crafting a deception convincing enough to ensnare even cautious individuals.

    Scams of this nature serve as a reminder that if an investment opportunity sounds too good to be true, it probably is. While AI technology is advancing rapidly, its potential for deception is growing just as fast. Consumers must remain vigilant, question sensational claims, and verify financial opportunities through reputable sources before making any commitments.

     
  • Geebo 9:00 am on January 27, 2025 Permalink | Reply
    Tags: AI

    AI Voice Scams: The Ransom Threat 

    By Greg Collier

    In a chilling evolution of traditional scams, a new wave of ransom schemes is targeting families with advanced technology, creating fear and financial loss. These scams, which have been reported in Westchester County, New York, and Chatham County, Georgia, use artificial intelligence (AI) to replicate the voices of loved ones and phone number spoofing to make calls appear authentic. The alarming frequency and realism of these incidents leave victims shaken and desperate.

    In Peekskill, New York, families in a local school district were targeted with calls claiming their child had been kidnapped. Using AI-generated voice replication, scammers made the calls sound as though they were coming directly from the child. The calls included cries for help and demands for ransom, creating a terrifying sense of urgency for the families. Similarly, in Chatham County, Georgia, law enforcement received reports of scam calls where the voices of loved ones were mimicked, and their phone numbers were spoofed. Victims believed they were speaking directly with their family member, further convincing them of the alleged kidnapping.

    This type of scam, known as the virtual kidnapping scam, is made possible by the proliferation of digital tools capable of replicating a person’s voice with only a few audio samples. These samples are often taken from social media, where individuals frequently share videos and voice recordings. Additionally, phone number spoofing allows scammers to manipulate caller IDs, making it seem as though the call is originating from the victim’s own phone or from a familiar contact.

    Authorities have noted that these scams exploit advanced technology and human psychology to maximum effect. The sense of urgency created by threats of violence and the apparent authenticity of the call make it difficult for victims to pause and assess the situation critically. Victims often feel immense pressure to act quickly, believing that hesitation could lead to harm for their loved ones.

    In both Peekskill and Chatham County, authorities have emphasized the importance of verifying the safety of family members independently and resisting the temptation to provide personal or financial information over the phone. Families are being encouraged to create unique verification methods, such as secret passwords or phrases, to quickly confirm the legitimacy of a call. Law enforcement in both areas continues to investigate these cases and spread awareness to prevent further victimization.

    While the technological tools enabling these scams are growing more sophisticated, education remains a powerful defense. By understanding how these scams operate and staying cautious about unfamiliar links or calls, individuals can protect themselves and their loved ones from falling victim to these disturbing schemes.

    With the rise of these incidents, it’s clear that continued efforts to promote awareness and implement preventative strategies will be key in combating this alarming trend.

     
  • Geebo 9:00 am on November 5, 2024 Permalink | Reply
    Tags: AI

    A Mother’s Close Call with AI Voice Cloning 

    By Greg Collier

    Imagine the terror of receiving a phone call with a familiar voice in distress, only to realize it was a cruel, high-tech scam. This harrowing experience recently befell a mother in Grand Rapids, Michigan, who nearly lost $50,000 over a weekend due to a sophisticated AI-driven scam. This scam, called ‘voice cloning’, mimicked the voice of her daughter so convincingly that it bypassed her natural skepticism and sent her scrambling to respond to what seemed like an emergency.

    It started with a phone call from an unknown number, coming from a town her daughter often frequented. With her daughter’s faint, panicked voice on the other end, she felt an instant urgency and fear that something was gravely wrong. Then, as she listened, the tone shifted; a stranger seized control of the call, asserting himself as a captor and demanding an immediate ransom. Her daughter’s supposed voice—distorted, mumbled, and terrified—amplified the mother’s fears. Desperation began to cloud her judgment as she debated how to produce such a vast sum on short notice.

    In her fear and confusion, she was prepared to do whatever it took to ensure her daughter’s safety. She was ready to withdraw cash, find neighbors who might accompany her, and meet the caller, who had directed her to a local hardware store for the exchange. But her husband stepped in: while she negotiated, he placed a call to the local police department. They advised him to contact their daughter directly, which they did, only to find she was safe and sound, unaware of the horrifying call her mother had just endured.

    This unsettling experience highlights a chilling reality of today’s world: the power of artificial intelligence to manipulate emotions, creating distressing scenarios with fabricated voices. These AI scams work by exploiting easily accessible samples of people’s voices, often found in social media videos or recordings. Voice cloning technology, once a futuristic concept, is now accessible and advanced enough to replicate a person’s voice with unsettling accuracy from just a brief clip.

    The Better Business Bureau advises those targeted by similar scams to resist the urge to act immediately. The shock of hearing a loved one’s voice in peril can push us to respond without question, but taking a pause, verifying the caller’s claims, and contacting the loved one directly are critical steps to prevent falling victim.

    Protecting yourself from AI-driven voice cloning scams requires both awareness and a proactive approach. Start by being mindful of what you share online, especially voice recordings, as even brief audio clips on social media can provide the material needed for cloning. Reducing the number of public posts containing your voice limits potential exposure, making it harder for scammers to replicate.

    Establishing a safe word with family members is also an effective precaution. A unique, shared phrase can act as a verification tool in emergency calls. If you ever receive a call claiming a loved one is in distress, use this word to confirm their identity. By doing so, you create a reliable check against scams, especially when emotions run high.

    It’s essential to take a moment to verify information before reacting. Scammers count on people’s tendency to act on instinct, especially when fear and urgency are involved. If you receive an alarming call, try to reach the person directly using a familiar number. Verifying information before sending money or following instructions can prevent falling victim to such fraud.

    In the end, a calm, measured approach, grounded in verification and pre-established safety measures, can make all the difference in staying protected against AI-driven threats.

     
  • Geebo 8:00 am on October 16, 2024 Permalink | Reply
    Tags: AI

    How AI is Fueling a New Wave of Online Scams 

    By Greg Collier

    With the rise of artificial intelligence (AI), the internet has become a more treacherous landscape for unsuspecting users. Once, the adage “seeing is believing” held weight. Today, however, scammers can create highly realistic images and videos that deceive even the most cautious among us. The enhanced development of AI has made it easier for fraudsters to craft convincing scenarios that prey on emotions, tricking people into parting with their money or personal information.

    One common tactic involves generating images of distressed animals or children. These fabricated images often accompany stories of emergencies or tragedies, urging people to click links to donate or provide personal details. The emotional weight of these images makes them highly effective, triggering a quick, compassionate response. Unfortunately, the results are predictable: stolen personal information or exposure to harmful malware. Social media users must be on high alert, as the Better Business Bureau warns against clicking unfamiliar links, especially when encountering images meant to elicit an emotional reaction.

    Identifying AI-generated content has become a key skill in avoiding these scams. When encountering images, it’s essential to look for subtle signs that something isn’t right. AI-generated images often exhibit flaws that betray their synthetic nature. Zooming in on these images can reveal strange details such as blurring around certain elements, disproportionate body parts, or even extra fingers on hands. Other giveaways include glossy, airbrushed textures and unnatural lighting. These telltale signs, though subtle, can help distinguish AI-generated images from genuine ones.

    The same principles apply to videos. Deepfake technology allows scammers to create videos that feature manipulated versions of public figures or loved ones in fabricated scenarios. Unnatural body language, strange shadows, and choppy audio can all indicate that the video isn’t real.

    One particularly concerning trend involves scammers using AI to create fake emergency scenarios. A family member might receive a video call or a voice message that appears to be from a loved one in distress, asking for money or help. But even though the voice and face may seem familiar, the message is an illusion, generated by AI to exploit trust and fear. The sophistication of this technology makes these scams harder to detect, but the key is context. Urgency, emotional manipulation, and unexpected requests for money are red flags. It’s always important to verify the authenticity of the situation by contacting the person directly through trusted methods.

    Reverse image searches can be useful for confirming whether a photo has been used elsewhere on the web. By doing this, users can trace images back to their original sources and determine whether they’ve been manipulated. Similarly, checking whether a story has been reported by credible news outlets can help discern the truth. If an image or video seems too shocking or unbelievable and hasn’t been covered by mainstream media, it’s likely fake.

    As AI technology continues to evolve, scammers will only refine their methods. The challenge of spotting fakes will become more difficult, and even sophisticated consumers may find themselves second-guessing what they see. Being suspicious and fact-checking are more important than ever. By recognizing the tactics scammers use and understanding how to spot AI-generated content, internet users can better protect themselves in this new digital landscape.

     
  • Geebo 9:00 am on February 28, 2024 Permalink | Reply
    Tags: AI, voice c\

    The terrifying rise of AI-generated phone scams 

    By Greg Collier

    In the age of rapid technological advancement, it appears that scammers are always finding new ways to exploit our vulnerabilities. One of the latest and most frightening trends is the emergence of AI-generated phone scams, where callers use sophisticated artificial intelligence to mimic the voices of loved ones and prey on our emotions.

    Recently, residents of St. Louis County in Missouri were targeted by a particularly chilling variation of this scam. Victims received calls from individuals claiming to be their children in distress, stating that they had been involved in a car accident and the other driver was demanding money for damages under the threat of kidnapping. The scammers used AI to replicate the voices of the victims’ children, adding an extra layer of realism to their deception.

    The emotional impact of such a call cannot be overstated. Imagine receiving a call from someone who sounds exactly like your child, crying and pleading for help. The panic and fear that ensue can cloud judgment and make it difficult to discern the truth. This is precisely what the scammers rely on to manipulate their victims.

    One brave mother shared her harrowing experience with a local news outlet. She recounted how she received a call from someone who sounded like her daughter, claiming to have been in an accident and demanding a $2,000 wire transfer to prevent her kidnapping.

    Fortunately, in the case of the St. Louis County mother, prompt police intervention prevented her from falling victim to the scam. However, not everyone is as fortunate, with some parents having lost thousands of dollars to these heartless perpetrators.

    Experts warn that hanging up the phone may not be as simple as it seems in the heat of the moment. Instead, families should establish safe words or phrases to verify the authenticity of such calls.

    To protect yourself from falling victim to AI-generated phone scams, it’s essential to remain informed. Be wary of calls that pressure you to act quickly or request payment via gift cards or cryptocurrency. If you receive such a call, verify the authenticity of the situation by contacting the threatened family member directly and report the incident to law enforcement.

     
  • Geebo 9:00 am on January 12, 2024 Permalink | Reply
    Tags: AI

    More police warn of AI voice scams 

    By Greg Collier

    AI voice spoofing refers to the use of artificial intelligence (AI) technology to imitate or replicate a person’s voice in a way that may deceive listeners into thinking they are hearing the real person. This technology can be used to generate synthetic voices that closely mimic the tone, pitch, and cadence of a specific individual. The term is often associated with negative uses, such as creating fraudulent phone calls or audio messages with the intent to deceive or manipulate.

    Scammers can exploit a brief audio clip of your family member’s voice, easily obtained from online content. With access to a voice-cloning program, the scammer can then imitate your loved one’s voice convincingly when making a call, leading to potential deception and manipulation. Scammers have quickly taken to this technology in order to fool people into believing their loved ones are in danger in what are being called family emergency scams.

    Family emergency scams typically break down into two categories: the virtual kidnapping scam and the grandparent scam. Today, we’re focused on the grandparent scam. It garnered its name from the fact that scammers often target elderly victims, posing as the victim’s grandchild in peril. This scam has been happening a lot lately in the Memphis area, to the point where a Sheriff’s Office has issued a warning to local residents about it.

    One family received a phone call that appeared to be coming from their adult granddaughter. The caller sounded exactly like their granddaughter and said she needed $500 for bail money after getting into a car accident. Smartly, the family kept asking the caller questions that only their granddaughter would know. The scammers finally hung up.

    To safeguard against this scam, it’s crucial to rely on caution rather than solely trusting your ears. If you receive a call from a supposed relative or loved one urgently requesting money due to a purported crisis, adhere to the same safety measures. Resist the urge to engage further; instead, promptly end the call and independently contact the person who is claimed to be in trouble to verify the authenticity of the situation. This proactive approach helps ensure protection against potential scams, even when the voice on the call seems identical to that of your loved one.

     
  • Geebo 9:00 am on November 28, 2023 Permalink | Reply
    Tags: AI

    AI finds its way into Medicare scams 

    By Greg Collier

    We are currently nearing the end of Medicare’s Open Enrollment period. This is the time of year when Medicare recipients can change their plan from the traditional Medicare coverage to a Medicare Advantage plan, or change back if they so desire. This is also the time of year when scammers specifically target Medicare eligible seniors with their scams.

    When it comes to scams, identity theft poses a significant risk to seniors, especially during Open Enrollment. Scammers often employ tactics such as impersonating government officials, adopting titles like ‘health care benefits advocate,’ to deceive victims. These fraudsters make enticing promises, assuring the victim of enrollment in equivalent or superior coverage at a reduced cost. To accomplish their scheme, the fraudulent agent requests the victim’s personal information, including their Medicare number.

    The stolen Medicare number becomes a tool for these scammers to commit Medicare fraud, involving unauthorized charges for procedures or items. This fraudulent activity has the potential to impact the victim’s benefits in the future. Additionally, scammers resort to high-pressure tactics, such as claiming that the victim’s benefits may expire if immediate information is not provided. In some cases, these deceptive calls may even display Medicare’s official phone number, adding an extra layer of trickery. It is crucial for seniors to be vigilant and cautious to protect themselves from falling victim to such identity theft scams during the Open Enrollment period.

    Though not strictly a scam, certain unscrupulous insurance brokers may exert undue pressure on seniors to switch to their company’s Medicare Advantage plan. While Medicare Advantage plans can offer benefits for some individuals, they also have limitations that may not suit everyone’s needs. The decision to switch should be based on the individual’s personal healthcare requirements, yet some insurance agents may prioritize making a sale over the well-being of the patient.

    If contemplating a transition from Medicare to a Medicare Advantage Plan, it is essential to conduct thorough research on the potential benefits and drawbacks. Avoid succumbing to the tactics of salespersons, who may push for a decision that could lead to regret in the following year. Taking the time to make an informed decision ensures that the chosen healthcare plan aligns with individual needs and preferences.

    There is also another potential threat with this year’s Open Enrollment, and not surprisingly, it’s related to AI. Experts are warning that scammers could be using AI-generated voice programs to make scam phone calls sound more authentic. These calls could even be used to try to record a victim’s voice, which could then be used in other voice spoofing scams.

    It’s important to be cautious when receiving calls related to your Medicare plan. Legitimate Medicare plans typically contact their members if necessary, but if you ever feel uneasy during such calls, consider calling your insurance company’s official customer service number to verify the legitimacy of the communication.

    As a general rule, exercise caution about sharing your Medicare or Social Security number over the phone. Medicare and your insurance company already have your information on file and typically don’t need you to provide it again during unsolicited calls. This precaution helps protect you from potential scams or identity theft. Always prioritize your security and verify the authenticity of any calls before sharing sensitive information.

     
  • Geebo 8:00 am on October 10, 2023 Permalink | Reply
    Tags: AI, ChatGPT

    Scammers employ new weapon in romance scams 

    By Greg Collier

    Whenever someone develops a new and useful tool, it’s only a matter of time before someone uses it for criminal purposes. The large language model ChatGPT was released to the public last year. Essentially, you can give ChatGPT any kind of prompt, and it will write it out for you. Want to write a professional-sounding email to a prospective employer? It can do that for you. Want to have it write a script about Batman meeting Abraham Lincoln? It can do that too. Do you want to have ChatGPT craft the best romantic responses to keep a lonely victim believing they’re in a committed online relationship? Unfortunately, it can do that too.

    According to cybersecurity experts, scammers have developed their own chat AI that will produce authentic-looking messages to romance scam victims. For the uninitiated, romance scammers typically prey on the single and widowed by pretending to be an online romantic interest. These scammers will cultivate a phony online relationship using fake names and pictures, along with a story about why they can’t meet in person. They will maintain these relationships for months before asking the victim for money. Victims have lost anywhere from thousands to millions of dollars each to these scammers.

    Now, armed with an AI chatbot of their own, romance scammers almost have a ‘set it and forget it’ setting for running their scams.

    However, while this may make the romance scam appear more like a legitimate relationship, the steps someone can take to protect themselves are still the same. Anytime a prospective partner sends you a picture of themselves, use Google’s reverse image search to make sure they didn’t steal it from someone else’s social media. If they claim to be working overseas or somewhere they can’t travel from freely, there’s a good chance they’re a scammer. Lastly, if they ask for money without meeting first, it’s almost guaranteed that they’re a scammer.

     
  • Geebo 8:00 am on September 1, 2023 Permalink | Reply
    Tags: AI

    Grandmother scammed for weeks in AI voice-spoofing scam 

    By Greg Collier

    It’s been a short while since we last discussed the AI voice-spoofing scam. For new readers, this is when scammers obtain a sample of someone’s voice from online sources and run it through an AI program, which allows the scammers to make the voice say whatever they want. The scammers then use the person’s voice to convince that person’s loved one to send the scammers money.

    Voice-spoofing is typically used in one of two consumer-level scams. The first one is the virtual kidnapping scam, which is exactly what it sounds like. Scammers will use the spoofed voice to make it sound like somebody’s loved one has been kidnapped, and the scammers will demand a ransom.

    The second scam is the one we’ll be discussing today, which is the grandparent scam. In this scam, the scammers pose as an elderly victim’s grandchild who’s in some kind of legal trouble. The scammers will often ask for bail money or legal fees.

    An elderly woman from Utah recently fell victim to the grandparent scam. Scammers called her on the phone using the cloned voice of one of her granddaughters. The ‘granddaughter’ said she had been arrested after riding in a car with someone who had drugs and needed bail money. A scammer then got on the call and pretended to be the granddaughter’s attorney and instructed the woman on how she could send payment. The woman was also instructed not to tell anyone else in the family, as it could jeopardize the granddaughter’s court case.

    One of the many problems with scammers is that if you pay them once, chances are they’ll come back for more money, which is what happened here. For weeks, the phony granddaughter kept calling back, needing more money each time for various legal proceedings. Keep in mind that with each conversation, the grandmother was not actually talking to anybody but a computer-generated voice, which sounded exactly like her granddaughter.

    Eventually, the grandmother did grow suspicious and told her son, who informed her she was being scammed.

    Don’t trust your ears when it comes to phone scams. If you receive a call from someone claiming to be a relative or loved one in need of money, it’s important to follow the same precautions, even if the voice sounds exactly like them. Hang up on the call and contact the person who’s supposedly in trouble. If you can’t reach them, ask other family members who might know where they are. Be sure to tell them about the situation you encountered, and never keep it a secret. Lastly, never send money under any circumstances.

     