Tagged: artificial intelligence

  • Geebo 9:00 am on November 19, 2025
    Tags: artificial intelligence

    The Data You Forgot Is the Data AI Remembers 

    By Greg Collier

    Your photos, posts, and even private documents may already live inside an AI model—not stolen by hackers, but scraped by “innovation.”

    The Internet Never Forgets—Especially AI:

    You post a photo of your dog. You upload a résumé. You share a few opinions on social media. Months later, you see a new AI tool that seems to know you—your writing tone, your job title, even your vacation spot.

    That’s no coincidence.

    Researchers are now warning that AI training datasets—the enormous data collections used to “teach” models how to generate text and images—are riddled with personal content scraped from the public web. Your name, photos, social posts, health discussions, résumé data, and family info could be among them.

    And unlike a data breach, this isn’t theft in the traditional sense—it’s collection without consent. Once it’s in the model, it’s almost impossible to remove.

    What’s Going On:

    AI companies use massive web-scraping tools to feed data into their models. These tools collect everything from open websites and blogs to academic papers, code repositories, and social media posts. But recent investigations revealed that these datasets often include:

    • Personal documents from cloud-based PDF links and résumé databases.
    • Photos and addresses from real estate sites, genealogy pages, and social networks.
    • Health, legal, and financial records that were cached by search engines years ago.
    • Private messages that were never meant to be indexed but became public through broken permissions.

    A single AI model might be trained on trillions of words and billions of images, often gathered from sources that individuals believed were private or expired.

    Once that data is used for training, it becomes embedded in the model’s neural weights—meaning future AI systems can reproduce fragments of your writing, code, or identity without ever accessing the source again.

    That’s the terrifying part: the leak isn’t a single event. It’s permanent replication.

    Why It’s So Dangerous:

    • No oversight: Most data scraping for AI happens outside traditional privacy laws. There’s no clear consent, no opt-out, and no transparency.
    • Impossible recall: Once data trains a model, it can’t simply be “deleted.” Removing it requires retraining from scratch—a process companies rarely perform.
    • Synthetic identity risk: Scammers can use AI systems trained on real people’s information to generate convincing impersonations, fake résumés, or fraudulent documents.
    • Deep profiling: AI models can infer missing details (age, income, habits) based on what they already know about you.
    • Corporate resale: Some AI vendors quietly sell or license models trained on public data to third parties, spreading your information even further.

    A 2025 study by the University of Toronto found that 72% of open-source AI datasets contained personal identifiers, including emails, phone numbers, and partial credit card data.

    Real-World Consequences:

    • Re-identification attacks: Security researchers have demonstrated that they can prompt AI models to output fragments of original documents—including medical transcripts and legal filings.
    • Voice and likeness cloning: Models trained on YouTube or podcast audio can reproduce a person’s speech patterns within seconds.
    • Phishing precision: Fraudsters use leaked data from AI training sets to craft hyper-personalized scams that mention real details about a victim’s life.
    • Corporate espionage: Internal business documents, scraped from unsecured cloud links, have surfaced in public datasets used by AI startups.

    In short, the internet’s old rule—“Once it’s online, it’s forever”—just evolved into “Once it’s trained, it’s everywhere.”

    Red Flags:

    • AI chatbots or image tools generate content that includes names, places, or images you recognize from your own life.
    • You see references to deleted or private material in AI-generated text.
    • Unknown accounts start using your likeness or writing style for content creation.
    • You receive “hyper-specific” phishing emails mentioning old information you once posted online.

    Quick Tip: If you’ve ever uploaded a résumé, personal essay, or family blog, assume it could have been indexed by AI crawlers. Regularly check what’s visible through search engines and remove outdated or sensitive posts.
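
    One low-effort way to act on that tip is a small script that re-checks pages you once published for details you would rather keep private. Below is a minimal Python sketch using the requests library; the URLs and phrases are hypothetical placeholders you would replace with your own:

        import requests

        # Hypothetical examples: swap in your own public pages and the
        # phrases you consider sensitive (old addresses, phone numbers, etc.).
        PAGES = [
            "https://example.com/old-blog/about-me",
            "https://example.com/family-photos",
        ]
        PHRASES = ["555-0142", "123 Main Street"]

        for url in PAGES:
            try:
                text = requests.get(url, timeout=10).text
            except requests.RequestException as err:
                print(f"{url}: unreachable ({err})")
                continue
            hits = [phrase for phrase in PHRASES if phrase in text]
            if hits:
                print(f"{url}: still exposes {hits}")

    This only covers pages you already know about; pairing it with periodic manual searches of your own name catches copies you did not publish yourself.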

    What You Can Do:

    • Limit exposure: Review what’s public on LinkedIn, Facebook, and old blogs. Delete or privatize posts you no longer want online.
    • Use “robots.txt” and privacy settings: These can block crawlers from indexing your content. They won’t erase what’s already scraped, but they stop future harvesting (see the sketch after this list).
    • Opt out of data brokers: Many sites (Spokeo, PeopleFinder, Intelius) sell personal info that ends up in AI datasets.
    • Support privacy-centric AI tools: Favor companies that publicly disclose training sources and allow data removal requests.
    • Treat data sharing like identity sharing: Every upload, caption, or bio adds to a digital fingerprint that AI can replicate.
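
    For the robots.txt approach mentioned above, a minimal sketch looks like the following. The user-agent tokens shown (GPTBot for OpenAI, CCBot for Common Crawl, Google-Extended for Google’s AI training opt-out) are ones those operators have documented, but the list changes over time and compliance is voluntary, so treat this as a starting point rather than a guarantee:

        # Block OpenAI's web crawler
        User-agent: GPTBot
        Disallow: /

        # Block Common Crawl, a major upstream source of AI training data
        User-agent: CCBot
        Disallow: /

        # Opt out of Google's AI training (does not affect normal search indexing)
        User-agent: Google-Extended
        Disallow: /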

    If You’ve Been Targeted:

    1. Search your name and key phrases from private documents to see if they appear online.
    2. File a takedown request with Google or the website hosting your data.
    3. If you suspect your likeness or writing is being used commercially, document examples and contact an intellectual-property attorney.
    4. Report data leaks to the FTC or your country’s data-protection authority.
    5. Consider using identity-protection monitoring services that scan for AI-generated profiles of you or your business.

    Final Thoughts:

    The most dangerous data leak isn’t the one that happens overnight—it’s the one that happens quietly, at scale, in the name of “progress.”

    AI training data leaks represent a new era of privacy risk. Instead of stealing your identity once, machines now learn it forever.

    Until global regulations catch up, your best protection is awareness. Treat every upload, every public résumé, and every online comment like a permanent record—because, for AI, that’s exactly what it is.

  • Geebo 8:00 am on March 25, 2025
    Tags: artificial intelligence

    Scammers Are Still Cloning You

    By Greg Collier

    A new type of scam is becoming more common, and more convincing, thanks to rapidly evolving artificial intelligence. The Better Business Bureau has issued a warning about voice-cloning scams that are impacting individuals and families across the country.

    These scams rely on technology that can mimic someone’s voice with alarming accuracy. With just a few seconds of audio, sometimes lifted from voicemail greetings, casual conversations, or even online videos, scammers can generate a voice that sounds nearly identical to that of a loved one. This makes it incredibly difficult to distinguish between a real call and a fake one, especially when the voice on the other end is claiming to be in trouble, asking for money, or offering a too-good-to-be-true opportunity.

    In one recently reported case, an individual spent nearly a week performing tasks for what appeared to be a remote job, unaware that the employer’s true intent was to capture voice recordings. The concern is that these recordings may later be used in scams that impersonate the individual or manipulate others into sharing sensitive information.

    Scammers are becoming more strategic. They’re using AI not just to imitate voices, but also to weave those voices into emotional scenarios that cause panic or urgency, situations where someone might act quickly without verifying the call. This emotional manipulation is what makes these scams so dangerous. A familiar voice saying it’s an emergency can override our instincts and judgment in a matter of seconds.

    To protect yourself, take steps that make it harder for these scams to succeed. If you receive a call that seems suspicious, even if the voice sounds familiar, don’t respond right away. Take a moment to pause. Hang up and call the person directly using a known number. This simple step can often expose the scam for what it is.

    Securing your digital presence is also key. Enable multifactor authentication on your accounts whenever possible. It adds an extra layer of protection that can prevent scammers from accessing your information, even if they manage to imitate your voice or steal your password. At work, businesses should invest in cybersecurity training for employees. Building a culture of awareness and caution can prevent data breaches and manipulation.

    AI voice scams are still a developing threat, and organizations like the BBB are working to find solutions and increase public awareness. Until then, staying skeptical, careful, and informed is the best defense. In this new era where hearing a familiar voice doesn’t guarantee safety, taking a second to verify can make all the difference.

     
  • Geebo 9:00 am on February 3, 2025
    Tags: artificial intelligence, Golden Eagle

    AI Deepfake Scam Uses Celebrities to Defraud

    By Greg Collier

    The rise of artificial intelligence has brought remarkable advancements, but it has also given scammers a powerful tool to deceive unsuspecting victims. One recent case illustrates how fraudsters used AI-generated videos to impersonate prominent figures, including the sitting U.S. president, the CEO of a major bank, and tech mogul Elon Musk. The scheme revolved around an alleged investment opportunity known as the “Golden Eagles Project,” which falsely promised financial prosperity to those willing to purchase collectible coins.

    Victims were lured in with AI-generated videos that appeared to feature well-known public figures endorsing the scheme. These deepfake-style videos claimed that purchasing a $59 “golden eagle” coin would yield an astronomical return of over $100,000. To make the scam seem even more legitimate, the videos falsely stated that major banks and businesses were participating, allowing people to trade the coins for cash or high-value assets like Tesla cars or stock.

    Despite the seemingly legitimate nature of the endorsements, victims who fell for the scam soon realized the painful truth. The coins were virtually worthless. Even a detailed analysis by precious metal experts confirmed that the items contained no real gold or silver, making them valueless beyond their novelty appeal. One victim, a military veteran, invested thousands of dollars into the scam, believing he was on the path to becoming a millionaire. Instead, he found himself left with nothing but frustration and regret.

    The scam plays on a tactic that has become increasingly common, exploiting public trust in celebrities and high-profile figures. With AI-generated content becoming more convincing, fraudsters have seized the opportunity to create fake videos that appear legitimate to the average viewer. These scams thrive in online spaces where misinformation spreads rapidly, particularly on social media sites where content can circulate without much oversight.

    Beyond the financial losses suffered by individuals, this case also raises broader ethical concerns about the responsibilities of high-profile figures in preventing their likenesses from being misused. While the real individuals behind these fake endorsements had no connection to the scheme, their widely recognized images and voices were weaponized against vulnerable consumers. The damage caused by AI-generated fraud highlights the need for increased digital literacy, as well as stronger regulations around AI-manipulated media.

    Another critical aspect of this scam is the implication that a sitting U.S. president was personally endorsing an investment opportunity. This alone should have been a red flag, as federal law is supposed to prohibit a president from conducting personal business while in office. The position carries enormous influence, and rules exist to prevent any potential conflicts of interest that might arise from commercial endorsements. The idea that a government leader would actively promote a coin-based financial opportunity should have raised immediate skepticism. However, fraudsters took advantage of the public’s trust, crafting a deception convincing enough to ensnare even cautious individuals.

    Scams of this nature serve as a reminder that if an investment opportunity sounds too good to be true, it probably is. While AI technology is advancing rapidly, its potential for deception is growing just as fast. Consumers must remain vigilant, question sensational claims, and verify financial opportunities through reputable sources before making any commitments.

     
  • Geebo 8:00 am on October 16, 2024
    Tags: artificial intelligence

    How AI is Fueling a New Wave of Online Scams

    By Greg Collier

    With the rise of artificial intelligence (AI), the internet has become a more treacherous landscape for unsuspecting users. Once, the adage “seeing is believing” held weight. Today, however, scammers can create highly realistic images and videos that deceive even the most cautious among us. The enhanced development of AI has made it easier for fraudsters to craft convincing scenarios that prey on emotions, tricking people into parting with their money or personal information.

    One common tactic involves generating images of distressed animals or children. These fabricated images often accompany stories of emergencies or tragedies, urging people to click links to donate or provide personal details. The emotional weight of these images makes them highly effective, triggering a quick, compassionate response. Unfortunately, the results are predictable: stolen personal information or exposure to harmful malware. Social media users must be on high alert, as the Better Business Bureau warns against clicking unfamiliar links, especially when encountering images meant to elicit an emotional reaction.

    Identifying AI-generated content has become a key skill in avoiding these scams. When encountering images, it’s essential to look for subtle signs that something isn’t right. AI-generated images often exhibit flaws that betray their synthetic nature. Zooming in on these images can reveal strange details such as blurring around certain elements, disproportionate body parts, or even extra fingers on hands. Other giveaways include glossy, airbrushed textures and unnatural lighting. These telltale signs, though subtle, can help distinguish AI-generated images from genuine ones.
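
    Metadata offers one more clue beyond the visual inspection described above. Photos taken with a real camera or phone usually carry EXIF fields such as the camera model, while many AI generators and re-upload pipelines strip them or never write them at all. Absence of EXIF proves nothing on its own, but it can tip the balance. Here is a minimal Python sketch using the Pillow library, with a hypothetical file name:

        from PIL import Image
        from PIL.ExifTags import TAGS

        def exif_summary(path):
            """Return a readable dict of whatever EXIF tags the image carries."""
            exif = Image.open(path).getexif()
            return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

        tags = exif_summary("suspicious_photo.jpg")  # hypothetical file name
        if not tags:
            print("No EXIF metadata: consistent with, but not proof of, a generated image.")
        else:
            print("Camera model:", tags.get("Model", "not recorded"))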

    The same principles apply to videos. Deepfake technology allows scammers to create videos that feature manipulated versions of public figures or loved ones in fabricated scenarios. Unnatural body language, strange shadows, and choppy audio can all indicate that the video isn’t real.

    One particularly concerning trend involves scammers using AI to create fake emergency scenarios. A family member might receive a video call or a voice message that appears to be from a loved one in distress, asking for money or help. But even though the voice and face may seem familiar, the message is an illusion, generated by AI to exploit trust and fear. The sophistication of this technology makes these scams harder to detect, but the key is context. Urgency, emotional manipulation, and unexpected requests for money are red flags. It’s always important to verify the authenticity of the situation by contacting the person directly through trusted methods.

    Reverse image searches can be useful for confirming whether a photo has been used elsewhere on the web. By doing this, users can trace images back to their original sources and determine whether they’ve been manipulated. Similarly, checking whether a story has been reported by credible news outlets can help discern the truth. If an image or video seems too shocking or unbelievable and hasn’t been covered by mainstream media, it’s likely fake.
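
    The reverse searches themselves run through services like Google Images or TinEye, but a rough local version of the same idea is possible with perceptual hashing: two visually near-identical images produce hashes that differ in only a few bits, even after resizing or recompression. Here is a minimal sketch using the Python imagehash and Pillow libraries, with hypothetical file names:

        from PIL import Image
        import imagehash

        # Hypothetical files: the image you were sent, and the original you suspect it copies.
        received = imagehash.phash(Image.open("forwarded_photo.jpg"))
        candidate = imagehash.phash(Image.open("suspected_original.jpg"))

        # Subtracting two hashes gives the Hamming distance (number of differing bits).
        distance = received - candidate
        print("Hash distance:", distance)
        if distance <= 8:  # common rule-of-thumb threshold; tune for your needs
            print("Likely the same picture, possibly re-encoded or lightly edited.")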

    As AI technology continues to evolve, scammers will only refine their methods. The challenge of spotting fakes will become more difficult, and even sophisticated consumers may find themselves second-guessing what they see. Being suspicious and fact-checking are more important than ever. By recognizing the tactics scammers use and understanding how to spot AI-generated content, internet users can better protect themselves in this new digital landscape.

     