Generative AI Explained: Understanding the Technology’s Impact and Its Potential Harm
As AI technology continues to advance in 2025, we have entered a new generation of publicly accessible AI tools and AI-generated content. Even if you have heard of generative AI or spotted it in your social media feed, you are likely encountering it more often than you realize while scrolling social media and browsing the web. Although publicly accessible AI tools aren’t inherently dangerous, it is becoming harder to distinguish real content from fake. Malicious actors are increasingly using AI to make scams more convincing and human-like, steadily raising the risk for unsuspecting internet users. AI education has become more crucial than ever for virtually anyone who goes online.
Fake people, real consequences
CNN: Finance worker pays out $25 million after video call with deepfake ‘chief financial officer’
In early 2024, a multinational firm in Hong Kong fell victim to a sophisticated deepfake scam, losing approximately $25 million USD. An employee was tricked into transferring the funds after joining a video conference where generative AI deepfakes were used to impersonate senior company executives. The scam’s believability was heightened by the appearance that he was in a routine call with familiar colleagues, including managers and the company’s chief financial officer. In reality, he was the only real person on the call, unknowingly surrounded by AI-generated impostors who tricked him into sending the money. This is just one real-world example of how easily threat actors can misuse this technology to deceive victims and cause real harm.
What is Artificial Intelligence (AI)?
Artificial Intelligence (AI) refers to the simulation of human intelligence by machines, enabling them to perform tasks like learning, reasoning, problem-solving, and perception. One major category of AI is Generative AI, which creates new content, such as images, text, music, or videos, by learning from vast amounts of existing data. These systems can produce original outputs that mimic human-created work, with popular tools capable of writing stories, generating artwork, making realistic fake videos and more.
Internet in Uproar: The Release of Google’s Veo 3 Video Generator
The latest AI tool making headlines is Google’s Veo 3, a powerful video generator that creates realistic 8-second clips complete with human-like voices, sound effects, and lifelike visuals. Anyone can access the tool through a subscription costing over $250 per month.
As users began posting their creations, the internet responded with both awe and alarm. People worry this advanced generative AI tool could do more harm than good, eroding trust in media, enabling nearly undetectable misinformation, and raising serious ethical, legal, and creative concerns.
Can we still tell what’s real? ‘Unsettling’ new AI tech makes generating ultrarealistic videos easy
How are these tools being used?
You’re likely already encountering generative AI more often than you realize, even if you don’t notice it directly. These tools power much of the text, imagery, and video you see online every day. They’re widely used in entertainment and marketing, where creators use AI to generate music, art, videos, online posts, and stories, making it quicker and easier to produce content for social media, websites, streaming platforms, and video games. Marketers use AI to design personalized ads, write product descriptions, and create visuals that grab attention and engage people across platforms.
Article from Forbes.com: Personalization And Context: AI’s Surging Role In Advertising
At the same time, generative AI is being misused. Misinformation can spread more easily through AI-generated fake news articles, deepfake videos, or misleading images that appear highly realistic. Scammers utilize AI to create fake identities, clone voices, and craft convincing messages that deceive people into sharing personal information or sending money. Because the content appears and sounds so realistic, it can be challenging for unsuspecting users to distinguish between genuine and artificially created content.
Statistic
According to the Deloitte Center for Financial Services, generative AI could enable $40 billion in fraud losses by 2027, up from $12.3 billion in 2023, a 32% compound annual growth rate.
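As a quick sanity check, the growth-rate figure above can be reproduced from the two dollar amounts using the standard compound-annual-growth-rate (CAGR) formula. This is an illustrative sketch only, not part of the Deloitte report; the implied rate works out slightly above the reported 32%, likely because the underlying figures are rounded.

```python
# Compound annual growth rate (CAGR) between a start and end value:
# CAGR = (end / start) ** (1 / years) - 1
def cagr(start: float, end: float, years: int) -> float:
    """Annualized growth rate implied by two values `years` apart."""
    return (end / start) ** (1 / years) - 1

def project(start: float, rate: float, years: int) -> float:
    """Value after compounding `rate` annually for `years` years."""
    return start * (1 + rate) ** years

# Deloitte's figures: $12.3B in 2023 growing toward $40B by 2027 (4 years).
rate = cagr(12.3, 40.0, 2027 - 2023)
print(f"Implied CAGR: {rate:.0%}")  # roughly 34%, close to the reported 32%

# Compounding the 2023 figure at the reported 32% lands near $37B,
# in the same ballpark as the $40B projection once rounding is considered.
print(f"$12.3B at 32% for 4 years: ${project(12.3, 0.32, 4):.1f}B")
```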
The Harms Created by Generative AI
The sections below describe the different harms that generative AI creates.

Lesser Harm: AI Slop
The internet is being flooded with cheap, low-quality, high-engagement content created using generative AI, often designed to capture attention, evoke emotion, or promote a particular agenda. With the rise of generative AI, disinformation websites are proliferating across the internet. This type of content, whether made by low-level influencers chasing clicks or part of coordinated political influence campaigns, is now being referred to as “AI slop.”
It’s easy to produce, spreads quickly, and adds to the growing noise online, making it harder to separate useful information from manipulative or misleading material, and human-made content from AI-generated content. Much of its rapid spread appears to be driven by bots programmed to disseminate it; by some estimates, bots now account for a large portion of all internet traffic.
As generative AI tools get better at imitating human behaviour, it becomes increasingly difficult to distinguish what is real from what is not. While social media companies and search platforms claim to be cracking down on the rapid spread of this content, particularly content containing disinformation, it still reaches people’s feeds far more often than it should.
How to Use Generative AI Safely
Generative AI can be very helpful and informative if used correctly and safely. Here are some guidelines for using these tools.
Limit What You Share
Don’t type or upload anything you wouldn’t want made public.
- Avoid names, passwords, ID numbers, or personal stories.
- Don’t upload photos unless you fully understand how the tool handles data.
Assume everything you input could be stored or reviewed.
- Even if the tool says it’s private, company staff may still review samples for training or moderation.
- Anytime a company has your data, it is vulnerable to privacy breaches/exposure to hackers.
Change Default Privacy Settings
Turn off “chat history” or data-saving features.
- Example: In ChatGPT, go to Settings → Data Controls → Turn off “Chat history & training.”
- In Google Gemini, go to myaccount.google.com/activitycontrols → Web & App Activity → Turn off the toggle to stop chat saving → Manage Activity to delete old chats
Regularly delete your activity
- Clear your prompts or uploaded content, where allowed.
Use Tools With Better Privacy Protections
Choose AI tools that process data locally or allow opt-outs.
- Use open-source or on-device tools (like some image generators or assistants).
Search for the tool’s privacy policy
- Look for: “What we collect,” “How we use your data,” and “Can I opt out of training?”
Think Before You Click, Post, or Share
Review the AI-generated content before copying, pasting, or sharing.
- Especially if it involves other people, medical and legal info, or sensitive topics.
- Use external fact-checkers to make sure the information is correct.
Add a disclaimer if using AI for public content
- “This post includes AI-generated content” is a simple way to stay transparent and avoid issues.
Maintain Good Digital Hygiene
- Log out of AI tools after using them on shared devices.
- Use a strong, unique password and MFA for your AI accounts.
- Update browser extensions or AI plugins regularly.
Stay Informed
- Sign up for AI tool updates or privacy alerts.
- Follow reliable technology news sites to stay aware of policy changes or risks.
Tips on Spotting Generative AI Content
AI-Generated Images
AI-Generated Videos
How to Avoid Generative AI Voice Cloning Scams:
Always…
Tip Sheet: A Practical Guide to Generative AI
Tip Sheet: How to use generative AI safely