A Practical Guide

Generative AI Explained: Understanding the Technology’s Impact and Its Potential Harm

As AI technology continues to advance in 2025, we have entered a new generation of publicly accessible AI-generated tools and content. Even if you have heard of generative AI or learned to spot it in your social media feed, you are likely encountering it more often than you realize while scrolling social media, browsing the internet, and viewing various websites. Although publicly accessible AI tools aren’t inherently dangerous, it is becoming harder to distinguish real content from fake. Malicious actors are increasingly using AI to make scams more convincing and human-like, steadily raising the risk for unsuspecting internet users. AI education has become more important than ever for virtually anyone online.

Fake people, real consequences

NEWS ARTICLE

CNN: Finance worker pays out $25 million after video call with deepfake ‘chief financial officer’

In early 2024, a multinational firm in Hong Kong fell victim to a sophisticated deepfake scam, losing approximately $25 million USD. An employee was tricked into transferring the funds after joining a video conference where generative AI deepfakes were used to impersonate senior company executives. The scam’s believability was heightened by the appearance that he was in a routine call with familiar colleagues, including managers and the CEO. In reality, he was the only real person on the call, unknowingly surrounded by AI-generated impostors who tricked him into sending the money. This is just one real-world example of the technology’s misuse, demonstrating how easily threat actors can use it to deceive and cause real harm.

What is Artificial Intelligence (AI)?

Artificial Intelligence (AI) refers to the simulation of human intelligence by machines, enabling them to perform tasks like learning, reasoning, problem-solving, and perception. One major category of AI is Generative AI, which creates new content, such as images, text, music, or videos, by learning from vast amounts of existing data. These systems can produce original outputs that mimic human-created work, with popular tools capable of writing stories, generating artwork, making realistic fake videos and more.

AI-generated images are pictures created using artificial intelligence. These tools can produce realistic photos, artwork, animated characters or entirely fictional scenes without the need for a camera or a human artist. They can be used for creative purposes, but can also be misused to create misleading or fake visuals.

Deepfakes

  • Deepfakes are highly realistic and manipulated videos or audio recordings created using artificial intelligence. They often involve superimposing one person’s face onto another’s body or altering voices and actions to make it appear that someone is saying or doing something they never actually did. This technology can be used to mislead, impersonate, or spread false information.

DeepNudes

  • DeepNudes refers to the use of AI to create or manipulate images to generate nude or sexually explicit content of individuals without their consent. The technology alters uploaded photos to add realistic nudity, raising serious ethical and legal concerns around privacy, consent, and non-consensual image sharing.

AI-generated videos are videos created or modified using artificial intelligence, rather than traditional filming methods. These videos can include entirely synthetic scenes, altered footage (such as deepfakes), or enhanced visuals produced by algorithms. AI can generate realistic human faces and voices of people performing specific actions, animate characters, create futuristic or unrealistic scenes, and more. Today, the public has access to highly advanced AI video generators that can create fully synthetic videos with human-like voices and features in just minutes using simple prompts.

Text generators use artificial intelligence to produce written content based on user prompts. You type in a question or request, and the AI responds quickly, drawing on its training data and the internet to generate the most likely and relevant answer. While it can write blog posts, answer questions, explain concepts, or help brainstorm ideas in clear, neutral language, these tools are not perfect. They can sometimes produce inaccurate information or even invent facts, since they rely on patterns rather than verified sources.

  • Examples: ChatGPT, Google Gemini, Microsoft Copilot

Internet in Uproar: Release of Google’s Veo 3 Video Generator

The latest AI tool making headlines is Google’s Veo 3, a powerful video generator that creates realistic 8-second clips with human-like voices, sound effects, and lifelike portrayals. Anyone can access the tool through a subscription costing over $250/month.

As users began posting their creations, the internet responded with both awe and alarm. People worry this advanced generative AI tool could do more harm than good, eroding trust in media, enabling nearly undetectable misinformation, and raising serious ethical, legal, and creative concerns.

Google’s Veo 3 Promotional Video: Made Entirely by AI
CBC Article

Can we still tell what’s real? ‘Unsettling’ new AI tech makes generating ultrarealistic videos easy

How are these tools being used?

You’re likely already encountering generative AI more often than you realize. These tools produce much of the text, imagery, and video you see online every day. They’re widely used in entertainment and marketing, where creators use AI to generate music, art, videos, online posts and stories, making it quicker and easier to produce content for social media, websites, streaming platforms, and video games. Marketers use AI to design personalized ads, write product descriptions, and create visuals that grab attention and engage people across platforms.
Article from Forbes.com: Personalization And Context: AI’s Surging Role In Advertising

At the same time, generative AI is being misused. Misinformation can spread more easily through AI-generated fake news articles, deepfake videos, or misleading images that appear highly realistic. Scammers utilize AI to create fake identities, clone voices, and craft convincing messages that deceive people into sharing personal information or sending money. Because the content appears and sounds so realistic, it can be challenging for unsuspecting users to distinguish between genuine and artificially created content.

Statistic

According to the Deloitte Centre for Financial Services, generative AI could enable $40 billion in fraud losses by 2027, up from $12.3 billion in 2023, a 32% compound annual growth rate.
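As a rough check on how a figure like this is derived, a compound annual growth rate (CAGR) is the constant yearly rate that grows one value into another over a number of years. The sketch below is a generic illustration of that formula, not Deloitte’s own methodology; computed directly from the two endpoints it lands near the cited rate (small differences typically come from rounding or modeling assumptions).

```python
def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate: the constant yearly rate that
    grows start_value into end_value over the given number of years."""
    return (end_value / start_value) ** (1 / years) - 1

# Endpoints from the statistic: $12.3B in 2023 growing to $40B in 2027.
rate = cagr(12.3, 40.0, 2027 - 2023)
print(f"{rate:.1%}")  # prints 34.3% — in the ballpark of the cited 32%
```

The simple endpoint calculation gives roughly 34%; Deloitte reports 32%, which likely reflects its own year-by-year modeling rather than a straight two-point interpolation.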

The Harms Created by Generative AI

Read on to learn about the different harms that generative AI creates.

There are countless scams designed to commit financial fraud and theft, and generative AI is making it even easier for scammers to create convincing scenarios, fake ads, false products, impersonations, and phishing schemes.

Voice cloning impersonation phone call scams

  • Scam calls aren’t new, but AI is making them sound increasingly realistic. Instead of the obvious robotic or overseas call centre voices people used to recognize as scams, these calls now feature AI-generated voices that sound more natural, friendly, and familiar. Scammers don’t need to clone your exact voice to fool someone; they need a voice that sounds believable enough. They might call pretending to be you in an emergency, saying you’ve been in an accident or need help, and ask your loved ones to send money. Because the voice sounds human and convincing, people are more likely to be deceived by it. The scam itself hasn’t changed much, but AI is making it much harder to spot.

    As voice cloning technology advances, there is growing concern that scammers may begin targeting individuals more directly, replicating their voices to impersonate them in calls to friends or family. This highlights the growing importance of protecting your voice data and avoiding voice-based security authentication methods. Because AI can now generate highly realistic voices, voice-activated security systems are becoming easier to manipulate. This weakens their effectiveness and increases the risk of personal information being exposed.

AI ads that lead to fake AI products or opportunities

  • While online ads help legitimate companies attract customers, scammers also use generative AI to create fake yet convincing ads. These often lead to malicious websites selling fake products or services, or connect you with scammers posing as real businesses. For example, a fake job ad might direct you to a fake hiring manager who “hires” you and asks for your bank details, then uses them for fraud.
  • Fake product ads may link to a professional-looking site where you unknowingly hand over your financial info when making a purchase. AI can quickly generate realistic ads, websites, and products, allowing scams to spread rapidly. Legitimate platforms often struggle to remove them in time.

Phishing and Social Engineering

  • Using social engineering, scammers can now create highly detailed, personalized emails with the help of AI. These messages are designed to appear as if they’re coming from a trusted source, such as a CEO, coworker, or business partner, and often request money transfers, invoice payments, or sensitive financial information. Because the emails appear so genuine, they exploit human trust and psychology, tricking people into sending money to scammers they believe are legitimate people.

While sextortion and romance scams aren’t new, generative AI is making them more convincing and easier to scale. Scammers now use generative AI tools and chatbots to automate messages, create fake profiles, and build trust faster than ever.

  • Sextortion involves tricking or coercing someone into sharing intimate images or videos, then threatening to release them unless payment is made. All it takes is one photo for a scammer to use generative AI to create fake yet realistic explicit images or videos. Victims are then blackmailed with these deepfakes, pressured to pay out of fear their friends, family, or co-workers will see these images or videos of them and not know it is fake.
  • Romance scams use fake identities to build fake relationships for financial gain. AI-generated profiles, complete with convincing bios, photos, and even video or voice messages, make it easier for scammers to seem real. AI Chatbots programmed for the purpose of a romance scam can respond in real-time, making the interaction feel human, memorable and personal. Once trust is built, scammers ask for money and may keep asking or vanish after getting it.
    Read about it: Online romance scammers may have a new wingman — artificial intelligence | CBC News

AI text generators can sometimes produce biased or false information by accident—but they’re also being used intentionally to spread disinformation. Some websites are set up with the sole goal of publishing fake stories, building a following, and getting people to believe and share false information, often profiting from the attention or simply aiming to cause chaos, panic, or confusion. These stories range from harmless feel-good pieces to damaging fake news about public figures. With generative AI, this kind of content can be generated and shared online in minutes, fuelling a constant stream of disinformation across social media and the web.
AI Disinformation in a democratic Canada: The battle against AI-driven disinformation

Public figures, including politicians, news anchors, celebrities, and influencers, have numerous photos and videos online, making them easy targets for AI-generated deepfakes. While some are clearly fake and intended as jokes, others are more harmful, as they feature people endorsing scam products, promoting dangerous sites, or spreading false messages. Even if the person knows it’s fake, others may not, and the damage to their reputation can be serious. These deepfakes have been used to trick people into donating money, falling for fake crypto schemes, or even following dangerous calls to action made to look like they came from a trusted leader.

As generative AI content becomes more convincing, it’s starting to erode people’s trust in what they see and hear. In the past, it was easier to spot fake content, but now, with tools that can closely mimic human speech and visuals, it’s harder to tell what’s real. This growing uncertainty can create confusion and doubt, not just about the content itself, but about the platforms sharing it. People may begin to question the reliability of social media, news outlets, and journalism overall, unsure of who’s really behind the information and whether it’s been manipulated.

A collection of posts on social media made with generative AI

Lesser Harm: AI Slop

The internet is being flooded with cheap, low-quality, high-engagement content created using generative AI, often designed to capture attention, evoke emotion, or promote a particular agenda. Disinformation websites built with these tools are proliferating alongside it. This type of content, whether made by low-level influencers chasing clicks or as part of coordinated political influence campaigns, is now being referred to as “AI slop.”

This content is easy to produce, spreads quickly, and adds to the growing noise online, making it harder to distinguish useful information from manipulative or misleading material, and human-made content from AI-generated content. Much of its rapid spread appears to be driven by bots programmed to disseminate it; by some estimates, bots now account for a large portion of all internet traffic.

As generative AI tools improve at imitating human behaviour, it becomes increasingly difficult to distinguish what is real from what is not. While social media companies and browsers claim to be cracking down on the rapid spread of this content, particularly content containing disinformation, it still reaches people’s feeds far more often than it should.

How to Use Generative AI Safely

Generative AI can be very helpful and informative if used correctly and safely. Here are some guidelines for using these tools.

Limit What You Share

Don’t type or upload anything you wouldn’t want made public.

  • Avoid names, passwords, ID numbers, or personal stories.
  • Don’t upload photos unless you fully understand how the tool handles data.

Assume everything you input could be stored or reviewed.

  • Even if the tool says it’s private, company staff may still review samples for training or moderation.
  • Any time a company holds your data, it is vulnerable to privacy breaches or exposure to hackers.

Change Default Privacy Settings

Turn off “chat history” or data-saving features.

  • Example: In ChatGPT, go to Settings → Data Controls → Turn off “Chat history & training.”
  • In Google Gemini, go to myaccount.google.com/activitycontrols → Web & App Activity → Turn off the toggle to stop chat saving → Manage Activity to delete old chats

Regularly delete your activity

  • Clear your prompts or uploaded content, where allowed.

Use Tools With Better Privacy Protections

Choose AI tools that process data locally or allow opt-outs.

  • Use open-source or on-device tools (like some image generators or assistants).

Search for the tool’s privacy policy

  • Look for: “What we collect,” “How we use your data,” and “Can I opt out of training?”

Think Before You Click, Post, or Share

Review the AI-generated content before copying, pasting, or sharing.

  • Especially if it involves other people, medical and legal info, or sensitive topics.
  • Use external fact-checkers to make sure the information is correct.

Add a disclaimer if using AI for public content

  • “This post includes AI-generated content” is a simple way to stay transparent and avoid issues.

Maintain Good Digital Hygiene

  • Log out of AI tools after using them on shared devices.
  • Use a strong, unique password and MFA for your AI accounts.
  • Update browser extensions or AI plugins regularly.

Stay Informed

  • Sign up for AI tool updates or privacy alerts.
  • Follow reliable technology news sites to stay aware of policy changes or risks.

Tips on Spotting Generative AI Content

  • Zoom in: look for warped features, melted glasses, odd hands, or floating objects.
  • Check symmetry and proportions: extra fingers, misaligned features, or disproportionate limbs are common in lower-quality generative AI.
  • Notice textures: AI photos often have an airbrushed look, odd backgrounds, or random blurry areas.
  • Watch for lighting issues: flickering shadows or inconsistent or inaccurate light sources.
  • Watch for unnatural movement or body language: stiff or odd gestures, no blinking, or robotic motion.
  • Listen carefully: strange pacing, odd intonation, or background sounds that don’t make sense.

How to Avoid Generative AI Voice Cloning Scams

  • Pause before acting: No matter how convincing a phone call or voicemail may sound, hang up or close the message if something doesn’t feel right. Call the person who claimed to have called you directly with the phone number you have saved for them. Don’t call back the number provided by the caller or caller ID. Ask questions that would be hard for an impostor to answer correctly.
  • Don’t send money under pressure: If the caller urgently asks you to send money via a digital wallet payment app or a gift card, treat that as a red flag for a scam. If you wire money to someone and later realize it was fraud, alert the police immediately.
  • Secure your accounts: Whether at work or home, set up multifactor authentication for email logins and other changes in email settings. At work, verify changes in information about customers, employees, or vendors.

Always…

  • Take a closer look at the content
  • Make sure you know who you are talking to
  • Be careful what you post online
  • Be careful when sharing information digitally with anyone
  • Avoid making financial decisions based on viral videos
  • Be skeptical about everything you see on the internet
  • Use verification tools such as reverse image search, or validate information with a trusted outlet

Tip Sheet: A Practical Guide to Generative AI

Tip Sheet: How to Use Generative AI Safely