Artificial Intelligence (AI)

Misinformation in AI

Generative AI tools can support users throughout the research process. However, they can be unreliable: they often generate false information, or "hallucinations," and present it with confidence. These hallucinations can include fabricated citations or facts.

AI tools have also been used to create false images and audiovisual recordings that spread misinformation and mislead audiences. Known as "deepfakes," these materials are particularly dangerous because they can be used to subvert democratic processes.

AI-generated content sometimes lacks currency because some systems do not have access to recent information. Instead, they are trained on older datasets, which can produce dated representations of current events and the surrounding information landscape.

Key Tips  

  • Meticulously fact-check all information produced by generative AI, and verify that every citation it offers actually exists and supports the claim.
  • Avoid asking AI to produce a list of sources on a specific topic, as this may result in fabricated citations.
  • When available, consult the AI developer's notes to determine whether the tool's information is up to date.
  • Always remember that generative AI tools are not search engines: they use large amounts of data to generate responses that mimic human conversation.
A deepfake is a video of a person whose face or body has been digitally altered so that they appear to be someone else, typically created maliciously or to spread false information.

Deepfakes are proliferating online, leading individuals to distrust the integrity and veracity of all content posted on social media and other sites, such as YouTube. People tend to believe whatever aligns with their long-held values and beliefs, regardless of the authenticity of the words put into videos or images.

Using deepfakes, bad actors can manipulate others into believing falsehoods, which can profoundly influence individual perceptions and trust. Exposure to manipulated media may erode confidence in visual and audio evidence, fostering a climate of skepticism and cynicism toward digital media.

Bad actors create deepfakes to:
  • Spread misinformation
  • Harass
  • Incite violence

Types of deepfakes:
  • Photo deepfakes, e.g., face and body swapping.
  • Audio deepfakes, e.g., voice-swapping or text-to-speech.
  • Video deepfakes, e.g., face-swapping, face-morphing, or full-body puppetry.
  • Audio & video deepfakes, e.g., lip-synching.

How to Spot Deepfakes

  • How to Spot Deepfake Videos: 4 Simple Tips
  • The Top 5 Ways to Spot 'Deepfake' Videos and Images
  • Creating a "Lie Detector" for Deepfakes
  • When AI Can Fake Reality, Who Can You Trust? (Sam Gregory, TED Talk)
  • The Future of Fakery: Deepfakes, Generative AI & The Fight for Authenticity

Privacy and AI

Breaches of Privacy & Danger of Re-Identification

Generative AI tools pose significant privacy risks because they collect and process large amounts of user data. This data can be misused and/or sold by companies without your consent. Even if you don't directly share personal information, patterns in your data can still reveal sensitive details about you.

Key Tips

  • Avoid sharing any personal or sensitive information with AI-powered tools.
  • Do not upload Library materials (e.g., articles, ebooks, infographics, psychographics, or other datasets) into AI tools, as this is prohibited.
  • Be cautious about policies that permit inputted data to be freely distributed to third-party vendors and/or other users.
