
Artificial Intelligence (AI)

Introduction to AI

"Artificial intelligence, or AI, is technology that enables computers and machines to simulate human intelligence and problem-solving capabilities. As a field of computer science, artificial intelligence encompasses (and is often mentioned together with) machine learning and deep learning. These disciplines involve the development of AI algorithms, modeled after the decision-making processes of the human brain, that can ‘learn’ from available data and make increasingly more accurate classifications or predictions over time" (IBM, 2024).


Generative AI has rapidly gained popularity, using machine learning models to create text, media, code, and more. Tools like ChatGPT, Claude, and Gemini exemplify its use, generating new content by learning patterns from existing data and producing new information with similar characteristics. Other tools aid research and information synthesis.


This guide serves as an informational and educational resource for UT Tyler faculty, students, and staff. The Robert R. Muntz Library does not endorse or encourage the use of any particular AI tool. Students should consult with their instructor(s) regarding the use of AI tools in their coursework. For more information, please see UT Tyler's AI Syllabus Language. This is a living resource guide that will be updated as AI tools, information, and resources evolve.

AI Basics

What is Generative AI?

How AI Works

How Chatbots & Large Language Models Work

How Computer Vision Works

How Neural Networks Work

Understanding Generative AI

A.I. tools like ChatGPT seem to think, speak, and create like humans. But what are they really doing? From cancer cures to Terminator-style takeovers, leading experts explore what A.I. can, and can't, do today, and what lies ahead. Duration: 53 minutes.

[PBS video: A.I. Revolution]

A-Z Glossary of AI Terms

  • AI - a computer system able to perform specific tasks that normally require human intelligence, such as visual perception, speech recognition, basic decision-making, and language translation.

  • Algorithm - AI systems rely on algorithms, which act as the "brains" behind their decisions: a set of instructions dictating the actions the AI takes. There are two main types of algorithms, contrasted in the sketch after this list:
    • In rule-based systems, human programmers define the specific rules the AI follows.
    • Machine learning algorithms, on the other hand, can discover and adapt their own rules based on the data they are trained on.
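
    A minimal sketch of the contrast, assuming a toy spam-filtering task (the phrases, messages, and "learning" rule here are hypothetical and far simpler than real systems):

      # Rule-based: a human wrote this rule by hand.
      def rule_based_filter(message: str) -> bool:
          return "free money" in message.lower()

      # Machine learning: the program derives its own rule from labeled
      # examples, treating words seen only in spam as spam signals.
      def learn_filter(examples):
          spam_words, ham_words = set(), set()
          for text, is_spam in examples:
              (spam_words if is_spam else ham_words).update(text.lower().split())
          signals = spam_words - ham_words
          return lambda message: bool(signals & set(message.lower().split()))

      training_data = [("claim your free money now", True),
                       ("meeting moved to noon", False)]
      learned = learn_filter(training_data)
      print(rule_based_filter("FREE MONEY inside!"))  # True (hand-written rule)
      print(learned("claim your prize today"))        # True ("claim" appeared only in spam)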


  • Autonomy - the ability of an AI system to independently learn, make decisions, and take actions to achieve a goal, without requiring constant human intervention. This can occur in physical environments (self-driving cars), virtual worlds (game characters), or digital spaces (personal assistants).

  • Black Box - traditional machine learning models are often difficult to understand and interpret. These models are typically black boxes that make predictions based on input data but do not provide any insight into the reasoning behind their predictions.
    • This lack of transparency and interpretability is a major limitation of traditional machine learning models and can lead to a range of problems and challenges, such as bias.


  • Chatbot - a software program that simulates intelligent conversations with humans. Chatbots can be text-based or voice-activated, and are typically integrated into websites, mobile apps, or messaging platforms. They can be:
    • Rule-based - following a pre-programmed script or set of rules (a minimal example follows below).
    • AI-powered - using machine learning to generate responses.
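
    A minimal sketch of the rule-based variety, assuming made-up keywords and canned replies:

      # A tiny rule-based chatbot: it follows a pre-programmed script
      # and has no real understanding of the conversation.
      RULES = {
          "hours": "We're open 8 a.m. to 10 p.m. on weekdays.",
          "hello": "Hello! How can I help you today?",
          "thanks": "You're welcome!",
      }
      FALLBACK = "Sorry, I didn't catch that. Try asking about our hours."

      def reply(message: str) -> str:
          for keyword, response in RULES.items():
              if keyword in message.lower():  # match the first scripted rule
                  return response
          return FALLBACK

      print(reply("Hello there"))           # -> scripted greeting
      print(reply("What are your hours?"))  # -> scripted hours answer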


  • Chat-Based Generative Pre-Trained Transformer (ChatGPT) - a more advanced chatbot powered by LLM technology developed by OpenAI. It uses a specific type of neural network called a transformer. This model:
    • can generate responses to questions (Generative);
    • was trained in advance on a large amount of the written material available on the web (Pre-trained);
    • can process sentences differently than other types of models (Transformer).


  • Computer Vision - a set of computational challenges concerned with teaching computers how to understand visual information, including objects, pictures, scenes, and movement (including video). Many techniques build on simple pixel filters like the one sketched below.
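
    A minimal sketch of one classic building block, edge detection, assuming a tiny grayscale image stored as rows of brightness values (real systems use libraries such as OpenCV, but the idea of comparing neighboring pixels is the same):

      # Find vertical edges by comparing each pixel with its right-hand
      # neighbor; a large difference means brightness changes sharply there.
      image = [
          [0, 0, 255, 255],  # dark columns on the left, bright on the right,
          [0, 0, 255, 255],  # so a vertical edge runs down the middle
          [0, 0, 255, 255],
      ]

      def vertical_edges(img):
          return [[abs(row[x + 1] - row[x]) for x in range(len(row) - 1)]
                  for row in img]

      for row in vertical_edges(image):
          print(row)  # [0, 255, 0] -- the 255s trace the edge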


  • Data - units of information, like numbers, text, images, or sounds, that can be processed and analyzed by AI algorithms.
    • Training data - the data used to teach or train an AI model. The quality of training data directly impacts how well AI performs.

  • Deepfake - any image, photo, or video produced by AI tools and designed to fool people into thinking it is real.


  • Deep Learning - a subfield of neural networks with several hidden layers, stacked one on top of another. This deeper structure allows deep learning algorithms to tackle even more complex problems. The additional layers provide more processing power, enabling the network to recognize intricate patterns in massive datasets. Think of it as giving the network more tools to analyze the information.

  • Encoding - the step that follows tokenization: each token is assigned a unique numerical identifier. Working with numbers rather than raw text is what lets the AI identify patterns, translate languages, answer questions, and even create new text (see the sketch below).
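
    A minimal sketch of tokenization and encoding together, assuming a toy word-level vocabulary (real models use subword tokenizers with vocabularies of tens of thousands of entries):

      # Tokenize a sentence into words, then encode each token as a number.
      text = "the cat sat on the mat"
      tokens = text.split()              # tokenization: ["the", "cat", ...]

      vocab = {}                         # toy vocabulary built on the fly
      ids = []
      for token in tokens:
          if token not in vocab:
              vocab[token] = len(vocab)  # assign the next unused ID
          ids.append(vocab[token])

      print(tokens)  # ['the', 'cat', 'sat', 'on', 'the', 'mat']
      print(ids)     # [0, 1, 2, 3, 0, 4] -- "the" maps to 0 both times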



  • Generative Artificial Intelligence (GAI or genAI) - AI that can generate new or original content like writing poems, composing music, or generating realistic images. It uses machine learning to analyze massive amounts of existing data (text, images, etc.). By studying these patterns, it learns to predict what comes next in a sequence.

  • Hallucinations - occur when an AI model generates an inaccurate or faulty output, i.e., the output either does not belong to the training data or is fabricated. In other words, the AI system "hallucinates" information that it has not been explicitly trained on, leading to unreliable or misleading responses (Marr, 2023). Examples include:
    • Misinformation and fabrication - AI-powered news bots tasked with generating quick reports on developing emergencies might include fabricated details or unverified information, leading to the spread of misinformation.
    • Misdiagnosis in healthcare - an AI model trained to analyze skin lesions for cancer detection might misclassify a benign mole as malignant, leading to unnecessary biopsies or treatments.


  • Interpretable Machine Learning (IML) - also called Interpretable AI or Explainable AI; describes the creation of models that are inherently interpretable, in that they provide their own explanations for their decisions.
  • Large Language Model (LLM) - an AI program that uses natural language processing to communicate in ordinary human language. Virtual assistants like Siri or Alexa utilize large language models to understand and respond to natural language queries.
  • Machine Learning - using sample data to train computer programs to recognize patterns based on algorithms (a minimal sketch follows below).
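
    A minimal sketch, assuming a made-up pattern (hours studied vs. exam score) and the simplest possible "learning": fitting a straight line to the samples:

      # "Learn" a pattern from sample data: fit y = w * x by least squares.
      samples = [(1, 10), (2, 19), (3, 31), (4, 41)]  # (hours, score), made up

      # Closed-form least-squares slope for a line through the origin.
      w = sum(x * y for x, y in samples) / sum(x * x for x, _ in samples)

      print(round(w, 2))      # ~10.17, the learned "rule"
      print(round(w * 5, 1))  # ~50.8, a prediction for 5 hours of study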

  • Natural Language Processing - the ability to understand speech as well as understand and analyze documents.

  • Neural Networks (NN) - computer systems designed to imitate the neurons in a brain. Here is a breakdown of how they work (see the sketch after this list):
    • Layers of processing - data is fed into the network layer by layer. Each layer performs calculations and transforms the information.
    • Hidden workings - the middle layers, called hidden layers, are where the magic happens. These layers contain complex calculations that allow the network to learn and recognize patterns.
    • Output and refinement - the final layer produces the network's output, like identifying an image or making a prediction. Over time, with training and adjustments, the network refines its ability to process information accurately.
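
    A minimal sketch of those layers in code, assuming hand-picked weights (training would adjust these numbers automatically, and real networks have thousands of neurons per layer):

      import math

      def layer(inputs, weights, biases):
          """One layer: weighted sums squashed through a sigmoid activation."""
          return [1 / (1 + math.exp(-(sum(w * x for w, x in zip(ws, inputs)) + b)))
                  for ws, b in zip(weights, biases)]

      x = [0.5, 0.8]  # input layer: two features

      # Hidden layer: two neurons with weights chosen by hand for illustration.
      hidden = layer(x, weights=[[0.4, -0.6], [0.9, 0.2]], biases=[0.1, -0.3])

      # Output layer: one neuron turns the hidden values into a prediction.
      output = layer(hidden, weights=[[1.2, -0.7]], biases=[0.05])

      print(hidden)  # the transformed information from the hidden layer
      print(output)  # a single value between 0 and 1, e.g. a probability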


  • Prompt - a specific instruction or question given to a generative AI program by a human user. It essentially tells the AI what you want it to do.
  • Tokens - the building blocks that help AI models make sense of our language. They can be whole words, parts of words, or even symbols, depending on the AI model. Since computers are better with numbers than words, breaking text into tokens gives them a kind of code they can understand and work with easily (see the sketch under Encoding above).

  • Transformer - used in ChatGPT (the T stands for Transformer), transformer models are a type of language model: neural networks that learn context and meaning by tracking relationships in sequential data, like the words of a sentence. A toy version of the underlying "attention" step is sketched below.
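
    A minimal sketch of that relationship-tracking idea, assuming each word is already encoded as a small vector; every word scores its connection to every other word in the sentence (real transformers learn separate query, key, and value projections on top of this):

      import math

      # Toy word vectors; in a real model these embeddings are learned.
      words = {"the": [1.0, 0.0], "cat": [0.2, 0.9], "sat": [0.1, 0.8]}
      sentence = ["the", "cat", "sat"]

      def dot(a, b):
          return sum(x * y for x, y in zip(a, b))

      def softmax(scores):
          exps = [math.exp(s) for s in scores]
          return [e / sum(exps) for e in exps]

      # Each word "attends" to every word, weighting similar vectors higher.
      for w in sentence:
          scores = [dot(words[w], words[other]) for other in sentence]
          print(w, [round(p, 2) for p in softmax(scores)])
      # "cat" puts most of its attention on "cat" and "sat", little on "the".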


