Terminology

  1. Artificial Intelligence (AI): the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.
  2. Large Language Models (LLMs): large deep learning AI models that are pre-trained on vast amounts of data and work by predicting the most probable next word in a sequence; they can understand and generate language and perform various tasks, including answering questions, summarizing information, translating languages, and creating original content.
  3. Multimodal AI: a machine learning model capable of processing information from multiple modalities, including images, video, and text; essentially an extension of LLMs that incorporates capabilities beyond text-based interaction.
  4. Context Window: the working memory of an AI model; that is, how much information it can retain while generating a response to your prompt.
  5. Training Data: “set of information, or inputs, used to teach AI models to make accurate predictions or decisions.” (Jaen, 2024)
  6. “Hallucinations”: “incorrect or misleading results that AI models generate. These errors can be caused by a variety of factors, including insufficient training data, incorrect assumptions made by the model, or biases in the data used to train the model.” (Google, 2024)
  7. Prompt: the input provided to a generative AI model to elicit a specific response; can take the form of questions or instructions.
  8. Prompt Engineering: “science of designing and optimizing prompts to guide AI models, particularly LLMs, towards generating the desired responses. By carefully crafting prompts, you provide the model with context, instructions, and examples that help it understand your intent and respond in a meaningful way.” (Google Cloud, 2024)
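The next-word prediction described in item 2 can be pictured with a toy example. The sketch below is not a real LLM: it builds a simple bigram frequency table from a ten-word corpus (invented for illustration) and always emits the word that most often follows the given one.

```python
from collections import Counter, defaultdict

# Toy illustration of "predict the most probable next word" (not a real LLM).
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (a bigram table).
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    """Return the word most frequently seen after `word` in the corpus."""
    return next_word_counts[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```

A real LLM does the same thing at vastly larger scale, with probabilities learned by a neural network over subword tokens rather than raw counts over words.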
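The context window of item 4 can be pictured as a fixed token budget. The sketch below is a deliberate simplification: it treats one word as one token (real tokenizers differ) and drops the oldest messages once a hypothetical budget is exceeded, which is why long conversations "forget" their beginnings.

```python
CONTEXT_WINDOW = 12  # hypothetical budget, in tokens (one word = one token here)

def fit_to_window(messages, budget=CONTEXT_WINDOW):
    """Keep the most recent messages whose combined length fits the budget."""
    kept, used = [], 0
    for message in reversed(messages):  # walk from newest to oldest
        tokens = len(message.split())
        if used + tokens > budget:
            break  # older messages fall out of the model's "memory"
        kept.append(message)
        used += tokens
    return list(reversed(kept))

history = [
    "hello there",                            # 2 tokens
    "please summarize the quarterly report",  # 5 tokens
    "focus on revenue and costs",             # 5 tokens
]
print(fit_to_window(history))             # all 12 tokens fit
print(fit_to_window(history, budget=10))  # the oldest message is dropped
```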
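To make item 8 concrete, the sketch below assembles a few-shot prompt, one common prompt-engineering pattern, out of a task description (context and instructions), labeled examples, and a query. The helper name and the sentiment task are illustrative assumptions; no particular model or API is assumed.

```python
def build_few_shot_prompt(task, examples, query):
    """Assemble a prompt that gives a model context, instructions, and examples."""
    lines = [f"Task: {task}", "", "Examples:"]
    for text, label in examples:
        lines.append(f"  Input: {text}")
        lines.append(f"  Output: {label}")
    lines += ["", f"Input: {query}", "Output:"]
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    task="Classify the sentiment of the input as positive or negative.",
    examples=[("I loved this film.", "positive"),
              ("The service was terrible.", "negative")],
    query="The food was wonderful.",
)
print(prompt)
```

Compared with sending the bare question alone, the structured prompt shows the model the expected output format and gives it worked examples to imitate.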