Faculty and AI - Atla 2024

This guide contains resources used in our presentation at Atla 2024 in Long Beach, CA.

Fear Not

"A human toddler usually requires just a few examples to recognize that a kangaroo is not an elephant, and that both real-world animals are different than, say, pictures of animals on a sippy cup. And yet, the powerful statistical models now driving “artificial intelligence” (AI)—such as the much-discussed large language model ChatGPT—have no such ability.

The human brain evolved over 500 million years to help people make sense of a world of multifarious objects, within the lived contexts that embed their learning in social relations and affective experiences. Deprived of any such biological or social affordances, today’s machine learning models require arsenals of computer power and thousands of examples of each and every object (pictured from many angles against myriad backgrounds) to achieve even modest capabilities to navigate the visual world. 'No silly! The cup is the thing that I drink from. It doesn’t matter that there’s a kangaroo on it–that’s not an animal, it’s a cup!,' said no statistical model ever. But then no toddler will ever 'train on' and effectively memorize—or monetize—the entirety of the scrapable internet."

Lauren M. E. Goodlad and Samuel Baker, "Now the Humanities Can Disrupt 'AI'."

Henrik Kniberg: A Short Introduction to AI

An AI Glossary

Artificial Intelligence (AI): AI is a branch of computer science. AI systems use hardware, algorithms, and data to create "intelligence" that can do things like make decisions, discover patterns, and perform some sort of action. AI is a general term, and there are more specific terms used in the field of AI. AI systems can be built in different ways; two of the primary ways are (1) through the use of rules provided by a human (rule-based systems) or (2) with machine learning algorithms. Many newer AI systems use machine learning (see the definition of machine learning below).
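
To make that distinction concrete, here is a minimal, hypothetical Python sketch of a rule-based system, in which a person writes every rule in advance. (The example after the machine learning definition below shows the alternative, where the rule is derived from data.)

```python
# A minimal sketch (hypothetical example) of a rule-based AI system:
# every rule is written out by a person and applied exactly as written.
def recommend_outerwear(temperature_f, raining):
    if raining:
        return "raincoat"
    if temperature_f < 40:
        return "heavy coat"
    if temperature_f < 60:
        return "light jacket"
    return "no coat needed"

print(recommend_outerwear(35, raining=False))  # heavy coat
print(recommend_outerwear(72, raining=True))   # raincoat
```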

Machine Learning (ML): Machine learning is a field of study with a range of approaches to developing algorithms that can be used in AI systems. AI is a more general term. In ML, an algorithm will identify rules and patterns in the data without a human specifying those rules and patterns. These algorithms build a model for decision-making as they go through data. (You will sometimes hear the term machine learning model.) Because they discover their own rules in the data they are given, ML systems can perpetuate biases. Algorithms used in machine learning require massive amounts of data to be trained to make decisions.
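
Below is a minimal sketch, with made-up data, of the machine-learning idea: the decision rule is derived from labeled examples rather than written by a person.

```python
# A toy "training" run: given labeled examples (weight in grams, fruit name),
# derive a decision boundary from the data instead of writing a rule by hand.
training_data = [(120, "apple"), (130, "apple"), (150, "apple"),
                 (300, "grapefruit"), (320, "grapefruit"), (340, "grapefruit")]

apple_weights = [w for w, label in training_data if label == "apple"]
grapefruit_weights = [w for w, label in training_data if label == "grapefruit"]

# The "learned" rule: a boundary halfway between the two class averages.
boundary = (sum(apple_weights) / len(apple_weights) +
            sum(grapefruit_weights) / len(grapefruit_weights)) / 2

def predict(weight):
    return "apple" if weight < boundary else "grapefruit"

print(predict(140))  # apple
print(predict(310))  # grapefruit
```

Note how the boundary comes entirely from the example data: different training data would produce a different rule, which is also why biased data leads to biased models.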

Chat-based generative pre-trained transformer (ChatGPT) models: A system built with a neural network transformer type of AI model that works well in natural language processing tasks (see definitions for neural networks and Natural Language Processing below). In this case, the model: (1) can generate responses to questions (Generative); (2) was trained in advance on a large amount of the written material available on the web (Pre-trained); (3) and can process sentences differently than other types of models (Transformer).

Neural Networks (NN): Neural Networks, also called artificial neural networks (ANN), are a subset of ML algorithms. They were inspired by the interconnections of neurons and synapses in the human brain. In a neural network, data enters at an input layer, passes through one or more hidden layers of nodes, where calculations adjust the strength of the connections between nodes, and then reaches an output layer.
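
Here is a minimal sketch of that flow in Python (using NumPy, with made-up numbers): data enters at the input layer, is combined with connection weights in a hidden layer, and produces values at the output layer. In a real system, training would repeatedly adjust these weights.

```python
# A toy forward pass through a neural network: input layer -> hidden layer -> output layer.
# The weight matrices represent the "strength of connections" between nodes.
import numpy as np

rng = np.random.default_rng(0)

x = np.array([0.5, -1.2, 3.0])          # input layer: 3 input values
W_hidden = rng.normal(size=(3, 4))      # connections from the 3 inputs to 4 hidden nodes
W_output = rng.normal(size=(4, 2))      # connections from the 4 hidden nodes to 2 outputs

hidden = np.maximum(0, x @ W_hidden)    # hidden layer: weighted sums plus a simple activation
output = hidden @ W_output              # output layer: final weighted sums

print(output)  # two numbers the network would learn to map to a decision
```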

Natural Language Processing (NLP): Natural Language Processing is a field of Linguistics and Computer Science that also overlaps with AI. NLP uses an understanding of the structure, grammar, and meaning in words to help computers “understand and comprehend” language. NLP requires a large corpus of text (usually half a million words).

Large Language Models (LLMs): LLMs form the foundation for generative AI (GenAI) systems, including chatbots and tools such as OpenAI’s GPTs, Meta’s LLaMA, xAI’s Grok, and Google’s PaLM and Gemini. LLMs are artificial neural networks. At a very basic level, an LLM learns from its training data the statistical relationships between words: how likely a word is to appear after the words that come before it. As it answers questions or writes text, the LLM uses those likelihoods to predict the next word to generate. LLMs are a type of foundation model, pre-trained with deep learning techniques on massive data sets of text documents. Sometimes, companies include text in those data sets without the creators' consent.
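
The next-word-prediction idea can be illustrated in a few lines of Python. This toy sketch counts how often each word follows another in a tiny made-up corpus and then "predicts" the most likely next word; real LLMs learn far richer statistics with neural networks trained on enormous text collections.

```python
# A toy next-word predictor built from word-pair counts in a made-up corpus.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the cat ate the fish .".split()

# Count how often each word follows each preceding word.
next_word_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_word_counts[prev][nxt] += 1

def predict_next(word):
    # Return the word most likely to follow, according to the counts.
    return next_word_counts[word].most_common(1)[0][0]

print(predict_next("the"))  # 'cat' (the word that most often follows 'the')
print(predict_next("cat"))  # 'sat' (tied with 'ate'; the first one seen wins)
```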

Transformer Models: Used in GenAI (the T in GPT stands for Transformer), transformer models are a type of language model. They are neural networks and are also classified as deep learning models. They give AI systems the ability to determine and focus on the most important parts of the input and output, using something called a self-attention mechanism.
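
Below is a minimal NumPy sketch of the self-attention idea, with made-up numbers: each position in the input scores how relevant every other position is, turns those scores into weights, and uses the weights to gather information from the rest of the input. Real transformer models add many refinements (multiple attention heads, learned parameters, and more).

```python
# A toy self-attention step over 4 "tokens", each represented by 8 numbers.
import numpy as np

rng = np.random.default_rng(1)

X = rng.normal(size=(4, 8))                       # 4 input tokens
W_q, W_k, W_v = (rng.normal(size=(8, 8)) for _ in range(3))

Q, K, V = X @ W_q, X @ W_k, X @ W_v               # queries, keys, values
scores = Q @ K.T / np.sqrt(K.shape[-1])           # how much each token attends to each other token
weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)  # softmax over each row
attended = weights @ V                            # each token gathers information it "focused" on

print(weights.round(2))   # each row sums to 1: the attention each token pays to the others
print(attended.shape)     # (4, 8): same shape as the input, but each token now mixes in the others
```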

Pre-Training: In the case of GPT, pre-training means that the model was trained in advance, both on web content and by humans who fine-tuned its responses by providing feedback about their usefulness and meaning. The transformer used that feedback to create policies for itself that it applies to each answer.

By Pati Ruiz and Judi Fusco, 2024. Glossary of Artificial Intelligence Terms for Educators. Educator CIRCLS Blog. Retrieved from https://circls.org/educatorcircls/ai-glossary. Used under a Creative Commons Attribution 4.0 International License. This content was modified from the original by using only eight of the 27 definitions provided in the glossary and by adding the "pre-training" definition.