
AI and the classroom

A faculty guide

Some inspiration

Our challenge is, then, to gather the data from the research provided by AI engines, examine it, test it, and express it with critical thinking and theological reflection. And that process is as old as the written word. Extracting knowledge to isolate truths and lies, things redeemed, and things lost is at the heart of what we do. ... We educate pastoral students to think critically and biblically so that they will faithfully shepherd human beings past treacherous ledges and forests of wolves to locate the fair pastures of life abundant and life eternal. Neither artificial nor human intelligence can meet such a calling. Only a supernatural source can supply the wisdom our mission demands.

Michael A. Milton (February 26, 2023). On the Use of AI Writers in Seminary, from the blog Faith for Living.

Henrik Kniberg: a short introduction to AI

Although a bit long (18 minutes), this video is a master class in explaining generative AI technology. Analogizing the technology to "Einstein in your basement," Kniberg offers jargon-free, simple descriptions of how it works, its uses, risks, and limitations, and its implications for the future.

What it all means

Artificial Intelligence (AI): AI is a branch of computer science. AI systems use hardware, algorithms, and data to create “intelligence” to do things like make decisions, discover patterns, and perform some sort of action. AI is a general term, and there are more specific terms used in the field. AI systems can be built in different ways; two of the primary ways are (1) through rules provided by a human (rule-based systems) or (2) with machine learning algorithms. Many newer AI systems use machine learning (see the definition of machine learning below).
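To make that distinction concrete, here is a minimal sketch in Python; the spam-filter scenario, word list, and messages are invented for illustration and are not part of the glossary. The first function follows rules a human wrote, while a machine-learning filter would instead discover its own rules from labeled examples.

    # A rule-based "spam filter": a human writes the rules explicitly.
    def rule_based_is_spam(message: str) -> bool:
        suspicious_words = {"winner", "free", "urgent"}   # rules chosen by a person
        return any(word in message.lower() for word in suspicious_words)

    print(rule_based_is_spam("URGENT: you are a winner!"))   # True
    print(rule_based_is_spam("Class meets at 3 pm today."))  # False

    # A machine-learning filter would instead be trained: given many example
    # messages already labeled spam or not-spam, the algorithm discovers its
    # own rules and patterns (see the machine learning entry below).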

Machine Learning (ML): Machine learning is a field of study with a range of approaches to developing the algorithms used in AI systems; AI is the more general term. In ML, an algorithm identifies rules and patterns in the data without a human specifying those rules and patterns. These algorithms build a model for decision-making as they work through the data. (You will sometimes hear the term machine learning model.) Because they discover their own rules in the data they are given, ML systems can perpetuate biases. Machine learning algorithms require massive amounts of data to be trained to make decisions.
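As a small, hedged illustration of "rules discovered from data," the sketch below trains a decision tree on a tiny made-up dataset using the scikit-learn library; the features, numbers, and labels are invented for the example.

    # Learn to flag link-heavy emails as spam from labeled examples.
    from sklearn.tree import DecisionTreeClassifier

    # Each row: [number of links, number of words]; labels: 1 = spam, 0 = not spam.
    X = [[8, 40], [6, 30], [7, 55], [0, 120], [1, 200], [0, 90]]
    y = [1, 1, 1, 0, 0, 0]

    model = DecisionTreeClassifier().fit(X, y)   # the algorithm finds its own rules
    print(model.predict([[5, 45], [0, 150]]))    # expected: [1 0]

    # No human wrote an "if links > N" rule; the model inferred one from the data.
    # If the training data are skewed, the learned rules will be skewed too,
    # which is one way ML systems perpetuate bias.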

Chat-based generative pre-trained transformer (ChatGPT) models: A system built with a neural network transformer type of AI model that works well in natural language processing tasks (see the definitions of neural networks and natural language processing below). In this case, the model (1) can generate responses to questions (Generative); (2) was trained in advance on a large amount of the written material available on the web (Pre-trained); and (3) can process sentences differently than other types of models (Transformer).

Neural Networks (NN): Neural networks, also called artificial neural networks (ANN), are a subset of ML algorithms. They were inspired by the interconnections of neurons and synapses in the human brain. In a neural network, after data enter through the first layer, they pass through a hidden layer of nodes where calculations that adjust the strength of the connections are performed, and then move to an output layer.
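A minimal numeric sketch of that flow, using the NumPy library; the input values and connection weights below are made-up constants, whereas a real network adjusts its weights during training.

    import numpy as np

    x = np.array([0.5, 0.2])                 # input layer: two data values

    W_hidden = np.array([[0.1, 0.4],         # made-up connection strengths (weights)
                         [0.3, 0.2]])
    hidden = np.tanh(W_hidden @ x)           # hidden layer: weighted sums plus an activation

    W_out = np.array([0.6, 0.9])
    output = W_out @ hidden                  # output layer: a single prediction
    print(output)

    # Training compares outputs like this one to known answers and nudges the
    # weights, which is the "adjusting the strength of connections" described above.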

Natural Language Processing (NLP): Natural Language Processing is a field of Linguistics and Computer Science that also overlaps with AI. NLP uses an understanding of the structure, grammar, and meaning in words to help computers “understand and comprehend” language. NLP requires a large corpus of text (usually half a million words).

Large Language Models (LLMs): Large language models form the foundation for generative AI (GenAI) systems. GenAI systems include chatbots and tools such as OpenAI’s GPTs, Meta’s LLaMA, xAI’s Grok, and Google’s PaLM and Gemini. LLMs are artificial neural networks. At a very basic level, during training an LLM detects statistical relationships about how likely a word is to appear after the word that precedes it. As they answer questions or write text, LLMs use those likelihoods to predict the next word to generate. LLMs are a type of foundation model, pre-trained with deep learning techniques on massive data sets of text documents. Sometimes companies include data sets of text without the creators’ consent.
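The next-word idea can be sketched in a few lines of Python. The example below is drastically simplified and the sentence is invented: it merely counts which word follows which in a tiny "corpus" and predicts the most frequent continuation, whereas a real LLM learns these statistics with a neural network over enormous amounts of text.

    from collections import Counter, defaultdict

    corpus = "the shepherd leads the sheep and the shepherd guards the sheep".split()

    # Count how often each word follows each preceding word.
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def predict_next(word: str) -> str:
        return following[word].most_common(1)[0][0]

    print(predict_next("the"))        # "shepherd" (tied with "sheep" in this tiny corpus)
    print(predict_next("shepherd"))   # "leads" (or "guards"; each appeared once)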

Transformer Models: Used in GenAI (the T in GPT stands for Transformer), transformer models are a type of language model. They are neural networks and are also classified as deep learning models. They give AI systems the ability to determine and focus on the important parts of the input and output by using something called a self-attention mechanism.
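A hedged numerical sketch of one self-attention step, using the NumPy library: each word position (represented here by made-up vectors) scores every other position, and those scores determine how much attention each position pays to the rest of the input. Real transformers learn separate query, key, and value projections, which are omitted here for brevity.

    import numpy as np

    # Three word positions, each represented by a made-up four-number vector.
    X = np.array([[1.0, 0.0, 1.0, 0.0],
                  [0.0, 1.0, 0.0, 1.0],
                  [1.0, 1.0, 0.0, 0.0]])

    scores = X @ X.T / np.sqrt(X.shape[1])                    # how relevant each position is to each other
    weights = np.exp(scores)
    weights = weights / weights.sum(axis=1, keepdims=True)    # softmax: attention weights per position

    attended = weights @ X                                    # each position becomes a weighted mix of all positions
    print(np.round(weights, 2))

    # The rows of "weights" show which parts of the input each position focuses on;
    # this is the self-attention mechanism the definition refers to.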

Pre-Training: In the case of GPT, pre-training means that the model was first trained on a large amount of web content; its responses were then fine-tuned by humans who provided feedback about usefulness and meaning. The model used that feedback to create policies for itself that it applies with each answer.

By Pati Ruiz and Judi Fusco, 2024. Glossary of Artificial Intelligence Terms for Educators. Educator CIRCLS Blog. Retrieved from https://circls.org/educatorcircls/ai-glossary. Used under a Creative Commons Attribution 4.0 International License. This content was modified from the original by using only eight of the 27 definitions provided in the glossary and by adding the "pre-training" definition.