In his blog post Teaching AI Ethics and the follow-up series Teaching AI Ethics: the series, Leon Furze, a K-12 education specialist, offers "teaching points" and classroom discussion questions for many of the ethical issues connected to AI use, including those discussed on this page.
Bias: Generative AI was trained on data from the internet, which often reflects societal prejudice and discrimination. Leon Furze, author of Practical AI Strategies, argues that this algorithmic bias is one of the "most pressing ethical concerns of AI," noting that generative AI risks perpetuating the racist, sexist, ableist, religious, and gendered bias found in internet content (2024, 21). Depending on how it was trained, generative AI can also amplify the bias present on the internet, and it can reflect the biases of its human trainers (Bowen & Watson 2024, 18).
Environmental: In her book Atlas of AI, Kate Crawford, a leading AI researcher, describes AI as an "extractive technology" because it consumes large amounts of precious metals, water, and energy. See Generative AI's Environmental Costs are Soaring — and Mostly Secret. ChatGPT alone has been estimated to use as much electricity as 33,000 homes, and a single AI-powered search uses four to five times more energy than a conventional internet search. See also Anatomy of an AI System, in which Crawford and fellow researcher Vladan Joler map the human labor, data, and planetary resources behind a single Amazon Echo.
Copyright: LLMs were trained on large amounts of text and images, including intellectual property, without permission from rights holders. This can result in AI outputs that violate copyright law, which was enacted to protect creators from the theft of their works. Copyright law is meant to encourage rights holders to share their works so that both creators and the public benefit from their use; rampant copyright violation undermines this shared benefit.
Data privacy: Gen AI was trained on massive amounts of personal data collected without authorization, and more data is collected whenever users upload material into these systems. The lack of transparency about how this data is gathered and used raises concerns about potential misuse, and the onus generally falls on the user to opt out, which makes vigilance all the more necessary. The European Union's Artificial Intelligence Act prohibits several "datafication" practices that illustrate how this technology can be misused: using subliminal, manipulative, or deceptive techniques to influence behavior; exploiting the vulnerabilities of the elderly, the poor, and other at-risk populations; deploying biometric categorization systems that group people by behavior or personal traits; and inferring emotions in workplaces or educational institutions.
Human labor costs: Gen AI requires a surprisingly large amount of low-wage human labor to categorize and label the vast quantities of data used in training. As documented in the article "OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic," workers were severely traumatized by viewing content from "the darkest recesses" of the internet, including child sexual abuse, bestiality, murder, suicide, torture, self-harm, and incest, in order to create safety systems that block this content from Gen AI outputs.
AI output needs to be rigorously investigated given its propensity for misinformation and its failure to cite sources, but the tools themselves, which are trained on bulk information and edited by humans, require similar evaluation. A case in point is Christian AI.
A well-known method for evaluating misinformation is SIFT, the "four moves." The first two moves, STOP and INVESTIGATE THE SOURCE, suggest questions for testing the reliability of an AI tool: before using one, stop and ask what is known, or can be known, about it.