"Technological developments that do not lead to an improvement in the quality of life of all humanity, but on the contrary aggravate inequalities and conflicts, can never count as true progress."
"Developments such as machine learning or deep learning, raise questions that transcend the realms of technology and engineering, and have to do with the deeper understanding of the meaning of human life, the construction of knowledge, and the capacity of the mind to attain truth."
Message of His Holiness Pope Francis, for the 57th World Day of Peace, January 1, 2024
In their op-ed, "Deus ex machina: the Dangers of AI God Bots", University of Michigan anthropology professor Webb Keane and Yale Law School law and philosophy professor Scott J. Shapiro argue that religious AI can cause users to delegate ethical decisions to a bot and mislead them into thinking that the bot has divine power.
AI chatbots tap into our desire for magical thinking by speaking in certainties. Even though their large language models employ sophisticated statistics to guess the most likely response to a prompt, the bot replies as if there is just one answer, implying there is nothing more to discuss.
The article "God Chatbots Offer Spiritual Insights on Demand. What Could Go Wrong?" highlights other potential moral concerns in addition to the idea of infallibility. These include the risk of missing the spiritual benefits that come from in-depth scriptural study, and the potential for AI to undermine the personal and communal aspects of spiritual practice. People may also trust the bot with sensitive information that they would be embarrassed to discuss with a pastor.
Either of these articles could spark class discussion about spiritual relationships, faith, belief, and the use of religious AI.
Bias: "A machine learning algorithm can only be as good and reliable as the data set it is trained on. If the algorithm is set up to learn from interactions with our real, sinful society, it will naturally come to reflect the inherent biases of that society. When Microsoft connected Tay, an ML-driven chatbot, to Twitter and used its exchanges on the social media platform to “learn” how and what to tweet, within hours Tay was spewing racist and misogynist tweets." ("An Introduction to the Ethics of Artificial Intelligence". Matthew J. Gaudet, Journal of Moral Theology, Vol. 11, Special Issue 1 (2022): 1–12)
Friction: "Beyond bias, we also must be cautious about how AI removes from more traditional systems some of the friction inadvertently rendering the system more moral. For example, as deadly as war can be, the amount of destruction is reduced simply because some people refuse to act. When AI is deployed in autonomous weapons systems, it removes any hesitancy soldiers might have in killing another human being, thereby eliminating the friction and making warfare more efficient. Is greater efficiency or ruthlessness at killing actually the more moral course? Could there be goodness in the friction?" (p. 8)