Panel: ARTIFICIAL INTELLIGENCE AND (IN)EQUALITIES: CONTRIBUTIONS FROM THEOLOGY AND RELIGION



511.7 - THE AMBIVALENCE OF RELIGIOUS LLMS: OPPORTUNITIES, RISKS, AND REGULATION

AUTHORS:
Tretter M. (Friedrich-Alexander-Universität Erlangen-Nürnberg ~ Erlangen ~ Germany)
Text:
The emergence of large language models (LLMs) is rapidly transforming the landscape of digital religion, particularly through systems fine-tuned on religious text corpora and designed for religious conversation. By enabling users to access religious knowledge without reliance on traditional gatekeepers, these so-called "religious LLMs" can facilitate new forms of engagement with sacred texts and theological traditions and foster dialogue across religious differences. In this sense, religious LLMs represent a significant new infrastructure within digital religion.

At the same time, religious LLMs remain deeply ambivalent. Several recent cases show that systems trained on religious corpora may reproduce harmful or exclusionary interpretations. In some instances, such systems—one especially prominent case is GitaGPT—have generated responses that attempt to legitimize hostility or violence toward non-believers or religious others as a perceived religious duty. In more subtle ways, other religious LLMs encode rigid normative assumptions about right and wrong, thereby potentially amplifying existing differences rather than mitigating them.

This ambivalence raises urgent political and ethical questions: How should the risks of religious LLMs be addressed? Which regulatory frameworks could enable religious LLMs to realize their potential while mitigating their risks? And how can such governance be pursued without introducing illegitimate censorship or infringing upon religious freedom? Drawing on recent work at the intersection of AI, religion, and governance, this paper addresses these questions and aims to develop a constructive proposal for regulating religious LLMs.