Research on predictive and categorizing AI systems has for decades demonstrated racial, class, and gender biases ingrained in these systems. Similar observations have recently been made about generative AI, and a growing body of literature reports findings on religious biases in texts generated by LLMs (e.g. Viscek et al. 2025, Chua et al. 2025, Zhang et al. 2025, Hundt et al. 2025, Lozoya et al. 2023). Although religion-related biases in image recognition systems have been studied (Berg & Valaskivi 2023; Berg & Valaskivi forthcoming), research on religion in AI-generated images remains scarce (e.g. Alfano et al. 2024; Abrar et al. 2025), and existing work has not focused on stereotypes.
Through a representation analysis of the AI-generated contents of the AI-led church service held in March 2025 at the Paavali parish in Helsinki, Finland, this paper first demonstrates how the AI tools generate overtly stereotypical representations of gender, race, and religion. Second, the paper discusses these findings in light of a discourse analysis of open-ended questionnaire responses (N=54) from participants in the AI church service (Valaskivi, forthcoming).
The paper argues that GenAI tools function as decontextualization machines: they remove cultural representations and signs from their original contexts, blend them into pulp, and squeeze out images, text, sound, and music that vaguely echo cultural formulas or eidetic schemas (Morgan 2011) yet feel misplaced. For the participants, the AI-generated contents, with their stereotypical representations, commercial tone, and decontextualized cultural and religious references, invoked schemas that were out of place in a Lutheran church service in Finland. The paper concludes that this misplacement, produced by the operating logics of the decontextualization machine, is the main reason for the alienation and discomfort the AI church service participants expressed.