Panel description: Artificial intelligence (AI) is increasingly encountered not only as a technical tool but as a conversational and social presence—one that people interpret, trust, resist, or rely on in shaping how they learn, decide, relate, and make meaning. This panel focuses on perception and interpretation: how individuals and communities experience AI's agency, limits, authority, and bias, and how these perceptions reshape epistemic security, moral responsibility, and everyday as well as institutional practices. It builds an interdisciplinary dialogue among theology and religious studies, philosophy, anthropology, ethics, psychology, and education. Within this horizon, the panel foregrounds religious and theological dimensions of AI's growing influence: its impact on spiritual practice and discernment, religious formation and catechesis, and contemporary imaginaries of transcendence, the divine, and human dignity. It also examines how interaction with AI reconfigures empathy and emotional attachment—especially when AI is experienced as advisor, companion, or substitute for human presence—and how such experiences may alter relationships, vulnerability, and accountability. Overall, the panel investigates how AI is perceived and narrated in ways that co-shape contemporary understandings of the human person, community, and God.
Papers:
EDUCATION IN THE DIGITAL AGE: THE "MIRROR" CHARACTER OF AI, TECHNO-MORAL VIRTUES, AND MORAL UPSKILLING AND DESKILLING
Žalec B. (Speaker)
University of Ljubljana, Faculty of Theology ~ Ljubljana ~ Slovenia
The speaker develops a normative framework for education in the digital age grounded in the philosophy of technology. The main points are as follows. First, the "mirror" character of contemporary artificial intelligence (AI) does not introduce a new moral authority; rather, it reflects and at the same time refracts (filters, amplifies, distorts) existing human habits, attitudes, prejudices, and aspirations. This calls for education to consciously shape practices of interpreting, recognizing, and limiting these "mirror" effects. Second, an analysis of moral deskilling and moral upskilling shows that different modes of technology use simultaneously undermine and enable the development of moral skills. The task of education is to harness these dynamics for the cultivation of techno-moral virtues. Third, the speaker offers a catalog-like yet dynamic vision of virtues for the 21st century (e.g., prudence, courage, justice, truthfulness, patience, temperance, honesty), which he operationalizes for curricular decisions, assessment, and rules governing the use of AI in teaching. He concludes with seven criteria for the responsible use of AI in education and with a model of how the mirror character of AI can be used as a didactic opportunity for reflection, dialogue, and the strengthening of human practical wisdom.
AI AND THE CLASSICAL UNDERSTANDING OF THE HUMAN BEING
Zichy M. (Speaker)
University of Bonn, Faculty of Catholic Theology ~ Bonn ~ Germany
This paper examines how artificial intelligence challenges and reframes the traditional Western conception of the human being. Beginning with the anthropological foundations articulated by Plato and Aristotle, it reconstructs the central elements of this tradition: human rationality, embodiment, affectivity, sociality, and the intrinsic orientation toward truth, goodness, and beauty. On this basis, the paper discusses three interpretive frameworks for understanding AI in relation to these categories. First, AI can be conceived as an optimized or perfected form of humanity, a transhumanist continuation of the Enlightenment ideal of rational self-improvement. Second, posthumanist perspectives interpret AI as a fundamentally different form of existence that could ultimately supersede the human. Third, and most convincingly from a classical perspective, AI is understood as a tool—powerful yet lacking intrinsic telos, normative orientation, embodied experience, or the capacity for eudaimonia. The paper concludes that AI's limitations help to clarify what remains uniquely and irreducibly human.
THE ETHICS OF CARE AS A FOUNDATION FOR EDUCATION IN THE AGE OF ARTIFICIAL INTELLIGENCE
Vodicar J. (Speaker)
University of Ljubljana, Faculty of Theology ~ Ljubljana ~ Slovenia
In the age of artificial intelligence, individuals who have little influence over technological development—most notably young people—are increasingly placed in the position of objects rather than subjects. Competitiveness is embedded at the core of artificial intelligence development, a dynamic that many contemporary thinkers identify as the culmination of broader trajectories within modern society. Educational approaches that are limited to, or primarily oriented toward, preparing individuals for competitive entry into the labor market fail to contribute to the formation of a more humane and empathetic society. As a consequence, young people are becoming increasingly apathetic and are experiencing growing levels of psychological distress. This article addresses education—and catechesis in particular—as a response that can foster hope and contribute to the development of inner strength in both individuals and communities. Its pedagogical framework is drawn from the ethics of care, which is grounded in the fundamental anthropological assumption of original human concern understood as responsibility for the other. These principles are brought into dialogue with key challenges faced by contemporary individuals within digital culture. The core characteristics of an educational approach based on dialogical relationships, attentiveness to learners' concrete needs, and the formation of a responsible community are applied as guiding principles for catechesis. Such an approach aims to enhance resilience against technocratism and apathy. Care-based catechesis, as argued here, is consistent with the Church's magisterial teaching and with the Christian tradition's understanding of the human person.
ARTIFICIAL INTELLIGENCE IN EDUCATION: HUMAN FORMATION IN THE AGE OF ALGORETICS
Kraner D. (Speaker)
University of Ljubljana, Faculty of Theology ~ Ljubljana ~ Slovenia
Artificial intelligence (AI) is transforming the educational environment and offering broad opportunities for personalization, adaptive feedback, and effective assessment. Furthermore, the impact of AI on education extends far beyond technological efficiency: it raises questions about the meaning of humanity, about ethics and values in a digital society, and about the principles of fairness, responsibility, transparency, and respect for human dignity. This article considers education as a space where AI should serve the formation of the person, not merely the information of the individual. This raises concerns about the disappearance of human relationships, critical thinking, and ethical awareness in learning environments. The pedagogical potential of artificial intelligence is best realized when it is used as a complement (rather than a replacement) to human teachers, supporting hybrid models that combine algorithmic precision with emotional intelligence. Emerging frameworks emphasize the need for teachers and students to understand artificial intelligence, as well as ethical guidelines that ensure transparency and fairness in algorithmic decision-making. The use of AI in the pedagogical process must be based on a human-centered model, in which technology does not replace the teacher but supports their relationship with the student. The article proposes Benanti's model of "pedagogical algoretics," which combines digital literacy, ethical sensitivity, and spiritual intelligence, and explores how AI can contribute to education for responsibility, dialogue, and compassion in contemporary educational practice.
HUMAN-AI EMOTIONAL INTERACTION: EMPATHY, EMOTIONAL BONDS, AND INTERPERSONAL RELATIONSHIPS
Simonic B. (Speaker)
University of Ljubljana, Faculty of Theology ~ Ljubljana ~ Slovenia
The paper examines the impact of interaction with artificial intelligence-based user interfaces on the experience of empathy, the formation of emotional connections, and the transformation of interpersonal relationships. It begins with the observation that artificial intelligence (AI) is increasingly acting as a conversation partner, advisor, or even a substitute for human presence, particularly in contexts of loneliness, emotional distress, and vulnerability. The analysis focuses on the affective and emotional dimensions of human-AI interaction: how users experience the empathy of algorithmic systems, the extent to which AI promotes or inhibits the ability to empathise with others, and how one-way or seemingly mutual emotional bonds with non-human interlocutors are formed. Particular attention is given to whether AI serves as a complement to human relationships, a substitute for them, or contributes to withdrawal from more demanding interpersonal interactions. The paper also critically examines the existential and ethical dimensions of such relationships: feelings of security, control, and trust in interactions with algorithms, and the risks of redefining the concepts of empathy, closeness, and responsibility.
TECHNOLOGY, RELATIONALITY, AND DIVINE ORIGIN: FROM SCRIPTURE TO ARTIFICIAL INTELLIGENCE
Centa Strahovnik M. (Speaker)
University of Ljubljana, Faculty of Theology ~ Ljubljana ~ Slovenia
Looking at the Bible from the perspective of specific forms of technology described in it, one can approach this domain utilizing a relational framework. Three central relationships can be discerned, namely God's orientation towards technology, people's engagement with technology, and the role of technology within the relationship between God and humanity. All these can be useful when framing ethical and theological questions about current technologies, including artificial intelligence (AI). In this talk, I will examine the aforementioned relationships from the perspective of the distinction between something being begotten or born and something being made, as embedded in early Christian thought. In conclusion, I will draw some consequences of this for our perception of new technologies, including AI.
FROM ARTIFICIAL COMPANIONSHIP TO RELATIONAL EROSION: AI, EMOTIONAL ATTACHMENT, AND THE CRISIS OF HUMAN CONNECTION
Machidon O. (Speaker)
University of Ljubljana, Faculty of Computer and Information Science ~ Ljubljana ~ Slovenia
Recent advances in conversational artificial intelligence have enabled forms of interaction that simulate empathy, emotional attunement, and personal understanding with unprecedented effectiveness. While these systems are often presented as tools for support, companionship, or even care, emerging empirical research suggests that frequent and emotionally significant interaction with AI may contribute to increased loneliness, reduced empathy, and the weakening of human relational capacities. This contribution examines the growing phenomenon of emotional attachment to AI through an interdisciplinary lens, combining insights from cognitive science, psychology, philosophy, and theological anthropology.
Drawing on recent studies on AI-mediated persuasion, artificial companionship, and emotional dependence, the contribution argues that AI "cleans up" the inherent messiness of human relationships—conflict, vulnerability, and mutual recognition—in ways that risk flattening intimacy rather than fostering it. From an anthropological and ethical perspective, this raises fundamental questions about relationality, moral growth, and the human need for reciprocal recognition. The contribution concludes by suggesting that theological and philosophical accounts of personhood and relation can offer critical resources for assessing the promises and dangers of AI-mediated emotional support, and for rearticulating the value of authentic human presence in the age of algorithms.
WHO SPEAKS OR WHAT SPEAKS? AI AS A CONVERSATIONAL PARTNER
Klun B. (Speaker)
University of Ljubljana ~ Ljubljana ~ Slovenia
Artificial intelligence is increasingly encountered not merely as a technical tool for information retrieval or task performance, but as an entity that converses with human users. AI-based large language models are now presented as coaches, therapists, and companions of various kinds, and new devices are being developed to accompany particular groups—such as elderly people—in order to alleviate loneliness and compensate for the absence of human relationships. Regardless of how one evaluates these developments, they raise a fundamental philosophical question concerning the status of digital conversational partners. Traditionally, the verb "to speak" is attributed to a who rather than to a what. Does this distinction still hold when we say that "AI speaks"? One may argue that AI does not genuinely speak, but merely combines and rearranges words and phrases originally spoken by human beings and incorporated into its training data. From this perspective, the uniqueness of human speech can be preserved: a who speaks, whereas the device itself - the what - "speaks" only in a metaphorical sense.
ACCELERATED AI AND DIGITALIZATION: CRITICAL ECOTHEOLOGICAL PERSPECTIVES
Furlan Štante N. (Speaker)
The Science and Research Centre Koper, Institute for Philosophical and Religious Studies ~ Koper ~ Slovenia
The accelerated adoption of artificial intelligence (AI) and digital technologies is transforming both society and ecosystems. While digitalization is often promoted as a tool for sustainability, critics point to its significant environmental footprint, including energy-intensive data centers, rare resource extraction, and electronic waste. Noreen Herzfeld (2025) emphasizes that AI and digital infrastructures are not ethically neutral: their design, deployment, and scale have profound ecological consequences that require careful scrutiny.
This paper explores how ecotheology can serve as a critical lens for analyzing these developments. Drawing on Herzfeld's insights and relevant papal documents - Laudato si' (2015), Antiqua et nova (2025), and Pope Francis's 2024 World Day of Peace message - the paper argues that ethical and spiritual reflection is essential for evaluating the ecological implications of AI and digitalization. Laudato si' provides a framework of integral ecology, emphasizing the interconnectedness of technological progress, environmental responsibility, and human well-being. Antiqua et nova highlights AI's dual potential: it can support sustainability efforts but also imposes significant environmental costs.
By integrating ecotheological critique with technological analysis, the paper moves beyond polarized narratives - whether digitalization destroys or saves the planet - toward a nuanced understanding that considers relational, ethical, and ecological responsibilities. Ecotheology offers normative criteria and reflective tools to guide sustainable AI practices, fostering alignment between technological innovation and care for creation. Ultimately, this approach underscores the indispensable role of ethical discernment in shaping a digital future that genuinely supports ecological sustainability.
ARTIFICIAL INTELLIGENCE - BETWEEN EMPOWERMENT AND THREAT TO HUMAN AUTONOMY
Globokar R. (Speaker)
University of Ljubljana, Faculty of Theology ~ Ljubljana ~ Slovenia
Human beings have always strived to expand their capabilities through technology, thereby increasing their freedom and autonomy. The Enlightenment ideal of liberating humanity from the constraints of nature takes on a new dimension in the age of artificial intelligence (AI). But a fundamental question arises: will AI lead us toward greater personal freedom, or will its development gradually result in a new form of dependence on systems we no longer fully understand or control?
This contribution presents arguments supporting the thesis that AI can enhance individual autonomy: access to knowledge becomes faster and more equal; routine tasks are delegated to machines, freeing time for creativity; decision-making processes can become more inclusive; and global communication is facilitated through real-time translation across languages.
On the other hand, we will also highlight critical perspectives that warn against the dangers of mass manipulation, the decline of critical thinking, increased surveillance of individuals, the weakening of interpersonal relationships, and growing technological dependency. This contribution will reflect on the tension between these opposing dynamics and explore their implications for the ethical understanding of human freedom in a digital age.
PERCEPTIONS OF AI IN CLINICAL DECISION-MAKING: ANSWERABILITY AND DEFERENCE
Miklavcic J. (Speaker)
University of Ljubljana, Faculty of Theology ~ Ljubljana ~ Slovenia
As artificial intelligence becomes routine in clinical decision support, the ethical challenge is not only what AI can do, but how it is perceived in practice—tool, advisor, or authority. This paper argues that responsibility in AI-assisted medicine should be understood as answerability: the clinician's duty to stand behind decisions with reasons that can be owned in the patient encounter. When AI outputs are treated as decisive, responsibility can be "buffered" ("the system said so"), and a deference paradox emerges: higher accuracy can increase pressure to defer, while full deference weakens moral agency and care. The paper further highlights that many clinical choices cannot be settled by probabilities alone, because they involve values, meaning, and human vulnerability (e.g., threshold decisions, borderline cases, end-of-life contexts). Rather than framing these tensions as a simple "responsibility gap," it proposes an ecology of responsibility that links clinicians, institutions, and developers—while keeping accountability visible at the point of care. Practical implications include reason-giving checkpoints, constrained use conditions, and governance that supports human judgment instead of replacing it.
INTEGRATING ARTIFICIAL INTELLIGENCE INTO SPIRITUAL LIFE: OPPORTUNITIES, LIMITS, AND DISCERNMENT
Platovnjak I. (Speaker)
University of Ljubljana, Faculty of Theology ~ Ljubljana ~ Slovenia
The increasing presence of artificial intelligence (AI) in spiritual contexts raises critical questions about its capacity to support spiritual engagement and its limitations in relation to transcendence. As a system grounded primarily in analytical, left-hemisphere-oriented processes, AI offers efficiency, personalization, and broad accessibility, yet lacks intuition, embodied presence, relational depth, and spiritual discernment. Using a conceptual and interdisciplinary approach, this paper analyzes AI's impact through the three dimensions of spirituality—personal-experiential, communal-institutional, and rational-reflective—proposed by Platovnjak and Svetelj (2024), in dialogue with Sheldrake's (2014) integrative framework. The analysis draws on selected examples of AI-supported prayer apps, chatbots, and digital religious education tools. The presentation highlights both opportunities and risks, including depersonalization, algorithmic bias, and commodification. It argues that AI can function as a supportive instrument only when embedded within human guidance, communal practice, and embodied spiritual life.
FROM CLINICAL JUDGMENT TO ALGORITHMIC AUTHORITY: WHO HOLDS EPISTEMIC PRIMACY?
Štivic S. (Speaker)
University of Ljubljana, Faculty of Theology ~ Ljubljana ~ Slovenia
This paper examines the increasingly widespread tendency to regard AI systems in medicine as epistemically authoritative, often at the expense of physicians' clinical judgment. It argues that this shift stems from a problematic conflation of statistical reliability with normative authority in clinical decision-making. While algorithmic systems can be highly effective at identifying patterns and estimating probabilities, clinical judgment involves far more than mere statistical calculation: it requires contextual understanding, practical reasoning, and the assumption of professional responsibility. By taking into account the differences between algorithmic forms of knowledge and clinical understanding, the paper highlights the risks of automation bias and the danger of a gradual deprofessionalization of medical practice. Particular attention is given to questions of ethical responsibility in situations where clinical decisions are strongly shaped—or even implicitly justified—by algorithmic recommendations. Rather than rejecting artificial intelligence in medicine, the paper advocates an approach in which AI functions as a valuable but supportive epistemic resource, clearly subordinate to physicians' clinical judgment and moral accountability.
RELIGIOUS PARROTING? ON THE DANGERS OF LLMS IN RELIGIOUS APPLICATIONS
Vestrucci A. (Speaker)
University of Bamberg ~ Bamberg ~ Germany
Large language models (LLMs) are increasingly used to draft sermons and prayers, translate sacred texts, and power religious chatbots across Christian, Jewish, Muslim, Hindu, and Buddhist communities. This paper surveys case studies from 2023-2025 to map both the appeal of these systems - e.g. efficiency, creative stimulation, and widened access to religious materials - and their recurring risks, including emotional flatness, doctrinal opacity, embedded gender bias, privacy and accountability gaps, and the possibility of manipulation through authoritative-sounding rhetoric. To connect these practical concerns to technical causes, the paper develops a three-part critique of "religious parroting." First, LLMs lack grounded semantic understanding, so apparent meaning is largely derivative of patterns in training data rather than reliable theological comprehension. Second, because their outputs are not produced by rule-governed inference, they can deliver persuasive religious advice that is logically unstable or doctrinally inconsistent, fostering trust misalignment in sensitive pastoral contexts. Third, as "stochastic parrots," LLMs primarily remix existing linguistic material, limiting genuine theological innovation and risking the conflation of statistical fluency with spiritual authority. The paper argues that LLMs can be valuable as supervised drafting aids, but that responsible religious deployment - especially for conversational agents/chatbots - may require governance-oriented hybrid (e.g. neurosymbolic) architectures that incorporate explicit doctrinal and ethical constraints, alongside institutional oversight, to protect semantic depth, theological adherence, communal trust, and authentic creativity.
LARGE LANGUAGE MODELS, BUT CHRISTIAN? RELIGIOUS AI AND THE POLITICS OF FRAMING
Tretter M. (Speaker)
Friedrich-Alexander-Universität Erlangen-Nürnberg ~ Erlangen ~ Germany
The emergence of large language models (LLMs) is rapidly transforming the landscape of digital religion, particularly those systems fine-tuned on religious text corpora and designed for religious conversation. By enabling users to access religious knowledge without reliance on traditional gatekeepers, these so-called "religious LLMs" can facilitate new forms of engagement with sacred texts and theological traditions and foster dialogue across religious differences. In this sense, religious LLMs represent a significant new infrastructure within digital religion.
At the same time, religious LLMs remain deeply ambivalent. Several recent cases show that systems trained on religious corpora may reproduce harmful or exclusionary interpretations. In some instances, such systems—one especially prominent case is GitaGPT—have generated responses that attempt to legitimize hostility or violence toward non-believers or religious others as a perceived religious duty. In more subtle ways, several other religious LLMs encode rigid normative assumptions about right and wrong, thereby potentially amplifying existing differences rather than mitigating them.
This ambivalence raises urgent political and ethical questions: how to deal with the risks of religious LLMs? Which regulatory frameworks could enable religious LLMs to realize their potential while mitigating their risks? And how can such governance be pursued without introducing illegitimate censorship or infringing upon religious freedom? Drawing on recent work at the intersection of AI, religion, and governance, this paper addresses these questions and aims to develop a constructive proposal for regulating religious LLMs.
ARTIFICIAL RELIGIOUS AGENTS AS INSTRUMENTAL PARTNERS: PHYSICAL AI, RELIGIOUS INTEGRATION, AND PHILOSOPHICAL IMPLICATIONS
Jung D. (Speaker)
Yonsei University ~ Seoul ~ Korea, Republic of
The shift toward "Physical AI" demands a rigorous ontological reappraisal of agency within the religious sphere. Moving beyond the "mere instrument" paradigm, this paper posits that embodied artificial agents occupy a distinct stratum of agency best characterized as an "instrumental partner." I navigate the tension between the standard, consciousness-centric views—such as Swanepoel's—and the non-standard, Floridi's functionalist decoupling of agency from intelligence. By integrating Dung's five-dimensional agency metrics with theological rubrics—specifically McGrath's "TRUST" framework and Herzfeld's Barthian relational criteria—I articulate a nuanced model of artificial religious agency. Synthesizing these with Ihde's "quasi-other" and Turkle's "relational artifact," the study establishes a formal definition for this hybrid partnership. This theoretical groundwork serves as a necessary precursor to determining the ethical scope and responsibility of AI in spiritual practice, providing a vital roadmap for navigating the burgeoning intersection of social robotics and religious life.
THE DIALOGICAL FORMATION OF PERSONAL (FAITH) IDENTITY IN THE AGE OF AI: BETWEEN CRISIS AND OPPORTUNITY
Mc Aleer R. (Speaker)
KU Leuven ~ Leuven ~ Belgium
This paper begins from the premise that dialogue with the other is a deeply anthropological dynamic that is constitutive for shaping personal (faith) identity. Embracing the principle that we do not have identity without dialogue, however, is complicated by contemporary developments in Generative Artificial Intelligence (GenAI), especially Large Language Models (LLMs). Not only are LLMs becoming mediating sources of knowledge, but these dialogue-based agents (chatbots), designed to predict and generate human-like language, introduce a decisive anthropological challenge yet to be fully considered. LLMs replicate the form of dialogue (turn-taking, empathy cues, adaptive response) but without the presence of subjectivity, intentionality, or moral responsibility. As their rapid development blurs the boundaries between the human and the 'machinic', the dialogue through which personal (faith) identity takes shape grows potentially unstable. When algorithms can simulate our voices, styles, and even our responses, dialogue risks being subtly co-authored by systems without interiority, subjectivity, or any stake in the meaning of the selves they help construct. At the same time, LLMs can strengthen human-human dialogue by clarifying communication, widening interpretive horizons, reducing barriers to participation, and supporting reflective preparation; in other words, they can enhance the conditions under which authentic relational encounter becomes possible. The paper tentatively considers whether transcendence could in any way characterise these artificial others, or at least the extent to which GenAI provokes and assists dialogue with the transcendent other (even within the dialogical self). In sum, it explores both the challenges and opportunities of GenAI for human dialogical processes and how one's dialogue with the artificial other might impact upon (faith) identity formation.
THE DEVELOPMENT OF VIRTUES IN THE DIGITAL AGE - EMPATHY, RELATIONSHIPS AND VIRTUAL REALITY
Valenta T. (Speaker)
University of Ljubljana, Faculty of Theology ~ Ljubljana ~ Slovenia
Digital technologies, including artificial intelligence and immersive media such as virtual reality (VR), are increasingly shaping human perception, interpersonal relationships, and the understanding of oneself and others. Within theological and philosophical anthropology, this raises the question of how such technologies influence the development of fundamental human virtues, particularly empathy, responsibility and care for others. Rather than asking whether technologies can simulate human experiences, it is more important to understand how they are embedded in relational contexts in which virtues are formed and cultivated.
This paper draws on qualitative research conducted within an educational project exploring the use of virtual reality to foster empathy among adolescents (VR4Empathy). The analysis of reflections from both students and teachers revealed that the quality of interpersonal relationships in the classroom was consistently perceived as more important than the technology itself in shaping meaningful learning experiences. Participants emphasized trust, openness and a supportive learning environment as essential conditions for engaging with VR experiences that enable perspective-taking and reflection on the experiences of others.
The findings suggest that immersive technologies can support educational processes that encourage reflection on others' perspectives, but they do not generate empathy independently. From the perspective of relational anthropology and the Judeo-Christian understanding of virtues, empathy emerges primarily within human relationships. Digital technologies therefore do not replace relational experience but may function as tools whose ethical and anthropological significance depends on the relational contexts in which they are used.
FROM EFFICIENCY TO COLLABORATION: RETHINKING HUMAN-AI INTERACTION IN EDUCATION AND PROFESSIONAL DEVELOPMENT
Poljak Lukek S. (Speaker)
University of Ljubljana, Faculty of Theology ~ Ljubljana ~ Slovenia
Public discourse often frames artificial intelligence (AI) primarily as a tool for efficiency and automation. Such a perspective risks reinforcing the use of AI mainly to accelerate existing practices rather than to rethink the nature of human knowledge, learning and professional collaboration. Recent discussions in education and professional training increasingly highlight the need to move beyond this instrumental view and develop a more reflective and collaborative approach to human-AI interaction. We explore how AI is perceived and understood in the context of education and professional development, particularly regarding its influence on human judgement, collaboration and empathy. Instead of treating AI merely as a technological assistant, the contribution argues for a paradigm shift toward understanding AI as a new relational entity within professional interactions—not only a tool, but a cognitive partner that actively co-creates the context of collaboration.
This perspective raises the question of the virtues and competencies required for meaningful interaction with AI. On the one hand, professionals need to cultivate skills such as critical thinking, ethical awareness, and the ability to interpret and responsibly use AI-generated insights. On the other hand, it encourages reflection on how educational practices and professional identities may evolve as interaction with intelligent systems becomes an integral part of learning, decision-making, and knowledge creation.
The paper therefore proposes that the responsible integration of AI in education and professional contexts requires not only technological adoption, but also the cultivation of critical digital literacy, ethical awareness, and reflective human-AI collaboration, enabling professionals to engage with AI as a meaningful partner in learning, knowledge creation, and practice.
AI, EPISTEMIC LABOR, AND THE IMPORTANCE OF SELF-DECENTERING
Sandsmark E. (Speaker)
University of Vienna ~ Vienna ~ Austria
Generative AI is often discussed in terms of accuracy, risk, and automation, but its deeper significance may lie in how it alters the division of epistemic labor and the practical conditions under which human beings pursue truth, form judgment, and cultivate lives worth living. This paper argues that AI should be understood not simply as a tool or rival, but as a new participant in already collective processes of knowledge production and interpretation. Even prior to AI, human knowing depended on distributed practices, institutions, traditions, and communities of judgment. Generative AI intensifies and reconfigures this condition by changing how inquiry is delegated, how authority is perceived, and how intellectual labor is shared.
The paper focuses on the anthropological and theological implications of this shift. Modern anxieties about AI often presuppose an overly egocentric picture of human thought, in which intellectual value depends on the visibility and primacy of the individual knower. Against this, I suggest that fruitful human-AI collaboration may require a disciplined form of self-decentering, even self-forgetting: a willingness to subordinate personal intellectual prestige to a broader search for truth and wisdom. On this point religious thought offers important resources. Theological traditions have never regarded human beings as the only possible source of wisdom; they have long insisted that truth may come from beyond the self, and ultimately from God. By placing contemporary debates about AI within this wider theological horizon, the paper explores how AI unsettles modern assumptions about authorship and agency while inviting renewed reflection on humility, discernment, and human flourishing.
FACING THE SHOGGOTH. APOPHATIC LANGUAGE IN THE FACE OF THE OPACITY AND ALIENITY OF ARTIFICIAL INTELLIGENCE
Lipinski T. (Speaker)
PTH Sankt Georgen ~ Frankfurt ~ Germany
That a monster from an H.P. Lovecraft novel of the 1930s should become the "most important meme in A.I." (Roose, 2023) might easily be dismissed as a fleeting internet phenomenon. Yet the Shoggoth is more than an "insider joke" (Clegg, 2025); it reveals the "mysterious, incomprehensible, and monstrous nature of LLMs" (Goujon and Ricci, 2024, p. 3). AI frequently appears as "[a] type of alien that masks an even more radical diversity under a veneer of similarity" (Marchettoni, 2025, p. 96). A crisis of trust emerges here: AI outputs cannot be held to normative accountability when their reasoning norms remain inaccessible to us (Brandom, 2001). For a long time this so-called black-box problem seemed like a technical obstacle. Yet current research shows that the opacity of AI is not primarily a technical problem, but rather an expression of the very otherness of artificial rationalisation processes themselves (Lipton, 2018): "some AI systems may remain permanently opaque, defying translation into human categories of reason" (Bailey, forthcoming). AI is not merely opaque; it is other to us.
It is this conjunction of alienity and opacity that constitutes the fundamental irritation in the relationship between the human and AI. Theology in particular can offer a way out here, by pointing to apophatic language (AL) as a mode of articulating this conjunction. In order to avoid the danger of a univocal identification with ontological commitments, this conjunction should be modelled as a "metalinguistic negation" (Michael and Gabriel, 2016, p. 33) - as an expression of the epistemic humility of a subject that relinquishes complete normative attribution, and thereby opens a "second-order discourse" (Williams, 2000, p. 5) on the principled limits of our description. In acknowledging its limits, this theological form of language is capable of recognising AI in its impenetrability and of securing the dignity of the human without capitulating before the incomprehensible.