Panel description: Artificial intelligence (AI) is increasingly encountered not only as a technical tool but as a conversational and social presence—one that people interpret, trust, resist, or rely on in shaping how they learn, decide, relate, and make meaning. This panel focuses on perception and interpretation: how individuals and communities experience AI's agency, limits, authority, and bias, and how these perceptions reshape epistemic security, moral responsibility, and everyday as well as institutional practices. It builds an interdisciplinary dialogue among theology and religious studies, philosophy, anthropology, ethics, psychology, and education. Within this horizon, the panel foregrounds religious and theological dimensions of AI's growing influence: its impact on spiritual practice and discernment, religious formation and catechesis, and contemporary imaginaries of transcendence, the divine, and human dignity. It also examines how interaction with AI reconfigures empathy and emotional attachment—especially when AI is experienced as advisor, companion, or substitute for human presence—and how such experiences may alter relationships, vulnerability, and accountability. Overall, the panel investigates how AI is perceived and narrated in ways that co-shape contemporary understandings of the human person, community, and God.
Papers:
EDUCATION IN THE DIGITAL AGE: THE "MIRROR" CHARACTER OF AI, TECHNO-MORAL VIRTUES, AND MORAL UPSKILLING AND DESKILLING
Žalec B. (Speaker)
University of Ljubljana, Faculty of Theology ~ Ljubljana ~ Slovenia
The speaker develops a normative framework for education in the digital age grounded in the philosophy of technology. The main points are as follows. First, the "mirror" character of contemporary artificial intelligence (AI) does not introduce a new moral authority; rather, it reflects and at the same time refracts (filters, amplifies, distorts) existing human habits, attitudes, prejudices, and aspirations. This calls for education to consciously shape practices of interpretation, recognition, and the delimitation or limitation of these "mirror" effects. Second, an analysis of moral deskilling and moral upskilling shows that different modes of technology use simultaneously undermine and enable the development of moral skills. The task of education is to harness these dynamics for the cultivation of techno-moral virtues. Third, the speaker offers a catalog-like yet dynamic vision of virtues for the 21st century (e.g., prudence, courage, justice, truthfulness, patience, temperance, honesty), which he operationalizes for curricular decisions, assessment, and rules governing the use of AI in teaching. He concludes with seven criteria for the responsible use of AI in education and with a model of how the mirror character of AI can be used as a didactic opportunity for reflection, dialogue, and the strengthening of human practical wisdom.
AI AND THE CLASSICAL UNDERSTANDING OF THE HUMAN BEING
Zichy M. (Speaker)
University of Bonn, Faculty of Catholic Theology ~ Bonn ~ Germany
This paper examines how artificial intelligence challenges and reframes the traditional Western conception of the human being. Beginning with the anthropological foundations articulated by Plato and Aristotle, it reconstructs the central elements of this tradition: human rationality, embodiment, affectivity, sociality, and the intrinsic orientation toward truth, goodness, and beauty. On this basis, the paper discusses three interpretive frameworks for understanding AI in relation to these categories. First, AI can be conceived as an optimized or perfected form of humanity, a transhumanist continuation of the Enlightenment ideal of rational self-improvement. Second, posthumanist perspectives interpret AI as a fundamentally different form of existence that could ultimately supersede the human. Third, and most convincingly from a classical perspective, AI is understood as a tool—powerful yet lacking intrinsic telos, normative orientation, embodied experience, or the capacity for eudaimonia. The paper concludes that AI's limitations help to clarify what remains uniquely and irreducibly human.
THE ETHICS OF CARE AS A FOUNDATION FOR EDUCATION IN THE AGE OF ARTIFICIAL INTELLIGENCE
Vodicar J. (Speaker)
University of Ljubljana, Faculty of Theology ~ Ljubljana ~ Slovenia
In the age of artificial intelligence, individuals who have little influence over technological development—most notably young people—are increasingly placed in the position of objects rather than subjects. Competitiveness is embedded at the core of artificial intelligence development, a dynamic that many contemporary thinkers identify as the culmination of broader trajectories within modern society. Educational approaches that are limited to, or primarily oriented toward, preparing individuals for competitive entry into the labor market fail to contribute to the formation of a more humane and empathetic society. As a consequence, young people are becoming increasingly apathetic and are experiencing growing levels of psychological distress. This article addresses education—and catechesis in particular—as a response that can foster hope and contribute to the development of inner strength in both individuals and communities. Its pedagogical framework is drawn from the ethics of care, which is grounded in the fundamental anthropological assumption of original human concern understood as responsibility for the other. These principles are brought into dialogue with key challenges faced by contemporary individuals within digital culture. The core characteristics of an educational approach based on dialogical relationships, attentiveness to learners' concrete needs, and the formation of a responsible community are applied as guiding principles for catechesis. Such an approach aims to enhance resilience against technocratism and apathy. Care-based catechesis, as argued here, is consistent with the Church's magisterial teaching and with the Christian tradition's understanding of the human person.
ARTIFICIAL INTELLIGENCE IN EDUCATION: HUMAN FORMATION IN THE AGE OF ALGORETICS
Kraner D. (Speaker)
University of Ljubljana, Faculty of Theology ~ Ljubljana ~ Slovenia
Artificial intelligence (AI) is transforming the educational environment and offering broad opportunities for personalization, adaptive feedback, and effective assessment. Furthermore, the impact of AI on education extends far beyond technological efficiency: it raises questions about the meaning of humanity, about ethics and values in a digital society, and about the principles of fairness, responsibility, transparency, and respect for human dignity. This article considers education as a space where AI should serve the formation of the person, not merely the information of the individual. This raises concerns about the disappearance of human relationships, critical thinking, and ethical awareness in learning environments. The pedagogical potential of artificial intelligence is best realized when it is used as a complement to (rather than a replacement for) "human teachers," supporting hybrid models that combine algorithmic precision with emotional intelligence. Emerging frameworks emphasize the need for teachers and students to understand artificial intelligence, as well as ethical guidelines that ensure transparency and fairness in algorithmic decision-making. The use of AI in the pedagogical process must be based on a human-centered model, in which technology does not replace the teacher but supports their relationship with the student. The article proposes Benanti's model of "pedagogical algoretics," which combines digital literacy, ethical sensitivity, and spiritual intelligence, and explores how AI can contribute to education for responsibility, dialogue, and compassion in contemporary educational practice.
HUMAN-AI EMOTIONAL INTERACTION: EMPATHY, EMOTIONAL BONDS, AND INTERPERSONAL RELATIONSHIPS
Simonic B. (Speaker)
University of Ljubljana, Faculty of Theology ~ Ljubljana ~ Slovenia
The paper examines the impact of interaction with artificial intelligence-based user interfaces on the experience of empathy, the formation of emotional connections, and the transformation of interpersonal relationships. It begins with the observation that artificial intelligence (AI) is increasingly acting as a conversation partner, advisor, or even a substitute for human presence, particularly in contexts of loneliness, emotional distress, and vulnerability. The analysis focuses on the affective and emotional dimensions of human-AI interaction: how users experience the empathy of algorithmic systems, the extent to which AI promotes or inhibits the ability to empathize with others, and how one-way or seemingly mutual emotional bonds with non-human interlocutors are formed. Particular attention is given to whether AI serves as a complement to human relationships, a substitute for them, or contributes to withdrawal from more demanding interpersonal interactions. The paper also critically examines the existential and ethical dimensions of such relationships: feelings of security, control, and trust in interactions with algorithms, and the risks of redefining the concepts of empathy, closeness, and responsibility.
TECHNOLOGY, RELATIONALITY, AND DIVINE ORIGIN: FROM SCRIPTURE TO ARTIFICIAL INTELLIGENCE
Centa Strahovnik M. (Speaker)
University of Ljubljana, Faculty of Theology ~ Ljubljana ~ Slovenia
Looking at the Bible from the perspective of the specific forms of technology described in it, one can approach this domain through a relational framework. Three central relationships can be discerned: God's orientation towards technology, people's engagement with technology, and the role of technology within the relationship between God and humanity. All of these can be useful when framing ethical and theological questions about current technologies, including artificial intelligence (AI). In this talk, I will examine the aforementioned relationships from the perspective of the distinction between something being begotten or born and something being made, as embedded in early Christian thought. In conclusion, I will draw out some consequences of this distinction for our perception of new technologies, including AI.
FROM ARTIFICIAL COMPANIONSHIP TO RELATIONAL EROSION: AI, EMOTIONAL ATTACHMENT, AND THE CRISIS OF HUMAN CONNECTION
Machidon O. (Speaker)
University of Ljubljana, Faculty of Computer and Information Science ~ Ljubljana ~ Slovenia
Recent advances in conversational artificial intelligence have enabled forms of interaction that simulate empathy, emotional attunement, and personal understanding with unprecedented effectiveness. While these systems are often presented as tools for support, companionship, or even care, emerging empirical research suggests that frequent and emotionally significant interaction with AI may contribute to increased loneliness, reduced empathy, and the weakening of human relational capacities. This contribution examines the growing phenomenon of emotional attachment to AI through an interdisciplinary lens, combining insights from cognitive science, psychology, philosophy, and theological anthropology.
Drawing on recent studies on AI-mediated persuasion, artificial companionship, and emotional dependence, the contribution argues that AI "cleans up" the inherent messiness of human relationships—conflict, vulnerability, and mutual recognition—in ways that risk flattening intimacy rather than fostering it. From an anthropological and ethical perspective, this raises fundamental questions about relationality, moral growth, and the human need for reciprocal recognition. The contribution concludes by suggesting that theological and philosophical accounts of personhood and relation can offer critical resources for assessing the promises and dangers of AI-mediated emotional support, and for rearticulating the value of authentic human presence in the age of algorithms.
WHO SPEAKS OR WHAT SPEAKS? AI AS A CONVERSATIONAL PARTNER
Klun B. (Speaker)
University of Ljubljana ~ Ljubljana ~ Slovenia
Artificial intelligence is increasingly encountered not merely as a technical tool for information retrieval or task performance, but as an entity that converses with human users. AI-based large language models are now presented as coaches, therapists, and companions of various kinds, and new devices are being developed to accompany particular groups—such as elderly people—in order to alleviate loneliness and compensate for the absence of human relationships. Regardless of how one evaluates these developments, they raise a fundamental philosophical question concerning the status of digital conversational partners. Traditionally, the verb "to speak" is attributed to a who rather than to a what. Does this distinction still hold when we say that "AI speaks"? One may argue that AI does not genuinely speak, but merely combines and rearranges words and phrases originally spoken by human beings and incorporated into its training data. From this perspective, the uniqueness of human speech can be preserved: a who speaks, whereas the device itself - the what - "speaks" only in a metaphorical sense.
ACCELERATED AI AND DIGITALIZATION: CRITICAL ECOTHEOLOGICAL PERSPECTIVES
Furlan Štante N. (Speaker)
The Science and Research Centre Koper, Institute for Philosophical and Religious Studies ~ Koper ~ Slovenia
The accelerated adoption of artificial intelligence (AI) and digital technologies is transforming both society and ecosystems. While digitalization is often promoted as a tool for sustainability, critics point to its significant environmental footprint, including energy-intensive data centers, rare resource extraction, and electronic waste. Noreen Herzfeld (2025) emphasizes that AI and digital infrastructures are not ethically neutral: their design, deployment, and scale have profound ecological consequences that require careful scrutiny.
This paper explores how ecotheology can serve as a critical lens for analyzing these developments. Drawing on Herzfeld's insights and relevant papal documents - Laudato si' (2015), Antiqua et nova (2025), and Pope Francis's 2024 World Day of Peace message - the paper argues that ethical and spiritual reflection is essential for evaluating the ecological implications of AI and digitalization. Laudato si' provides a framework of integral ecology, emphasizing the interconnectedness of technological progress, environmental responsibility, and human well-being. Antiqua et nova highlights AI's dual potential: it can support sustainability efforts but also imposes significant environmental costs.
By integrating ecotheological critique with technological analysis, the paper moves beyond polarized narratives - whether digitalization destroys or saves the planet - toward a nuanced understanding that considers relational, ethical, and ecological responsibilities. Ecotheology offers normative criteria and reflective tools to guide sustainable AI practices, fostering alignment between technological innovation and care for creation. Ultimately, this approach underscores the indispensable role of ethical discernment in shaping a digital future that genuinely supports ecological sustainability.
ARTIFICIAL INTELLIGENCE - BETWEEN EMPOWERMENT AND THE THREAT TO HUMAN AUTONOMY
Globokar R. (Speaker)
University of Ljubljana, Faculty of Theology ~ Ljubljana ~ Slovenia
Human beings have always strived to expand their capabilities through technology, thereby increasing their freedom and autonomy. The Enlightenment ideal of liberating humanity from the constraints of nature takes on a new dimension in the age of artificial intelligence (AI). But a fundamental question arises: will AI lead us toward greater personal freedom, or will its development gradually result in a new form of dependence on systems we no longer fully understand or control?
This contribution presents arguments supporting the thesis that AI can enhance individual autonomy: access to knowledge becomes faster and more equal; routine tasks are delegated to machines, freeing time for creativity; decision-making processes can become more inclusive; and global communication is facilitated through real-time translation across languages.
On the other hand, we will also highlight critical perspectives that warn against the dangers of mass manipulation, the decline of critical thinking, increased surveillance of individuals, the weakening of interpersonal relationships, and growing technological dependency. This contribution will reflect on the tension between these opposing dynamics and explore their implications for the ethical understanding of human freedom in a digital age.
PERCEPTIONS OF AI IN CLINICAL DECISION-MAKING: ANSWERABILITY AND DEFERENCE
Miklavcic J. (Speaker)
University of Ljubljana, Faculty of Theology ~ Ljubljana ~ Slovenia
As artificial intelligence becomes routine in clinical decision support, the ethical challenge is not only what AI can do, but how it is perceived in practice—tool, advisor, or authority. This paper argues that responsibility in AI-assisted medicine should be understood as answerability: the clinician's duty to stand behind decisions with reasons that can be owned in the patient encounter. When AI outputs are treated as decisive, responsibility can be "buffered" ("the system said so"), and a deference paradox emerges: higher accuracy can increase pressure to defer, while full deference weakens moral agency and care. The paper further highlights that many clinical choices cannot be settled by probabilities alone, because they involve values, meaning, and human vulnerability (e.g., threshold decisions, borderline cases, end-of-life contexts). Rather than framing these tensions as a simple "responsibility gap," it proposes an ecology of responsibility that links clinicians, institutions, and developers—while keeping accountability visible at the point of care. Practical implications include reason-giving checkpoints, constrained use conditions, and governance that supports human judgment instead of replacing it.
INTEGRATING ARTIFICIAL INTELLIGENCE INTO SPIRITUAL LIFE: OPPORTUNITIES, LIMITS, AND DISCERNMENT
Platovnjak I. (Speaker)
University of Ljubljana, Faculty of Theology ~ Ljubljana ~ Slovenia
The increasing presence of artificial intelligence (AI) in spiritual contexts raises critical questions about its capacity to support spiritual engagement and its limitations in relation to transcendence. As a system grounded primarily in analytical, left-hemisphere-oriented processes, AI offers efficiency, personalization, and broad accessibility, yet lacks intuition, embodied presence, relational depth, and spiritual discernment. Using a conceptual and interdisciplinary approach, this paper analyzes AI's impact through the three dimensions of spirituality—personal-experiential, communal-institutional, and rational-reflective—proposed by Platovnjak and Svetelj (2024), in dialogue with Sheldrake's (2014) integrative framework. The analysis draws on selected examples of AI-supported prayer apps, chatbots, and digital religious education tools. The presentation highlights both opportunities and risks, including depersonalization, algorithmic bias, and commodification. It argues that AI can function as a supportive instrument only when embedded within human guidance, communal practice, and embodied spiritual life.
FROM CLINICAL JUDGMENT TO ALGORITHMIC AUTHORITY: WHO HOLDS EPISTEMIC PRIMACY?
Štivic S. (Speaker)
University of Ljubljana, Faculty of Theology ~ Ljubljana ~ Slovenia
This paper examines the increasingly widespread tendency to regard AI systems in medicine as epistemically authoritative, often at the expense of physicians' clinical judgment. It argues that this shift stems from a problematic conflation of statistical reliability with normative authority in clinical decision-making. While algorithmic systems can be highly effective at identifying patterns and estimating probabilities, clinical judgment involves far more than mere statistical calculation: it requires contextual understanding, practical reasoning, and the assumption of professional responsibility. By taking into account the differences between algorithmic forms of knowledge and clinical understanding, the paper highlights the risks of automation bias and the danger of a gradual deprofessionalization of medical practice. Particular attention is given to questions of ethical responsibility in situations where clinical decisions are strongly shaped—or even implicitly justified—by algorithmic recommendations. Rather than rejecting artificial intelligence in medicine, the paper advocates an approach in which AI functions as a valuable but supportive epistemic resource, clearly subordinate to physicians' clinical judgment and moral accountability.