Contemporary AI ecosystems constitute an unprecedented regime of attention capture: algorithmic feeds and generative interfaces compete relentlessly for cognitive resources, making fragmented awareness a structural condition of digital life. Yet mainstream AI ethics frames these harms as technical defects amenable to regulatory patches, overlooking how attention itself is shaped within socio-technical infrastructures.
This paper develops an alternative framework drawing on Yogācāra Buddhist philosophy, particularly its eight-consciousness model and doctrines of dependent co-arising and collective karma. I argue that AI ecosystems function as "karmic mirrors" reflecting and amplifying collective dispositional patterns, producing systemic attentional inequalities: those with fewer resources to resist algorithmic capture bear disproportionate cognitive burdens, while craving, aversion, and delusion become encoded into engagement-maximizing architectures.
The Yogācāra tradition offers both diagnostic tools and practical counter-models. Its threefold training (śīla, samādhi, and prajñā) can be reinterpreted as governance levers: data ethics as discipline, metacognitive auditing as stabilization, and structural transparency as wisdom. I propose a three-level framework of ethical transformation that moves beyond individual mindfulness toward ecological redesign of attention infrastructures.
By bringing Yogācāra into dialogue with attention studies and AI ethics, this paper shows how religious traditions can offer alternative normative models, addressing not merely individual attention deficits but structural inequalities in technological environments.
KEYWORDS:
Yogācāra Buddhism; AI ethics; attention economy; collective karma; contemplative practices