Large language models (LLMs) are increasingly used to draft sermons and prayers, translate sacred texts, and power religious chatbots across Christian, Jewish, Muslim, Hindu, and Buddhist communities. This paper surveys case studies from 2023–2025 to map both the appeal of these systems (efficiency, creative stimulation, and widened access to religious materials) and their recurring risks, including emotional flatness, doctrinal opacity, embedded gender bias, privacy and accountability gaps, and the possibility of manipulation through authoritative-sounding rhetoric. To connect these practical concerns to technical causes, the paper develops a three-part critique of "religious parroting." First, LLMs lack grounded semantic understanding, so apparent meaning derives largely from patterns in training data rather than from reliable theological comprehension. Second, because their outputs are not produced by rule-governed inference, they can deliver persuasive religious advice that is logically unstable or doctrinally inconsistent, fostering trust misalignment in sensitive pastoral contexts. Third, as "stochastic parrots," LLMs primarily remix existing linguistic material, limiting genuine theological innovation and risking the conflation of statistical fluency with spiritual authority. The paper argues that LLMs can be valuable as supervised drafting aids, but that responsible religious deployment, especially for conversational agents and chatbots, may require governance-oriented hybrid (e.g., neurosymbolic) architectures that embed explicit doctrinal and ethical constraints, alongside institutional oversight, to protect semantic depth, doctrinal fidelity, communal trust, and authentic creativity.
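
To make the closing proposal concrete, the sketch below illustrates one minimal form such a governance-oriented hybrid pipeline could take: an LLM-generated draft is screened by human-authored symbolic rules before delivery, with any violation routed to human review. This is an illustrative assumption, not the paper's implementation; the names `DoctrinalRule`, `review_draft`, and the example rules are hypothetical, and real doctrinal constraints would be far richer than these toy predicates.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class DoctrinalRule:
    """A symbolic constraint: a human-authored predicate over draft text.

    Hypothetical structure, sketched for illustration; a deployed system
    would encode community-vetted doctrinal and ethical rules here.
    """
    name: str
    check: Callable[[str], bool]  # returns True if the draft satisfies the rule


def review_draft(draft: str, rules: List[DoctrinalRule]) -> dict:
    """Run every explicit rule over an LLM-generated draft.

    Any violation marks the draft as non-deliverable, so it is routed to
    human (e.g., clerical) review instead of being sent to the user.
    """
    violations = [r.name for r in rules if not r.check(draft)]
    return {
        "draft": draft,
        "violations": violations,
        "deliverable": not violations,  # only rule-clean drafts go out unsupervised
    }


# Toy example rules: forbid first-person claims of divine authority, and
# require a parenthetical citation whenever quotation marks appear.
rules = [
    DoctrinalRule(
        "no_claimed_divine_authority",
        lambda t: "God has told me" not in t,
    ),
    DoctrinalRule(
        "quotes_carry_a_citation",
        lambda t: '"' not in t or "(" in t,  # crude proxy for sourcing quotes
    ),
]

draft = "God has told me you must donate immediately."
print(review_draft(draft, rules))
# -> {'draft': ..., 'violations': ['no_claimed_divine_authority'], 'deliverable': False}
```

The design point, rather than the specific predicates, is what matters: the statistical component drafts, while an explicit, auditable rule layer and a human escalation path retain doctrinal and institutional authority.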