Because conventional topic models rely on word co-occurrence to infer latent topics, topic modeling for Arabic literary abstracts presents unique challenges: Arabic's morphological complexity fragments co-occurrence statistics across many surface forms, and literary discourse is semantically rich. Large Language Models (LLMs) can potentially overcome these challenges by learning word meanings in context during pretraining. In this paper, we study multiple approaches to using LLMs for Arabic topic modeling: parallel prompting, sequential prompting, hierarchical two-stage prompting, and interactive refinement. To address Arabic-specific linguistic characteristics, we investigate three preprocessing strategies (surface forms, root-based extraction using CAMeL Tools, and hybrid enrichment) and evaluate their impact on topic quality. We compare both proprietary models (GPT-4, Claude) and open-source Arabic LLMs (Llama, Falcon, Jais-13b, AceGPT-13B) to assess cost-effectiveness for Arabic applications. Our experimental results demonstrate that LLM-based methods can identify more coherent topics than traditional approaches (BERTopic, LDA) while maintaining topic diversity. We introduce Arabic-specific evaluation metrics, including root diversity, diacritic-insensitive coherence, and literary term coverage, to provide a more nuanced assessment than standard metrics (C_v, TU) alone. Furthermore, we find that domain-aware prompting strategies and hierarchical topic discovery enhance the interpretability of topics in Arabic literary contexts, while document coverage analysis confirms minimal topic manipulation.