This study explores transformer-based models for classifying Hadith texts in Arabic and Urdu, motivated by the texts' significance in religious scholarship and the low-resource status of both languages in natural language processing. The task is multi-class classification of Hadith content into seven predefined religious topics according to thematic relevance. Pretrained models such as AraBERT and XLM-RoBERTa are fine-tuned with a classification head using softmax activation. The dataset undergoes comprehensive preprocessing, including tokenization, text normalization, and class balancing, to mitigate data sparsity and improve model performance. Classification quality is assessed with F1-score, accuracy, and confusion matrix analysis. The findings demonstrate both the potential and the limitations of transformers for classifying religious texts, offering insights into advancing Hadith studies through modern NLP techniques while addressing challenges unique to low-resource languages.
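To make the described architecture concrete, the sketch below shows how a softmax classification head maps an encoder's pooled representation to probabilities over seven topics. This is an illustrative NumPy sketch only, not the authors' code: the hidden size of 768 (standard for base-size AraBERT and XLM-RoBERTa), the random weights, and the stand-in embedding are all assumptions for demonstration.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical classification head: projects the encoder's pooled
# [CLS] representation (hidden size 768 for base models) to a
# probability distribution over the seven Hadith topics.
rng = np.random.default_rng(0)
hidden_size, num_topics = 768, 7
W = rng.normal(scale=0.02, size=(hidden_size, num_topics))  # head weights
b = np.zeros(num_topics)                                     # head bias

cls_embedding = rng.normal(size=(1, hidden_size))  # stand-in for encoder output
probs = softmax(cls_embedding @ W + b)             # shape (1, 7), rows sum to 1
predicted_topic = int(probs.argmax())              # index of the predicted topic
```

In fine-tuning, this linear layer would be trained jointly with the pretrained encoder under a cross-entropy loss; frameworks such as Hugging Face Transformers bundle the same head via a sequence-classification model configured with seven labels.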