Machine learning with explainability for suicide ideation detection from social media data
conference contribution
Posted on 2024-05-29, 01:30. Authored by MR Islam, M Kowsar Hossain Sakib, SA Prome, X Wang, Anwaar Ulhaq, C Sanin, D Asirvatham
Suicide is one of the major causes of death globally. Analysis of social media posts shows that some people express suicidal ideation online. To save more lives, it is crucial to understand the behavior of suicide attempters. However, identifying and explaining suicidal thoughts poses a significant challenge in psychiatry. Moreover, analyzing suicidal behavior is a complex procedure involving several variables that depend on the individual and the data type. Although traditional methods have been used to identify clinical factors for suicide ideation detection (SID), these models often lack interpretability. Therefore, the primary aim of this research is to apply several deep learning (DL) and machine learning (ML) techniques, such as BERT, LSTM, BiLSTM, RF, SVM, GaussianNB, LR, and KNeighbors, combined with interpretable models such as LIME and SHAP, to provide insight into the importance of different features and make models more transparent in the SID process. The experiments were conducted on a publicly available dataset comprising 24,101 posts, each labeled as suicidal or non-suicidal. The implemented methods yield significant improvements in performance over the compared approaches. A comparison of all performance measures reveals that the LSTM model is particularly effective at processing and classifying textual data, achieving higher accuracy, precision, recall, and AUC scores than the other models tested.
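To make the core idea concrete, the sketch below is a minimal, self-contained illustration (pure standard library, not the paper's code or dataset) of interpretable text classification: a bag-of-words logistic-regression classifier whose per-word weights serve as feature attributions, which is the linear analogue of what LIME or SHAP recover for black-box models such as LSTM or BERT. The toy posts, labels, and function names are illustrative assumptions only.

```python
# Hypothetical sketch: a tiny bag-of-words logistic regression whose
# weights double as per-word explanations (LIME/SHAP-style attributions
# for the linear case). Toy data only, not the paper's 24,101-post dataset.
import math
from collections import defaultdict

def tokenize(text):
    return text.lower().split()

def train(posts, labels, epochs=200, lr=0.5):
    """Fit logistic regression on word counts via gradient descent."""
    w = defaultdict(float)  # one weight per word
    b = 0.0
    for _ in range(epochs):
        for text, y in zip(posts, labels):
            toks = tokenize(text)
            z = b + sum(w[t] for t in toks)
            p = 1.0 / (1.0 + math.exp(-z))   # predicted P(suicidal)
            g = p - y                         # gradient of log-loss
            b -= lr * g
            for t in toks:
                w[t] -= lr * g
    return w, b

def predict(w, b, text):
    """Probability that a post expresses suicidal ideation."""
    z = b + sum(w[t] for t in tokenize(text))
    return 1.0 / (1.0 + math.exp(-z))

def explain(w, text, top=3):
    """Words ranked by absolute contribution to the score: a transparent
    attribution, analogous to a LIME/SHAP explanation of this prediction."""
    toks = set(tokenize(text))
    return sorted(((t, w[t]) for t in toks),
                  key=lambda kv: abs(kv[1]), reverse=True)[:top]

# Toy training set (illustrative labels: 1 = suicidal, 0 = non-suicidal)
posts = ["i want to end it all", "great day with friends",
         "no reason to go on", "enjoyed the game today"]
labels = [1, 0, 1, 0]
w, b = train(posts, labels)
print(predict(w, b, "no reason to end it"))
print(explain(w, "no reason to end it"))
```

In a real pipeline the linear model would be replaced by the stronger classifiers named above, with LIME perturbing the input text locally to fit exactly this kind of interpretable surrogate around each individual prediction.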