Hierarchical Bayesian Attention: Fine-grained Sentiment Classification
DOI: https://doi.org/10.61173/86rnrr18

Keywords:
Hierarchical attention, Bayesian attention mechanism, fine-grained sentiment, prior knowledge

Abstract
In recent years, natural language processing (NLP) has been applied ever more deeply in the field of mental health, where fine-grained emotion classification is a core means of capturing the complex psychological states expressed in text; the subtle emotional shifts hidden in text are also important signals for early identification of psychological risk. Uncertainty modeling for emotion classification has therefore attracted attention, but early work mostly focused on probabilistic improvements to a single module and struggled to capture multi-level semantic uncertainty. Meanwhile, knowledge-enhancement strategies have been widely applied in few-shot learning; for instance, initialization based on sentiment dictionaries improves model performance in low-resource scenarios. However, most existing studies introduce knowledge as static features, which limits the guiding role of prior information in quantifying uncertainty. To address these issues, this study builds a hierarchical Bayesian attention model: by applying probabilistic modeling at three key stages, word embedding, the attention mechanism, and the classification decision, it quantifies uncertainty across the entire pipeline from text encoding to sentiment prediction. A domain-knowledge-driven prior distribution constructed from a sentiment dictionary further improves efficiency in small-sample scenarios. The method accurately identifies fine-grained emotional tendencies in text, providing a technical path for mental health monitoring and facilitating clinical risk assessment.
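The core idea of combining a lexicon-driven prior with probabilistic attention can be illustrated with a minimal sketch. This is not the paper's implementation: the lexicon contents, the Gaussian form of the attention logits, the fixed noise scale, and all function names are illustrative assumptions. Each attention logit is treated as a Gaussian whose prior mean is shifted toward words flagged by the sentiment dictionary, and Monte Carlo sampling yields both an expected attention distribution and a per-token uncertainty estimate.

```python
import math
import random

random.seed(0)

# Hypothetical sentiment lexicon mapping tokens to polarity in [-1, 1]
# (a stand-in for the domain dictionary described in the abstract).
LEXICON = {"happy": 0.9, "sad": -0.8, "anxious": -0.7, "calm": 0.6}

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def bayesian_attention(tokens, scores, lexicon_weight=1.0,
                       sigma=0.5, n_samples=200):
    """Sample attention weights from lexicon-informed Gaussian logits.

    The prior mean of each logit is shifted by the magnitude of the
    token's lexicon polarity; sigma is an assumed, fixed logit noise.
    Returns the mean attention distribution and per-token variance.
    """
    mu = [s + lexicon_weight * abs(LEXICON.get(t, 0.0))
          for t, s in zip(tokens, scores)]
    draws = [softmax([m + random.gauss(0.0, sigma) for m in mu])
             for _ in range(n_samples)]
    n = len(tokens)
    mean = [sum(d[i] for d in draws) / n_samples for i in range(n)]
    var = [sum((d[i] - mean[i]) ** 2 for d in draws) / n_samples
           for i in range(n)]
    return mean, var

tokens = ["i", "feel", "anxious", "today"]
mean_attn, var_attn = bayesian_attention(tokens, [0.0] * len(tokens))
```

Under this sketch, the lexicon prior pulls expected attention toward the sentiment-bearing word ("anxious"), while the sample variance gives a simple uncertainty signal of the kind the full model propagates through embedding, attention, and classification.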