Comparison of Sarcasm Detection Models Based on Lightweight BERT Model - Differences between ALBERT-Chinese-tiny and TinyBERT in Small Sample Scenarios
DOI:
https://doi.org/10.61173/6jpm4n27

Keywords:
Chinese Sarcasm Detection, Lightweight BERT model, ALBERT-Chinese-tiny, TinyBERT

Abstract
Large language models perform excellently on NLP tasks, but because of the performance gap, references on applying lightweight models to NLP tasks remain scarce. In this article, I select two lightweight BERT models, ALBERT-Chinese-tiny and TinyBERT, for the Chinese sarcasm detection task. Using an annotated public Chinese sarcasm detection dataset, I collect F1 scores, training time, and other metrics for comparative analysis, verify the performance of TinyBERT on Chinese sentiment analysis tasks, and supplement the available training results for lightweight BERT models on Chinese sarcasm detection.