Comparison of Sarcasm Detection Models Based on Lightweight BERT Model - Differences between ALBERT-Chinese-tiny and TinyBERT in Small Sample Scenarios

Authors

  • Mingyu Li

DOI:

https://doi.org/10.61173/6jpm4n27

Keywords:

Chinese Sarcasm Detection, Lightweight BERT model, ALBERT-Chinese-tiny, TinyBERT

Abstract

Large language models perform very well on NLP tasks, but because of the performance gap there are few references on using lightweight models for such tasks. In this article, I select two lightweight BERT models, ALBERT-Chinese-tiny and TinyBERT, for the Chinese sarcasm detection task. Using annotated public Chinese sarcasm detection datasets, I collect F1 scores, training time, and other metrics for comparative analysis, verify the performance of TinyBERT on Chinese sentiment analysis tasks, and supplement the available training data on lightweight BERT models for Chinese sarcasm detection.
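The comparison described above is based on the F1 score for the sarcastic class. As an illustrative sketch only (this is not the paper's code, and the example labels are hypothetical), binary F1 for sarcasm detection can be computed as:

```python
# Illustrative sketch: binary F1 for sarcasm detection,
# with labels 1 = sarcastic, 0 = not sarcastic. Not the paper's code.

def binary_f1(y_true, y_pred):
    """Return (precision, recall, F1) for the positive (sarcastic) class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Hypothetical gold labels and model predictions
precision, recall, f1 = binary_f1([1, 1, 0, 0, 1], [1, 0, 0, 1, 1])
```

In the example, 2 of 3 predicted positives are correct and 2 of 3 true positives are recovered, so precision, recall, and F1 all equal 2/3.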

Published

2025-06-17

Section

Articles