
Effective Intended Sarcasm Detection Using Fine-Tuned Llama 2 Large Language Models

EasyChair Preprint 14730, version 1

6 pages
Date: September 6, 2024

Abstract

Detecting sarcasm in English text is a significant challenge in sentiment analysis due to the discrepancy between implied and explicit meanings. Previous studies using Transformer-based models for intended sarcasm detection leave room for improvement, and the emergence of large language models (LLMs) presents a substantial opportunity to advance this area. This research fine-tunes Llama 2, the open-source LLM released by Meta, to develop an effective sarcasm detection model. Our proposed system design generalizes the use of Llama 2 for text classification but is tailored to sarcasm detection, sarcasm category classification, and pairwise sarcasm identification. Data from the iSarcasmEval dataset and additional sources were used, totaling 21,599 samples for sarcasm detection, 3,457 for sarcasm category classification, and 868 for pairwise sarcasm identification. Methods include prompt development, fine-tuning with Parameter-Efficient Fine-Tuning (PEFT) using Quantized Low-Rank Adaptation (QLoRA), and a zero-shot approach. Our models demonstrate significant improvements, with the sarcasm detection and pairwise sarcasm identification models surpassing the top models from previous studies: an F1-score of 0.6867 for sarcasm detection, a Macro-F1 of 0.1388 for sarcasm category classification, and an accuracy of 0.9 for pairwise sarcasm identification. These results demonstrate that Llama 2, combined with external datasets and effective prompt engineering, enhances intended sarcasm detection. The PEFT technique with QLoRA reduces memory requirements without compromising performance, enabling model development on devices with limited computational resources. This research underscores the importance of context and intention in intended sarcasm detection, with dataset labeling discrepancies remaining a significant challenge.
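
To make the fine-tuning setup concrete, the sketch below shows how QLoRA-style PEFT is typically applied to Llama 2 with the Hugging Face transformers, peft, and bitsandbytes libraries. The model variant, LoRA rank, target modules, and other hyperparameters are illustrative assumptions, not values reported in the paper.

  # Minimal QLoRA fine-tuning setup sketch; hyperparameters are assumptions,
  # not the paper's reported configuration.
  import torch
  from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
  from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

  model_id = "meta-llama/Llama-2-7b-hf"  # assumed size; the paper may use another variant

  # 4-bit NF4 quantization (the "Q" in QLoRA) shrinks the frozen base weights,
  # which is what enables fine-tuning on limited hardware.
  bnb_config = BitsAndBytesConfig(
      load_in_4bit=True,
      bnb_4bit_quant_type="nf4",
      bnb_4bit_compute_dtype=torch.bfloat16,
  )

  tokenizer = AutoTokenizer.from_pretrained(model_id)
  model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config)
  model = prepare_model_for_kbit_training(model)

  # Small low-rank adapter matrices are the only trainable parameters (PEFT).
  lora_config = LoraConfig(
      r=16,                                 # assumed rank
      lora_alpha=32,
      target_modules=["q_proj", "v_proj"],  # assumed attention projections
      lora_dropout=0.05,
      task_type="CAUSAL_LM",
  )
  model = get_peft_model(model, lora_config)
  model.print_trainable_parameters()  # typically a fraction of a percent of all weights

A classification prompt (for example, asking whether a given text is sarcastic and expecting a yes/no answer) would then be tokenized and used as training input; the quantized base model stays frozen while only the adapters are updated, which is why memory requirements drop without a performance penalty.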

Keyphrases: Sarcasm Detection, Intended Sarcasm Detection, Sarcasm Category Classification, Pairwise Sarcasm Identification, Large Language Models (LLMs), Llama 2, Fine-Tuning, Parameter-Efficient Fine-Tuning (PEFT), Low-Rank Adaptation (LoRA), Prompt Engineering, Chain of Thought (CoT), Zero-Shot, Chat Models, Text Classification, Natural Language Processing (NLP)

BibTeX entry
BibTeX does not have a suitable entry type for preprints; the following is a workaround that produces the correct reference:
@booklet{EasyChair:14730,
  author    = {Fachry Dennis Heraldi and Fariska Zakhralativa Ruskanda},
  title     = {Effective Intended Sarcasm Detection Using Fine-Tuned Llama 2 Large Language Models},
  howpublished = {EasyChair Preprint 14730},
  year      = {EasyChair, 2024}}