Peer-reviewed publications*


  Hyewon Jang, Diego Frassinelli, Generalizable Sarcasm Detection is Just Around the Corner, of Course!, Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL 2024).

Summary

In this paper, we test the generalizability of sarcasm detection models by comparing three language models finetuned on several sarcasm datasets, including a new dataset we release (CSC), collected from multiple psycholinguistic experiments. We show that all models finetuned on one dataset perform substantially worse on the other datasets, but that models finetuned on CSC generalize relatively well. We discuss the reasons for these results in terms of the varied domains, styles, and sources of sarcasm labels across datasets.
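
Below is a minimal sketch of the cross-dataset evaluation setup described above: train on each sarcasm corpus, then test on every other one. A TF-IDF + logistic regression classifier stands in for the finetuned language models, and the dataset names, file format, and column names are placeholders rather than the corpora actually used in the paper.

```python
# Cross-dataset evaluation sketch: train on one corpus, test on all corpora.
# A TF-IDF + logistic regression pipeline is a lightweight stand-in for the
# finetuned language models; file names and columns are placeholders.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.pipeline import make_pipeline

# Placeholder corpora: each CSV is assumed to have "text" and "label" columns.
datasets = {
    name: pd.read_csv(f"{name}.csv")
    for name in ["csc", "twitter_sarcasm", "reddit_sarcasm"]
}

for train_name, train_df in datasets.items():
    model = make_pipeline(TfidfVectorizer(min_df=2), LogisticRegression(max_iter=1000))
    model.fit(train_df["text"], train_df["label"])
    for test_name, test_df in datasets.items():
        score = f1_score(test_df["label"], model.predict(test_df["text"]))
        kind = "in-domain" if train_name == test_name else "cross-dataset"
        print(f"train={train_name:16s} test={test_name:16s} {kind:13s} F1={score:.3f}")
```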


  Hyewon Jang, Moritz Jakob, Diego Frassinelli, Context vs. Human Disagreement in Sarcasm Detection, Proceedings of the 4th Workshop on Figurative Language Processing (FigLang) @ NAACL 2024, Association for Computational Linguistics.

Summary

In this paper, we examine how much context is beneficial for sarcasm detection by humans and by language models. We also show how the subjectivity of sarcasm interacts with the amount of available context in sarcasm detection.


  Hyewon Jang, Bettina Braun, Diego Frassinelli, Contextual Factors that Trigger Sarcasm, under review at Metaphor and Symbol.

Summary

In this paper, we discuss the contextual factors that trigger sarcasm, based on three connected experiments. These experiments confirm our hypothesis that certain contextual factors motivate speakers to convey communicative functions traditionally associated with sarcasm, thereby triggering its use. We also investigate interlocutor dynamics connected to the use of sarcasm.


  Hyewon Jang*, Qi Yu*, Diego Frassinelli, Figurative Language Processing: A Linguistically Informed Feature Analysis of the Behavior of Language Models and Humans, Findings of the Association for Computational Linguistics: ACL 2023 (Findings @ ACL 2023).
    *equal contribution

Summary

In this paper, we investigate what happens under the hood when Transformer models and traditional machine learning models classify figurative language (sarcasm, simile, idiom, and metaphor). We perform feature analyses on these models to compare the behavior of different models and to identify tendencies shared by all models across types of figurative language. We further compare model behavior with human behavior and provide insight into the differing levels of complexity of different types of figurative language.
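
As a hedged illustration of one kind of feature analysis (not the authors' actual pipeline), the sketch below trains a simple linear classifier per figurative-language type and inspects its most influential lexical features; the file names and column layout are assumptions.

```python
# Illustrative feature analysis: per figurative-language type, fit a linear
# classifier and list the n-grams with the largest positive coefficients,
# i.e. the features most associated with the figurative class.
import numpy as np
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

for fig_type in ["sarcasm", "simile", "idiom", "metaphor"]:
    df = pd.read_csv(f"{fig_type}.csv")          # assumed "text" and "label" columns
    vec = CountVectorizer(ngram_range=(1, 2), min_df=2)
    X = vec.fit_transform(df["text"])
    clf = LogisticRegression(max_iter=1000).fit(X, df["label"])
    top = np.argsort(clf.coef_[0])[-10:][::-1]   # indices of the top 10 coefficients
    print(fig_type, [vec.get_feature_names_out()[i] for i in top])
```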


  Hyewon Jang, Bettina Braun, Diego Frassinelli, Intended and Perceived Sarcasm Between Close Friends: What Triggers Sarcasm and What Gets Conveyed?, Proceedings of the 45th Annual Conference of the Cognitive Science Society (CogSci 2023).

Summary

In this paper, we investigate what factors trigger sarcasm between close friends and whether the intentions behind a sarcastic comment are also conveyed to the listener. We answer our research questions with two connected experiments and analyze the results with linear mixed-effects models.
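
As a rough sketch of the kind of linear mixed-effects analysis mentioned above (the column names and variables are assumptions, not the paper's actual design), one could fit such a model with statsmodels: a fixed effect of a contextual factor on sarcasm ratings, with random intercepts per participant.

```python
# Minimal mixed-effects sketch with statsmodels; "ratings.csv" and its columns
# (sarcasm_rating, context_factor, participant) are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("ratings.csv")  # one row per experimental trial
model = smf.mixedlm("sarcasm_rating ~ context_factor",
                    data=df, groups=df["participant"])
result = model.fit()
print(result.summary())
```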


* In computational linguistics, proceedings papers at top-tier conferences (ACL, NAACL, EMNLP, etc.) are considered high-quality publications!