Businesses occasionally seek out and listen to feedback from their customers. Sometimes customer comments are sarcastic, which confuses artificial intelligences tasked with gauging sentiment in text. That's why computer scientists Ramya Akula and Ivan Garibay, with funding from DARPA, created a program that can reliably detect sarcasm. From the abstract of their article in the journal Entropy:
Inherent ambiguity in sarcastic expressions makes sarcasm detection very difficult. In this work, we focus on detecting sarcasm in textual conversations from various social networking platforms and online media. To this end, we develop an interpretable deep learning model using multi-head self-attention and gated recurrent units. The multi-head self-attention module aids in identifying crucial sarcastic cue-words from the input, and the recurrent units learn long-range dependencies between these cue-words to better classify the input text. We show the effectiveness of our approach by achieving state-of-the-art results on multiple datasets from social networking platforms and online media. Models trained using our proposed approach are easily interpretable and enable identifying sarcastic cues in the input text that contribute to the final classification score. We visualize the learned attention weights on a few sample input texts to showcase the effectiveness and interpretability of our model.
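For readers curious what "multi-head self-attention" actually computes, here's a minimal NumPy sketch of the mechanism the abstract refers to. This is not the authors' model, just the standard scaled dot-product self-attention it builds on; the dimensions, weights, and function names are illustrative. The returned attention weights are the sort of thing the paper visualizes to show which cue words drove a classification.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(x, w_q, w_k, w_v, n_heads):
    """Standard scaled dot-product self-attention (illustrative, not the paper's code).

    x: (seq_len, d_model) token embeddings; w_q/w_k/w_v: (d_model, d_model).
    Returns the attended output plus per-head attention weights, which can be
    inspected to see which tokens each head weights most heavily.
    """
    seq_len, d_model = x.shape
    d_head = d_model // n_heads
    # Project inputs and split into heads: (n_heads, seq_len, d_head).
    q = (x @ w_q).reshape(seq_len, n_heads, d_head).transpose(1, 0, 2)
    k = (x @ w_k).reshape(seq_len, n_heads, d_head).transpose(1, 0, 2)
    v = (x @ w_v).reshape(seq_len, n_heads, d_head).transpose(1, 0, 2)
    # Each head scores every token against every other token.
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d_head)  # (n_heads, seq, seq)
    weights = softmax(scores, axis=-1)                   # rows sum to 1
    # Weighted sum of values, then merge heads back together.
    out = (weights @ v).transpose(1, 0, 2).reshape(seq_len, d_model)
    return out, weights

# Toy example: 5 tokens, model dim 8, 2 heads, random projections.
rng = np.random.default_rng(0)
x = rng.normal(size=(5, 8))
w_q, w_k, w_v = (rng.normal(size=(8, 8)) for _ in range(3))
out, weights = multi_head_self_attention(x, w_q, w_k, w_v, n_heads=2)
print(out.shape)      # (5, 8)
print(weights.shape)  # (2, 5, 5)
```

In the full model described in the abstract, an output like `out` would then feed into gated recurrent units, which carry information across long spans of the sentence before a final classification layer scores the text as sarcastic or not.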
I'm sure it works well.