Social Commonsense Reasoning with Multi-Head Knowledge Attention

Abstract: Social Commonsense Reasoning requires an understanding of the text, knowledge about social events and their pragmatic implications, and commonsense reasoning skills. In this work, we propose a novel multi-head knowledge attention model that encodes semi-structured commonsense inference rules and learns to incorporate them into a transformer-based reasoning cell. We assess the model’s performance on two tasks that require different reasoning skills: Abductive Natural Language Inference and Counterfactual Invariance Prediction, which we introduce as a new task. Our proposed model improves performance over a strong state-of-the-art baseline (RoBERTa) on both reasoning tasks. Notably, we are, to the best of our knowledge, the first to demonstrate that a model that learns to perform counterfactual reasoning helps predict the best explanation in an abductive reasoning task. We validate the robustness of the model’s reasoning capabilities by perturbing the knowledge and by providing a qualitative analysis of the model’s knowledge incorporation capabilities.
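The abstract describes attention over encoded commonsense inference rules inside a transformer-based reasoning cell. The sketch below illustrates one plausible reading of that mechanism, not the authors' implementation: all names, dimensions, and the fusion scheme (KnowledgeAttentionCell, d_model, the concatenate-and-project layer) are illustrative assumptions.

```python
# Minimal sketch (assumed design, not the paper's code): contextual text
# representations attend over sentence-encoded inference rules via
# multi-head attention, and the attended knowledge is fused back into
# the text representation.
import torch
import torch.nn as nn

class KnowledgeAttentionCell(nn.Module):
    def __init__(self, d_model: int = 768, n_heads: int = 8):
        super().__init__()
        # Queries come from the text encoder; keys/values come from the
        # encoded semi-structured inference rules.
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.fuse = nn.Sequential(
            nn.Linear(2 * d_model, d_model),
            nn.ReLU(),
        )
        self.norm = nn.LayerNorm(d_model)

    def forward(self, text_states, rule_states, rule_mask=None):
        # text_states: (batch, seq_len, d_model) transformer outputs
        # rule_states: (batch, n_rules, d_model) encoded inference rules
        # rule_mask:   (batch, n_rules), True marks padding rules to ignore
        knowledge, _ = self.attn(
            query=text_states, key=rule_states, value=rule_states,
            key_padding_mask=rule_mask,
        )
        # Concatenate attended knowledge with the text states, project
        # back to d_model, and apply a residual connection + layer norm.
        fused = self.fuse(torch.cat([text_states, knowledge], dim=-1))
        return self.norm(text_states + fused)

# Usage: fuse five hypothetical encoded rules into RoBERTa-sized states.
cell = KnowledgeAttentionCell()
text = torch.randn(2, 32, 768)   # e.g., RoBERTa hidden states
rules = torch.randn(2, 5, 768)   # e.g., encoded inference rules
out = cell(text, rules)          # (2, 32, 768)
```

In this reading, each token's representation forms a query, so the model can select the inference rules most relevant to each part of the narrative; the paper's actual reasoning cell may combine text and knowledge differently.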

TL;DR: In this paper, we investigate social commonsense reasoning in narrative contexts. Specifically, we address two different reasoning tasks: language-based abductive reasoning and counterfactual invariance prediction.
