Understanding narrative and argumentative text often requires knowledge beyond what is stated in the text. As human readers, we are adept at distilling such texts and drawing inferences by applying commonsense knowledge. Although neural machine reading and understanding has made significant progress with powerful language models, a performance gap between machines and humans remains, especially on tasks that require implicit knowledge. One reason is that commonsense knowledge is rarely stated explicitly in natural language text. We address this issue by leveraging knowledge from resources such as ConceptNet. However, identifying contextually relevant information in such a large knowledge base is a non-trivial task. In this work, we propose an effective two-step method for extracting relevant multi-hop knowledge paths from the chosen knowledge resource that connect concepts in a given textual sequence: (i) collect all potentially relevant knowledge relations among concepts that appear in the text into a subgraph; (ii) rank, filter, and select high-quality multi-hop paths using graph-based local measures and graph centrality algorithms. We further propose a neural model that uses a gated attention mechanism to incorporate the selected multi-hop commonsense knowledge paths. We evaluate our model on two different tasks: (i) argumentation relation classification, i.e., determining whether a support or attack relation holds between two arguments, and (ii) predicting the human needs of story characters in a narrative (a multi-label classification task). Our model shows considerable improvements over strong knowledge-agnostic baselines and offers interpretability through the learned attention maps over commonsense knowledge paths.
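To make the two-step extraction concrete, here is a minimal sketch in Python using networkx. It assumes the knowledge base (e.g., ConceptNet) is given as (head, relation, tail) triples; the helper names, the ego-graph radius, and the use of PageRank averaged over path nodes as the score are illustrative assumptions, not the paper's exact implementation.

```python
import itertools
import networkx as nx

def build_subgraph(kb_triples, text_concepts, radius=2):
    """Step (i): collect potentially relevant KB relations around the
    concepts mentioned in the text. kb_triples is an iterable of
    (head, relation, tail) triples; parallel relations between the same
    concept pair are collapsed here for simplicity."""
    g = nx.Graph()
    for head, rel, tail in kb_triples:
        g.add_edge(head, tail, rel=rel)
    keep = set()
    for concept in text_concepts:
        if concept in g:
            # Keep every node within `radius` hops of a text concept.
            keep |= set(nx.ego_graph(g, concept, radius=radius).nodes)
    return g.subgraph(keep).copy()

def rank_paths(subgraph, text_concepts, cutoff=3, top_k=10):
    """Step (ii): enumerate multi-hop paths between concept pairs and
    score them. PageRank centrality averaged over path nodes stands in
    for the paper's combination of local measures and centrality."""
    centrality = nx.pagerank(subgraph)
    scored = []
    for src, tgt in itertools.combinations(text_concepts, 2):
        if src not in subgraph or tgt not in subgraph:
            continue
        for path in nx.all_simple_paths(subgraph, src, tgt, cutoff=cutoff):
            score = sum(centrality[n] for n in path) / len(path)
            scored.append((score, path))
    scored.sort(key=lambda s: s[0], reverse=True)
    return [path for _, path in scored[:top_k]]
```

Ranking before selection is what keeps the method tractable: rather than handing the neural model every connection in the subgraph, only the top-scoring multi-hop paths survive the filtering step.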

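The gated attention idea can likewise be sketched in a few lines of numpy: attention weights over encoded paths yield the interpretable attention map, and a sigmoid gate controls how much of the attended knowledge flows into the text representation. Parameter names and shapes below are assumptions; the paper's exact parameterisation may differ.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def attend_and_gate(context, path_encodings, Wa, Wg, bg):
    """Attend over encoded knowledge paths, then gate how much of the
    attended knowledge enters the text representation.
    context: (d,) encoded text; path_encodings: (n, d) encoded paths;
    Wa: (d, d); Wg: (d, 2d); bg: (d,). All names/shapes are assumed."""
    scores = path_encodings @ Wa @ context       # one score per path
    alpha = softmax(scores)                      # interpretable attention map
    knowledge = alpha @ path_encodings           # attention-weighted knowledge
    gate = sigmoid(Wg @ np.concatenate([context, knowledge]) + bg)
    fused = gate * context + (1.0 - gate) * knowledge
    return fused, alpha
```

The returned attention weights `alpha` are what give the model its interpretability: inspecting them shows which commonsense paths the model relied on for a given prediction.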