Sebastian Pado & Tae-Gil Noh

Language & Computation

Week One - 11.00-12.30 - Level: I

Room: 7E02

Abstract:

Textual Entailment captures a common-sense notion of entailment between two natural language texts, P (the premise) and H (the hypothesis). It is defined to hold if "a human reading P would infer that H is most likely true" (Dagan et al. 2005). The relevance of Textual Entailment lies in its promise to (i) subsume a substantial share of the semantic processing in a range of NLP tasks, including Information Extraction (IE), Question Answering (QA), MT Evaluation, and Summarization, and (ii) provide a notion of entailment that is not tied to a particular representation but serves as a "common ground" for comparing and contrasting semantic processing mechanisms in an end-to-end setting.

The goal of this course is to give participants detailed knowledge of state-of-the-art theories and concepts in textual entailment recognition. This includes the typology of the major algorithmic approaches, the relevant linguistic phenomena, the application of textual entailment to NLP tasks, and the acquisition of inference knowledge from various sources.
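To make the task setup concrete, here is a minimal illustrative sketch (not part of the course materials) of the input a textual entailment system receives, a (P, H) pair, together with a deliberately naive lexical-overlap baseline of the kind often used as a point of comparison. All function names and the threshold value are assumptions for illustration only.

```python
# Illustrative sketch: a naive lexical-overlap baseline for
# recognizing textual entailment. It labels a (P, H) pair as
# entailment when most of H's content words also occur in P.
# This is NOT a course algorithm -- just a toy reference point.

def tokens(text):
    """Lowercase word tokens with surrounding punctuation stripped."""
    return {w.strip(".,;:!?\"'()").lower() for w in text.split()} - {""}

def entails(premise, hypothesis, threshold=0.8):
    """True if at least `threshold` of the hypothesis tokens appear in the premise."""
    p, h = tokens(premise), tokens(hypothesis)
    if not h:
        return True  # an empty hypothesis is trivially entailed
    return len(h & p) / len(h) >= threshold

P = "Google acquired YouTube in 2006 for $1.65 billion."
print(entails(P, "Google acquired YouTube."))    # full overlap -> entailment
print(entails(P, "Microsoft acquired Skype."))   # little overlap -> no entailment
```

Note that such a baseline ignores word order entirely, so it would also (wrongly) accept "YouTube acquired Google." as entailed. This is precisely the kind of failure that motivates the richer algorithmic approaches and linguistic phenomena covered in the course.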