As summarized by Gardent and Webber (2001), there are two main current approaches to employing automated reasoning in natural language understanding.

The first approach uses automated reasoning to filter out unwanted interpretations of a text (see, for example, Bos, 2009). The main criterion for identifying and rejecting unwanted readings is consistency: a set of first-order formulas Φ is consistent if and only if there is no formula φ such that Φ entails both φ and ¬φ.
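The definition can be made concrete with a small sketch. Since first-order consistency is undecidable in general, the illustration below restricts itself to the decidable propositional fragment and checks consistency by exhaustive enumeration of truth assignments; the formula encoding is an assumption made for this example, not a representation used in the cited systems.

```python
from itertools import product

# Formulas are nested tuples: ("var", "p"), ("not", f),
# ("and", f, g), ("or", f, g), ("implies", f, g).

def evaluate(formula, assignment):
    """Evaluate a propositional formula under a truth assignment."""
    op = formula[0]
    if op == "var":
        return assignment[formula[1]]
    if op == "not":
        return not evaluate(formula[1], assignment)
    if op == "and":
        return evaluate(formula[1], assignment) and evaluate(formula[2], assignment)
    if op == "or":
        return evaluate(formula[1], assignment) or evaluate(formula[2], assignment)
    if op == "implies":
        return (not evaluate(formula[1], assignment)) or evaluate(formula[2], assignment)
    raise ValueError(f"unknown operator: {op}")

def variables(formula):
    """Collect the variable names occurring in a formula."""
    if formula[0] == "var":
        return {formula[1]}
    return set().union(*(variables(sub) for sub in formula[1:]))

def consistent(phi):
    """Φ is consistent iff some assignment satisfies every formula in Φ."""
    var_names = sorted(set().union(*(variables(f) for f in phi)))
    return any(
        all(evaluate(f, dict(zip(var_names, values))) for f in phi)
        for values in product([True, False], repeat=len(var_names))
    )
```

For instance, {p, p → q} is consistent, while adding ¬q to it yields an inconsistent set, since the set then entails both q and ¬q.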

Another approach to integrating automated deduction into NLU is to construct a more specific representation in the course of proving the underspecified one (see Bos, 2003; Cimiano, 2003). This approach relies on a model builder – a program that takes a set of logical formulas Φ and tries to construct a model that satisfies Φ. It has the advantage of providing a consistency check “for free”, since the only models built are both logically consistent and consistent with world knowledge. (As several researchers have noted, model building is similar to the use of abduction for NLU introduced in the next section; see Baumgartner and Kühn, 2000; Gardent and Konrad, 2000.)
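A minimal sketch of model building, assuming a toy first-order fragment with unary predicates only (real model builders such as those used in the cited work handle full first-order input): a model fixes a finite domain and an extension for each predicate, and the builder searches domains of increasing size for an interpretation satisfying all constraints.

```python
from itertools import product

def build_model(predicates, constraints, max_size=4):
    """Search for a finite model: return (domain, interpretation) or None.

    Each constraint is a closed formula encoded as a Python check over
    (domain, interpretation); an interpretation maps each predicate name
    to the subset of the domain where it holds.
    """
    for size in range(1, max_size + 1):
        domain = list(range(size))
        # Try every assignment of extensions (subsets of the domain,
        # encoded as boolean tuples) to the predicates.
        for extensions in product(product([False, True], repeat=size),
                                  repeat=len(predicates)):
            interp = {pred: {d for d, holds in zip(domain, ext) if holds}
                      for pred, ext in zip(predicates, extensions)}
            if all(check(domain, interp) for check in constraints):
                return domain, interp
    return None  # no model within the size bound

# ∃x. man(x) together with ∀x. man(x) → mortal(x):
constraints = [
    lambda dom, i: any(x in i["man"] for x in dom),
    lambda dom, i: all(x in i["mortal"] for x in i["man"]),
]
model = build_model(["man", "mortal"], constraints)
```

An inconsistent input such as {∃x man(x), ∀x ¬man(x)} admits no model at any domain size, so the search fails – which is exactly how model building doubles as a consistency check.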

Blackburn and Bos (2005) show that state-of-the-art automated deduction can be of practical use in checking consistency for at least some classes of linguistic problems if theorem provers and model builders are run in parallel. To check the consistency of φ, one simultaneously asks the theorem prover to prove ¬φ and the model builder to construct a model that satisfies φ. If the theorem prover succeeds in proving ¬φ, then φ is inconsistent. If the model builder succeeds in building a model of φ, then φ is satisfiable and therefore consistent.
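The parallel architecture can be sketched as follows. This is an illustrative assumption, not Blackburn and Bos's actual system: for first-order logic a prover may run forever on a satisfiable input and a builder forever on an unsatisfiable one, which is why both are launched at once and whichever succeeds decides. Here φ is propositional, given as a Python function over a truth assignment, so both stand-ins terminate; the point is the division of labour.

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import product

def prove_negation(phi, var_names):
    """Prover stand-in: ¬φ is provable iff no assignment satisfies φ."""
    return all(not phi(dict(zip(var_names, values)))
               for values in product([True, False], repeat=len(var_names)))

def build_model(phi, var_names):
    """Builder stand-in: return an assignment satisfying φ, or None."""
    for values in product([True, False], repeat=len(var_names)):
        assignment = dict(zip(var_names, values))
        if phi(assignment):
            return assignment
    return None

def check_consistency(phi, var_names):
    """Run prover and builder in parallel; report whichever succeeds."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        prover = pool.submit(prove_negation, phi, var_names)
        builder = pool.submit(build_model, phi, var_names)
        model = builder.result()
        if model is not None:
            return "consistent", model    # φ is satisfiable
        if prover.result():
            return "inconsistent", None   # ¬φ is provable
```

For example, p ∧ ¬p is reported inconsistent, while p ∨ q is reported consistent together with a witnessing model.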