
Cambria & White (2014)[1] describe lexical affinity as an approach that:

assigns to arbitrary words a probabilistic ‘affinity’ for a particular category. [...] For example, ‘accident’ might be assigned a 75% probability of indicating a negative event, as in ‘car accident’ or ‘hurt in an accident’.
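As a concrete illustration, the sketch below scores a sentence by averaging the word-level affinity probabilities found in a small hand-built lexicon. The lexicon values and the negative_affinity function are hypothetical, chosen only to mirror the ‘accident’ example from the quote; they are not taken from Cambria & White (2014) or any particular system.

```python
# Minimal sketch of word-level lexical affinity (hypothetical lexicon values).
NEGATIVE_AFFINITY = {
    "accident": 0.75,  # as in 'car accident', 'hurt in an accident'
    "crash": 0.80,
    "hurt": 0.70,
    "party": 0.05,
}

def negative_affinity(sentence: str) -> float:
    """Average the negative-event probability of the lexicon words found in a sentence."""
    words = sentence.lower().split()
    known = [NEGATIVE_AFFINITY[w] for w in words if w in NEGATIVE_AFFINITY]
    return sum(known) / len(known) if known else 0.0

print(negative_affinity("hurt in an accident"))  # (0.70 + 0.75) / 2 = 0.725
```

Because the score is built from isolated word lookups, nothing in the sentence's structure (negation, idiom, context) can change it, which is exactly the weakness discussed below.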

Drawbacks

First, because lexical affinity operates purely at the word level, it is easily misled by sentences such as “I avoided an accident” (negation) and “I met my girlfriend by accident” (where ‘accident’ connotes an unplanned but pleasant surprise).
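Running the hypothetical sketch above on such a sentence makes the problem concrete: negative_affinity("i avoided an accident") still returns 0.75, because the lookup only ever sees the word ‘accident’ and never the negating verb.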

Second, lexical affinity probabilities are often biased toward text of a particular genre, dictated by the source of the linguistic corpora. This makes it difficult to develop a re-usable, domain-independent model.

Application

Lexical affinity has mainly been applied in sentiment analysis and emotion recognition, where affective lexicons such as the Affective Norms for English Words (ANEW) supply per-word ratings of emotional content[2].

References

  1. Cambria, E., & White, B. (2014). Jumping NLP curves: A review of natural language processing research. IEEE Computational Intelligence Magazine, 9(2), 48-57.
  2. Stevenson, R. A., Mikels, J. A., & James, T. W. (2007). Characterization of the affective norms for English words by discrete emotional categories. Behavior Research Methods, 39(4), 1020-1024.