In NLP, tasks that are defined at the sentence level (e.g. syntactic parsing, explicit semantic role labeling, named-entity recognition) are typically also solved at the sentence level: systems perform the same whether a sentence appears inside a document or in isolation. Conversely, tasks that are inherently global (e.g. coreference resolution) are solved with very little contribution (if any) from local context. Some lexical tasks, such as word sense disambiguation and entity linking, are solved with either global or local information, but not both.

There are a few exceptions that model topical (document-level) information to help local tasks:

  • Le and Mikolov (2014)[1] use paragraph vectors with feed-forward neural networks to improve sentiment classification (a sketch of this setup follows the list).
  • Rei (2015)[2] uses document vectors with recurrent neural networks to improve language modeling (see the second sketch below).
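
Below is a minimal sketch of the first approach: paragraph vectors (via gensim's Doc2Vec) feeding a small feed-forward classifier. The toy corpus, hyperparameters, and the choice of gensim and scikit-learn are illustrative assumptions, not the exact setup of Le and Mikolov (2014).

    # Sketch: paragraph vectors + feed-forward classifier for sentiment.
    # Toy data and hyperparameters are assumptions for illustration only.
    from gensim.models.doc2vec import Doc2Vec, TaggedDocument
    from sklearn.neural_network import MLPClassifier

    # Hypothetical toy corpus: (tokens, sentiment label) pairs.
    corpus = [
        ("a wonderful and moving film".split(), 1),
        ("dull plot and wooden acting".split(), 0),
        ("an absolute joy to watch".split(), 1),
        ("a tedious waste of two hours".split(), 0),
    ]

    # Learn a paragraph (document) vector for each training document.
    tagged = [TaggedDocument(words, [i]) for i, (words, _) in enumerate(corpus)]
    doc2vec = Doc2Vec(tagged, vector_size=50, min_count=1, epochs=40)

    # Feed-forward network trained on top of the paragraph vectors.
    X = [doc2vec.dv[i] for i in range(len(corpus))]
    y = [label for _, label in corpus]
    clf = MLPClassifier(hidden_layer_sizes=(25,), max_iter=2000).fit(X, y)

    # Classify an unseen document by inferring its paragraph vector.
    vec = doc2vec.infer_vector("a joy from start to finish".split())
    print(clf.predict([vec]))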

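The second approach can be sketched as a recurrent language model conditioned on a document vector. The PyTorch module below simply concatenates a fixed document vector to each word embedding before the LSTM; note that Rei (2015) additionally updates the document representation online as text is processed, which this simplified sketch omits.

    # Sketch: an RNN language model conditioned on a document vector,
    # in the spirit of Rei (2015). PyTorch is an assumed choice; this
    # is not the paper's implementation.
    import torch
    import torch.nn as nn

    class DocConditionedLM(nn.Module):
        def __init__(self, vocab_size, emb_dim=64, doc_dim=32, hidden=128):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb_dim)
            self.lstm = nn.LSTM(emb_dim + doc_dim, hidden, batch_first=True)
            self.out = nn.Linear(hidden, vocab_size)

        def forward(self, tokens, doc_vec):
            # tokens: (batch, seq_len); doc_vec: (batch, doc_dim)
            emb = self.embed(tokens)
            # Broadcast the document vector across every time step.
            doc = doc_vec.unsqueeze(1).expand(-1, emb.size(1), -1)
            hidden_states, _ = self.lstm(torch.cat([emb, doc], dim=-1))
            return self.out(hidden_states)  # next-word logits per step

    # Toy usage with random data.
    model = DocConditionedLM(vocab_size=100)
    tokens = torch.randint(0, 100, (2, 7))
    doc_vec = torch.randn(2, 32)
    print(model(tokens, doc_vec).shape)  # torch.Size([2, 7, 100])
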
References

  1. Quoc Le and Tomas Mikolov. 2014. Distributed Representations of Sentences and Documents. In Proceedings of the 31st International Conference on Machine Learning (ICML), volume 32.
  2. Marek Rei. 2015. Online Representation Learning in Recurrent Neural Language Models. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP), Lisbon, Portugal.