
English

| System | Syntax feature? [note 1] | Semantic role feature? | Semantic type feature? [note 2] | Word-window feature? | Mention-pair? | Entity-mention? | Mention-ranking? | Cluster-ranking? | Cluster-pair? | Rule-based? | Base ML model | Integer linear programming? | Reference | Notes |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| cort | Yes [note 3] | No | Yes [note 4] | Yes | Yes | Yes | No | No | No | | perceptron | No | Martschat and Strube (2015)[1] | |
| nn_coref | Yes [note 5] | No | Yes [note 6] | No | Yes | No | No | No | No | | neural net (RNN for encoding clusters) | No | Wiseman et al. (2016)[2] | |
| huggingface's neural coref | Yes | No | Yes [note 7] | Yes [note 8] | No | No | Yes | No | No | No | neural net | No | Clark and Manning (2016)[3] | Medium post implementation |
| deep-coref (CoreNLP) | Yes | No | Yes [note 7] | Yes [note 8] | Yes | No | Yes | Yes | No | No | neural net | No | Clark and Manning (2016)[4] | |
| hcoref (Hybrid Coref) | Yes [note 9] | No | Yes [note 10] | No | No | Yes? | No | No | Yes? | Yes [note 11] | random forest [note 11] | No | Lee et al. (2017a)[5] | |
| dcoref (Stanford sieve) | Yes | No | Yes [note 12] | No | No | No | No | No | Yes | Yes | None | No | Lee et al. (2013)[6] | Part of Stanford CoreNLP |
| Berkeley CR | No | No | Yes | No | No | Yes [note 13] | Yes [note 14] | No | No | No | log-linear | No | Durrett and Klein (2013)[7] | |
| Illinois CR | No | Yes | | | | | | | | | | | | |
| xrenner | Yes | | | | | | | | | | None | No | Zeldes and Zhang (2016)[8] | |
| e2e-coref | No | No | No | No | No | No | Yes | No | No | No | feed-forward NN + LSTM + attention | No | Lee et al. (2017b)[9] | |
| allennlp | No | No | No | No | No | No | Yes | No | No | No | feed-forward NN + LSTM + attention | No | | A reimplementation of e2e-coref with some changes |
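
The model-structure columns above (mention-pair, entity-mention, mention-ranking, cluster-ranking, cluster-pair) describe how a system turns scores into coreference decisions rather than which features it uses. The snippet below is a minimal sketch of the two most common settings; it is not taken from any system in the table, and `Mention`, `score`, and the toy scorer are placeholders for whatever base ML model (perceptron, log-linear, neural net) a system plugs in.

```python
# Minimal illustration of "mention-pair" vs. "mention-ranking" decisions.
# Nothing here is from the listed systems; `score` stands in for the base ML model.
from typing import Callable, List, Optional, Tuple

Mention = Tuple[int, int]  # a (start, end) token span, purely for illustration


def mention_pair_decisions(mentions: List[Mention],
                           score: Callable[[Mention, Mention], float],
                           threshold: float = 0.0) -> List[Tuple[Mention, Mention]]:
    """Mention-pair: every (antecedent, mention) pair gets an independent
    coreferent/not-coreferent decision; clustering happens in a later step."""
    links = []
    for j, mention in enumerate(mentions):
        for antecedent in mentions[:j]:
            if score(antecedent, mention) > threshold:
                links.append((antecedent, mention))
    return links


def mention_ranking_decisions(mentions: List[Mention],
                              score: Callable[[Optional[Mention], Mention], float]
                              ) -> List[Tuple[Optional[Mention], Mention]]:
    """Mention-ranking: all candidate antecedents of a mention compete and only
    the highest-scoring one is kept; None plays the role of the "no antecedent"
    dummy candidate."""
    links = []
    for j, mention in enumerate(mentions):
        candidates: List[Optional[Mention]] = [None] + mentions[:j]
        best = max(candidates, key=lambda a: score(a, mention))
        links.append((best, mention))
    return links


if __name__ == "__main__":
    # Toy scorer: prefer the closest antecedent, with a fixed penalty for "no antecedent".
    def toy_score(antecedent, mention):
        return -10.0 if antecedent is None else -abs(mention[0] - antecedent[0])

    spans = [(0, 1), (3, 4), (7, 8)]
    print(mention_ranking_decisions(spans, toy_score))
```

Entity-mention, cluster-ranking, and cluster-pair models differ from the above mainly in what is being scored: a partially built cluster (or a pair of clusters) rather than a single antecedent mention.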

German

| System | Syntax feature? [note 1] | Semantic role feature? | Semantic type feature? [note 2] | Word-window feature? | Mention-pair? | Entity-mention? | Mention-ranking? | Cluster-ranking? | Cluster-pair? | Rule-based? | Base ML model | Integer linear programming? | Reference | Notes |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| CorZu | | | | | | | | | | | | | | University of Zurich |
| HotCoref | | | | | | | | | | | | | | University of Stuttgart |

Notes

  1. Meaning the syntactic relation between mentions or between a mention and its surrounding words. Head word features (which may come from a parser) are not considered syntactic features.
  2. Unlike semantic role features, these are features of a mention on its own: semantic type (person/object/number), NER type (person/location/organization), or other taxonomies.
  3. deprel: dependency relation of a mention to its governor
  4. sem_class: one of 'PERSON', 'OBJECT', 'NUMERIC', and 'UNKNOWN'; head_ner: the named entity tag of the mention's head word
  5. From syntactic ancestry features in BASIC+ (Wiseman et al. 2015)
  6. From entity type features in BASIC+ (Wiseman et al. 2015)
  7. In Clark and Manning (2016): "The type of the mention (pronoun, nominal, proper, or list)"
  8. From Clark and Manning (2016): "first word, last word, two preceding words, and two following words of the mention. Averaged word embeddings of the five preceding words, five following words, all words in the mention, all words in the mention’s sentence, and all words in the mention’s document." A rough sketch of this kind of feature appears after this list.
  9. Feature: "The path in the parse tree from the root to the (antecedent/anaphor)"
  10. Feature: "named entity type attributes of (antecedent/anaphor)"
  11. They combine rule-based and statistical classifiers.
  12. "NER label – fromthe Stanford NER"
  13. TRANSITIVE model: "each mention to maintain its own distributions over values for a number of properties; these properties could include gender, named-entity type, or semantic class. Then, we will require each anaphoric mention to agree with its antecedent on the value of each of these properties"
  14. BASIC model: "This approach is similar to the mention-ranking model of Rahman and Ng (2009)."
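
The word-window features quoted in note 8 amount to looking up and averaging word vectors in fixed windows around a mention. The sketch below illustrates that computation only; the function name, the token/span representation, and the `embeddings` dictionary are assumptions made for the example, not Clark and Manning's code, and it covers just a subset of the features they list.

```python
# Rough sketch of word-window embedding features in the spirit of note 8.
# `tokens` is a list of words, (start, end) is a mention span, and `embeddings`
# maps a word to a NumPy vector; all of these are assumed inputs for the example.
import numpy as np


def window_features(tokens, start, end, embeddings, dim=50):
    def vec(word):
        return embeddings.get(word, np.zeros(dim))  # unknown words back off to zeros

    def avg(words):
        return np.mean([vec(w) for w in words], axis=0) if words else np.zeros(dim)

    return {
        "first_word": vec(tokens[start]),
        "last_word": vec(tokens[end - 1]),
        "avg_prev_5": avg(tokens[max(0, start - 5):start]),  # five preceding words
        "avg_next_5": avg(tokens[end:end + 5]),               # five following words
        "avg_mention": avg(tokens[start:end]),                # all words in the mention
        "avg_sentence": avg(tokens),  # the token list stands in for the sentence here
    }
```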

References

  1. Sebastian Martschat and Michael Strube. 2015. Latent structures for coreference resolution. Transactions of the Association for Computational Linguistics, 3:405–418.
  2. Sam Wiseman, Alexander M. Rush, and Stuart M. Shieber. 2016. Learning global features for coreference resolution. arXiv preprint arXiv:1604.03035.
  3. Kevin Clark and Christopher D. Manning. 2016. Deep reinforcement learning for mention-ranking coreference models. In Proceedings of EMNLP 2016, 2256–2262.
  4. Kevin Clark and Christopher D. Manning. 2016. Improving coreference resolution by learning entity-level distributed representations. In Proceedings of ACL 2016, 643–653. https://doi.org/10.18653/v1/P16-1061
  5. Heeyoung Lee, Mihai Surdeanu, and Dan Jurafsky. 2017. A scaffolding approach to coreference resolution integrating statistical and rule-based models. Natural Language Engineering, 1–30.
  6. Heeyoung Lee, Angel Chang, Yves Peirsman, Nathanael Chambers, Mihai Surdeanu, and Dan Jurafsky. 2013. Deterministic coreference resolution based on entity-centric, precision-ranked rules. Computational Linguistics, 39(4).
  7. Greg Durrett and Dan Klein. 2013. Easy victories and uphill battles in coreference resolution. In Proceedings of EMNLP 2013, 1971–1982.
  8. Amir Zeldes and Shuo Zhang. 2016. When annotation schemes change rules help: A configurable approach to coreference beyond OntoNotes. In Proceedings of the Workshop on Coreference Resolution Beyond OntoNotes (CORBON) at NAACL 2016, 92–101.
  9. Kenton Lee, Luheng He, Mike Lewis, and Luke Zettlemoyer. 2017. End-to-end neural coreference resolution. In Proceedings of EMNLP 2017.