Prevalence: According to Gerber and Chai (2012)[1], implicit arguments are frequent. Given the predicates in a document, there is a fixed number of possible arguments that can be filled according to NomBank's predicate role sets; role coverage is defined as the fraction of these roles that are actually filled by constituents in the text. Using NomBank as a baseline, they found that role coverage increases by 71% when implicit arguments are taken into consideration.
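
The role-coverage notion can be made concrete with a toy computation (a minimal sketch; the counts below are invented for illustration, not taken from the paper):

```python
# Role coverage = (roles actually filled in the text) /
#                 (roles fillable according to the predicates' NomBank role sets).
# The instance counts here are made up.

def role_coverage(instances):
    """instances: list of (possible_roles, filled_roles) pairs, one per predicate instance."""
    possible = sum(p for p, _ in instances)
    filled = sum(f for _, f in instances)
    return filled / possible if possible else 0.0

explicit_only = role_coverage([(4, 2), (3, 1)])   # only NomBank-annotated (explicit) fillers
with_implicit = role_coverage([(4, 3), (3, 2)])   # after adding implicit fillers
print(f"{(with_implicit - explicit_only) / explicit_only:.0%} relative increase")
```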

Definition

Gerber & Chai (2012)[2] define an implicit argument as "any argument that is not annotated by NomBank", that is:

  • arguments in a different sentence than the predicate
  • arguments in a syntactic structure that is not covered by NomBank annotation ("NomBank annotates arguments in the noun phrase headed by the predicate as well as arguments brought in by so-called support verb structures. See Meyers (2007) for details.")

Some work only considers arguments beyond the sentence boundary, for example Laparra & Rigau (2013)[3] and SemEval-2010 (Ruppenhofer et al., 2010)[4].

Example

"A SEC proposal to ease [arg1 reporting] [predicate requirements] [arg2 for some company executives] would undermine the usefulness of information on insider trades, professional money managers contend."

TODO: Laparra & Rigau (2013)[3]: "Traditionally, Semantic Role Labeling (SRL) systems have focused in searching the fillers of those explicit roles appearing within sentence boundaries (Gildea and Jurafsky, 2000, 2002; Carreras and Marquez, 2005; Surdeanu et al., 2008; Hajič et al., 2009). These systems limited their search-space to the elements that share a syntactical relation with the predicate. However, when the participants of a predicate are implicit this approach obtains incomplete predicative structures with null arguments. The following example includes the gold-standard annotations for a traditional SRL process:

(1) [arg0 The network] had been expected to have [np losses] [arg1 of as much as $20 million] [arg3 on baseball this year]. It isn’t clear how much those [np losses] may widen because of the short Series.

The previous analysis includes annotations for the nominal predicate loss based on the NomBank structure (Meyers et al., 2004). In this case the annotator identifies, in the first sentence, the arguments arg0, the entity losing something, arg1, the thing lost, and arg3, the source of that loss. However, in the second sentence there is another instance of the same predicate, loss, but in this case no argument has been associated with it. Traditional SRL systems facing this type of examples are not able to fill the arguments of a predicate because their fillers are not in the same sentence of the predicate. Moreover, these systems also let unfilled arguments occurring in the same sentence, like in the following example:

(2) Quest Medical Inc said it adopted [arg1 a shareholders’ rights] [np plan] in which rights to purchase shares of common stock will be distributed as a dividend to shareholders of record as of Oct 23.

For the predicate plan in the previous sentence, a traditional SRL process only returns the filler for the argument arg1, the theme of the plan. However, in both examples, a reader could easily infer the missing arguments from the surrounding context of the predicate, and determine that in (1) both instances of the predicate share the same arguments and in (2) the missing argument corresponds to the subject of the verb that dominates the predicate, Quest Medical Inc. Obviously, this additional annotations could contribute positively to its semantic analysis. In fact, Gerber and Chai (2010) pointed out that implicit arguments can increase the coverage of argument structures in NomBank by 71%. However, current automatic systems require large amounts of manually annotated training data for each predicate. The effort required for this manual annotation explains the absence of generally applicable tools. This problem has become a main concern for many NLP tasks. This fact explains a new trend to develop accurate unsupervised systems that exploit simple but robust linguistic principles (Raghunathan et al., 2010)."

Datasets

There are two main datasets for iSRL: SemEval-2010 and Beyond NomBank, both of which are small.

SemEval-2010

Statistics of these corpora are tabulated in Feizabadi & Pado (2015).

From Feizabadi & Pado (2015)[5]:

"Ruppenhofer et al. Arguably the first corpus with a substantial set of annotations for implicit roles was created for SemEval 2010 Task 10 (Ruppenhofer et al., 2010). This dataset covers a number of chapters from Arthur Conan Doyle short stories and provides full-text annotation of both explicit and implicit se- mantic roles. The texts were annotated manually with FrameNet roles. This dataset is a de-facto standard benchmark for implicit SRL.

Beyond NomBank (Gerber and Chai, 2012)

A study by Gerber and Chai (2012) investigated implicit arguments of NomBank nominalizations. They extended a part of the PropBank corpus with implicit roles for 10 nominal predicates, of which they annotated all instances.

ON5V

Further Corpora with Implicit Role Annotation. Moor et al. (2013)[6] created a corpus with all annotated instances for five verbs with the goal of focused improvement of implicit SRL. Feizabadi & Pado (2014) investigated the use of crowdsourcing to create annotations for implicit roles. Both corpora are more restricted in size and scope than the first two."

The corpus was not released immediately after the publication of its reference paper; since around 2017 it has been available for download.

Other corpora

A freely available iSRL corpus for Spanish is Iarg-AnCora (Taulé et al., 2016)[7].

Kilicoglu (2016)[8] works with the "CDR corpus that was used in the BioCreative V CID task (Wei et al., 2016)[9]".

Approaches

iSRL as anaphora resolution

TODO: From Silberer and Frank (2012)[10]: "Computational treatments of zero anaphora (e.g., Imamura et al. (2009)) are in fact employing techniques well-known from SRL."

The consensus is that iSRL should be treated as a special case of coreference resolution (TODO: what do people say about bridging reference and zero anaphora?). Papers that commit to this view include Dahl et al. (1987)[11], Laparra & Rigau (2013)[3], and Tonelli & Delmonte (2010, p. 298)[12].

TODO: also Ruppenhofer et al. (2011)[13], Laparra and Rigau (2012)[14], Gorinski et al. (2013)[15], and Laparra and Rigau (2013)[16].

This view was in fact applied when annotating the SemEval-2010 dataset: "We adopted ideas from the annotation of co-reference information, linking locally unrealized roles to all mentions of the referents in the surrounding discourse, where available." (Ruppenhofer et al., 2010)[4]

One exception: "SEMAFOR (Chen et al., 2010) is a supervised system that extended an existing semantic role labeler to enlarge the search window to other sentences, replacing the features defined for regular arguments with two new semantic features." (Laparra & Rigau, 2013)[3] An important observation is that "DNI identification suffers from low recall" (Chen et al., 2010, p. 267)[17]: only 21 cases were predicted as DNI compared to 1053 INI, while the gold data contains roughly equal numbers of DNIs and INIs (348 and 353) (Chen et al., 2010, p. 266)[17].

Machine learning models

  • Naïve Bayes: Feizabadi & Pado (2015)[5]
  • BayesNet: Silberer and Frank (2012)[10], Roth and Frank (2013)[18], Roth and Frank (2015)[19]
  • Memory-based: Schenk et al. (2015)[20]
  • No ML, just prototypical vectors: Schenk and Chiarcos (2016)[21]
  • No ML, just compute observed frequency: Laparra and Rigau (2012)[14]
  • No ML, just heuristics: Laparra & Rigau (2013)[3]; ensemble of heuristics (including vector-based ones): Gorinski et al. (2013)[22]
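
Most of the supervised systems above share the same overall setup: for each predicate instance with an unfilled role, extract candidate constituents from the current and preceding sentences, describe each (predicate, role, candidate) triple with features, and let a classifier score the candidates. A minimal sketch of that setup using scikit-learn's Bernoulli Naive Bayes (the feature names and values are invented placeholders, not the feature set of any cited system):

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.naive_bayes import BernoulliNB
from sklearn.pipeline import make_pipeline

# Training examples: features of (predicate, role, candidate) triples with a binary
# label indicating whether the candidate fills the role. All features are made up.
train_X = [
    {"pred": "loss", "role": "arg0", "cand_head": "network", "sent_dist": 0, "cand_is_subject": True},
    {"pred": "loss", "role": "arg0", "cand_head": "series",  "sent_dist": 1, "cand_is_subject": False},
]
train_y = [1, 0]

model = make_pipeline(DictVectorizer(sparse=False), BernoulliNB())
model.fit(train_X, train_y)

# Prediction: score every candidate for a missing role and keep the best-scoring one
# (real systems usually also allow "no filler" via a confidence threshold).
candidates = [
    {"pred": "loss", "role": "arg0", "cand_head": "network",  "sent_dist": 1, "cand_is_subject": True},
    {"pred": "loss", "role": "arg0", "cand_head": "baseball", "sent_dist": 1, "cand_is_subject": False},
]
scores = model.predict_proba(candidates)[:, 1]
print(candidates[scores.argmax()], scores.max())
```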

Dealing with data sparsity

Silberer and Frank (2012)[10] converted OntoNotes 3.0 into the FrameNet formalism (using SemLink 1.1). They also used SEMAFOR to annotate ACE-2 and MUC-6 with semantic role labels, and they further used heuristics to create implicit role instances.
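
At the core of such a conversion is a SemLink-style lookup from PropBank rolesets and arguments to FrameNet frames and frame elements. A rough sketch of how that lookup could be applied to role-labelled instances (the mapping entries are shown for illustration only and are not actual SemLink records):

```python
# Illustrative, simplified SemLink-style table:
# (PropBank roleset, argument) -> (FrameNet frame, frame element).
PB_TO_FN = {
    ("expect.01", "ARG0"): ("Expectation", "Cognizer"),
    ("expect.01", "ARG1"): ("Expectation", "Phenomenon"),
}

def convert_instance(roleset, args):
    """Map a PropBank-labelled instance {arg: span} into FrameNet roles; unmapped args are dropped."""
    converted = {}
    for arg, span in args.items():
        mapping = PB_TO_FN.get((roleset, arg.upper()))
        if mapping:
            frame, fe = mapping
            converted.setdefault(frame, {})[fe] = span
    return converted

print(convert_instance("expect.01", {"ARG0": "The network", "ARG2": "losses"}))
```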

Features

Coherence

Laparra and Rigau (2013)[3] use a simplified concept of coherence: "in a coherent document the different occurrences of a predicate, including both verbal and nominal forms, tend to be mentions of the same event, and thus, they share the same argument fillers".
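
A minimal sketch of that idea (not the full ImpAr algorithm; the data structures are simplified and invented): each missing role is filled by copying the most recent explicit filler seen for the same predicate and role earlier in the document.

```python
def fill_by_coherence(instances):
    """instances: list of dicts with 'pred' (lemma) and 'args' (role -> filler or None),
    in document order. Missing roles are copied from the most recent occurrence of the
    same predicate (verbal or nominal) that fills them - the coherence assumption above."""
    last_fillers = {}  # (pred, role) -> most recent explicit filler
    for inst in instances:
        for role, filler in inst["args"].items():
            if filler is None and (inst["pred"], role) in last_fillers:
                inst["args"][role] = last_fillers[(inst["pred"], role)]  # implicit filler
            elif filler is not None:
                last_fillers[(inst["pred"], role)] = filler
    return instances

# Example (1) from above: the second mention of "loss" inherits the first mention's arguments.
doc = [
    {"pred": "loss", "args": {"arg0": "The network", "arg1": "of as much as $20 million"}},
    {"pred": "loss", "args": {"arg0": None, "arg1": None}},
]
print(fill_by_coherence(doc)[1]["args"])
```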

Selectional preferences

Laparra and Rigau (2013)[3]: "First, we have designed a list of very general semantic categories. Second, we have semi-automatically assigned one of them to every predicate argument argn in PropBank and NomBank [...] check if the candidate belongs to the expected semantic category of the implicit argument to be filled."
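
A sketch of that category check, assuming a precomputed table of expected categories per (predicate, argument) and some way of assigning a category to a candidate (the category inventory, table entries, and lexicon below are all hypothetical):

```python
# Hypothetical coarse semantic categories expected for predicate arguments
# (in the papers this table is built semi-automatically from PropBank/NomBank).
EXPECTED_CATEGORY = {
    ("plan", "arg0"): "ORGANIZATION",
    ("loss", "arg0"): "ORGANIZATION",
}

def candidate_category(candidate_head):
    """Stand-in for a real categorizer (e.g. a named-entity tagger or WordNet supersenses)."""
    lexicon = {"Quest Medical Inc": "ORGANIZATION", "network": "ORGANIZATION", "dividend": "OTHER"}
    return lexicon.get(candidate_head, "OTHER")

def passes_selectional_preference(pred, role, candidate_head):
    expected = EXPECTED_CATEGORY.get((pred, role))
    return expected is None or candidate_category(candidate_head) == expected

print(passes_selectional_preference("plan", "arg0", "Quest Medical Inc"))  # True
print(passes_selectional_preference("plan", "arg0", "dividend"))           # False
```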

Do et al. (2017)[23] use an LSTM to model multi-way selectional preferences. They adapted Laparra and Rigau's approach and reported about a 1% improvement.
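
A rough PyTorch sketch of what an LSTM-based preference scorer could look like (an illustration of the general idea only, not Do et al.'s actual architecture or hyperparameters): the predicate and the already-known argument fillers are encoded with an LSTM, and a candidate filler for the missing role is scored against that context, so all arguments are conditioned on jointly ("multi-way") rather than one role at a time.

```python
import torch
import torch.nn as nn

VOCAB, DIM = 1000, 64  # hypothetical vocabulary size and embedding dimension

class PreferenceScorer(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM)
        self.lstm = nn.LSTM(DIM, DIM, batch_first=True)
        self.score = nn.Linear(DIM, 1)

    def forward(self, context_ids, candidate_id):
        # context_ids: (1, seq_len) ids for the predicate and known role fillers
        # candidate_id: (1,) id of the candidate filler for the missing role
        ctx, _ = self.lstm(self.embed(context_ids))   # (1, seq_len, DIM)
        summary = ctx[:, -1, :]                       # last hidden state as context summary
        cand = self.embed(candidate_id)               # (1, DIM)
        return self.score(summary * cand)             # compatibility score

model = PreferenceScorer()
context = torch.tensor([[3, 17, 42]])   # e.g. ids for "plan", the arg1 head, a support verb
candidate = torch.tensor([7])           # e.g. id for "Quest Medical Inc"
print(model(context, candidate).item())
```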

Distance

Used in Laparra and Rigau (2013)[3], Feizabadi and Pado (2015)[5], ???

I did an ablation analysis of Feizabadi and Pado's model and found distance to be of little use (+0.04% F1), which is surprising.
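
For concreteness, distance here is typically just how far the candidate filler is from the predicate, e.g. measured in sentences and possibly bucketed. A trivial sketch (the bucketing scheme is made up):

```python
def distance_features(pred_sent_idx, cand_sent_idx):
    """Sentence distance between predicate and candidate filler, signed and bucketed."""
    dist = pred_sent_idx - cand_sent_idx        # > 0: candidate occurs in an earlier sentence
    bucket = "same" if dist == 0 else ("within-2-prev" if 0 < dist <= 2 else "other")
    return {"sent_dist": dist, "sent_dist_bucket": bucket}

print(distance_features(pred_sent_idx=5, cand_sent_idx=3))
```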

History

First paper: Palmer et al. (1986)[24]

Early papers: Whittemore, Macpherson, and Carlson (1991)[25]. Nielsen (2004[26], 2005[27]) worked on ellipsis identification and resolution, which uncovers implicit predicates and therefore requires recovering their implicit arguments.

Early work on iSRL for nominal predicates: Gerber et al. (2009)[28].

Evaluation

See also Implicit semantic role labelling (State-of-the-art)

TODO: NI, INI, DNI (null instantiation; indefinite vs. definite null instantiation)

Metrics

Strict span or head matching is too demanding, so evaluation is performed with some form of relaxation. There are at least two ways to evaluate this task:

  1. Relax the correctness criterion for P/R/F1 (e.g. head inclusion) and report overlap (Dice coefficient) separately, and
  2. Embed overlap (Dice coefficient) into P/R/F1

From Ruppenhofer et al. (2013)[29]: "we scored an automatic annotation as correct if it included the head of the gold standard filler in the predicted filler." They report the Dice coefficient alongside F1 to reveal systems that predict overly long spans.

From Gerber and Chai (2012)[2] (Footnote 11): "Our evaluation methodology differs slightly from that of Ruppenhofer et al. (2010) in that we use the Dice metric to compute precision and recall, whereas Ruppenhofer et al. reported the Dice metric separately from exact-match precision and recall."
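
A minimal sketch of the two relaxations (simplified; the actual scorers handle multiple fillers per role, resolved coreference chains, etc.):

```python
def dice(pred_tokens, gold_tokens):
    """Dice coefficient between predicted and gold filler spans (sets of token indices)."""
    if not pred_tokens and not gold_tokens:
        return 1.0
    return 2 * len(pred_tokens & gold_tokens) / (len(pred_tokens) + len(gold_tokens))

# Variant 1 (roughly Ruppenhofer et al., 2013): a predicted filler counts as correct if it
# contains the gold head token; Dice is reported separately, so systems predicting overly
# long spans are still penalized.
def relaxed_correct(pred_tokens, gold_head):
    return gold_head in pred_tokens

# Variant 2 (roughly Gerber & Chai, 2012): each prediction contributes its Dice score
# (rather than 0/1) to precision and recall.
def dice_prf(predictions, golds):
    """predictions, golds: dicts mapping (predicate, role) -> set of token indices."""
    tp = sum(dice(predictions[k], golds[k]) for k in predictions if k in golds)
    p = tp / len(predictions) if predictions else 0.0
    r = tp / len(golds) if golds else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

gold = {("loss", "arg0"): {0, 1, 2}}
pred = {("loss", "arg0"): {1, 2, 3, 4}}
print(relaxed_correct(pred[("loss", "arg0")], gold_head=1))   # True: gold head is included
print(dice_prf(pred, gold))                                   # partial credit instead of a hard miss
```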

Cross-validation

Performed by Gerber and Chai (2012)[2] (Section 5.1) and Moor et al. (2013)[6] (Section 5). They split predicates into folds (instead of documents). Gerber and Chai explain this as a way to "remove any confounding factors caused by specific documents", but it may actually introduce confounding factors, because predicate instances from the same document will appear in both training and testing.
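
The leakage concern can be illustrated with scikit-learn's splitters (this is only an illustration of the issue, not the papers' actual protocol): splitting predicate instances without regard to documents lets a document contribute to both training and test folds, while grouping by document avoids that.

```python
from sklearn.model_selection import GroupKFold, KFold

# Hypothetical predicate instances as (document id, predicate lemma) pairs.
instances = [("doc1", "loss"), ("doc2", "loss"), ("doc3", "loss"),
             ("doc1", "plan"), ("doc2", "plan"), ("doc3", "plan")]
docs = [d for d, _ in instances]

# Splitting instances directly: documents leak into both training and test folds.
for train_idx, test_idx in KFold(n_splits=3).split(instances):
    shared = {docs[i] for i in train_idx} & {docs[i] for i in test_idx}
    print("instance-level split, documents in both train and test:", shared)

# Grouping by document keeps each document entirely in one fold.
for train_idx, test_idx in GroupKFold(n_splits=3).split(instances, groups=docs):
    shared = {docs[i] for i in train_idx} & {docs[i] for i in test_idx}
    print("document-level split, documents in both train and test:", shared)
```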

References

  1. Gerber, Matthew, and Joyce Y. Chai. "Semantic role labeling of implicit arguments for nominal predicates." Computational Linguistics 38.4 (2012): 755-798.
  2. Gerber, M. and J. Chai (2012, December). Semantic role labeling of implicit arguments for nominal predicates. Computational Linguistics 38(4), 755–798.
  3. Laparra, E., & Rigau, G. (2013). ImpAr: A Deterministic Algorithm for Implicit Semantic Role Labelling. Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 1180–1189.
  4. Ruppenhofer, J., Sporleder, C., Morante, R., Baker, C., & Palmer, M. (2010). SemEval-2010 Task 10: Linking Events and Their Participants in Discourse. In Proceedings of the 5th International Workshop on Semantic Evaluation, ACL 2010 (pp. 45–50). Uppsala, Sweden.
  5. Feizabadi, P. S., & Pado, S. (2015). Combining Seemingly Incompatible Corpora for Implicit Semantic Role Labeling. Proceedings of the Fourth Joint Conference on Lexical and Computational Semantics (*SEM 2015), 40–50.
  6. Moor, T., Roth, M., & Frank, A. (2013). Predicate-specific Annotations for Implicit Role Binding: Corpus Annotation, Data Analysis and Evaluation Experiments. In Proceedings of the 10th International Conference on Computational Semantics (pp. 369–375).
  7. Taulé, M., Peris, A., & Rodríguez, H. (2016). Iarg-AnCora: Spanish corpus annotated with implicit arguments. Language Resources and Evaluation, 1–36. doi:10.1007/s10579-015-9334-3
  8. Kilicoglu, H. (2016). Inferring Implicit Causal Relationships in Biomedical Literature. In Proceedings of the 15th Workshop on Biomedical Natural Language Processing (pp. 46–55). Berlin, Germany: Association for Computational Linguistics.
  9. Chih-Hsuan Wei, Yifan Peng, Robert Leaman, Allan Peter Davis, Carolyn J. Mattingly, Jiao Li, Thomas C. Wiegers, and Zhiyong Lu. 2016. Assessing the state of the art in biomedical relation extraction: overview of the BioCreative V chemical-disease relation (CDR) task. Database, 2016.
  10. Silberer, C., & Frank, A. (2012). Casting Implicit Role Linking As an Anaphora Resolution Task. In Proceedings of the First Joint Conference on Lexical and Computational Semantics - Volume 1: Proceedings of the Main Conference and the Shared Task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation (pp. 1–10). Stroudsburg, PA, USA: Association for Computational Linguistics.
  11. Dahl, D. A., M. S. Palmer, and R. J. Passonneau (1987). Nominalizations in pundit. In In Proceedings of the 25th Annual Meeting of the Association for Computational Linguistics, ACL ’87, Stanford, California, USA, pp. 131–139.
  12. Tonelli, S. and R. Delmonte (2010). Venses++: Adapting a deep semantic processing system to the identification of null instantiations. In Proceedings of the 5th International Workshop on Semantic Evaluation, SemEval ’10, Los Angeles, California, USA, pp. 296–299.
  13. Ruppenhofer, J., P. Gorinski, and C. Sporleder (2011). In search of missing arguments: A linguistic approach. In Proceedings of the International Conference Recent Advances in Natural Language Processing 2011, RANLP ’11, Hissar, Bulgaria, pp. 331–338.
  14. Laparra, E. and G. Rigau (2012). Exploiting explicit annotations and semantic types for implicit argument resolution. In 6th IEEE International Conference on Semantic Computing, ICSC ’12, Palermo, Italy, pp. 75–78.
  15. Gorinski, P., J. Ruppenhofer, and C. Sporleder (2013). Towards weakly supervised resolution of null instantiations. In Proceedings of the 10th International Conference on Computational Semantics, IWCS ’13, Potsdam, Germany, pp. 119–130.
  16. Laparra, E. and G. Rigau (2013). Sources of evidence for implicit argument resolution. In Proceedings of the 10th International Conference on Computational Semantics, IWCS ’13, Potsdam, Germany, pp. 155–166.
  17. Chen, D., Schneider, N., Das, D., & Smith, N. A. (2010). SEMAFOR: Frame argument resolution with log-linear models. In Proceedings of the 5th International Workshop on Semantic Evaluation (pp. 264–267). Uppsala, Sweden: Association for Computational Linguistics.
  18. Roth, M., & Frank, A. (2013). Automatically Identifying Implicit Arguments to Improve Argument Linking and Coherence Modeling. Proceedings of the Second Joint Conference on Lexical and Computational Semantics (*SEM), 1, 306–316. Retrieved from http://www.aclweb.org/anthology/S13-1043
  19. Roth, M., & Frank, A. (2015). Inducing Implicit Arguments from Comparable Texts: A Framework and its Applications. Computational Linguistics, 41(4), 625–664.
  20. Schenk, N., Chiarcos, C., & Sukhareva, M. (2015). Towards the Unsupervised Acquisition of Implicit Semantic Roles. In RANLP 2015 (pp. 570–578).
  21. Schenk, N., & Chiarcos, C. (2016). Unsupervised Learning of Prototypical Fillers for Implicit Semantic Role Labeling. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (pp. 1473–1479). San Diego, California: Association for Computational Linguistics.
  22. Gorinski, P., Ruppenhofer, J., & Sporleder, C. (2013). Towards Weakly Supervised Resolution of Null Instantiations. Proceedings of IWCS, 1–11.
  23. Do, Quynh Ngoc Thi, Steven Bethard, and Marie-Francine Moens. "Improving Implicit Semantic Role Labeling by Predicting Semantic Frame Arguments." arXiv preprint arXiv:1704.02709 (2017).
  24. Palmer, M. S., D. A. Dahl, R. J. Schiffman, L. Hirschman, M. Linebarger, and J. Dowding (1986). Recovering implicit information. In Proceedings of the 24th annual meeting on Association for Computational Linguistics, ACL ’86, New York, New York, USA, pp. 10–19.
  25. Whittemore, Greg, Melissa Macpherson, and Greg Carlson. 1991. Event-building through role-filling and anaphora resolution. In Proceedings of the 29th Annual Meeting on Association for Computational Linguistics, pages 17–24, Morristown, NJ.
  26. Nielsen, Leif Arda. 2004. Verb phrase ellipsis detection using automatically parsed text. In COLING ’04: Proceedings of the 20th international conference on Computational Linguistics, pages 1093–1099, Geneva.
  27. Nielsen, Leif Arda. 2005. A corpus-based study of Verb Phrase Ellipsis Identification and Resolution. Ph.D. thesis, King’s College, London.
  28. Gerber, Matt, Joyce Y. Chai, and Adam Meyers. "The role of implicit argumentation in nominal SRL." Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics. Association for Computational Linguistics, 2009.
  29. Ruppenhofer, J., Lee-Goldman, R., Sporleder, C., & Morante, R. (2013). Beyond sentence-level semantic role labeling: Linking argument structures in discourse. Language Resources and Evaluation, 47(3), 695–721. http://doi.org/10.1007/s10579-012-9201-4