TODO: neighborhood mixture model (Nguyen et al., 2016)[1]

Link (argument) prediction

| Model | WordNet MRR | WordNet HIT@10 | FB13K MRR | FB13K HIT@10 | FB15K (type-constrained) MRR | FB15K (type-constrained) HIT@10 | FB15K (non-constrained) MRR | FB15K (non-constrained) HIT@10 | Desc. ref. | Perf. ref. |
|---|---|---|---|---|---|---|---|---|---|---|
| E | - | - | - | - | 22.7 | 34.0 | 21.8 | 33.6 | Riedel et al. (2013)[2] | Toutanova & Chen (2015)[3] |
| DistMult | - | - | - | - | 63.1 | 79.0 | 55.5 | 79.7 | Toutanova & Chen (2015) | Toutanova & Chen (2015) |
| E+DistMult | - | - | - | - | 65.9 | 81.0 | 56.2 | 78.3 | Toutanova & Chen (2015) | Toutanova & Chen (2015) |
| Observed (Node+LinkFeat) | - | - | - | - | 82.1 | 86.1 | 82.2 | 87.0 | Toutanova & Chen (2015) | Toutanova & Chen (2015) |
| DistMult | - | - | - | - | 36.0 | 57.7 | - | - | Yang et al. (2015) | Yang et al. (2015) |
| DISTMULT-tanh-EV-init | - | - | - | - | 42 | 73.2 | - | - | Yang et al. (2015)[4] | Yang et al. (2015) |
| Unstructured | 315/304 | 35.3/38.2 | - | - | - | - | 1,074/979 | 4.5/6.3 | Bordes et al. (2012)[5] | Lin et al. (2015)[6] |
| RESCAL | 1,180/1,163 | 37.2/52.8 | - | - | - | - | 828/683 | 28.4/44.1 | Nickel, Tresp, & Kriegel (2011)[7] | Lin et al. (2015) |
| SE | 1,011/985 | 68.5/80.5 | - | - | - | - | 273/162 | 28.8/39.8 | Bordes et al. (2011)[8] | Lin et al. (2015) |
| SME (linear) | 545/533 | 65.1/74.1 | - | - | - | - | 274/154 | 30.7/40.8 | Bordes et al. (2012)[9] | Lin et al. (2015) |
| SME (bilinear) | 526/509 | 54.7/61.3 | - | - | - | - | 284/158 | 31.3/41.3 | Bordes et al. (2012) | Lin et al. (2015) |
| LFM | 469/456 | 71.4/81.6 | - | - | - | - | 283/164 | 26.0/33.1 | Jenatton et al. (2012)[10] | Lin et al. (2015) |
| TransE | 263/251 | 75.4/89.2 | - | - | - | - | 243/125 | 34.9/47.1 | Bordes et al. (2013) | Lin et al. (2015) |
| TransH (unif) | 318/303 | 75.4/86.7 | - | - | - | - | 211/84 | 42.5/58.5 | Wang et al. (2014)[11] | Lin et al. (2015) |
| TransH (bern) | 401/388 | 73.0/82.3 | - | - | - | - | 212/87 | 45.7/64.4 | Wang et al. (2014) | Lin et al. (2015) |
| TransR (unif) | 232/219 | 78.3/91.7 | - | - | - | - | 226/78 | 43.8/65.5 | Lin et al. (2015) | Lin et al. (2015) |
| TransR (bern) | 238/225 | 79.8/92.0 | - | - | - | - | 198/77 | 48.2/68.7 | Lin et al. (2015) | Lin et al. (2015) |
| CTransR (unif) | 243/230 | 78.9/92.3 | - | - | - | - | 233/82 | 44/66.3 | Lin et al. (2015) | Lin et al. (2015) |
| CTransR (bern) | 231/218 | 79.4/92.3 | - | - | - | - | 199/75 | 48.4/70.2 | Lin et al. (2015) | Lin et al. (2015) |

Note: in rows whose performance reference is Lin et al. (2015), the first metric per dataset is Mean Rank (lower is better) rather than MRR, and each cell gives two slash-separated numbers for the "raw" and "filter" settings (see Notes).
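
Several of the models in the table above (TransE, TransH, TransR, CTransR) score a triple by translating the head embedding by the relation embedding. As a minimal sketch, the TransE score of Bordes et al. (2013), written here in plain Python over embedding vectors given as lists of floats:

```python
def transe_score(h, r, t):
    """TransE plausibility score: the negative L1 distance between the
    translated head (h + r) and the tail t. A score near 0 means the
    triple fits the translation h + r ~= t well; lower is less plausible."""
    return -sum(abs(hi + ri - ti) for hi, ri, ti in zip(h, r, t))
```

TransH and TransR keep the same translation idea but first project the entity vectors (onto a relation-specific hyperplane, or into a relation-specific space) before computing the distance.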

Triple classification

Values are classification accuracy (%) per dataset.

| Model | WN11 | FB13 | FB15K | Perf. ref. |
|---|---|---|---|---|
| SE | 53.0 | 75.2 | - | Lin et al. (2015)[6] |
| SME (bilinear) | 70.0 | 63.7 | - | Lin et al. (2015) |
| SLM | 69.9 | 85.3 | - | Lin et al. (2015) |
| LFM | 73.8 | 84.3 | - | Lin et al. (2015) |
| NTN | 70.4 | 87.1 | 68.5 | Lin et al. (2015) |
| TransE (unif) | 75.9 | 70.9 | 79.6 | Lin et al. (2015) |
| TransE (bern) | 75.9 | 81.5 | 79.2 | Lin et al. (2015) |
| TransH (unif) | 77.7 | 76.5 | 79.0 | Lin et al. (2015) |
| TransH (bern) | 78.8 | 83.3 | 80.2 | Lin et al. (2015) |
| TransR (unif) | 85.5 | 74.7 | 81.7 | Lin et al. (2015) |
| TransR (bern) | 85.9 | 82.5 | 83.9 | Lin et al. (2015) |
| CTransR (bern) | 85.7 | - | 84.5 | Lin et al. (2015) |
| Bilinear COMP (randomly initialized) | 77.6 | 87.6 | - | Guu et al. (2015)[12] |
| Bilinear COMP (word vector) | 86.1 | 89.4 | - | Guu et al. (2015) |
| TransE COMP (randomly initialized) | 80.3 | 84.9 | - | Guu et al. (2015) |
| TransE COMP (word vector) | 87.6 | 89.6 | - | Guu et al. (2015) |
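
In the triple classification protocol introduced by Socher et al. (2013) and followed by Lin et al. (2015), a triple is predicted true when its model score reaches a relation-specific threshold tuned for accuracy on validation data. A minimal sketch of that tuning, where `score` stands for any hypothetical scoring function (higher means more plausible):

```python
def tune_thresholds(score, valid_triples):
    """Pick, per relation, the score threshold that maximizes accuracy on
    labeled validation triples. valid_triples: list of ((h, r, t), label)
    pairs with boolean labels. Candidate thresholds are the observed scores."""
    by_rel = {}
    for (h, r, t), label in valid_triples:
        by_rel.setdefault(r, []).append((score(h, r, t), label))
    thresholds = {}
    for r, scored in by_rel.items():
        thresholds[r] = max(
            (s for s, _ in scored),
            key=lambda th: sum((s >= th) == label for s, label in scored),
        )
    return thresholds

def classify(score, thresholds, h, r, t):
    """Predict a triple true iff its score passes the relation's threshold."""
    return score(h, r, t) >= thresholds[r]
```

The reported accuracies are then just the fraction of held-out test triples (half true, half corrupted) classified correctly.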

Relation prediction

See Nguyen et al. (2016)[1].

Notes

  • Datasets:
    • FB15K: Bordes et al. (2014)[13]
    • FB13: Socher et al. (2013)[14]
  • Numbers from Lin et al. (2015)[6] come in pairs: the first is "raw", the second is "filter". As described in the paper:
    "a corrupted triple may also exist in knowledge graphs, which should be also considered as correct. However, the above evaluation may under-estimate those systems that rank these corrupted but correct triples high. Hence, before ranking we may filter out these corrupted triples which have appeared in knowledge graph. We name the first evaluation setting as “Raw” and the latter one as “Filter”."
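
The "raw" vs. "filter" protocol quoted above can be sketched as follows. Here `score` stands for any hypothetical model scoring function (higher means more plausible), and only tail corruption is shown for brevity; the metrics in the tables above fall out of the collected ranks:

```python
def rank_tail(score, h, r, t, entities, known_triples, filtered):
    """Rank the correct tail t against corrupted tails (h, r, e). In the
    "filter" setting, corrupted triples that already appear in the knowledge
    graph are skipped, since they are actually correct."""
    true_score = score(h, r, t)
    rank = 1
    for e in entities:
        if e == t:
            continue
        if filtered and (h, r, e) in known_triples:
            continue  # corrupted but correct -> filter it out before ranking
        if score(h, r, e) > true_score:
            rank += 1
    return rank

def evaluate(score, test_triples, entities, known_triples, filtered=True):
    """Return Mean Rank, MRR, and HIT@10 over the test triples."""
    ranks = [rank_tail(score, h, r, t, entities, known_triples, filtered)
             for (h, r, t) in test_triples]
    mean_rank = sum(ranks) / len(ranks)
    mrr = sum(1.0 / rk for rk in ranks) / len(ranks)
    hits_at_10 = sum(rk <= 10 for rk in ranks) / len(ranks)
    return mean_rank, mrr, hits_at_10
```

Filtering can only lower the rank of a correct triple, which is why the "filter" numbers in the tables are consistently better than the "raw" ones.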

References

  1. Nguyen, D. Q., Sirts, K., Qu, L., & Johnson, M. (2016). Neighborhood Mixture Model for Knowledge Base Completion. Retrieved from http://arxiv.org/abs/1606.06461
  2. Riedel, S., Yao, L., Marlin, B. M., & McCallum, A. (2013). Relation extraction with matrix factorization and universal schemas. In Proceedings of NAACL-HLT.
  3. Toutanova, K., & Chen, D. (2015). Observed versus latent features for knowledge base and text inference. In Proceedings of the 3rd Workshop on Continuous Vector Space Models and their Compositionality.
  4. Yang, B., Yih, W., He, X., Gao, J., & Deng, L. (2015). Embedding Entities and Relations for Learning and Inference in Knowledge Bases. In Proceedings of ICLR.
  5. Bordes, A., Glorot, X., Weston, J., & Bengio, Y. (2012). Joint learning of words and meaning representations for open-text semantic parsing. In Proceedings of AISTATS, 127–135.
  6. Lin, Y., Liu, Z., Sun, M., Liu, Y., & Zhu, X. (2015). Learning Entity and Relation Embeddings for Knowledge Graph Completion. In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, 2181–2187.
  7. Nickel, M., Tresp, V., & Kriegel, H.-P. (2011). A three-way model for collective learning on multi-relational data. In Proceedings of ICML, 809–816.
  8. Bordes, A., Weston, J., Collobert, R., & Bengio, Y. (2011). Learning structured embeddings of knowledge bases. In Proceedings of AAAI, 301–306.
  9. Bordes, A., Glorot, X., Weston, J., & Bengio, Y. (2012). Joint learning of words and meaning representations for open-text semantic parsing. In Proceedings of AISTATS, 127–135.
  10. Jenatton, R., Roux, N. L., Bordes, A., & Obozinski, G. R. (2012). A latent factor model for highly multi-relational data. In Proceedings of NIPS, 3167–3175.
  11. Wang, Z., Zhang, J., Feng, J., & Chen, Z. (2014). Knowledge graph embedding by translating on hyperplanes. In Proceedings of AAAI, 1112–1119.
  12. Guu, K., Miller, J., & Liang, P. (2015). Traversing knowledge graphs in vector space. In Proceedings of EMNLP, 318–327.
  13. Bordes, A., Glorot, X., Weston, J., & Bengio, Y. (2014). A semantic matching energy function for learning with multi-relational data. Machine Learning, 94(2), 233–259.
  14. Socher, R., Chen, D., Manning, C. D., & Ng, A. (2013). Reasoning with neural tensor networks for knowledge base completion. In Proceedings of NIPS, 926–934.