The attention mechanism was originally invented for machine translation but quickly found applications in many other tasks. It is useful whenever one needs to "translate" from one structure (images, sequences, trees) into another. Ilya Sutskever, Research Director at OpenAI (as of 2015), said in an interview that "[attention models] are here to stay, and that they will play a very important role in the future of deep learning."

The basic idea is to read the input structure twice: once to encode its gist, and again (at each decoding step) to "pay attention" to particular details.
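
As a rough illustration of this two-pass reading, the sketch below runs a single decoding step of dot-product attention in NumPy. The encoder states and the decoder state are random placeholders rather than outputs of a trained model, so this shows only the mechanics, not any particular paper's architecture.

    # Minimal sketch of one decoding step with dot-product attention.
    # Encoder/decoder states are random placeholders, not trained values.
    import numpy as np

    rng = np.random.default_rng(0)
    T, d = 5, 8                                # input length, hidden size
    encoder_states = rng.normal(size=(T, d))   # first pass: encode the whole input
    decoder_state = rng.normal(size=(d,))      # decoder hidden state at the current step

    # Second pass (repeated at every decoding step): score each input position
    # against the decoder state, normalize, and form a context vector.
    scores = encoder_states @ decoder_state    # shape (T,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                   # attention distribution over input positions
    context = weights @ encoder_states         # weighted summary of the relevant details

    print(weights)   # where the decoder "pays attention"
    print(context)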

However, Press and Smith (2018)[1] show that similar performance in machine translation can be achieved using an eager model without attention.

Machine translation

Luong et al. (2015)[2] study effective approaches to attention-based neural machine translation, comparing global attention (over all source positions) with local attention (over a window of source positions).
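
For reference, Luong et al. describe three score functions (dot, general, concat) for comparing a decoder state with an encoder state. The sketch below is a hand-rolled NumPy illustration of those formulas with random placeholder parameters, not the authors' implementation.

    # Hedged sketch of the three score functions from Luong et al. (2015):
    # dot, general, and concat. W_a, W_a_cat and v_a are random placeholders.
    import numpy as np

    rng = np.random.default_rng(1)
    d = 8
    h_t = rng.normal(size=(d,))            # decoder (target) hidden state
    h_s = rng.normal(size=(d,))            # one encoder (source) hidden state
    W_a = rng.normal(size=(d, d))
    W_a_cat = rng.normal(size=(d, 2 * d))
    v_a = rng.normal(size=(d,))

    score_dot = h_t @ h_s
    score_general = h_t @ W_a @ h_s
    score_concat = v_a @ np.tanh(W_a_cat @ np.concatenate([h_t, h_s]))

    print(score_dot, score_general, score_concat)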

Text processing/understanding

Natural language inference: Parikh et al. (2016)[3]

Abstractive summarization: Chopra et al. (2016)[4]: "The conditioning is provided by a novel convolutional attention-based encoder which ensures that the decoder focuses on the appropriate input words at each step of generation."

Question answering: Dhingra et al. (2016)[5] propose Gated-Attention Readers for text comprehension.

Visual

Mnih, V., Heess, N., Graves, A., & Kavukcuoglu, K. (2014). Recurrent Models of Visual Attention. Retrieved from http://arxiv.org/abs/1406.6247

Ba, J., Mnih, V., & Kavukcuoglu, K. (2014). Multiple Object Recognition with Visual Attention. arXiv preprint arXiv:1412.7755.

Audio

Chan, W., Jaitly, N., Le, Q. V., & Vinyals, O. (2015). Listen, Attend and Spell. Retrieved from http://arxiv.org/pdf/1508.01211.pdf

References

  1. Press, O., & Smith, N. A. (2018). You May Not Need Attention. EMNLP.
  2. Luong, M.-T., Pham, H., & Manning, C. D. (2015). Effective Approaches to Attention-based Neural Machine Translation. EMNLP. Retrieved from http://arxiv.org/abs/1508.04025
  3. Parikh, A. P., Täckström, O., Das, D., & Uszkoreit, J. (2016). A Decomposable Attention Model for Natural Language Inference. Retrieved from http://arxiv.org/abs/1606.01933
  4. Chopra, S., Auli, M., & Rush, A. M. (2016). Abstractive Sentence Summarization with Attentive Recurrent Neural Networks. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (pp. 93–98). San Diego, California: Association for Computational Linguistics. Retrieved from http://www.aclweb.org/anthology/N16-1012
  5. Dhingra, B., Liu, H., Cohen, W. W., & Salakhutdinov, R. (2016). Gated-Attention Readers for Text Comprehension. Retrieved from http://arxiv.org/abs/1606.01549