
Work by Ben-David et al. (2007; 2009) uses Vapnik-Chervonenkis (VC) theory to prove theoretical bounds on an open-domain learning machine's performance. Their analysis shows that the choice of representation is crucial to open-domain learning. As in standard VC theory, a good representation must allow the learning machine to achieve a low error rate on the source training data. Just as important, however, the representation must make the source and target domains look as similar to one another as possible.
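Their bound makes this two-part requirement explicit. A commonly cited form of the result (sketched here in standard notation following the journal presentation; the symbols below do not appear on this page) states that for every hypothesis $h$ in a class $H$:

$$
\epsilon_T(h) \;\le\; \epsilon_S(h) \;+\; \tfrac{1}{2}\, d_{H \Delta H}(D_S, D_T) \;+\; \lambda,
\qquad
\lambda = \min_{h' \in H} \big( \epsilon_S(h') + \epsilon_T(h') \big),
$$

where $\epsilon_S$ and $\epsilon_T$ are the error rates on the source and target domains, and $d_{H \Delta H}(D_S, D_T)$ measures how well hypotheses in $H$ can distinguish source examples from target examples under the chosen representation. The bound on target error is small only when both the source error (the first requirement above) and the domain divergence (the second) are small.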

Data sets

External links
