
One of my students tried something along these lines for Natural Language Inference (NLI) last year. [1] The results were not conclusive, but perhaps Machine Translation is a better target? My reason for believing this is that the specific dataset for NLI most likely does not require multiple steps of inference for most cases (you can get away with simple token overlap, see the sketch below), while the decoder in MT does, since it is constrained to output a single token at each step.

[1]: https://arxiv.org/abs/1610.07647
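
To make the token-overlap point concrete, here is a minimal sketch of the kind of trivial baseline I have in mind. The thresholds and the label mapping are made up for illustration and this is not the model from [1]; it just predicts entailment whenever most hypothesis tokens already appear in the premise, which is roughly what I mean by "getting away with" token overlap on this kind of data.

    # Hypothetical token-overlap baseline for SNLI-style (premise, hypothesis) pairs.
    # Thresholds hi/lo are illustrative guesses, not tuned values from [1].
    def overlap_baseline(premise, hypothesis, hi=0.8, lo=0.3):
        p = set(premise.lower().split())
        h = set(hypothesis.lower().split())
        # Fraction of hypothesis tokens that also occur in the premise.
        overlap = len(p & h) / max(len(h), 1)
        if overlap >= hi:
            return "entailment"
        if overlap <= lo:
            return "contradiction"
        return "neutral"

    # Example: every hypothesis token appears in the premise -> "entailment".
    print(overlap_baseline("Two dogs are running through a field",
                           "Dogs are running"))

The point is that a single pass over the two token sets already gives a plausible label here, whereas an MT decoder cannot avoid an iterative process: it has to commit to one output token per step.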



