machine learning - How to apply an RNN to a sequence-to-sequence NLP task?
I'm quite confused about sequence-to-sequence RNNs on NLP tasks. Previously, I have implemented neural models for classification tasks. In those tasks, the models take word embeddings as input and use a softmax layer at the end of the network for classification. But how do neural models handle seq2seq tasks? If the input is a word embedding, what is the output of the neural model? Examples of these tasks include question answering, dialogue systems, and machine translation.
You can use an encoder-decoder architecture. The encoder part encodes the input into a fixed-length vector, and the decoder decodes that vector into the output sequence, whatever it may be. The encoding and decoding layers can be learned jointly against an objective function (which can still involve a softmax). Check out this paper, which shows how such a model can be used in neural machine translation. The decoder there emits words one by one in order to generate the correct translation.
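To make the shape of the computation concrete, here is a minimal, untrained NumPy sketch of the idea: an RNN encoder folds the input token sequence into one fixed-length vector, and an RNN decoder unrolls from that vector, emitting one output token per step through a softmax. All parameter names, sizes, and the greedy decoding loop are illustrative assumptions, not the setup from any particular paper.

```python
import numpy as np

# Illustrative sizes; in practice these would be much larger.
rng = np.random.default_rng(0)
vocab_size, embed_dim, hidden_dim = 10, 8, 16

# Randomly initialised parameters (in a real system, learned jointly
# by backpropagating the decoder's softmax loss through both parts).
E = rng.normal(0, 0.1, (vocab_size, embed_dim))      # shared embeddings
W_xh = rng.normal(0, 0.1, (embed_dim, hidden_dim))   # encoder input -> hidden
W_hh = rng.normal(0, 0.1, (hidden_dim, hidden_dim))  # encoder hidden -> hidden
U_xh = rng.normal(0, 0.1, (embed_dim, hidden_dim))   # decoder input -> hidden
U_hh = rng.normal(0, 0.1, (hidden_dim, hidden_dim))  # decoder hidden -> hidden
W_hy = rng.normal(0, 0.1, (hidden_dim, vocab_size))  # decoder hidden -> vocab

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def encode(tokens):
    """Fold the whole input sequence into one fixed-length vector."""
    h = np.zeros(hidden_dim)
    for t in tokens:
        h = np.tanh(E[t] @ W_xh + h @ W_hh)
    return h

def decode(h, max_len=5, bos=0):
    """Emit output token ids one by one (greedy decoding), conditioning
    each step on the previous token and the evolving hidden state."""
    out, prev = [], bos
    for _ in range(max_len):
        h = np.tanh(E[prev] @ U_xh + h @ U_hh)
        probs = softmax(h @ W_hy)       # distribution over the vocabulary
        prev = int(np.argmax(probs))    # pick the most likely next token
        out.append(prev)
    return out

source = [3, 1, 4, 1, 5]                # input sentence as token ids
translation = decode(encode(source))    # output sentence as token ids
print(translation)
```

So the output at each decoder step is a probability distribution over the output vocabulary; the emitted sequence is read off from those distributions (greedily here, or with beam search), which is why the same softmax machinery you used for classification still appears, just applied once per output position.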