
[Paper Reading] Copying Mechanism in Sequence-to-Sequence

Tags: Machine Learning

Paper Today:

'Incorporating Copying Mechanism in Sequence-to-Sequence Learning'

This paper develops a model called COPYNET, which incorporates an important 'copying mechanism' into sequence-to-sequence learning.

In human language communication, there are many situations where we use a 'copy mechanism', for example in a dialogue where the response repeats a name or phrase verbatim from the input.

To make a machine generate such responses, two things need to be done:

  • First, identify which part of the input should be copied.

  • Second, decide where the copied part should be placed in the output.

Currently popular models include seq2seq and seq2seq with an attention mechanism.
COPYNET is also an encoder-decoder model, but it follows a different strategy among neural-network-based models:
generation with an RNN and attention relies more on 'understanding' the input, while copying demands high 'literal fidelity', i.e. reproducing input segments verbatim.

There are three main changes in the decoder.
Prediction:
The prediction is based on a mixture of two probabilistic modes, the generate-mode and the copy-mode, which lets the model pick proper sub-sequences from the source and output OOV words that appear in the input.
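As a rough sketch (notation loosely follows the CopyNet paper; ψ_g and ψ_c denote the generate-mode and copy-mode scores), the probability of emitting y_t decomposes as:

$$
p(y_t \mid \mathbf{s}_t, y_{t-1}, M) \;=\; p(y_t, \mathrm{g} \mid \cdot) \;+\; p(y_t, \mathrm{c} \mid \cdot)
$$
$$
p(y_t, \mathrm{g} \mid \cdot) = \frac{1}{Z}\, e^{\psi_g(y_t)} \quad (y_t \in V), \qquad
p(y_t, \mathrm{c} \mid \cdot) = \frac{1}{Z} \sum_{j:\, x_j = y_t} e^{\psi_c(x_j)} \quad (y_t \in X)
$$

where X is the source sequence, V is the target vocabulary, and Z normalizes over both modes jointly. A word that is outside the target vocabulary but present in the source can therefore still be emitted through the copy-mode term.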

State Update:
A small change here: they design a selective read for the copy-mode, which feeds the encoder states of the just-copied source positions back into the decoder and thereby gives the model access to location information.
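A minimal sketch of a selective read, assuming numpy and the hypothetical variable names below (an illustration in the spirit of the paper, not its actual implementation):

```python
import numpy as np

def selective_read(src_tokens, enc_states, copy_probs, prev_token):
    """Sketch of a CopyNet-style selective read (the zeta(y_{t-1}) vector).

    src_tokens : list of L source tokens
    enc_states : (L, d) array of encoder hidden states h_j
    copy_probs : (L,) array of copy-mode probability mass assigned to each
                 source position when y_{t-1} was predicted
    prev_token : the previously emitted token y_{t-1}
    """
    # Keep only the positions whose source token equals y_{t-1}.
    mask = np.array([tok == prev_token for tok in src_tokens], dtype=float)
    weights = mask * copy_probs
    total = weights.sum()
    if total == 0.0:
        # y_{t-1} was generated rather than copied: no location signal.
        return np.zeros(enc_states.shape[1])
    weights /= total             # normalized weights rho_{t-1, j}
    return weights @ enc_states  # weighted sum of encoder states

# The decoder state update then consumes [embedding(y_{t-1}); selective_read(...)]
# instead of the word embedding alone, so the copied position is carried forward.
```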

Reading M:
When reading the encoder memory M, the model uses a hybrid of content-based addressing (the attentive read) and location-based addressing (the selective read).
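To contrast the two reads (again a sketch in the paper's spirit, not its exact equations): the attentive read is the usual attention context $c_t = \sum_j \alpha_{t,j} h_j$ with $\alpha_{t,j} \propto \exp\big(\eta(\mathbf{s}_{t-1}, h_j)\big)$, which is content-based; the selective read above weights positions by whether their source token equals $y_{t-1}$, which is location-based and tends to move attention toward the word right after the one just copied.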

In the experiments, this model did very well on tasks like text summarization.

