
Understanding the introductory Transformer example in Trax

HUWWW 2023-06-06 17:36:07
My goal is to understand the introductory Transformer example in Trax:

import trax

# Create a Transformer model.
# Pre-trained model config in gs://trax-ml/models/translation/ende_wmt32k.gin
model = trax.models.Transformer(
    input_vocab_size=33300,
    d_model=512, d_ff=2048,
    n_heads=8, n_encoder_layers=6, n_decoder_layers=6,
    max_len=2048, mode='predict')

# Initialize using pre-trained weights.
model.init_from_file('gs://trax-ml/models/translation/ende_wmt32k.pkl.gz',
                     weights_only=True)

# Tokenize a sentence.
sentence = 'It is nice to learn new things today!'
tokenized = list(trax.data.tokenize(iter([sentence]),  # Operates on streams.
                                    vocab_dir='gs://trax-ml/vocabs/',
                                    vocab_file='ende_32k.subword'))[0]

# Decode from the Transformer.
tokenized = tokenized[None, :]  # Add batch dimension.
tokenized_translation = trax.supervised.decoding.autoregressive_sample(
    model, tokenized, temperature=0.0)  # Higher temperature: more diverse results.

# De-tokenize.
tokenized_translation = tokenized_translation[0][:-1]  # Remove batch and EOS.
translation = trax.data.detokenize(tokenized_translation,
                                   vocab_dir='gs://trax-ml/vocabs/',
                                   vocab_file='ende_32k.subword')
print(translation)

This example works fine. However, when I try to translate another sentence with the already-initialized model, e.g.

sentence = 'I would like to try another example.'
tokenized = list(trax.data.tokenize(iter([sentence]),
                                    vocab_dir='gs://trax-ml/vocabs/',
                                    vocab_file='ende_32k.subword'))[0]
tokenized = tokenized[None, :]

I get the output "!" on both my local machine and Google Colab. The same happens with other examples. When I build and initialize a new model, everything works fine. Is this a bug? If not, what is happening here, and how can I avoid/fix this behaviour? Tokenization and detokenization seem to work well; I debugged that. Things seem to go wrong/unexpectedly in trax.supervised.decoding.autoregressive_sample.

1 Answer

猛跑小猪


I figured it out myself... the model's state needs to be reset. So the following code works for me:


def translate(model, sentence, vocab_dir, vocab_file):
    empty_state = model.state  # save the fresh (empty) state
    tokenized_sentence = next(trax.data.tokenize(iter([sentence]), vocab_dir=vocab_dir,
                                                 vocab_file=vocab_file))
    tokenized_translation = trax.supervised.decoding.autoregressive_sample(
        model, tokenized_sentence[None, :], temperature=0.0)[0][:-1]
    translation = trax.data.detokenize(tokenized_translation, vocab_dir=vocab_dir,
                                       vocab_file=vocab_file)
    model.state = empty_state  # reset state for the next call
    return translation


# Create a Transformer model.
# Pre-trained model config in gs://trax-ml/models/translation/ende_wmt32k.gin
model = trax.models.Transformer(input_vocab_size=33300, d_model=512, d_ff=2048, n_heads=8,
                                n_encoder_layers=6, n_decoder_layers=6, max_len=2048,
                                mode='predict')

# Initialize using pre-trained weights.
model.init_from_file('gs://trax-ml/models/translation/ende_wmt32k.pkl.gz',
                     weights_only=True)

print(translate(model, 'It is nice to learn new things today!',
                vocab_dir='gs://trax-ml/vocabs/', vocab_file='ende_32k.subword'))
print(translate(model, 'I would like to try another example.',
                vocab_dir='gs://trax-ml/vocabs/', vocab_file='ende_32k.subword'))
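
Background on why the reset is needed: in mode='predict', the model keeps fast-decoding caches in model.state, and autoregressive_sample mutates them, so a second call would otherwise start from a stale cache. A small variation on the same idea is to wrap the save-and-restore in a context manager, so the state is also restored if decoding raises. This is a minimal sketch, not the answerer's code; the helper frozen_state is hypothetical and not part of the Trax API, and it reuses model and tokenized from the snippets above:

from contextlib import contextmanager

@contextmanager
def frozen_state(model):
    # Hypothetical helper: snapshot the decoder cache before decoding.
    # model.state is a nested tuple of arrays, so keeping a reference
    # is enough to restore it later.
    saved = model.state
    try:
        yield model
    finally:
        model.state = saved  # restore so the next decode starts clean

# Usage: decode inside the context; the state is reset afterwards.
with frozen_state(model):
    tokenized_translation = trax.supervised.decoding.autoregressive_sample(
        model, tokenized, temperature=0.0)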


