1 Answer
The simplest approach is to use two LSTMs (a word-level one and a sequence-level one).
Prepare a toy dataset
import torch
import torch.nn as nn

xi = [
# Input features at timestep 1
[1, 48, 91, 0],
# Input features at timestep 2
[20, 5, 17, 32],
# Input features at timestep 3
[12, 18, 0, 0],
# Input features at timestep 4
[0, 0, 0, 0],
# Input features at timestep 5
[0, 0, 0, 0]
]
yi = 1
x = torch.tensor([xi, xi])
y = torch.tensor([yi, yi])
print(x.shape)
# torch.Size([2, 5, 4])
print(y.shape)
# torch.Size([2])
Here, x is the input batch, with batch_size = 2. Each sample has 5 timesteps, and each timestep is a sequence of 4 token ids.
Embed the input
vocab_size = 1000
embed_size = 100
hidden_size = 200
embed = nn.Embedding(vocab_size, embed_size)
# shape [2, 5, 4, 100]
x = embed(x)
The first LSTM (the word-LSTM) encodes each word sequence into a single vector
# convert x into a batch of word sequences
# Reshape into [bs * 5, 4, 100] = [10, 4, 100]
bs = 2
x = x.view(bs * 5, 4, 100)
wlstm = nn.LSTM(embed_size, hidden_size, batch_first=True)
# keep only the final hidden state of each sequence
_, (hn, _) = wlstm(x)
# hn shape [1, 10, 200]
# take the hidden state of the last (and only) layer
hn = hn[0] # [10, 200]
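A side note on the view call above: the 5, 4 and 100 are hard-coded for this toy batch; in practice you can read them off the embedded tensor before reshaping. A small sketch (the variable names are my own):

# right after x = embed(x), before the view:
bs, num_seq, seq_len, emb = x.shape     # (2, 5, 4, 100)
x = x.view(bs * num_seq, seq_len, emb)  # [10, 4, 100]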
The second LSTM (the seq-LSTM) encodes the sequence of those vectors into a single vector
# Reshape hn into [bs, num_seq, hidden_size]
hn = hn.view(2, 5, 200)
# Pass to another LSTM and get the final state hn
slstm = nn.LSTM(hidden_size, hidden_size, batch_first=True)
_, (hn, _) = slstm(hn) # [1, 2, 200]
# Similarly, get the hidden state of the last layer
hn = hn[0] # [2, 200]
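Note that hn[0] works in both places only because each LSTM has a single layer; with num_layers > 1, hn stacks one hidden state per layer, and the top layer is hn[-1]. A quick illustration (the 2-layer LSTM and random input are my own example, not from the answer):

slstm2 = nn.LSTM(hidden_size, hidden_size, num_layers=2, batch_first=True)
_, (hn2, _) = slstm2(torch.randn(2, 5, hidden_size))
print(hn2.shape)  # torch.Size([2, 2, 200]), i.e. [num_layers, batch, hidden]
top = hn2[-1]     # [2, 200], hidden state of the last layer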
Add a classification layer on top
pred_linear = nn.Linear(hidden_size, 1)
# [2, 1]
output = torch.sigmoid(pred_linear(hn))
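For completeness, here is a minimal end-to-end sketch that wraps the steps above into one module and trains it with binary cross-entropy. The class name HierarchicalLSTM and the loss snippet are my additions, not part of the original answer:

import torch
import torch.nn as nn

class HierarchicalLSTM(nn.Module):  # hypothetical name
    def __init__(self, vocab_size=1000, embed_size=100, hidden_size=200):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_size)
        self.wlstm = nn.LSTM(embed_size, hidden_size, batch_first=True)
        self.slstm = nn.LSTM(hidden_size, hidden_size, batch_first=True)
        self.pred_linear = nn.Linear(hidden_size, 1)

    def forward(self, x):                      # x: [bs, num_seq, seq_len] token ids
        bs, num_seq, seq_len = x.shape
        e = self.embed(x)                      # [bs, num_seq, seq_len, embed_size]
        e = e.view(bs * num_seq, seq_len, -1)  # flatten into a batch of word sequences
        _, (hn, _) = self.wlstm(e)             # hn: [1, bs * num_seq, hidden_size]
        hn = hn[0].view(bs, num_seq, -1)       # [bs, num_seq, hidden_size]
        _, (hn, _) = self.slstm(hn)            # hn: [1, bs, hidden_size]
        return torch.sigmoid(self.pred_linear(hn[0])).squeeze(-1)  # [bs]

model = HierarchicalLSTM()
tokens = torch.tensor([xi, xi])   # the raw toy batch from above
pred = model(tokens)              # [2], one probability per sample
loss = nn.BCELoss()(pred, y.float())
loss.backward()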