nn.LSTM()
model = nn.LSTM(2, 2, 10, batch_first=True)
"""参数说明:
- feature_len:特征的维度
- hidden_len:隐藏层的个数
- layer_num:每个时间步所对应的模型层数
- batch_first:用来指示数据应该以什么形式来给,默认为False,数据形状(seq_len,batch,feature_len);否则形状为(batch,seq_len,feature_len)
"""
output, (h, c) = model(x, state)
"""参数说明:
- x:输入特征
- state:隐藏层和细胞特征
- output:模型最后一层的输出
- h:最后一步所有层的隐藏状态
- c:最后一部所有层的细胞状态"""
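A small sketch (sizes chosen for illustration) of passing an explicit state: both h0 and c0 must have shape (layer_num, batch, hidden_len), even when batch_first=True:
import torch
from torch import nn
model = nn.LSTM(2, 2, 10, batch_first=True)
x = torch.randn(4, 5, 2)      # (batch, seq_len, feature_len)
h0 = torch.zeros(10, 4, 2)    # (layer_num, batch, hidden_len)
c0 = torch.zeros(10, 4, 2)
output, (h, c) = model(x, (h0, c0))
print(output.shape)           # torch.Size([4, 5, 2])
print(h.shape)                # torch.Size([10, 4, 2])
print(c.shape)                # torch.Size([10, 4, 2])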
import torch
from torch import nn
lstm = nn.LSTM(input_size=1, hidden_size=20, num_layers=4)
x = torch.randn(10, 3, 1)   # (seq_len=10, batch=3, feature_len=1)
out, (h, c) = lstm(x)       # state defaults to zeros when omitted
print(out.shape)            # torch.Size([10, 3, 20]): (seq_len, batch, hidden_len)
print(h.shape)              # torch.Size([4, 3, 20]): (num_layers, batch, hidden_len)
print(c.shape)              # torch.Size([4, 3, 20]): (num_layers, batch, hidden_len)
Note: the final out is, likewise, the output of the last layer at every time step (pay particular attention: what an LSTM outputs is h, not C), so feature_len has been mapped down to hidden_len, and its shape is therefore [seq_len, batch, hidden_len].
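A quick check of this claim (a small sketch reusing lstm, x, out, and h from the example above): the final time step of out coincides with the last layer's final hidden state.
print(torch.allclose(out[-1], h[-1]))   # True: out comes from h of the last layer, not from C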
nn.LSTMCell()
- This module builds a single cell of an LSTM. One cell is shared across a whole layer, but the iteration over time steps must be handled by hand. To build a multi-layer LSTM, create one nn.LSTMCell per layer.
- Constructor
lstm_c = nn.LSTMCell(10, 20)
"""参数说明:
- feature_len:特征维度
- hidden_len:隐藏层维度"""
h_t, c_t = lstm_c(x_t, (h_{t-1}, c_{t-1}))
input = torch.randn(3, 3, 10)   # (time_steps=3, batch=3, feature_len=10)
hx = torch.randn(3, 20)         # (batch, hidden_len)
cx = torch.randn(3, 20)         # (batch, hidden_len)
output = []
for i in range(input.size(0)):  # iterate over time steps by hand
    hx, cx = lstm_c(input[i], (hx, cx))
    output.append(hx)
"""
注意:输入xt只是t时刻的输入,不涉及seq_len,所以其shape是 [batch,feature]。
而ht?和Ct?在这里只是t时刻本层的隐藏单元和记忆单元,不涉及num_layers,所以其shape是[batch,hidden_len]。
"""
import torch
from torch import nn
cell = nn.LSTMCell(input_size=100, hidden_size=20)
h = torch.zeros(3, 20)                         # (batch, hidden_len)
C = torch.zeros(3, 20)                         # (batch, hidden_len)
xs = [torch.randn(3, 100) for _ in range(10)]  # 10 time steps of (batch, feature_len)
for xt in xs:
    h, C = cell(xt, (h, C))                    # one step of the single-layer LSTM
print(h.shape)   # torch.Size([3, 20])
print(C.shape)   # torch.Size([3, 20])
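If a sequence-shaped output like nn.LSTM's out is needed, the per-step hidden states can be collected and stacked; a sketch reusing cell and xs from above:
h = torch.zeros(3, 20)
C = torch.zeros(3, 20)
hs = []
for xt in xs:
    h, C = cell(xt, (h, C))
    hs.append(h)                # keep h (not C) at every time step
out = torch.stack(hs, dim=0)    # (seq_len, batch, hidden_len), like nn.LSTM's out
print(out.shape)                # torch.Size([10, 3, 20])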
import torch
from torch import nn
# Two stacked layers: layer 0 maps feature_len 100 -> 30, layer 1 maps 30 -> 20.
cell_l0 = nn.LSTMCell(input_size=100, hidden_size=30)
cell_l1 = nn.LSTMCell(input_size=30, hidden_size=20)
h_l0 = torch.zeros(3, 30)
C_l0 = torch.zeros(3, 30)
h_l1 = torch.zeros(3, 20)
C_l1 = torch.zeros(3, 20)
xs = [torch.randn(3, 100) for _ in range(10)]
for xt in xs:
    h_l0, C_l0 = cell_l0(xt, (h_l0, C_l0))     # layer 0 consumes the raw input
    h_l1, C_l1 = cell_l1(h_l0, (h_l1, C_l1))   # layer 1 consumes layer 0's hidden state
print(h_l0.shape)   # torch.Size([3, 30])
print(C_l0.shape)   # torch.Size([3, 30])
print(h_l1.shape)   # torch.Size([3, 20])
print(C_l1.shape)   # torch.Size([3, 20])
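Note one thing this hand-rolled stack allows that a single nn.LSTM does not: each layer can have its own hidden_len (30 for layer 0, 20 for layer 1 here), since nn.LSTM applies one hidden_size to every layer.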
Reference: https://blog.csdn.net/SHU15121856/article/details/104448734