Notes on generating names with a character-level RNN
NLP FROM SCRATCH: GENERATING NAMES WITH A CHARACTER-LEVEL RNN is a nice introductory RNN tutorial. To make it easier to follow, this post records the parts that tend to cause confusion. The tutorial walks you through training an RNN model from scratch that, given a country category and the first letter of a name, generates a name in the style of that country, e.g.:
python sample.py Russian G
python sample.py Chinese C
python sample.py Chinese H
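These commands imply a thin CLI wrapper around the model. The following sample.py is only a hypothetical sketch, not part of the tutorial; it assumes the trained rnn and the sample() function defined later in this post are importable from a module named model:

import sys
from model import sample  # hypothetical module exposing the trained rnn and sample()

if __name__ == '__main__':
    category, start_letter = sys.argv[1], sys.argv[2]  # e.g. Chinese H
    print(sample(category, start_letter))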
Network architecture: the tutorial's diagram shows the category one-hot vector, the current input letter, and the hidden state concatenated and passed through the i2h/i2o linear layers, then o2o, dropout, and LogSoftmax. It corresponds one-to-one to the RNN class defined below.
Dataset structure
The training data is laid out as follows:
Archive: data.zip
inflating: data/eng-fra.txt
inflating: data/names/Arabic.txt
inflating: data/names/Chinese.txt
inflating: data/names/Czech.txt
inflating: data/names/Dutch.txt
inflating: data/names/English.txt
inflating: data/names/French.txt
inflating: data/names/German.txt
inflating: data/names/Greek.txt
inflating: data/names/Irish.txt
inflating: data/names/Italian.txt
inflating: data/names/Japanese.txt
inflating: data/names/Korean.txt
inflating: data/names/Polish.txt
inflating: data/names/Portuguese.txt
inflating: data/names/Russian.txt
inflating: data/names/Scottish.txt
inflating: data/names/Spanish.txt
inflating: data/names/Vietnamese.txt
Each .txt file stores names from the corresponding country, 18 country categories in total. Let's look at what Chinese.txt contains (a sketch of how these files are read in follows the listing):
! cat data/names/Chinese.txt
Chin
Chong
Chou
Chu
Cui
Dai
Deng
Ding
Dong
Dou
Duan
Eng
Fan
Fei
...
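For reference, the tutorial reads these files roughly as follows: each name is normalized to ASCII, and each file becomes one entry in a category-to-names dictionary (this sketch follows the tutorial's data-loading code):

import glob
import os
import string
import unicodedata

all_letters = string.ascii_letters + " .,;'-"
n_letters = len(all_letters) + 1  # +1 for the EOS marker

# Strip accents and drop any character outside the allowed alphabet
def unicodeToAscii(s):
    return ''.join(
        c for c in unicodedata.normalize('NFD', s)
        if unicodedata.category(c) != 'Mn' and c in all_letters
    )

# Read one file and return a list of normalized names
def readLines(filename):
    with open(filename, encoding='utf-8') as f:
        return [unicodeToAscii(line.strip()) for line in f]

category_lines = {}   # e.g. {'Chinese': ['Chin', 'Chong', ...], ...}
all_categories = []   # e.g. ['Arabic', 'Chinese', ...]
for filename in glob.glob('data/names/*.txt'):
    category = os.path.splitext(os.path.basename(filename))[0]
    all_categories.append(category)
    category_lines[category] = readLines(filename)

n_categories = len(all_categories)  # 18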
After loading the names, each ASCII character needs to be converted into a one-hot encoded vector over this 59-symbol alphabet.
import torch
import torch.nn as nn

class RNN(nn.Module):
    def __init__(self, input_size, hidden_size, output_size):
        super(RNN, self).__init__()
        self.hidden_size = hidden_size
        # n_categories (= 18) comes from the data-loading step above;
        # both layers see the concatenation of category, input letter and hidden state
        self.i2h = nn.Linear(n_categories + input_size + hidden_size, hidden_size)
        self.i2o = nn.Linear(n_categories + input_size + hidden_size, output_size)
        # A second output layer combining the new hidden state with the first output
        self.o2o = nn.Linear(hidden_size + output_size, output_size)
        self.dropout = nn.Dropout(0.1)
        self.softmax = nn.LogSoftmax(dim=1)

    def forward(self, category, input, hidden):
        input_combined = torch.cat((category, input, hidden), 1)
        hidden = self.i2h(input_combined)
        output = self.i2o(input_combined)
        output_combined = torch.cat((hidden, output), 1)
        output = self.o2o(output_combined)
        output = self.dropout(output)  # stays active at sampling time, see the note below
        output = self.softmax(output)  # log-probabilities over the n_letters characters
        return output, hidden

    def initHidden(self):
        return torch.zeros(1, self.hidden_size)
Recall from the loading step that all_letters = string.ascii_letters + " .,;'-" and n_letters = len(all_letters) + 1 = 59; the extra slot, index 58, is reserved for the EOS marker that signals the end of a name.
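To make the tensor shapes concrete, here is a quick smoke test, assuming the data-loading step above has run (hidden size 128 matches the tutorial; the one-hot vectors are built by hand just for illustration). This also creates the global rnn instance that the sample() and train() functions below refer to:

rnn = RNN(n_letters, 128, n_letters)

category = torch.zeros(1, n_categories)
category[0][all_categories.index('Chinese')] = 1  # one-hot category
letter = torch.zeros(1, n_letters)
letter[0][all_letters.find('L')] = 1              # one-hot first letter
hidden = rnn.initHidden()

output, hidden = rnn(category, letter, hidden)
print(output.shape, hidden.shape)  # torch.Size([1, 59]) torch.Size([1, 128])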
Inference
Let's first look at how inference works. The RNN takes a category and a starting letter as input: the category is converted into a one-hot vector over the 18 country categories (categoryTensor), and start_letter is converted into a one-hot vector over the 59-symbol alphabet (inputTensor); both helpers are shown below. At each step, the character the RNN predicts becomes the input of the next iteration. One thing to note: normally we would call model.eval() before inference to remove the randomness of dropout/batch norm, but the sample function below deliberately does not. The point is to let the dropout inside the model inject some randomness, so the same starting letter can produce different names.
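The two conversion helpers look roughly like this in the tutorial:

# One-hot vector over the 18 categories, shape (1, n_categories)
def categoryTensor(category):
    tensor = torch.zeros(1, n_categories)
    tensor[0][all_categories.index(category)] = 1
    return tensor

# One-hot encoding of a string, shape (len(line), 1, n_letters)
def inputTensor(line):
    tensor = torch.zeros(len(line), 1, n_letters)
    for li in range(len(line)):
        tensor[li][0][all_letters.find(line[li])] = 1
    return tensor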
max_length = 20  # cap on the length of a generated name

def sample(category, start_letter='A'):
    with torch.no_grad():  # no gradients needed; note we deliberately do NOT call rnn.eval()
        category_tensor = categoryTensor(category)
        input = inputTensor(start_letter)
        hidden = rnn.initHidden()
        output_name = start_letter
        for i in range(max_length):
            output, hidden = rnn(category_tensor, input[0], hidden)
            # Greedy decoding: take the most likely next character
            topv, topi = output.topk(1)
            topi = topi[0][0]
            if topi == n_letters - 1:  # EOS: the name is finished
                break
            else:
                letter = all_letters[topi]
                output_name += letter
                input = inputTensor(letter)  # the prediction becomes the next input
        return output_name
sample('Chinese', 'L')   # varies from run to run because dropout stays active
sample('Chinese', 'LW')  # note: only the first letter 'L' is fed to the RNN; 'W' merely stays in the output prefix
Training
What differs in training is the extra ground-truth label target_line_tensor. Take the name yang as an example: input_line_tensor is the one-hot encoding of yang, something like [[[0,0,...,1,0,0,...],[0,0,...,1,...],...]] with shape (4, 1, 59), which satisfies (seq_len, batch_size, n_letters). target_line_tensor holds, for every input character, the next character, i.e. the index list of ang<EOS>. Note that it is not one-hot encoded, because the loss used here is NLLLoss, which expects class indices rather than one-hot vectors. Printing target_line_tensor gives [0, 13, 6, 58] (a=0, n=13, g=6, EOS=58), shape (4,).
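The tutorial builds this target roughly like so:

# Indices of the next character for each input character, ending with EOS
def targetTensor(line):
    letter_indexes = [all_letters.find(line[li]) for li in range(1, len(line))]
    letter_indexes.append(n_letters - 1)  # EOS
    return torch.LongTensor(letter_indexes)

targetTensor('yang')  # tensor([ 0, 13,  6, 58])

With these tensors in hand, the simplified training code is: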
criterion = nn.NLLLoss()
learning_rate = 0.0005

def train(category_tensor, input_line_tensor, target_line_tensor):
    target_line_tensor.unsqueeze_(-1)  # (seq_len,) -> (seq_len, 1), so each step's target has shape (1,)
    hidden = rnn.initHidden()
    rnn.zero_grad()
    loss = 0
    # Feed the name one character at a time, accumulating the loss per step
    for i in range(input_line_tensor.size(0)):
        output, hidden = rnn(category_tensor, input_line_tensor[i], hidden)
        l = criterion(output, target_line_tensor[i])
        loss += l
    loss.backward()
    # Manual SGD step in place of an optimizer
    for p in rnn.parameters():
        p.data.add_(p.grad.data, alpha=-learning_rate)
    return output, loss.item() / input_line_tensor.size(0)
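train() consumes the tensors for a single (category, name) pair; the tutorial draws a random pair per iteration, roughly like this:

import random

def randomChoice(l):
    return l[random.randint(0, len(l) - 1)]

# A random (category, name) pair, converted to the three tensors train() expects
def randomTrainingExample():
    category = randomChoice(all_categories)
    line = randomChoice(category_lines[category])
    return categoryTensor(category), inputTensor(line), targetTensor(line)

for iter in range(1, 100001):  # the tutorial runs 100000 iterations
    output, loss = train(*randomTrainingExample())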