torchtext documentation: https://pytorch.org/text/stable/index.html torchtext GitHub repository: https://github.com/pytorch/text
torchtext 0.9 was a major API overhaul, and 0.12 was another one.
The official tutorial sets a rather high bar, though, so this post follows the documentation and walks through a few small examples to lower the barrier to entry:
Official tutorial: SST-2 Binary Text Classification with XLM-RoBERTa Model
Note: if you have worked with the classes in the transformers library, be aware that torchtext only packages a handful of very basic operations into single functions. There is no ready-made, freely usable token <-> index or token <-> embedding conversion; we have to do those conversions by hand in our own code.
Tutorial:
1. Word to token
First, build a custom vocab:
from torchtext.vocab import vocab
from collections import Counter, OrderedDict
sentence = "Natural language processing strives to build machines that understand and respond to text or voice data and respond with text or speech of their own in much the same way humans do."
sentences_list = sentence.split(" ")
counter = Counter(sentences_list)
sorted_by_freq_tuples = sorted(counter.items(), key=lambda x: x[1], reverse=True)
ordered_dict = OrderedDict(sorted_by_freq_tuples)
my_vocab = vocab(ordered_dict, specials=["<UNK>", "<SEP>"])
Then use:
print("token for the word 'and':", my_vocab['and'])
which prints the token for 'and': 3.
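For intuition, here is a plain-Python sketch (not the torchtext implementation) of how vocab() assigns indices: the specials come first by default, then the words in the OrderedDict's order, which here means descending frequency. It reproduces the index 3 for "and":

```python
from collections import Counter, OrderedDict

sentence = ("Natural language processing strives to build machines that "
            "understand and respond to text or voice data and respond with "
            "text or speech of their own in much the same way humans do.")
words = sentence.split(" ")
counter = Counter(words)
ordered = OrderedDict(sorted(counter.items(), key=lambda x: x[1], reverse=True))

# Specials first, then the corpus words in the OrderedDict's order.
specials = ["<UNK>", "<SEP>"]
stoi = {tok: i for i, tok in enumerate(specials)}
for word in ordered:
    stoi[word] = len(stoi)

print(stoi["and"])  # 3, the same index torchtext's vocab() assigns
```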
2. Mapping between tokens and words
The word-to-token mapping can be printed via:
print("word->token:", my_vocab.get_stoi())
And the reverse, token -> word:
print("token->word:", my_vocab.get_itos())
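The two mappings are inverses of each other: get_stoi() returns a word -> index dict, while get_itos() returns a list whose i-th entry is the word with index i. A tiny stand-alone sketch with a hand-built mini vocab shows the round trip:

```python
# Hand-built stand-in for my_vocab.get_stoi().
stoi = {"<UNK>": 0, "<SEP>": 1, "to": 2, "and": 3}

# Build the itos list by placing each word at its index.
itos = [None] * len(stoi)
for word, idx in stoi.items():
    itos[idx] = word

# Round trip: word -> index -> word gives the word back.
assert all(itos[stoi[w]] == w for w in stoi)
print(itos)  # ['<UNK>', '<SEP>', 'to', 'and']
```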
3. Sentence -> tokens
This step introduces a new class, VocabTransform. Note that the batch below contains words that are not in the vocab ("second", "wa", "ka"), so the vocab needs a default index first; without it, the lookup raises a RuntimeError:
from torchtext.transforms import VocabTransform
my_vocab.set_default_index(-1)  # out-of-vocabulary words map to -1
vocab_transform = VocabTransform(my_vocab)
trans_token = vocab_transform([
    ["language", "processing"],
    ["second", "understand", "and", "respond", "to", "text"],
    ["wa", "ka", "ka", "ka", "ka", "ka"]])
print("converted tokens:", trans_token)
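Conceptually, VocabTransform is just a per-token dictionary lookup that falls back to the vocab's default index for unknown words. A minimal plain-Python equivalent, using a hand-built mini vocab rather than torchtext itself:

```python
def vocab_transform_sketch(batch, stoi, default_index=-1):
    # For each sentence (a list of word strings), look every word up in the
    # vocab; unknown words fall back to the default index.
    return [[stoi.get(word, default_index) for word in sent] for sent in batch]

stoi = {"<UNK>": 0, "<SEP>": 1, "to": 2, "and": 3, "respond": 4, "text": 5}
batch = [["and", "respond", "to", "text"], ["wa", "ka"]]
print(vocab_transform_sketch(batch, stoi))  # [[3, 4, 2, 5], [-1, -1]]
```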
4. Truncating sentences to a fixed length
Again we bring in a new class, Truncate:
from torchtext.transforms import Truncate
truncate = Truncate(max_seq_len=3)
truncate_token = truncate(trans_token)
print("tokens after truncation (max length 3):", truncate_token)
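Truncate itself does nothing more than slicing: it keeps at most max_seq_len tokens per sequence and passes shorter sequences through unchanged (no padding is added). A plain-Python sketch with some example token lists:

```python
def truncate_sketch(batch, max_seq_len):
    # Keep at most max_seq_len tokens per sequence; shorter sequences
    # pass through unchanged.
    return [seq[:max_seq_len] for seq in batch]

batch = [[8, 9], [-1, 14, 3, 4, 2, 5], [-1, -1, -1, -1, -1, -1]]
print(truncate_sketch(batch, 3))  # [[8, 9], [-1, 14, 3], [-1, -1, -1]]
```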
5. Batch-editing the tokens of every sentence
With AddToken you can prepend or append a chosen token to every sentence:
from torchtext.transforms import AddToken
begin_token = AddToken(token=1000, begin=True)
end_token = AddToken(token=-10000, begin=False)
add_end_token = end_token(begin_token(truncate_token))
print("result after AddToken (special tokens added at the beginning and end):", add_end_token)
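In plain Python, the AddToken step is a single list concatenation per sentence. A minimal sketch of both the begin=True and begin=False cases, chained the same way as above:

```python
def add_token_sketch(batch, token, begin):
    # Prepend (begin=True) or append (begin=False) one token to every
    # sequence in the batch.
    return [([token] + seq) if begin else (seq + [token]) for seq in batch]

batch = [[8, 9], [-1, 14, 3]]
with_bos = add_token_sketch(batch, 1000, begin=True)
with_eos = add_token_sketch(with_bos, -10000, begin=False)
print(with_eos)  # [[1000, 8, 9, -10000], [1000, -1, 14, 3, -10000]]
```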
Complete example code
from torchtext.vocab import vocab
from collections import Counter, OrderedDict
sentence = "Natural language processing strives to build machines that understand and respond to text or voice data and respond with text or speech of their own in much the same way humans do."
sentences_list = sentence.split(" ")
counter = Counter(sentences_list)
sorted_by_freq_tuples = sorted(counter.items(), key=lambda x: x[1], reverse=True)
ordered_dict = OrderedDict(sorted_by_freq_tuples)
my_vocab = vocab(ordered_dict, specials=["<UNK>", "<SEP>"])
my_vocab.set_default_index(-1)
print("token for the word 'and':", my_vocab['and'])
print("token for the word 'apple' (not in the corpus):", my_vocab['apple'])
from torchtext.transforms import VocabTransform
vocab_transform = VocabTransform(my_vocab)
trans_token = vocab_transform([
    ["language", "processing"],
    ["second", "understand", "and", "respond", "to", "text"],
    ["wa", "ka", "ka", "ka", "ka", "ka"]])
print("converted tokens:", trans_token)
print("token->word:", my_vocab.get_itos())
print("word->token:", my_vocab.get_stoi())
from torchtext.transforms import Truncate
truncate = Truncate(max_seq_len=3)
truncate_token = truncate(trans_token)
print("tokens after truncation (max length 3):", truncate_token)
from torchtext.transforms import AddToken
begin_token = AddToken(token=1000, begin=True)
end_token = AddToken(token=-10000, begin=False)
add_end_token = end_token(begin_token(truncate_token))
print("result after AddToken (special tokens added at the beginning and end):", add_end_token)
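One step usually follows all of the above: after truncation the sequences still have different lengths, so they need padding before they can become a single rectangular tensor. torchtext's ToTensor transform (which accepts a padding_value) covers this; the sketch below shows only the padding logic in plain Python (the pad value 0 is an arbitrary choice for illustration):

```python
def pad_sketch(batch, pad_value=0):
    # Right-pad every sequence to the length of the longest one, so the
    # batch becomes rectangular and can be turned into a tensor.
    max_len = max(len(seq) for seq in batch)
    return [seq + [pad_value] * (max_len - len(seq)) for seq in batch]

print(pad_sketch([[1000, 8, 9, -10000], [1000, -1, -10000]]))
# [[1000, 8, 9, -10000], [1000, -1, -10000, 0]]
```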