一、Official site
huggingface.co
二、Downloading models
Install the transformers package in your environment
conda install -n <your_conda_env_name> transformers
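To check that the install worked (a quick sanity check added here, not part of the original note):
import transformers
print(transformers.__version__)   # should print the installed version without errors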
Automatic download: the string in quotes is the model name
from transformers import BertTokenizer, BertModel
model = BertModel.from_pretrained('bert-base-chinese', output_hidden_states=True)
tokenizer = BertTokenizer.from_pretrained('bert-base-chinese')
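A quick sketch of how the downloaded model and tokenizer work together (the sample sentence and variable names are illustrative): encode a sentence and read the hidden states that output_hidden_states=True makes available.
import torch

inputs = tokenizer("今天天气很好", return_tensors="pt")   # tokenize one sample sentence
with torch.no_grad():
    outputs = model(**inputs)
hidden_states = outputs.hidden_states                     # tuple: embedding layer + one tensor per encoder layer
print(len(hidden_states), hidden_states[-1].shape)        # 13 tensors for bert-base, each (1, seq_len, 768)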
Default location of automatically downloaded models
/home/<username>/.cache/huggingface/transformers
Manual download: search for the model name in the search bar at the top of the site, open Files and Versions next to the Model card tab, download the files, then pass the local directory where the model was saved
model = BertModel.from_pretrained('./model', output_hidden_states=True)
tokenizer = BertTokenizer.from_pretrained('./model/vocab.txt')
Note: BertModel.from_pretrained takes the path of the model folder,
while BertTokenizer.from_pretrained here takes vocab.txt, not tokenizer.json.
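If you prefer to script the manual download instead of clicking through Files and Versions, huggingface_hub's snapshot_download can fetch the whole repository (a sketch assuming huggingface_hub is installed; variable names are illustrative):
from huggingface_hub import snapshot_download
from transformers import BertModel, BertTokenizer

local_path = snapshot_download(repo_id="bert-base-chinese")   # downloads all repo files, returns the local folder
model = BertModel.from_pretrained(local_path, output_hidden_states=True)
tokenizer = BertTokenizer.from_pretrained(local_path)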
Speeding up the download
model = BertModel.from_pretrained('bert-base-chinese', mirror='tuna')
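The mirror argument is only accepted by older transformers releases. As an alternative suggestion (not from the original note), huggingface_hub honors an HF_ENDPOINT environment variable pointing at a mirror, set before transformers is imported; the mirror URL below is just an example:
import os
os.environ["HF_ENDPOINT"] = "https://hf-mirror.com"   # must be set before importing transformers
from transformers import BertModel
model = BertModel.from_pretrained("bert-base-chinese")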
三、Pipelines (pipeline)
Using a model directly
from transformers import pipeline
classifier = pipeline("sentiment-analysis")
classifier("We are very happy to show you the Transformers library.")
'''Returns a list containing a single dict with keys label and score'''
'''Multiple inputs can be passed as a list'''
results = classifier(["We are very happy to show you the Transformers library.", "We hope you don't hate it."])
'''Returns a list of dicts, one per input'''
for result in results:
print(f"label: {result['label']}, with score: {round(result['score'], 4)}")
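For reference, with the default English sentiment model the loop prints one line per input, roughly like the following (exact labels and scores vary with the model version):
label: POSITIVE, with score: 0.9998
label: NEGATIVE, with score: 0.5309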
Loading inputs from a dataset
pip install datasets
'''Specify both the task and the model (speech recognition here); if only the task is given, a default model for that task is used'''
speech_recognizer = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-base-960h", device=0)
# Load an audio dataset to feed the pipeline (the "superb" ASR validation split is an example choice)
from datasets import load_dataset
dataset = load_dataset("superb", name="asr", split="validation")
files = dataset["file"]
speech_recognizer(files[:4])
四、Tokenizers (tokenizer)
'''Use a specific pretrained model and tokenizer inside a pipeline'''
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline
model_name = "nlptown/bert-base-multilingual-uncased-sentiment"
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
classifier = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer)
'''Classify a French example sentence'''
classifier("Nous sommes très heureux de vous présenter la bibliothèque Transformers.")
五、AutoClass
'''Automatically retrieves a model's architecture from the pretrained model's name or path;
works together with AutoTokenizer'''
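A small sketch of what "automatically retrieves the architecture" means in practice (the model name is just an example): loading bert-base-chinese through AutoModel returns a BertModel instance without naming that class yourself.
from transformers import AutoModel

model = AutoModel.from_pretrained("bert-base-chinese")
print(type(model).__name__)   # -> BertModel, inferred from the model's config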
六、AutoTokenizer
Splits text into tokens the model can understand
from transformers import AutoTokenizer
model_name = "nlptown/bert-base-multilingual-uncased-sentiment"
tokenizer = AutoTokenizer.from_pretrained(model_name)
encoding = tokenizer("We are very happy to show you the 🤗 Transformers library.")
print(encoding)
{'input_ids': [101, 11312, 10320, 12495, 19308, 10114, 11391, 10855, 10103, 100, 58263, 13299, 119, 102],
'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}
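The tokenizer also handles batches; with padding, truncation and return_tensors it produces model-ready tensors (a sketch assuming PyTorch is installed; the sentences reuse the earlier examples):
pt_batch = tokenizer(
    ["We are very happy to show you the 🤗 Transformers library.", "We hope you don't hate it."],
    padding=True,          # pad shorter sentences to the longest in the batch
    truncation=True,       # cut sentences longer than max_length
    max_length=512,
    return_tensors="pt",   # return PyTorch tensors
)
print(pt_batch["input_ids"].shape)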