PyTorch Softmax Usage
In PyTorch, softmax mainly lives in two places: torch.nn.Softmax(dim=None) and torch.nn.functional.softmax(input, dim=None, _stacklevel=3, dtype=None). Their usage is described below.

torch.nn.Softmax
torch.nn.Softmax takes only one parameter, which specifies the dimension to normalize over. For a 2D tensor, dim=0 normalizes along dimension 0 (down each column, so each column sums to 1), while dim=1 normalizes along dimension 1 (across each row, so each row sums to 1).
import torch
import torch.nn as nn

input_0 = torch.Tensor([1, 2, 3, 4])                   # 1D tensor
input_1 = torch.Tensor([[1, 2, 3, 4], [5, 6, 7, 8]])   # 2D tensor

softmax_0 = nn.Softmax(dim=0)  # normalize along dimension 0
softmax_1 = nn.Softmax(dim=1)  # normalize along dimension 1

output_0 = softmax_0(input_0)  # 1D: normalize over the only dimension
output_1 = softmax_1(input_1)  # 2D: each row sums to 1
output_2 = softmax_0(input_1)  # 2D: each column sums to 1

print(output_0)
print(output_1)
print(output_2)
Output:
tensor([0.0321, 0.0871, 0.2369, 0.6439])
tensor([[0.0321, 0.0871, 0.2369, 0.6439],
        [0.0321, 0.0871, 0.2369, 0.6439]])
tensor([[0.0180, 0.0180, 0.0180, 0.0180],
        [0.9820, 0.9820, 0.9820, 0.9820]])
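To confirm which dimension was normalized, you can sum each output along the dimension that was passed to Softmax; every sum should come out to 1. A minimal check, reusing the variables from the example above:

print(output_1.sum(dim=1))  # each row of output_1 sums to 1 -> tensor([1., 1.])
print(output_2.sum(dim=0))  # each column of output_2 sums to 1 -> tensor([1., 1., 1., 1.])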
torch.nn.functional.softmax
Unlike torch.nn.Softmax above, torch.nn.functional.softmax is called directly as a function rather than instantiated as a module, so it takes one extra parameter (input: the tensor to normalize).
import torch
import torch.nn.functional as F

input_0 = torch.Tensor([1, 2, 3, 4])
input_1 = torch.Tensor([[1, 2, 3, 4], [5, 6, 7, 8]])

output_0 = F.softmax(input_0, dim=0)  # pass dim explicitly; omitting it triggers a deprecation warning
output_1 = F.softmax(input_1, dim=0)  # each column sums to 1
output_2 = F.softmax(input_1, dim=1)  # each row sums to 1

print(output_0)
print(output_1)
print(output_2)
Output:
tensor([0.0321, 0.0871, 0.2369, 0.6439])
tensor([[0.0180, 0.0180, 0.0180, 0.0180],
        [0.9820, 0.9820, 0.9820, 0.9820]])
tensor([[0.0321, 0.0871, 0.2369, 0.6439],
        [0.0321, 0.0871, 0.2369, 0.6439]])
log_softmax is used in exactly the same way as softmax, but its output differs: it returns the logarithm of the softmax. For inputs with large values, log_softmax is numerically more stable and helps prevent overflow and underflow.
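As a quick illustration (the exact printed values assume the default float32 dtype): for widely spaced inputs, taking the log of softmax can underflow to -inf, while log_softmax computes the same quantity in one stable step.

import torch
import torch.nn.functional as F

x = torch.Tensor([0., 200.])

naive = torch.log(F.softmax(x, dim=0))  # softmax underflows to 0, so the log gives -inf
stable = F.log_softmax(x, dim=0)        # computed stably in one step

print(naive)   # tensor([-inf, 0.])
print(stable)  # tensor([-200., 0.])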