A Summary of PyTorch Loss Functions


PyTorch's loss functions

1.nn.L1Loss

Examples::
    >>> loss = nn.L1Loss(reduction='sum')
    >>> input = torch.tensor([1., 2, 3, 4])
    >>> target = torch.tensor([4., 5, 6, 7])
    >>> output = loss(input, target)
    >>> print(output)

The two inputs must have the same type. reduction is a parameter of the loss function with three possible values: 'none' returns a per-element tensor of shape (batch_size,); 'mean' (named 'elementwise_mean' in older PyTorch versions) returns the average; 'sum' returns the sum. With those three settings, the example above returns tensor([3., 3., 3., 3.]), tensor(3.), and tensor(12.) respectively.
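A minimal sketch (not from the original post) that loops over the three reduction modes on the same example, so the three outputs listed above can be reproduced in one run:

```python
import torch
import torch.nn as nn

input = torch.tensor([1., 2, 3, 4])
target = torch.tensor([4., 5, 6, 7])

# every per-element absolute difference here is 3
for reduction in ('none', 'mean', 'sum'):
    loss = nn.L1Loss(reduction=reduction)(input, target)
    print(reduction, loss)
# none -> tensor([3., 3., 3., 3.]), mean -> tensor(3.), sum -> tensor(12.)
```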

2.nn.SmoothL1Loss

SmoothL1Loss is a Huber-style loss: per element, loss(x_i, y_i) = 0.5(x_i − y_i)^2 when |x_i − y_i| < 1, and |x_i − y_i| − 0.5 otherwise. The quadratic branch keeps the loss smooth near zero, while the linear branch keeps the derivative bounded for large errors.

import torch
import torch.nn as nn
import torch.nn.functional as F

a = torch.tensor([1., 2, 3, 4])
b = torch.tensor([1.1, 5, 6, 7])
loss_fn = nn.SmoothL1Loss(reduction='none')
loss = loss_fn(a, b)
print(loss)
#out
tensor([0.0050, 2.5000, 2.5000, 2.5000])
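As a check (my own sketch, not from the original post), the per-element values above can be reproduced directly from the piecewise definition:

```python
import torch

a = torch.tensor([1., 2, 3, 4])
b = torch.tensor([1.1, 5, 6, 7])

# piecewise definition: 0.5*d^2 where |d| < 1, else |d| - 0.5
d = (a - b).abs()
manual = torch.where(d < 1, 0.5 * d ** 2, d - 0.5)
print(manual)
# tensor([0.0050, 2.5000, 2.5000, 2.5000]) -- same as nn.SmoothL1Loss(reduction='none')
```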


3.nn.MSELoss

loss(x_i, y_i) = (x_i − y_i)^2

The two inputs must have the same type:

a = torch.tensor([1., 2, 3, 4])
b = torch.tensor([4., 5, 6, 7])
loss_fn = nn.MSELoss(reduction='none')
loss = loss_fn(a, b)
print(loss)
#out
tensor([9., 9., 9., 9.])
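A short sketch of my own to tie the result back to the formula above: squaring the per-element differences by hand gives the same values as nn.MSELoss with reduction='none'.

```python
import torch
import torch.nn as nn

a = torch.tensor([1., 2, 3, 4])
b = torch.tensor([4., 5, 6, 7])

# elementwise (x_i - y_i)^2; every difference is -3, so every square is 9
manual = (a - b) ** 2
print(manual)                               # tensor([9., 9., 9., 9.])
print(nn.MSELoss(reduction='none')(a, b))   # same values
```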