Question: What are logits, softmax and softmax_cross_entropy_with_logits?

I was going through the tensorflow API docs here. In the tensorflow documentation, they used a keyword called logits. What is it? In a lot of methods in the API docs it is written like

tf.nn.softmax(logits, name=None)

If those logits are just Tensors, why keep a different name like logits?

Another thing is that there are two methods I could not differentiate. They are:

tf.nn.softmax(logits, name=None)
tf.nn.softmax_cross_entropy_with_logits(logits, labels, name=None)

What are the differences between them? The docs are not clear to me. I know what tf.nn.softmax does, but not the other. An example would be really helpful.


Answer 0

Logits simply means that the function operates on the unscaled output of earlier layers and that the relative scale to understand the units is linear. It means, in particular, the sum of the inputs may not equal 1, that the values are not probabilities (you might have an input of 5).

tf.nn.softmax produces just the result of applying the softmax function to an input tensor. The softmax “squishes” the inputs so that sum(input) = 1: it’s a way of normalizing. The shape of output of a softmax is the same as the input: it just normalizes the values. The outputs of softmax can be interpreted as probabilities.

import tensorflow as tf
import numpy as np
s = tf.Session()  # TF 1.x-style graph session
a = tf.constant(np.array([[.1, .3, .5, .9]]))
print(s.run(tf.nn.softmax(a)))
# [[ 0.16838508  0.205666    0.25120102  0.37474789]]

In contrast, tf.nn.softmax_cross_entropy_with_logits computes the cross entropy of the result after applying the softmax function (but it does it all together in a more mathematically careful way). It’s similar to the result of:

sm = tf.nn.softmax(x)
ce = cross_entropy(sm)
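
Here cross_entropy is a stand-in, not a real TensorFlow op. Assuming one-hot labels, a rough sketch of that two-step version next to the fused op would look like this:

sm = tf.nn.softmax(logits)
ce = -tf.reduce_sum(labels * tf.log(sm), reduction_indices=[1])   # naive, numerically fragile

ce_fused = tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=logits)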

The cross entropy is a summary metric: it sums across the elements. The output of tf.nn.softmax_cross_entropy_with_logits on a shape [2,5] tensor is of shape [2]: one cross-entropy value per row (the first dimension is treated as the batch).

If you want to do optimization to minimize the cross entropy AND you’re softmaxing after your last layer, you should use tf.nn.softmax_cross_entropy_with_logits instead of doing it yourself, because it covers numerically unstable corner cases in the mathematically right way. Otherwise, you’ll end up hacking it by adding little epsilons here and there.

Edited 2016-02-07: If you have single-class labels, where an object can only belong to one class, you might now consider using tf.nn.sparse_softmax_cross_entropy_with_logits so that you don’t have to convert your labels to a dense one-hot array. This function was added after release 0.6.0.
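
As a rough sketch of that sparse variant (with made-up logits, and integer class indices instead of one-hot rows):

logits = tf.constant([[2.0, 0.5, -1.0],
                      [0.1, 0.3, 2.2]])
labels = tf.constant([0, 2])   # one class index per example, no one-hot needed
loss = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits)
# loss has shape [2]: one cross-entropy value per example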


Answer 1

Short version:

Suppose you have two tensors, where y_hat contains computed scores for each class (for example, from y = W*x + b) and y_true contains one-hot encoded true labels.

y_hat  = ... # Predicted label, e.g. y = tf.matmul(X, W) + b
y_true = ... # True label, one-hot encoded

If you interpret the scores in y_hat as unnormalized log probabilities, then they are logits.

Additionally, the total cross-entropy loss computed in this manner:

y_hat_softmax = tf.nn.softmax(y_hat)
total_loss = tf.reduce_mean(-tf.reduce_sum(y_true * tf.log(y_hat_softmax), [1]))

is essentially equivalent to the total cross-entropy loss computed with the function softmax_cross_entropy_with_logits():

total_loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y_true, logits=y_hat))

Long version:

In the output layer of your neural network, you will probably compute an array that contains the class scores for each of your training instances, such as from a computation y_hat = W*x + b. To serve as an example, below I’ve created a y_hat as a 2 x 3 array, where the rows correspond to the training instances and the columns correspond to classes. So here there are 2 training instances and 3 classes.

import tensorflow as tf
import numpy as np

sess = tf.Session()

# Create example y_hat.
y_hat = tf.convert_to_tensor(np.array([[0.5, 1.5, 0.1],[2.2, 1.3, 1.7]]))
sess.run(y_hat)
# array([[ 0.5,  1.5,  0.1],
#        [ 2.2,  1.3,  1.7]])

Note that the values are not normalized (i.e. the rows don’t add up to 1). In order to normalize them, we can apply the softmax function, which interprets the input as unnormalized log probabilities (aka logits) and outputs normalized linear probabilities.

y_hat_softmax = tf.nn.softmax(y_hat)
sess.run(y_hat_softmax)
# array([[ 0.227863  ,  0.61939586,  0.15274114],
#        [ 0.49674623,  0.20196195,  0.30129182]])
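
As a sanity check, the same numbers can be reproduced directly from the softmax definition (exponentiate each score, then normalize each row); this sketch reuses the session above:

exp_scores = tf.exp(y_hat)
manual_softmax = exp_scores / tf.reduce_sum(exp_scores, reduction_indices=[1], keep_dims=True)
sess.run(manual_softmax)
# matches y_hat_softmax above, up to floating-point rounding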

It’s important to fully understand what the softmax output is saying. Below I’ve shown a table that more clearly represents the output above. It can be seen that, for example, the probability of training instance 1 being “Class 2” is 0.619. The class probabilities for each training instance are normalized, so the sum of each row is 1.0.

                      Pr(Class 1)  Pr(Class 2)  Pr(Class 3)
                    ,--------------------------------------
Training instance 1 | 0.227863   | 0.61939586 | 0.15274114
Training instance 2 | 0.49674623 | 0.20196195 | 0.30129182

So now we have class probabilities for each training instance, and we can take the argmax() of each row to generate a final classification. From the table above, training instance 1 would be classified as “Class 2” and training instance 2 as “Class 1”.
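
A sketch of that argmax step, continuing with the same session (classes are 0-indexed internally):

predictions = tf.argmax(y_hat_softmax, 1)
sess.run(predictions)
# array([1, 0])  -> "Class 2" for instance 1, "Class 1" for instance 2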

Are these classifications correct? We need to measure against the true labels from the training set. You will need a one-hot encoded y_true array, where again the rows are training instances and columns are classes. Below I’ve created an example y_true one-hot array where the true label for training instance 1 is “Class 2” and the true label for training instance 2 is “Class 3”.

y_true = tf.convert_to_tensor(np.array([[0.0, 1.0, 0.0],[0.0, 0.0, 1.0]]))
sess.run(y_true)
# array([[ 0.,  1.,  0.],
#        [ 0.,  0.,  1.]])

Is the probability distribution in y_hat_softmax close to the probability distribution in y_true? We can use cross-entropy loss to measure the error.

Formula for cross-entropy loss (for training instance i, summing over classes j):

loss_i = -sum_j( y_true[i, j] * log(y_hat_softmax[i, j]) )

We can compute the cross-entropy loss on a row-wise basis and see the results. Below we can see that training instance 1 has a loss of 0.479, while training instance 2 has a higher loss of 1.200. This result makes sense because in our example above, y_hat_softmax showed that training instance 1’s highest probability was for “Class 2”, which matches training instance 1 in y_true; however, the prediction for training instance 2 showed a highest probability for “Class 1”, which does not match the true class “Class 3”.

loss_per_instance_1 = -tf.reduce_sum(y_true * tf.log(y_hat_softmax), reduction_indices=[1])
sess.run(loss_per_instance_1)
# array([ 0.4790107 ,  1.19967598])

What we really want is the total loss over all the training instances. So we can compute:

total_loss_1 = tf.reduce_mean(-tf.reduce_sum(y_true * tf.log(y_hat_softmax), reduction_indices=[1]))
sess.run(total_loss_1)
# 0.83934333897877944

Using softmax_cross_entropy_with_logits()

We can instead compute the total cross entropy loss using the tf.nn.softmax_cross_entropy_with_logits() function, as shown below.

loss_per_instance_2 = tf.nn.softmax_cross_entropy_with_logits(labels=y_true, logits=y_hat)
sess.run(loss_per_instance_2)
# array([ 0.4790107 ,  1.19967598])

total_loss_2 = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y_true, logits=y_hat))
sess.run(total_loss_2)
# 0.83934333897877922

Note that total_loss_1 and total_loss_2 produce essentially equivalent results with some small differences in the very final digits. However, you might as well use the second approach: it takes one less line of code and accumulates less numerical error because the softmax is done for you inside of softmax_cross_entropy_with_logits().


Answer 2

tf.nn.softmax computes the forward propagation through a softmax layer. You use it during evaluation of the model when you compute the probabilities that the model outputs.

tf.nn.softmax_cross_entropy_with_logits computes the cost for a softmax layer. It is only used during training.

The logits are the unnormalized log probabilities output by the model (the values output before the softmax normalization is applied to them).
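
To make that split concrete, here is a minimal sketch; the model, x, and y_true names are placeholders rather than anything from the original answer:

logits = model(x)   # raw, unnormalized scores from the last layer

# Training: the fused op gives the cost to minimize
loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=y_true, logits=logits))

# Evaluation: explicit softmax turns logits into class probabilities
probs = tf.nn.softmax(logits)
predictions = tf.argmax(probs, 1)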


Answer 3

The answers above have enough description for the question asked.

Adding to that, TensorFlow has optimised the operation of applying the activation function and then calculating the cost into a single fused op. Hence it is good practice to use tf.nn.softmax_cross_entropy_with_logits() rather than applying tf.nn.softmax() and then computing the cross entropy yourself.

You can find a prominent difference between them in a resource-intensive model.


Answer 4

Whatever goes into the softmax is a logit; this is what Geoffrey Hinton repeats in the Coursera videos all the time.


Answer 5

TensorFlow 2.0-compatible answer: the explanations of dga and stackoverflowuser2010 cover logits and the related functions in detail.

All of those functions work fine when used in TensorFlow 1.x, but if you migrate your code from 1.x (1.14, 1.15, etc.) to 2.x (2.0, 2.1, etc.), using them in the old way results in errors.

Hence, for the benefit of the community, here are the 2.0-compatible calls for all the functions discussed above, for anyone migrating from 1.x to 2.x.

Functions in 1.x:

  1. tf.nn.softmax
  2. tf.nn.softmax_cross_entropy_with_logits
  3. tf.nn.sparse_softmax_cross_entropy_with_logits

Corresponding functions when migrating from 1.x to 2.x:

  1. tf.compat.v2.nn.softmax
  2. tf.compat.v2.nn.softmax_cross_entropy_with_logits
  3. tf.compat.v2.nn.sparse_softmax_cross_entropy_with_logits

For more information about migration from 1.x to 2.x, please refer to this Migration Guide.
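
A minimal sketch of using those calls under TensorFlow 2.x (eager execution; the example tensors are made up for illustration, and the tf.compat.v2.nn.* names resolve to the same ops that live under tf.nn in a 2.x install):

import tensorflow as tf  # 2.x

logits = tf.constant([[2.0, 0.5, -1.0],
                      [0.1, 0.3, 2.2]])
labels_onehot = tf.constant([[1.0, 0.0, 0.0],
                             [0.0, 0.0, 1.0]])
labels_sparse = tf.constant([0, 2])   # the same labels as class indices

probs       = tf.compat.v2.nn.softmax(logits)
loss_dense  = tf.compat.v2.nn.softmax_cross_entropy_with_logits(labels=labels_onehot, logits=logits)
loss_sparse = tf.compat.v2.nn.sparse_softmax_cross_entropy_with_logits(labels=labels_sparse, logits=logits)
# loss_dense and loss_sparse each hold one cross-entropy value per example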


Answer 6

One more thing I would definitely like to highlight: a logit is just a raw output, generally the output of the last layer, and it can be a negative value as well. If we use it as-is for the “cross entropy” evaluation mentioned below:

-tf.reduce_sum(y_true * tf.log(logits))

then it won’t work, because the log of a negative value is not defined. Applying the softmax activation first overcomes this problem.
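
A small numpy sketch of that failure mode (the numbers are made up; the point is only that the log of a negative logit is NaN, whereas the log of a softmax output is well defined):

import numpy as np

logits = np.array([[2.0, -1.0, 0.5]])   # raw scores straight from the last layer
y_true = np.array([[0.0, 1.0, 0.0]])

naive = -np.sum(y_true * np.log(logits), axis=1)   # log(-1.0) is nan, so this breaks
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
ce = -np.sum(y_true * np.log(probs), axis=1)       # softmax outputs lie in (0, 1), so this is finite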

This is my understanding; please correct me if I’m wrong.

