Question: How do I add regularization in TensorFlow?
In many of the neural network implementations using TensorFlow that I have seen, regularization is usually implemented by manually adding an extra term to the loss value.
My questions are:
Is there a more elegant or recommended way of doing regularization than doing it manually?
I also find that get_variable has an argument regularizer. How should it be used? From what I observe, if we pass a regularizer to it (such as tf.contrib.layers.l2_regularizer), a tensor representing the regularization term will be computed and added to a graph collection named tf.GraphKeys.REGULARIZATION_LOSSES. Will that collection be used automatically by TensorFlow (e.g. by the optimizer during training), or am I expected to use that collection by myself?
Answer 0
As you say in your second point, using the regularizer argument is the recommended way. You can use it in get_variable, or set it once in your variable_scope and have all your variables regularized (a short sketch of the variable_scope variant follows the snippet below).
The losses are collected in the graph, and you need to manually add them to your cost function like this:
reg_losses = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)
reg_constant = 0.01 # Choose an appropriate one.
loss = my_normal_loss + reg_constant * sum(reg_losses)
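A minimal, hedged sketch of the variable_scope variant mentioned above (the scope name, shapes, and scale are illustrative, not from the original answer):
# Every variable created with tf.get_variable inside this scope picks up the
# scope's regularizer and contributes a term to tf.GraphKeys.REGULARIZATION_LOSSES.
with tf.variable_scope('my_scope', regularizer=tf.contrib.layers.l2_regularizer(scale=0.01)):
    w1 = tf.get_variable('w1', shape=[784, 200])
    w2 = tf.get_variable('w2', shape=[200, 10])
reg_losses = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)
loss = my_normal_loss + tf.add_n(reg_losses)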
Hope it helps!
Answer 1
A few aspects of the existing answer were not immediately clear to me, so here is a step-by-step guide:
1. Define a regularizer. This is where the regularization constant can be set, e.g.:
regularizer = tf.contrib.layers.l2_regularizer(scale=0.1)
2. Create variables via:
weights = tf.get_variable( name="weights", regularizer=regularizer, ... )
Equivalently, variables can be created via the regular weights = tf.Variable(...) constructor, followed by tf.add_to_collection(tf.GraphKeys.REGULARIZATION_LOSSES, weights).
3. Define some loss term and add the regularization term:
reg_variables = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)
reg_term = tf.contrib.layers.apply_regularization(regularizer, reg_variables)
loss += reg_term
Note: it looks like tf.contrib.layers.apply_regularization is implemented as an AddN, so more or less equivalent to sum(reg_variables).
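Putting the three steps together, here is a minimal, hedged sketch (the placeholders, shapes, and the tf.Variable creation path are illustrative, not part of the original answer):
import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 784])
y_ = tf.placeholder(tf.float32, [None, 10])
# Step 1: define the regularizer with its constant.
regularizer = tf.contrib.layers.l2_regularizer(scale=0.1)
# Step 2: create the variable and register it for regularization by hand.
weights = tf.Variable(tf.truncated_normal([784, 10], stddev=0.1), name="weights")
tf.add_to_collection(tf.GraphKeys.REGULARIZATION_LOSSES, weights)
# Step 3: build the task loss and add the regularization term.
logits = tf.matmul(x, weights)
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=logits))
reg_variables = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)
reg_term = tf.contrib.layers.apply_regularization(regularizer, reg_variables)
loss += reg_term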
Answer 2
Since I could not find a satisfactory answer, I will provide a simple, correct one. You need two simple steps; the rest is done by TensorFlow magic:
Add regularizers when creating variables or layers:
tf.layers.dense(x, kernel_regularizer=tf.contrib.layers.l2_regularizer(0.001)) # or tf.get_variable('a', regularizer=tf.contrib.layers.l2_regularizer(0.001))
Add the regularization term when defining the loss:
loss = ordinary_loss + tf.losses.get_regularization_loss()
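As a hedged end-to-end sketch of these two steps (the placeholders, layer sizes, and optimizer are illustrative, not from the original answer):
import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 784])
labels = tf.placeholder(tf.float32, [None, 10])
# Step 1: the kernel_regularizer records its term in the graph automatically.
hidden = tf.layers.dense(x, 128, activation=tf.nn.relu,
                         kernel_regularizer=tf.contrib.layers.l2_regularizer(0.001))
logits = tf.layers.dense(hidden, 10,
                         kernel_regularizer=tf.contrib.layers.l2_regularizer(0.001))
ordinary_loss = tf.losses.softmax_cross_entropy(onehot_labels=labels, logits=logits)
# Step 2: collect all recorded regularization terms and add them to the loss.
loss = ordinary_loss + tf.losses.get_regularization_loss()
train_op = tf.train.AdamOptimizer(1e-4).minimize(loss)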
Answer 3
Another way to do this with the contrib.learn library is as follows, based on the Deep MNIST tutorial on the TensorFlow website. First, assuming you have imported the relevant libraries (such as import tensorflow.contrib.layers as layers), you can define the network in a separate method:
def easier_network(x, reg):
    """ A network based on tf.contrib.learn, with input `x`. """
    with tf.variable_scope('EasyNet'):
        out = layers.flatten(x)
        out = layers.fully_connected(out,
                num_outputs=200,
                weights_initializer = layers.xavier_initializer(uniform=True),
                weights_regularizer = layers.l2_regularizer(scale=reg),
                activation_fn = tf.nn.tanh)
        out = layers.fully_connected(out,
                num_outputs=200,
                weights_initializer = layers.xavier_initializer(uniform=True),
                weights_regularizer = layers.l2_regularizer(scale=reg),
                activation_fn = tf.nn.tanh)
        out = layers.fully_connected(out,
                num_outputs=10,  # Because there are ten digits!
                weights_initializer = layers.xavier_initializer(uniform=True),
                weights_regularizer = layers.l2_regularizer(scale=reg),
                activation_fn = None)
        return out
Then, in your main method, you can use a snippet like this:
def main(_):
    mnist = input_data.read_data_sets(FLAGS.data_dir, one_hot=True)
    x = tf.placeholder(tf.float32, [None, 784])
    y_ = tf.placeholder(tf.float32, [None, 10])

    # Make a network with regularization
    y_conv = easier_network(x, FLAGS.regu)
    weights = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, 'EasyNet')
    print("")
    for w in weights:
        shp = w.get_shape().as_list()
        print("- {} shape:{} size:{}".format(w.name, shp, np.prod(shp)))
    print("")

    reg_ws = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES, 'EasyNet')
    for w in reg_ws:
        shp = w.get_shape().as_list()
        print("- {} shape:{} size:{}".format(w.name, shp, np.prod(shp)))
    print("")

    # Make the loss function `loss_fn` with regularization.
    cross_entropy = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y_conv))
    loss_fn = cross_entropy + tf.reduce_sum(reg_ws)
    train_step = tf.train.AdamOptimizer(1e-4).minimize(loss_fn)
To get this to work, you need to follow the MNIST tutorial linked earlier and import the relevant libraries, but it is a nice exercise for learning TensorFlow and it makes it easy to see how regularization affects the output. If you pass a regularization value in as the argument, you can see the following:
- EasyNet/fully_connected/weights:0 shape:[784, 200] size:156800
- EasyNet/fully_connected/biases:0 shape:[200] size:200
- EasyNet/fully_connected_1/weights:0 shape:[200, 200] size:40000
- EasyNet/fully_connected_1/biases:0 shape:[200] size:200
- EasyNet/fully_connected_2/weights:0 shape:[200, 10] size:2000
- EasyNet/fully_connected_2/biases:0 shape:[10] size:10
- EasyNet/fully_connected/kernel/Regularizer/l2_regularizer:0 shape:[] size:1.0
- EasyNet/fully_connected_1/kernel/Regularizer/l2_regularizer:0 shape:[] size:1.0
- EasyNet/fully_connected_2/kernel/Regularizer/l2_regularizer:0 shape:[] size:1.0
Notice that the regularization portion gives you three items, based on the items available.
With regularization strengths of 0, 0.0001, 0.01, and 1.0, I get test accuracy values of 0.9468, 0.9476, 0.9183, and 0.1135, respectively, showing the danger of a high regularization term.
Answer 4
In case anyone is still looking: in tf.keras you can add weight regularization by passing regularizers as arguments to your layers. An example of adding L2 regularization, taken wholesale from the TensorFlow Keras tutorials site:
model = keras.models.Sequential([
keras.layers.Dense(16, kernel_regularizer=keras.regularizers.l2(0.001),
activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dense(16, kernel_regularizer=keras.regularizers.l2(0.001),
activation=tf.nn.relu),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
As far as I know, there is no need to manually add the regularization losses when using this method.
Reference: https://www.tensorflow.org/tutorials/keras/overfit_and_underfit#add_weight_regularization
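A brief, hedged usage sketch: once the model is compiled and fit (the optimizer, loss, and data names below are illustrative, roughly following that tutorial), Keras folds the layers' regularization penalties into the training loss automatically.
model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=['accuracy'])
# The L2 penalties from kernel_regularizer are added to the loss inside fit();
# no manual handling of regularization losses is needed here.
model.fit(train_data, train_labels, epochs=20, batch_size=512,
          validation_data=(test_data, test_labels))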
Answer 5
I tested tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES) and tf.losses.get_regularization_loss() with one l2_regularizer in the graph, and found that they return the same value. Judging from the magnitude of the value, I guess the regularization constant already acts on the value through the scale parameter of tf.contrib.layers.l2_regularizer.
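A minimal, hedged sketch of that comparison (the variable name, shape, and scale are illustrative):
import tensorflow as tf

# One variable with an L2 regularizer; the constant is already baked into `scale`.
w = tf.get_variable('w', shape=[10, 10],
                    regularizer=tf.contrib.layers.l2_regularizer(scale=0.01))
collection_sum = tf.add_n(tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES))
total_reg_loss = tf.losses.get_regularization_loss()

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Both expressions should evaluate to the same scalar.
    print(sess.run([collection_sum, total_reg_loss]))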
Answer 6
If you have a CNN, you can do the following:
In your model function:
conv = tf.layers.conv2d(inputs=input_layer,
filters=32,
kernel_size=[3, 3],
kernel_initializer=tf.contrib.layers.xavier_initializer(),
kernel_regularizer=tf.contrib.layers.l2_regularizer(1e-5),
padding="same",
activation=None)
...
In your loss function:
onehot_labels = tf.one_hot(indices=tf.cast(labels, tf.int32), depth=num_classes)
loss = tf.losses.softmax_cross_entropy(onehot_labels=onehot_labels, logits=logits)
regularization_losses = tf.losses.get_regularization_losses()
loss = tf.add_n([loss] + regularization_losses)
Answer 7
Some of the answers made me more confused, so here I give two methods to make it clear.
#1.adding all regs by hand
var1 = tf.get_variable(name='v1',shape=[1],dtype=tf.float32)
var2 = tf.Variable(name='v2',initial_value=1.0,dtype=tf.float32)
regularizer = tf.contrib.layers.l1_regularizer(0.1)
reg_term = tf.contrib.layers.apply_regularization(regularizer,[var1,var2])
#here reg_term is a scalar
#2. auto added and read, but using get_variable
with tf.variable_scope('x',
        regularizer=tf.contrib.layers.l2_regularizer(0.1)):
    var1 = tf.get_variable(name='v1', shape=[1], dtype=tf.float32)
    var2 = tf.get_variable(name='v2', shape=[1], dtype=tf.float32)
reg_losses = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)
#here reg_losses is a list, should be summed
Then you can add it to the total loss, for example as sketched below.
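A short, hedged sketch of that last step for both methods (my_normal_loss is an illustrative placeholder for your task loss):
# Method 1: reg_term is already a scalar.
total_loss = my_normal_loss + reg_term
# Method 2: reg_losses is a list of scalars and still has to be summed.
total_loss = my_normal_loss + tf.add_n(reg_losses)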
Answer 8
# Assumes `logits`, `labels`, and a scalar `weight_decay` are defined elsewhere.
cross_entropy = tf.losses.softmax_cross_entropy(
    logits=logits, onehot_labels=labels)
# Manual L2 penalty over every trainable variable, scaled by weight_decay.
l2_loss = weight_decay * tf.add_n(
    [tf.nn.l2_loss(tf.cast(v, tf.float32)) for v in tf.trainable_variables()])
loss = cross_entropy + l2_loss
Answer 9
tf.GraphKeys.REGULARIZATION_LOSSES will not be added automatically, but there is a simple way to add them:
reg_loss = tf.losses.get_regularization_loss()
total_loss = loss + reg_loss
tf.losses.get_regularization_loss() uses tf.add_n to sum the entries of tf.GraphKeys.REGULARIZATION_LOSSES element-wise. tf.GraphKeys.REGULARIZATION_LOSSES will typically be a list of scalars calculated with regularizer functions. It gets entries from calls to tf.get_variable that have the regularizer parameter specified. You can also add to that collection manually, which is useful when using tf.Variable and when specifying activity regularizers or other custom regularizers. For instance:
#This will add an activity regularizer on y to the regloss collection
regularizer = tf.contrib.layers.l2_regularizer(0.1)
y = tf.nn.sigmoid(x)
act_reg = regularizer(y)
tf.add_to_collection(tf.GraphKeys.REGULARIZATION_LOSSES, act_reg)
(In this example it would presumably be more effective to regularize x, since y really flattens out for large x.)