
What is the difference between ‘SAME’ and ‘VALID’ padding in tf.nn.max_pool of tensorflow?

In my opinion, ‘VALID’ means there will be no zero padding outside the edges when we do max pool.

According to A guide to convolution arithmetic for deep learning, there is no padding in the pool operator, i.e. it just uses ‘VALID’ of tensorflow. But what is ‘SAME’ padding of max pool in tensorflow?


Answer 0


I’ll give an example to make it clearer:

  • x: input image of shape [2, 3], 1 channel
  • valid_pad: max pool with 2×2 kernel, stride 2 and VALID padding.
  • same_pad: max pool with 2×2 kernel, stride 2 and SAME padding (this is the classic way to go)

The output shapes are:

  • valid_pad: here, there is no padding, so the output shape is [1, 1]
  • same_pad: here, we pad the image to shape [2, 4] (with -inf, then apply max pool), so the output shape is [1, 2]

import tensorflow as tf

x = tf.constant([[1., 2., 3.],
                 [4., 5., 6.]])

x = tf.reshape(x, [1, 2, 3, 1])  # give a shape accepted by tf.nn.max_pool

valid_pad = tf.nn.max_pool(x, [1, 2, 2, 1], [1, 2, 2, 1], padding='VALID')
same_pad = tf.nn.max_pool(x, [1, 2, 2, 1], [1, 2, 2, 1], padding='SAME')

valid_pad.get_shape() == [1, 1, 1, 1]  # valid_pad is [5.]
same_pad.get_shape() == [1, 1, 2, 1]   # same_pad is  [5., 6.]


Answer 1


If you like ASCII art:

  • "VALID" = without padding:

       inputs:         1  2  3  4  5  6  7  8  9  10 11 (12 13)
                      |________________|                dropped
                                     |_________________|
    
  • "SAME" = with zero padding:

                   pad|                                      |pad
       inputs:      0 |1  2  3  4  5  6  7  8  9  10 11 12 13|0  0
                   |________________|
                                  |_________________|
                                                 |________________|
    

In this example:

  • Input width = 13
  • Filter width = 6
  • Stride = 5

Notes:

  • "VALID" only ever drops the right-most columns (or bottom-most rows).
  • "SAME" tries to pad evenly left and right, but if the amount of columns to be added is odd, it will add the extra column to the right, as is the case in this example (the same logic applies vertically: there may be an extra row of zeros at the bottom).

Edit:

About the name:

  • With "SAME" padding, if you use a stride of 1, the layer’s outputs will have the same spatial dimensions as its inputs.
  • With "VALID" padding, there’s no “made-up” padding inputs. The layer only uses valid input data.

Answer 2


When stride is 1 (more typical with convolution than pooling), we can think of the following distinction (see the sketch after this list):

  • "SAME": the output size is the same as the input size. This requires the filter window to slip outside the input map, hence the need to pad.
  • "VALID": the filter window stays at valid positions inside the input map, so the output size shrinks by filter_size - 1. No padding occurs.

Answer 3


The TensorFlow Convolution example gives an overview of the difference between SAME and VALID:

  • For the SAME padding, the output height and width are computed as:

    out_height = ceil(float(in_height) / float(strides[1]))
    out_width  = ceil(float(in_width) / float(strides[2]))
    

And

  • For the VALID padding, the output height and width are computed as:

    out_height = ceil(float(in_height - filter_height + 1) / float(strides[1]))
    out_width  = ceil(float(in_width - filter_width + 1) / float(strides[2]))
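
These two formulas are easy to wrap in a sanity-check helper (plain Python; the function name out_size is mine, not TensorFlow's):

import math

def out_size(in_size, filter_size, stride, padding):
    # mirrors the SAME / VALID formulas quoted above
    if padding == 'SAME':
        return math.ceil(in_size / stride)
    return math.ceil((in_size - filter_size + 1) / stride)

print(out_size(13, 6, 5, 'SAME'))   # 3
print(out_size(13, 6, 5, 'VALID'))  # 2 (matches the ASCII-art example above)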
    

Answer 4


Padding is an operation that increases the size of the input data. In the case of 1-dimensional data you just append/prepend a constant to the array; in 2-dim you surround the matrix with these constants; in n-dim you surround your n-dim hypercube with the constant. In most cases this constant is zero, and it is called zero-padding.

Here is an example of zero-padding with p=1 applied to a 2-d tensor:
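
A minimal sketch of this with tf.pad (the op mentioned at the end of this answer), assuming TensorFlow 2.x:

import tensorflow as tf

t = tf.constant([[1, 2],
                 [3, 4]])
# paddings is [[top, bottom], [left, right]] for a 2-D tensor; p=1 on every side
padded = tf.pad(t, paddings=[[1, 1], [1, 1]], constant_values=0)
print(padded.numpy())
# [[0 0 0 0]
#  [0 1 2 0]
#  [0 3 4 0]
#  [0 0 0 0]]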


You can use arbitrary padding for your kernel, but some padding values are used more frequently than others:

  • VALID padding. The easiest case: no padding at all. Just leave your data the same as it was.
  • SAME padding, sometimes called HALF padding. It is called SAME because for a convolution with stride=1 (or for pooling) it should produce output of the same size as the input. It is called HALF because for a kernel of size k, the padding on each side is about half the kernel, i.e. ⌊k/2⌋.
  • FULL padding is the maximum padding that does not result in a convolution over just padded elements. For a kernel of size k, this padding is equal to k - 1.

To use arbitrary padding in TF, you can use tf.pad()


Answer 5


Quick Explanation

VALID: Don’t apply any padding, i.e., assume that all dimensions are valid so that the input image gets fully covered by the filter and stride you specified.

SAME: Apply padding to the input (if needed) so that the input image gets fully covered by the filter and stride you specified. For stride 1, this will ensure that the output image size is the same as the input.

Notes

  • This applies to conv layers as well as max pool layers in the same way.
  • The term “valid” is a bit of a misnomer, because things don’t become “invalid” if you drop part of the image. Sometimes you might even want that. It should probably have been called NO_PADDING instead.
  • The term “same” is a misnomer too, because it only makes sense for a stride of 1, where the output dimension is the same as the input dimension. For a stride of 2, for example, the output dimensions will be halved. It should probably have been called AUTO_PADDING instead.
  • In SAME (i.e. auto-pad mode), Tensorflow will try to spread the padding evenly on both left and right.
  • In VALID (i.e. no-padding mode), Tensorflow will drop right and/or bottom cells if your filter and stride don’t fully cover the input image.

Answer 6


I am quoting this answer from the official tensorflow docs, https://www.tensorflow.org/api_guides/python/nn#Convolution. For the ‘SAME’ padding, the output height and width are computed as:

out_height = ceil(float(in_height) / float(strides[1]))
out_width  = ceil(float(in_width) / float(strides[2]))

and the padding amounts are computed as:

pad_along_height = max((out_height - 1) * strides[1] +
                    filter_height - in_height, 0)
pad_along_width = max((out_width - 1) * strides[2] +
                   filter_width - in_width, 0)
pad_top = pad_along_height // 2
pad_bottom = pad_along_height - pad_top
pad_left = pad_along_width // 2
pad_right = pad_along_width - pad_left

For the ‘VALID’ padding, the output height and width are computed as:

out_height = ceil(float(in_height - filter_height + 1) / float(strides[1]))
out_width  = ceil(float(in_width - filter_width + 1) / float(strides[2]))

and the padding values are always zero.
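
Working these formulas through the ASCII-art example from an earlier answer (in_width = 13, filter_width = 6, stride = 5) reproduces its 1-left / 2-right split (plain Python):

import math

in_width, filter_width, stride = 13, 6, 5

out_width = math.ceil(in_width / stride)                                      # 3
pad_along_width = max((out_width - 1) * stride + filter_width - in_width, 0)  # 3
pad_left = pad_along_width // 2                                               # 1
pad_right = pad_along_width - pad_left                                        # 2

print(out_width, pad_along_width, pad_left, pad_right)  # 3 3 1 2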


Answer 7


There are three choices of padding: valid (no padding), same (or half), and full. You can find explanations (in Theano) here: http://deeplearning.net/software/theano/tutorial/conv_arithmetic.html

  • Valid or no padding:

Valid padding involves no zero padding, so it covers only the valid input, not including artificially generated zeros. The length of the output is ((length of input) - (k - 1)) for kernel size k if the stride is s = 1.

  • Same or half padding:

Same padding makes the size of the outputs the same as that of the inputs when s = 1. If s = 1, the number of zeros padded is (k - 1).

  • Full padding:

Full padding means that the kernel runs over the whole input, so at the ends the kernel may meet only one input value, with zeros everywhere else. The number of zeros padded is 2(k - 1) if s = 1, and the length of the output is ((length of input) + (k - 1)) if s = 1.

Therefore, the amount of padding satisfies: (valid) <= (same) <= (full)
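
TensorFlow has no built-in FULL mode, but it can be emulated by padding k - 1 zeros on each side with tf.pad and then using a VALID convolution (a sketch, assuming TensorFlow 2.x; the sizes n = 10 and k = 3 are arbitrary):

import tensorflow as tf

n, k = 10, 3
x = tf.random.normal([1, 1, n, 1])  # a length-n input as a [batch, h, w, c] tensor
w = tf.random.normal([1, k, 1, 1])  # a length-k kernel

full_x = tf.pad(x, [[0, 0], [0, 0], [k - 1, k - 1], [0, 0]])  # 2(k - 1) zeros in total
full = tf.nn.conv2d(full_x, w, strides=1, padding='VALID')

print(full.shape)  # (1, 1, 12, 1) -- length (n) + (k - 1) = 12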


Answer 8


Padding on/off. Determines the effective size of your input.

VALID: No padding. Convolution etc. ops are only performed at locations that are “valid”, i.e. not too close to the borders of your tensor.
With a kernel of 3×3 and image of 10×10, you would be performing convolution on the 8×8 area inside the borders.

SAME: Padding is provided. Whenever your operation references a neighborhood (no matter how big), zero values are supplied when that neighborhood extends outside the original tensor, so that the operation also works on border values.
With a kernel of 3×3 and image of 10×10, you would be performing convolution on the full 10×10 area.
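
The same distinction holds for pooling; a minimal sketch with tf.nn.max_pool (assuming TensorFlow 2.x; the input is random):

import tensorflow as tf

x = tf.random.normal([1, 10, 10, 1])

valid = tf.nn.max_pool(x, ksize=3, strides=1, padding='VALID')
same = tf.nn.max_pool(x, ksize=3, strides=1, padding='SAME')

print(valid.shape)  # (1, 8, 8, 1)   -- only the 8x8 area inside the borders
print(same.shape)   # (1, 10, 10, 1) -- the full 10x10 area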


Answer 9


VALID padding: this is with no zero padding. Hope there is no confusion.

import tensorflow as tf

x = tf.constant([[1., 2., 3.], [4., 5., 6.], [7., 8., 9.], [7., 8., 9.]])
x = tf.reshape(x, [1, 4, 3, 1])
valid_pad = tf.nn.max_pool(x, [1, 2, 2, 1], [1, 2, 2, 1], padding='VALID')
print(valid_pad.get_shape())  # output --> (1, 2, 1, 1)

SAME padding: This is kind of tricky to understand in the first place because we have to consider two conditions separately as mentioned in the official docs.

Let’s take the input as i, the output as o, the padding as p, the stride as s, and the kernel size as k (only a single dimension is considered).

Case 01: i mod s = 0: p = max(k - s, 0)

Case 02: i mod s ≠ 0: p = max(k - (i mod s), 0)

p is calculated such that it is the minimum value that can be taken for padding. Since the value of p is known, o can be found using the formula o = (i - k + p) / s + 1.

Let’s work out this example:

x = tf.constant([[1., 2., 3.], [4., 5., 6.], [7., 8., 9.], [7., 8., 9.]])
x = tf.reshape(x, [1, 4, 3, 1])
same_pad = tf.nn.max_pool(x, [1, 2, 2, 1], [1, 2, 2, 1], padding='SAME')
print(same_pad.get_shape())  # output --> (1, 2, 2, 1)

Here the dimensions of x are (3, 4). Taking the horizontal direction (3): 3 mod 2 ≠ 0, so p = max(2 - (3 mod 2), 0) = 1 and o = (3 - 2 + 1) / 2 + 1 = 2.

Taking the vertical direction (4): 4 mod 2 = 0, so p = max(2 - 2, 0) = 0 and o = (4 - 2 + 0) / 2 + 1 = 2.

Hope this helps to understand how SAME padding actually works in TF.
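
The two cases are easy to express as a small helper (plain Python; the function name same_pad_amount is mine):

def same_pad_amount(i, k, s):
    # total SAME padding along one dimension, following the two cases above
    if i % s == 0:
        return max(k - s, 0)
    return max(k - (i % s), 0)

print(same_pad_amount(3, 2, 2))  # horizontal direction: p = 1
print(same_pad_amount(4, 2, 2))  # vertical direction:   p = 0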


Answer 10


Based on the explanation here and following up on Tristan’s answer, I usually use these quick functions for sanity checks.

import numpy as np

# a function to help us stay clean
def getPaddings(pad_along_height, pad_along_width):
    # top/left get the floor; bottom/right get the extra pixel when the total is odd
    pad_top = pad_along_height // 2
    pad_bottom = pad_along_height - pad_top
    pad_left = pad_along_width // 2
    pad_right = pad_along_width - pad_left
    return pad_top, pad_bottom, pad_left, pad_right

# strides [image index, y, x, depth]
# padding 'SAME' or 'VALID'
# bottom and right sides always get the one additional padded pixel (if padding is odd)
def getOutputDim(inputWidth, inputHeight, filterWidth, filterHeight, strides, padding):
    if padding == 'SAME':
        out_height = int(np.ceil(float(inputHeight) / float(strides[1])))
        out_width = int(np.ceil(float(inputWidth) / float(strides[2])))

        pad_along_height = max((out_height - 1) * strides[1] + filterHeight - inputHeight, 0)
        pad_along_width = max((out_width - 1) * strides[2] + filterWidth - inputWidth, 0)

        # now get the per-side padding
        pad_top, pad_bottom, pad_left, pad_right = getPaddings(pad_along_height, pad_along_width)

        print('output height', out_height)
        print('output width', out_width)
        print('total pad along height', pad_along_height)
        print('total pad along width', pad_along_width)
        print('pad at top', pad_top)
        print('pad at bottom', pad_bottom)
        print('pad at left', pad_left)
        print('pad at right', pad_right)

    elif padding == 'VALID':
        out_height = int(np.ceil(float(inputHeight - filterHeight + 1) / float(strides[1])))
        out_width = int(np.ceil(float(inputWidth - filterWidth + 1) / float(strides[2])))

        print('output height', out_height)
        print('output width', out_width)
        print('no padding')


# use like so
getOutputDim(80, 80, 4, 4, [1, 1, 1, 1], 'SAME')

Answer 11


To sum up, ‘valid’ padding means no padding. The output size of the convolutional layer shrinks depending on the input size and kernel size.

On the contrary, ‘same’ padding means using padding. When the stride is set to 1, the output size of the convolutional layer stays the same as the input size, by appending a certain number of ‘0-borders’ around the input data when calculating the convolution.

Hope this intuitive description helps.


Answer 12


Here, W and H are the width and height of the input, F is the filter dimension, and P is the padding size (i.e., the number of rows or columns to be padded). For a stride of 1:

For SAME padding: P = (F - 1) / 2, so the output size is (W - F + 2P + 1) × (H - F + 2P + 1) = W × H.

For VALID padding: P = 0, so the output size is (W - F + 1) × (H - F + 1).


Answer 13


Complementing YvesgereY’s great answer, I found this visualization extremely helpful:

Padding ‘valid‘ is the first figure. The filter window stays inside the image.

Padding ‘same‘ is the third figure. The output is the same size.


I found it in this article.


Answer 14


Tensorflow 2.0 Compatible Answer: Detailed explanations of “Valid” and “Same” padding have been provided above.

However, for the benefit of the community, I will specify the different pooling functions and their respective commands in Tensorflow 2.x (>= 2.0).

Functions in 1.x:

  • Max Pooling => tf.nn.max_pool or tf.keras.layers.MaxPool2D
  • Average Pooling => None in tf.nn; tf.keras.layers.AveragePooling2D

Functions in 2.x:

  • Max Pooling => tf.nn.max_pool if used in 2.x, and tf.compat.v1.nn.max_pool_v2 or tf.compat.v2.nn.max_pool if migrated from 1.x to 2.x.
  • Max Pooling (Keras) => tf.keras.layers.MaxPool2D if used in 2.x, and tf.compat.v1.keras.layers.MaxPool2D, tf.compat.v1.keras.layers.MaxPooling2D, tf.compat.v2.keras.layers.MaxPool2D, or tf.compat.v2.keras.layers.MaxPooling2D if migrated from 1.x to 2.x.
  • Average Pooling => tf.nn.avg_pool2d or tf.keras.layers.AveragePooling2D if used in TF 2.x, and tf.compat.v1.nn.avg_pool_v2, tf.compat.v2.nn.avg_pool, tf.compat.v1.keras.layers.AveragePooling2D, tf.compat.v1.keras.layers.AvgPool2D, tf.compat.v2.keras.layers.AveragePooling2D, or tf.compat.v2.keras.layers.AvgPool2D if migrated from 1.x to 2.x.

For more information about Migration from Tensorflow 1.x to 2.x, please refer to this Migration Guide.
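
As a quick usage sketch of the 2.x Keras layer (assuming TF 2.x; note the lower-case padding strings used by the Keras layer):

import tensorflow as tf

layer_same = tf.keras.layers.MaxPool2D(pool_size=2, strides=2, padding='same')
layer_valid = tf.keras.layers.MaxPool2D(pool_size=2, strides=2, padding='valid')

x = tf.zeros([1, 4, 3, 1])  # the same [batch, h, w, c] shape used in earlier answers
print(layer_same(x).shape)   # (1, 2, 2, 1)
print(layer_valid(x).shape)  # (1, 2, 1, 1)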