Unable to allocate array with shape and data type

Question: Unable to allocate array with shape and data type

I’m facing an issue with allocating huge arrays in numpy on Ubuntu 18 while not facing the same issue on MacOS.

I am trying to allocate memory for a numpy array with shape (156816, 36, 53806) with

np.zeros((156816, 36, 53806), dtype='uint8')

and while I’m getting an error on Ubuntu

>>> import numpy as np
>>> np.zeros((156816, 36, 53806), dtype='uint8')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
numpy.core._exceptions.MemoryError: Unable to allocate array with shape (156816, 36, 53806) and data type uint8

I’m not getting it on MacOS:

>>> import numpy as np 
>>> np.zeros((156816, 36, 53806), dtype='uint8')
array([[[0, 0, 0, ..., 0, 0, 0],
        [0, 0, 0, ..., 0, 0, 0],
        [0, 0, 0, ..., 0, 0, 0],
        ...,
        [0, 0, 0, ..., 0, 0, 0],
        [0, 0, 0, ..., 0, 0, 0],
        [0, 0, 0, ..., 0, 0, 0]],

       [[0, 0, 0, ..., 0, 0, 0],
        [0, 0, 0, ..., 0, 0, 0],
        [0, 0, 0, ..., 0, 0, 0],
        ...,
        [0, 0, 0, ..., 0, 0, 0],
        [0, 0, 0, ..., 0, 0, 0],
        [0, 0, 0, ..., 0, 0, 0]],

       [[0, 0, 0, ..., 0, 0, 0],
        [0, 0, 0, ..., 0, 0, 0],
        [0, 0, 0, ..., 0, 0, 0],
        ...,
        [0, 0, 0, ..., 0, 0, 0],
        [0, 0, 0, ..., 0, 0, 0],
        [0, 0, 0, ..., 0, 0, 0]],

       ...,

       [[0, 0, 0, ..., 0, 0, 0],
        [0, 0, 0, ..., 0, 0, 0],
        [0, 0, 0, ..., 0, 0, 0],
        ...,
        [0, 0, 0, ..., 0, 0, 0],
        [0, 0, 0, ..., 0, 0, 0],
        [0, 0, 0, ..., 0, 0, 0]],

       [[0, 0, 0, ..., 0, 0, 0],
        [0, 0, 0, ..., 0, 0, 0],
        [0, 0, 0, ..., 0, 0, 0],
        ...,
        [0, 0, 0, ..., 0, 0, 0],
        [0, 0, 0, ..., 0, 0, 0],
        [0, 0, 0, ..., 0, 0, 0]],

       [[0, 0, 0, ..., 0, 0, 0],
        [0, 0, 0, ..., 0, 0, 0],
        [0, 0, 0, ..., 0, 0, 0],
        ...,
        [0, 0, 0, ..., 0, 0, 0],
        [0, 0, 0, ..., 0, 0, 0],
        [0, 0, 0, ..., 0, 0, 0]]], dtype=uint8)

I’ve read somewhere that np.zeros shouldn’t really allocate the whole memory needed for the array, but only the memory for the non-zero elements. What’s more, the Ubuntu machine has 64 GB of memory, while my MacBook Pro has only 16 GB.

versions:

Ubuntu
os -> ubuntu mate 18
python -> 3.6.8
numpy -> 1.17.0

mac
os -> 10.14.6
python -> 3.6.4
numpy -> 1.17.0

PS: also failed on Google Colab


Answer 0

This is likely due to your system’s overcommit handling mode.

In the default mode, 0,

Heuristic overcommit handling. Obvious overcommits of address space are refused. Used for a typical system. It ensures a seriously wild allocation fails while allowing overcommit to reduce swap usage. root is allowed to allocate slightly more memory in this mode. This is the default.

The exact heuristic used is not well explained here, but it is discussed in more detail in the Linux overcommit heuristic discussion and in the kernel documentation on overcommit accounting.

You can check your current overcommit mode by running

$ cat /proc/sys/vm/overcommit_memory
0

In this case you’re allocating

>>> 156816 * 36 * 53806 / 1024.0**3
282.8939827680588

~282 GB, and the kernel is saying: well, obviously there’s no way I’m going to be able to commit that many physical pages to this, and it refuses the allocation.

If (as root) you run:

$ echo 1 > /proc/sys/vm/overcommit_memory

This will enable “always overcommit” mode, and you’ll find that indeed the system will allow you to make the allocation no matter how large it is (within 64-bit memory addressing at least).
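Equivalently, the setting can be changed with sysctl (shown as a sketch; vm.overcommit_memory is the standard key name, and the comment notes how to persist it):

$ # Takes effect immediately; add "vm.overcommit_memory = 1" to /etc/sysctl.conf to persist.
$ sudo sysctl -w vm.overcommit_memory=1
vm.overcommit_memory = 1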

I tested this myself on a machine with 32 GB of RAM. With overcommit mode 0 I also got a MemoryError, but after changing it to 1 it works:

>>> import numpy as np
>>> a = np.zeros((156816, 36, 53806), dtype='uint8')
>>> a.nbytes
303755101056

You can then go ahead and write to any location within the array, and the system will only allocate physical pages when you explicitly write to that page. So you can use this, with care, for sparse arrays.
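To see this lazy allocation concretely, here is a minimal sketch (Linux only, and it assumes overcommit mode 1 is already enabled; it uses the standard-library resource module):

import resource

import numpy as np

# Reserve ~283 GB of virtual address space; no physical pages are committed yet.
a = np.zeros((156816, 36, 53806), dtype='uint8')
print(a.nbytes / 1024.0**3)  # ~282.9

# Touch a few elements; only the pages actually written become resident.
a[0, 0, 0] = 1
a[156815, 35, 53805] = 255

# Peak resident set size stays tiny compared to a.nbytes.
# ru_maxrss is reported in kilobytes on Linux.
rss_kb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
print(rss_kb / 1024.0, 'MiB resident')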


Answer 1

I had this same problem on Windows and came across this solution. So if someone comes across this problem on Windows, the solution for me was to increase the pagefile size, as it was a memory overcommitment problem for me too.

Windows 8

  1. On the Keyboard Press the WindowsKey + X then click System in the popup menu
  2. Tap or click Advanced system settings. You might be asked for an admin password or to confirm your choice
  3. On the Advanced tab, under Performance, tap or click Settings.
  4. Tap or click the Advanced tab, and then, under Virtual memory, tap or click Change
  5. Clear the Automatically manage paging file size for all drives check box.
  6. Under Drive [Volume Label], tap or click the drive that contains the paging file you want to change
  7. Tap or click Custom size, enter a new size in megabytes in the Initial size (MB) or Maximum size (MB) box, tap or click Set, and then tap or click OK
  8. Reboot your system

Windows 10

  1. Press the Windows key
  2. Type SystemPropertiesAdvanced
  3. Click Run as administrator
  4. Under Performance, click Settings
  5. Select the Advanced tab
  6. Select Change…
  7. Uncheck Automatically manage paging file size for all drives
  8. Then select Custom size and fill in the appropriate size
  9. Press Set then press OK then exit from the Virtual Memory, Performance Options, and System Properties Dialog
  10. Reboot your system

Note: I did not have enough memory on my system for the ~282 GB in this example, but for my particular case this worked.
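One way to sanity-check the resulting pagefile size from Python is a sketch like this (it assumes the third-party psutil package, which exposes pagefile/swap statistics via swap_memory() on Windows):

import psutil  # third-party: pip install psutil

swap = psutil.swap_memory()
print('pagefile total:', swap.total / 1024.0**3, 'GiB')
print('pagefile used:', swap.used / 1024.0**3, 'GiB')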

EDIT

From here, the recommendations for pagefile size:

There is a formula for calculating the correct pagefile size. Initial size is one and a half (1.5) x the amount of total system memory. Maximum size is three (3) x the initial size. So let’s say you have 4 GB (1 GB = 1,024 MB x 4 = 4,096 MB) of memory. The initial size would be 1.5 x 4,096 = 6,144 MB and the maximum size would be 3 x 6,144 = 18,432 MB.

Some things to keep in mind from here:

However, this does not take into consideration other important factors and system settings that may be unique to your computer. Again, let Windows choose what to use instead of relying on some arbitrary formula that worked on a different computer.

Also:

Increasing page file size may help prevent instabilities and crashing in Windows. However, a hard drive’s read/write times are much slower than what they would be if the data were in your computer’s memory. Having a larger page file is going to add extra work for your hard drive, causing everything else to run slower. Page file size should only be increased when encountering out-of-memory errors, and only as a temporary fix. A better solution is to add more memory to the computer.


Answer 2

I came across this problem on Windows too. The solution for me was to switch from a 32-bit to a 64-bit version of Python. Indeed, 32-bit software, like a 32-bit CPU, can address a maximum of 4 GB of RAM (2^32). So if you have more than 4 GB of RAM, a 32-bit version cannot take advantage of it.

With a 64-bit version of Python (the one labeled x86-64 on the download page), the issue disappeared.

You can check which version you have by starting the interpreter. With a 64-bit version, I now see: Python 3.7.5rc1 (tags/v3.7.5rc1:4082f600a5, Oct 1 2019, 20:28:14) [MSC v.1916 64 bit (AMD64)], where [MSC v.1916 64 bit (AMD64)] means “64-bit Python”.
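If you prefer to check programmatically rather than reading the banner, a minimal sketch using only the standard library:

import struct
import sys

# 8-byte pointers mean a 64-bit build of Python.
print(struct.calcsize('P') * 8, 'bit interpreter')

# Equivalent check: sys.maxsize exceeds 2**32 only on 64-bit builds.
print('64-bit' if sys.maxsize > 2**32 else '32-bit')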

Note: as of the time of this writing (May 2020), matplotlib is not available for Python 3.9, so I recommend installing Python 3.7, 64-bit.


Answer 3

In my case, adding a dtype argument changed the dtype of the array to a smaller type (from float64 to uint8), decreasing the array size enough not to throw a MemoryError on Windows (64-bit).

from

mask = np.zeros(edges.shape)

to

mask = np.zeros(edges.shape, dtype='uint8')
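To make the saving concrete, here is a sketch with a made-up shape standing in for edges.shape (np.zeros defaults to float64, i.e. 8 bytes per element, versus 1 byte for uint8):

import numpy as np

shape = (10000, 10000)  # hypothetical stand-in for edges.shape

mask_f64 = np.zeros(shape)                # default dtype: float64
mask_u8 = np.zeros(shape, dtype='uint8')

print(mask_f64.nbytes / 1024.0**2)  # ~762.9 MiB
print(mask_u8.nbytes / 1024.0**2)   # ~95.4 MiB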

Answer 4

Sometimes this error pops up because the kernel has reached its limit. Try restarting the kernel and redoing the necessary steps.


Answer 5

Changing the data type to another one that uses less memory works. In my case, I changed the data type to numpy.uint8:

data['label'] = data['label'].astype(np.uint8)
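As a rough sketch of the effect (the DataFrame here is made up, since the original data isn’t shown; Series.memory_usage reports the size in bytes):

import numpy as np
import pandas as pd

# Hypothetical data: one million labels in the range 0-9.
data = pd.DataFrame({'label': np.random.randint(0, 10, size=1_000_000, dtype=np.int64)})

print(data['label'].memory_usage(deep=True))  # int64: ~8,000,000 bytes
data['label'] = data['label'].astype(np.uint8)
print(data['label'].memory_usage(deep=True))  # uint8: ~1,000,000 bytes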