Tag Archives: Python

How to list only top-level directories in Python?

Question: How to list only top-level directories in Python?

I want to be able to list only the directories inside some folder. This means I don’t want filenames listed, nor do I want additional sub-folders.

Let’s see if an example helps. In the current directory we have:

>>> os.listdir(os.getcwd())
['cx_Oracle-doc', 'DLLs', 'Doc', 'include', 'Lib', 'libs', 'LICENSE.txt', 'mod_p
ython-wininst.log', 'NEWS.txt', 'pymssql-wininst.log', 'python.exe', 'pythonw.ex
e', 'README.txt', 'Removemod_python.exe', 'Removepymssql.exe', 'Scripts', 'tcl',
 'Tools', 'w9xpopen.exe']

However, I don’t want filenames listed. Nor do I want sub-folders such as \Lib\curses. Essentially what I want works with the following:

>>> for root, dirnames, filenames in os.walk('.'):
...     print dirnames
...     break
...
['cx_Oracle-doc', 'DLLs', 'Doc', 'include', 'Lib', 'libs', 'Scripts', 'tcl', 'Tools']

However, I’m wondering if there’s a simpler way of achieving the same results. I get the impression that using os.walk only to return the top level is inefficient/too much.


Answer 0

Filter the result using os.path.isdir() (and use os.path.join() to get the real path):

>>> [ name for name in os.listdir(thedir) if os.path.isdir(os.path.join(thedir, name)) ]
['ctypes', 'distutils', 'encodings', 'lib-tk', 'config', 'idlelib', 'xml', 'bsddb', 'hotshot', 'logging', 'doc', 'test', 'compiler', 'curses', 'site-packages', 'email', 'sqlite3', 'lib-dynload', 'wsgiref', 'plat-linux2', 'plat-mac']
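
On Python 3.5+, os.scandir() avoids a separate stat call per entry, which is usually faster on large directories. A minimal sketch of the same filtering, assuming thedir as above:

>>> import os
>>> [entry.name for entry in os.scandir(thedir) if entry.is_dir()]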

Answer 1

os.walk

Use os.walk with next item function:

next(os.walk('.'))[1]

For Python <=2.5 use:

os.walk('.').next()[1]

How this works

os.walk is a generator and calling next will get the first result in the form of a 3-tuple (dirpath, dirnames, filenames). Thus the [1] index returns only the dirnames from that tuple.
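
For readability, the same call can also be unpacked explicitly; a small sketch of the identical idea:

root, dirs, files = next(os.walk('.'))
print(dirs)  # only the top-level directory names under '.'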


Answer 2

Filter the list using os.path.isdir to detect directories.

filter(os.path.isdir, os.listdir(os.getcwd()))
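
Note that on Python 3, filter returns a lazy iterator rather than a list, so wrap it in list() if you need an actual list:

list(filter(os.path.isdir, os.listdir(os.getcwd())))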

Answer 3

directories=[d for d in os.listdir(os.getcwd()) if os.path.isdir(d)]

Answer 4

Note that, instead of doing os.listdir(os.getcwd()), it’s preferable to do os.listdir(os.path.curdir). One less function call, and it’s as portable.

So, to complete the answer, to get a list of directories in a folder:

def listdirs(folder):
    return [d for d in os.listdir(folder) if os.path.isdir(os.path.join(folder, d))]

If you prefer full pathnames, then use this function:

def listdirs(folder):
    return [
        d for d in (os.path.join(folder, d1) for d1 in os.listdir(folder))
        if os.path.isdir(d)
    ]

Answer 5

This seems to work too (at least on linux):

import glob, os
glob.glob('*' + os.path.sep)

Answer 6

Just to add that using os.listdir() does not “take a lot of processing vs very simple os.walk().next()[1]”. This is because os.walk() uses os.listdir() internally. In fact if you test them together:

>>> import timeit
>>> timeit.timeit("os.walk('.').next()[1]", "import os", number=10000)
1.1215229034423828
>>> timeit.timeit("[ name for name in os.listdir('.') if os.path.isdir(os.path.join('.', name)) ]", "import os", number=10000)
1.0592019557952881

The filtering of os.listdir() is very slightly faster.
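
On Python 3, generators no longer have a .next() method, so the first statement would use the next() built-in instead; a sketch of the equivalent comparison (timings will vary by machine and directory contents):

>>> import timeit
>>> timeit.timeit("next(os.walk('.'))[1]", "import os", number=10000)
>>> timeit.timeit("[ name for name in os.listdir('.') if os.path.isdir(os.path.join('.', name)) ]", "import os", number=10000)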


Answer 7

A much simpler and more elegant way is to use this:

 import os
 dir_list = os.walk('.').next()[1]
 print dir_list

Run this script in the same folder for which you want the folder names. It will give you exactly the immediate folder names only (without the full path of the folders).


Answer 8

Using list comprehension,

[a for a in os.listdir() if os.path.isdir(a)]

I think it is the simplest way.


Answer 9

Being a newbie here I can't yet directly comment, but here is a small correction I'd like to add to the following part of ΤΖΩΤΖΙΟΥ's answer:

If you prefer full pathnames, then use this function:

def listdirs(folder):  
  return [
    d for d in (os.path.join(folder, d1) for d1 in os.listdir(folder))
    if os.path.isdir(d)
]

For those still on Python < 2.4: the inner construct needs to be a list comprehension instead of a generator expression, and therefore should read like this:

def listdirs(folder):  
  return [
    d for d in [os.path.join(folder, d1) for d1 in os.listdir(folder)]
    if os.path.isdir(d)
  ]

Otherwise one gets a syntax error.


Answer 10

[x for x in os.listdir(somedir) if os.path.isdir(os.path.join(somedir, x))]

Answer 11

For a list of full path names I prefer this version to the other solutions here:

def listdirs(dir):
    return [os.path.join(os.path.join(dir, x)) for x in os.listdir(dir) 
        if os.path.isdir(os.path.join(dir, x))]

Answer 12

scanDir = "abc"
directories = [d for d in os.listdir(scanDir) if os.path.isdir(os.path.join(os.path.abspath(scanDir), d))]

Answer 13

A safer option that does not fail when there is no directory.

def listdirs(folder):
    if os.path.exists(folder):
         return [d for d in os.listdir(folder) if os.path.isdir(os.path.join(folder, d))]
    else:
         return []

Answer 14

Like so?

>>> [path for path in os.listdir(os.getcwd()) if os.path.isdir(path)]

Answer 15

Python 3.4 introduced the pathlib module into the standard library, which provides an object-oriented approach to handling filesystem paths:

from pathlib import Path

p = Path('./')
[f for f in p.iterdir() if f.is_dir()]
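
p.iterdir() yields Path objects; if you want plain directory names rather than paths, the .name attribute gives them. A small sketch building on the snippet above:

[f.name for f in p.iterdir() if f.is_dir()]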

Answer 16

This will exclude files and traverse through one level of sub-folders under the root:

import os

def list_files(dir):
    List = []
    filterstr = ' '
    for root, dirs, files in os.walk(dir, topdown = True):
        #r.append(root)
        if (root == dir):
            pass
        elif filterstr in root:
            #filterstr = ' '
            pass
        else:
            filterstr = root
            #print(root)
            for name in files:
                print(root)
                print(dirs)
                List.append(os.path.join(root,name))
            #print(os.path.join(root,name),"\n")
                print(List,"\n")

    return List

Open file in a relative location in Python

Question: Open file in a relative location in Python

Suppose the Python code is executed from a Windows directory that is not known in advance, say 'main', and wherever the code is installed, when it runs it needs to access the directory 'main/2091/data.txt'.

How should I use the open(location) function? What should location be?

Edit:

I found that the simple code below works. Does it have any disadvantages?

    file="\2091\sample.txt"
    path=os.getcwd()+file
    fp=open(path,'r+');

Answer 0

With this type of thing you need to be careful what your actual working directory is. For example, you may not run the script from the directory the file is in. In this case, you can’t just use a relative path by itself.

If you are sure the file you want is in a subdirectory beneath where the script is actually located, you can use __file__ to help you out here. __file__ is the full path to where the script you are running is located.

So you can fiddle with something like this:

import os
script_dir = os.path.dirname(__file__) #<-- absolute dir the script is in
rel_path = "2091/data.txt"
abs_file_path = os.path.join(script_dir, rel_path)
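
To actually open the file from the computed path, a minimal completion of the snippet above (assuming the data file exists at that relative location):

with open(abs_file_path) as f:
    data = f.read()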

Answer 1

This code works fine:

import os


def readFile(filename):
    filehandle = open(filename)
    print filehandle.read()
    filehandle.close()



fileDir = os.path.dirname(os.path.realpath(__file__))  # __file__ without quotes: the string '__file__' would resolve against the current working directory
print fileDir

#For accessing the file in the same folder
filename = "same.txt"
readFile(filename)

#For accessing the file in a folder contained in the current folder
filename = os.path.join(fileDir, 'Folder1.1/same.txt')
readFile(filename)

#For accessing the file in the parent folder of the current folder
filename = os.path.join(fileDir, '../same.txt')
readFile(filename)

#For accessing the file inside a sibling folder.
filename = os.path.join(fileDir, '../Folder2/same.txt')
filename = os.path.abspath(os.path.realpath(filename))
print filename
readFile(filename)

Answer 2

I created an account just so I could clarify a discrepancy I think I found in Russ’s original response.

For reference, his original answer was:

import os
script_dir = os.path.dirname(__file__)
rel_path = "2091/data.txt"
abs_file_path = os.path.join(script_dir, rel_path)

This is a great answer because it dynamically creates an absolute system path to the desired file.

Cory Mawhorter noticed that __file__ is a relative path (it is as well on my system) and suggested using os.path.abspath(__file__). os.path.abspath, however, returns the absolute path of your current script (i.e. /path/to/dir/foobar.py).

To use this method (and how I eventually got it working) you have to remove the script name from the end of the path:

import os
script_path = os.path.abspath(__file__) # i.e. /path/to/dir/foobar.py
script_dir = os.path.split(script_path)[0] #i.e. /path/to/dir/
rel_path = "2091/data.txt"
abs_file_path = os.path.join(script_dir, rel_path)

The resulting abs_file_path (in this example) becomes: /path/to/dir/2091/data.txt
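
For reference, os.path.dirname collapses the split-and-index step into a single call; a sketch of the shorter spelling:

script_dir = os.path.dirname(os.path.abspath(__file__))  # same as os.path.split(...)[0]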


Answer 3

It depends on what operating system you're using. If you want a solution that is compatible with both Windows and *nix, something like:

from os import path

file_path = path.relpath("2091/data.txt")
with open(file_path) as f:
    <do stuff>

should work fine.

The path module is able to format a path for whatever operating system it’s running on. Also, python handles relative paths just fine, so long as you have correct permissions.

Edit:

As mentioned by kindall in the comments, python can convert between unix-style and windows-style paths anyway, so even simpler code will work:

with open("2091/data.txt") as f:
    <do stuff>

That being said, the path module still has some useful functions.


Answer 4

I spent a lot of time discovering why my code could not find my file when running Python 3 on Windows. So I added . before / and everything worked fine:

import os

script_dir = os.path.dirname(__file__)
file_path = os.path.join(script_dir, './output03.txt')
print(file_path)
fptr = open(file_path, 'w')

Answer 5

Code:

import os
script_path = os.path.abspath(__file__) 
path_list = script_path.split(os.sep)
script_directory = path_list[0:len(path_list)-1]
rel_path = "main/2091/data.txt"
path = "/".join(script_directory) + "/" + rel_path

Explanation:

Import library:

import os

Use __file__ to attain the current script’s path:

script_path = os.path.abspath(__file__)

Split the script path into its components:

path_list = script_path.split(os.sep)

Remove the last item in the list (the actual script file):

script_directory = path_list[0:len(path_list)-1]

Define the relative path of the file:

rel_path = "main/2091/data.txt"

Join the path components and append the relative file path:

path = "/".join(script_directory) + "/" + rel_path

Now you are set to do whatever you want with the file, such as, for example:

file = open(path)

Answer 6

If the file is in your parent folder, e.g. follower.txt, you can simply use open('../follower.txt', 'r').read().


Answer 7

Try this:

from pathlib import Path

data_folder = Path("/relative/path")
file_to_open = data_folder / "file.pdf"

f = open(file_to_open)

print(f.read())

Python 3.4 introduced a new standard library for dealing with files and paths called pathlib. It works for me!
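
To anchor the path to the script's own location rather than the current working directory, pathlib also combines cleanly with __file__; a sketch reusing the question's 2091/data.txt layout:

from pathlib import Path

file_to_open = Path(__file__).parent / "2091" / "data.txt"
print(file_to_open.read_text())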


Answer 8

Not sure if this works everywhere.

I'm using IPython on Ubuntu.

If you want to read a file in the current folder's sub-directory:

/current-folder/sub-directory/data.csv

and your script is in current-folder, simply try this:

import pandas as pd
path = './sub-directory/data.csv'
pd.read_csv(path)

Answer 9

Python just passes the filename you give it to the operating system, which opens it. If your operating system supports relative paths like main/2091/data.txt (hint: it does), then that will work fine.

You may find that the easiest way to answer a question like this is to try it and see what happens.


Answer 10

import os
def file_path(relative_path):
    dir = os.path.dirname(os.path.abspath(__file__))
    split_path = relative_path.split("/")
    new_path = os.path.join(dir, *split_path)
    return new_path

with open(file_path("2091/data.txt"), "w") as f:
    f.write("Powerful you have become.")

Answer 11

When I was a beginner I found these descriptions a bit intimidating. At first, on Windows, I would try

f= open('C:\Users\chidu\Desktop\Skipper New\Special_Note.txt','w+')
print(f) 

and this would raise a syntax error. I used to get confused a lot. Then, after some surfing across Google, I found why the error occurred. Writing this for beginners:

It's because sequences like \U in the path start an escape sequence inside a string literal, so the path cannot be read as intended. You simply add a \ when starting the file path (or, more robustly, escape every backslash, or use a raw string such as r'C:\Users\...'):

f= open('C:\\Users\chidu\Desktop\Skipper New\Special_Note.txt','w+')
print(f)

And now it works: just add \ before starting the directory.


numpy max vs amax vs maximum

Question: numpy max vs amax vs maximum

numpy has three different functions which seem like they can be used for the same things — except that numpy.maximum can only be used element-wise, while numpy.max and numpy.amax can be used on particular axes, or all elements. Why is there more than just numpy.max? Is there some subtlety to this in performance?

(Similarly for min vs. amin vs. minimum)


Answer 0

np.max is just an alias for np.amax. This function only works on a single input array and finds the value of the maximum element in that entire array (returning a scalar). Alternatively, it takes an axis argument and will find the maximum value along an axis of the input array (returning a new array).

>>> a = np.array([[0, 1, 6],
                  [2, 4, 1]])
>>> np.max(a)
6
>>> np.max(a, axis=0) # max of each column
array([2, 4, 6])

The default behaviour of np.maximum is to take two compatible arrays and compute their element-wise maximum. Here, 'compatible' means that one array can be broadcast to the other. For example:

>>> b = np.array([3, 6, 1])
>>> c = np.array([4, 2, 9])
>>> np.maximum(b, c)
array([4, 6, 9])

But np.maximum is also a universal function which means that it has other features and methods which come in useful when working with multidimensional arrays. For example you can compute the cumulative maximum over an array (or a particular axis of the array):

>>> d = np.array([2, 0, 3, -4, -2, 7, 9])
>>> np.maximum.accumulate(d)
array([2, 2, 3, 3, 3, 7, 9])

This is not possible with np.max.

You can make np.maximum imitate np.max to a certain extent when using np.maximum.reduce:

>>> np.maximum.reduce(d)
9
>>> np.max(d)
9

Basic testing suggests the two approaches are comparable in performance; and they should be, as np.max() actually calls np.maximum.reduce to do the computation.
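
Since np.maximum is a ufunc, its reduce method also accepts an axis argument, mirroring np.max; a short sketch using the array a from above:

>>> np.maximum.reduce(a, axis=0)  # same as np.max(a, axis=0)
array([2, 4, 6])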


Answer 1

You’ve already stated why np.maximum is different – it returns an array that is the element-wise maximum between two arrays.

As for np.amax and np.max: they both call the same function – np.max is just an alias for np.amax, and they compute the maximum of all elements in an array, or along an axis of an array.

In [1]: import numpy as np

In [2]: np.amax
Out[2]: <function numpy.core.fromnumeric.amax>

In [3]: np.max
Out[3]: <function numpy.core.fromnumeric.amax>

Answer 2

For completeness, in Numpy there are four maximum related functions. They fall into two different categories:

  • np.amax/np.max, np.nanmax: for single array order statistics
  • and np.maximum, np.fmax: for element-wise comparison of two arrays

I. For single array order statistics

The NaN propagator np.amax/np.max and its NaN-ignorant counterpart np.nanmax.

  • np.max is just an alias of np.amax, so they are considered as one function.

    >>> np.max.__name__
    'amax'
    >>> np.max is np.amax
    True
    
  • np.max propagates NaNs while np.nanmax ignores NaNs.

    >>> np.max([np.nan, 3.14, -1])
    nan
    >>> np.nanmax([np.nan, 3.14, -1])
    3.14
    

II. For element-wise comparison of two arrays

The NaN propagator np.maximum and its NaN-ignorant counterpart np.fmax.

  • Both functions require two arrays as the first two positional args to compare with.

    # x1 and x2 must be the same shape or can be broadcast
    np.maximum(x1, x2, /, ...);
    np.fmax(x1, x2, /, ...)
    
  • np.maximum propagates NaNs while np.fmax ignores NaNs.

    >>> np.maximum([np.nan, 3.14, 0], [np.NINF, np.nan, 2.72])
    array([ nan,  nan, 2.72])
    >>> np.fmax([np.nan, 3.14, 0], [np.NINF, np.nan, 2.72])
    array([-inf, 3.14, 2.72])
    
  • The element-wise functions are np.ufunc (universal function) objects, which means they have some special properties that normal Numpy functions don't have.

    >>> type(np.maximum)
    <class 'numpy.ufunc'>
    >>> type(np.fmax)
    <class 'numpy.ufunc'>
    >>> #---------------#
    >>> type(np.max)
    <class 'function'>
    >>> type(np.nanmax)
    <class 'function'>
    

And finally, the same rules apply to the four minimum related functions:

  • np.amin/np.min, np.nanmin;
  • and np.minimum, np.fmin.
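
For example, the NaN behaviour carries over symmetrically to the minimum family; a short sketch:

>>> np.min([np.nan, 3.14, -1])
nan
>>> np.nanmin([np.nan, 3.14, -1])
-1.0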

Answer 3

np.maximum not only compares two arrays element-wise; it can also compare an array element-wise against a single value, which is broadcast:

>>> np.maximum([23, 14, 16, 20, 25], 18)
array([23, 18, 18, 20, 25])

iPython Notebook clear cell output in code

Question: iPython Notebook clear cell output in code

In an iPython notebook, I have a while loop that listens to a serial port and prints the received data in real time.

What I want to achieve is to show only the latest received data (i.e. only one line showing the most recent data, with no scrolling in the cell output area).

What I need (I think) is to clear the old cell output when I receive new data, and then print the new data. I am wondering how I can clear the old output programmatically?


Answer 0

You can use IPython.display.clear_output to clear the output of a cell.

from IPython.display import clear_output

for i in range(10):
    clear_output(wait=True)
    print("Hello World!")

At the end of this loop you will only see one Hello World!.

Without a code example it’s not easy to give you working code. Probably buffering the latest n events is a good strategy. Whenever the buffer changes you can clear the cell’s output and print the buffer again.
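
A minimal sketch of that buffering strategy, using collections.deque to keep only the last n lines (the incoming list here is a stand-in for data read from the serial port):

from collections import deque
from IPython.display import clear_output

buffer = deque(maxlen=5)  # keep only the 5 most recent lines
for line in ['a', 'b', 'c', 'd', 'e', 'f', 'g']:  # stand-in for serial data
    buffer.append(line)
    clear_output(wait=True)
    print('\n'.join(buffer))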


Answer 1

And in case you come here, like I did, looking to do the same thing for plots in a Julia notebook in Jupyter, using Plots, you can use:

    IJulia.clear_output(true)

so for a kind of animated plot of multiple runs

    if nrun==1  
      display(plot(x,y))         # first plot
    else 
      IJulia.clear_output(true)  # clear the window (as above)
      display(plot!(x,y))        # plot! overlays the plot
    end

Without the clear_output call, all plots appear separately.


Answer 2

You can use the IPython.display.clear_output to clear the output as mentioned in cel’s answer. I would add that for me the best solution was to use this combination of parameters to print without any “shakiness” of the notebook:

from IPython.display import clear_output

for i in range(10):
    clear_output(wait=True)
    print(i, flush=True)

How to one-hot encode in Python?

Question: How to one-hot encode in Python?

I have a machine learning classification problem with 80% categorical variables. Must I use one-hot encoding if I want to use some classifier for the classification? Can I pass the data to a classifier without the encoding?

I am trying to do the following for feature selection:

  1. I read the train file:

    num_rows_to_read = 10000
    train_small = pd.read_csv("../../dataset/train.csv",   nrows=num_rows_to_read)
    
  2. I change the type of the categorical features to ‘category’:

    non_categorial_features = ['orig_destination_distance',
                              'srch_adults_cnt',
                              'srch_children_cnt',
                              'srch_rm_cnt',
                              'cnt']
    
    for categorical_feature in list(train_small.columns):
        if categorical_feature not in non_categorial_features:
            train_small[categorical_feature] = train_small[categorical_feature].astype('category')
    
  3. I use one hot encoding:

    train_small_with_dummies = pd.get_dummies(train_small, sparse=True)
    

The problem is that the 3rd part often gets stuck, although I am using a powerful machine.

Thus, without the one-hot encoding I can't do any feature selection to determine the importance of the features.

What do you recommend?


Answer 0

Approach 1: You can use pandas’ pd.get_dummies.

Example 1:

import pandas as pd
s = pd.Series(list('abca'))
pd.get_dummies(s)
Out[]: 
     a    b    c
0  1.0  0.0  0.0
1  0.0  1.0  0.0
2  0.0  0.0  1.0
3  1.0  0.0  0.0

Example 2:

The following will transform a given column into one hot. Use prefix to have multiple dummies.

import pandas as pd
        
df = pd.DataFrame({
          'A':['a','b','a'],
          'B':['b','a','c']
        })
df
Out[]: 
   A  B
0  a  b
1  b  a
2  a  c

# Get one hot encoding of columns B
one_hot = pd.get_dummies(df['B'])
# Drop column B as it is now encoded
df = df.drop('B',axis = 1)
# Join the encoded df
df = df.join(one_hot)
df  
Out[]: 
       A  a  b  c
    0  a  0  1  0
    1  b  1  0  0
    2  a  0  0  1

Approach 2: Use Scikit-learn

Using a OneHotEncoder has the advantage of being able to fit on some training data and then transform on some other data using the same instance. We also have handle_unknown to further control what the encoder does with unseen data.

Given a dataset with three features and four samples, we let the encoder find the maximum value per feature and transform the data to a binary one-hot encoding.

>>> from sklearn.preprocessing import OneHotEncoder
>>> enc = OneHotEncoder()
>>> enc.fit([[0, 0, 3], [1, 1, 0], [0, 2, 1], [1, 0, 2]])   
OneHotEncoder(categorical_features='all', dtype=<class 'numpy.float64'>,
   handle_unknown='error', n_values='auto', sparse=True)
>>> enc.n_values_
array([2, 3, 4])
>>> enc.feature_indices_
array([0, 2, 5, 9], dtype=int32)
>>> enc.transform([[0, 1, 1]]).toarray()
array([[ 1.,  0.,  0.,  1.,  0.,  0.,  1.,  0.,  0.]])

Here is the link for this example: http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OneHotEncoder.html
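
On recent scikit-learn versions, the fit-then-transform split mentioned above looks roughly like this (a sketch; handle_unknown='ignore' makes unseen test categories encode as all zeros, and the sparse/sparse_output parameter name differs across versions):

from sklearn.preprocessing import OneHotEncoder

enc = OneHotEncoder(handle_unknown='ignore')
X_train = [['US', 'White'], ['CAN', 'Black']]
enc.fit(X_train)
# unseen 'MEX' maps to all zeros in the country block
print(enc.transform([['MEX', 'White']]).toarray())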


Answer 1

Much easier to use Pandas for basic one-hot encoding. If you’re looking for more options you can use scikit-learn.

For basic one-hot encoding with Pandas you pass your data frame into the get_dummies function.

For example, if I have a dataframe called imdb_movies:

…and I want to one-hot encode the Rated column, I do this:

pd.get_dummies(imdb_movies.Rated)

This returns a new dataframe with a column for every “level” of rating that exists, along with either a 1 or 0 specifying the presence of that rating for a given observation.

Usually, we want this to be part of the original dataframe. In this case, we attach our new dummy-coded frame onto the original frame using "column-binding".

We can column-bind by using Pandas concat function:

rated_dummies = pd.get_dummies(imdb_movies.Rated)
pd.concat([imdb_movies, rated_dummies], axis=1)

We can now run an analysis on our full dataframe.

SIMPLE UTILITY FUNCTION

I would recommend making yourself a utility function to do this quickly:

def encode_and_bind(original_dataframe, feature_to_encode):
    dummies = pd.get_dummies(original_dataframe[[feature_to_encode]])
    res = pd.concat([original_dataframe, dummies], axis=1)
    return(res)

Usage:

encode_and_bind(imdb_movies, 'Rated')

Result:

Also, as per @pmalbu comment, if you would like the function to remove the original feature_to_encode then use this version:

def encode_and_bind(original_dataframe, feature_to_encode):
    dummies = pd.get_dummies(original_dataframe[[feature_to_encode]])
    res = pd.concat([original_dataframe, dummies], axis=1)
    res = res.drop([feature_to_encode], axis=1)
    return(res) 

You can encode multiple features at the same time as follows:

features_to_encode = ['feature_1', 'feature_2', 'feature_3',
                      'feature_4']
res = train_set
for feature in features_to_encode:
    res = encode_and_bind(res, feature)

Answer 2

You can do it with numpy.eye, using the array element selection mechanism:

import numpy as np
nb_classes = 6
data = [[2, 3, 4, 0]]

def indices_to_one_hot(data, nb_classes):
    """Convert an iterable of indices to one-hot encoded labels."""
    targets = np.array(data).reshape(-1)
    return np.eye(nb_classes)[targets]

The return value of indices_to_one_hot(data, nb_classes) is now

array([[[ 0.,  0.,  1.,  0.,  0.,  0.],
        [ 0.,  0.,  0.,  1.,  0.,  0.],
        [ 0.,  0.,  0.,  0.,  1.,  0.],
        [ 1.,  0.,  0.,  0.,  0.,  0.]]])

The .reshape(-1) is there to make sure you have the right labels format (you might also have [[2], [3], [4], [0]]).


Answer 3

Firstly, easiest way to one hot encode: use Sklearn.

http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OneHotEncoder.html

Secondly, I don’t think using pandas to one hot encode is that simple (unconfirmed though)

Creating dummy variables in pandas for python

Lastly, is it necessary for you to one-hot encode? One-hot encoding greatly increases the number of features (one new column per level), drastically increasing the run time of any classifier or anything else you are going to run, especially when each categorical feature has many levels. Instead you can do dummy coding.

Using dummy encoding usually works well, for much less run time and complexity. A wise prof once told me, ‘Less is More’.

Here’s the code for my custom encoding function if you want.

from sklearn.preprocessing import LabelEncoder

#Auto encodes any dataframe column of type category or object.
def dummyEncode(df):
        columnsToEncode = list(df.select_dtypes(include=['category','object']))
        le = LabelEncoder()
        for feature in columnsToEncode:
            try:
                df[feature] = le.fit_transform(df[feature])
            except:
                print('Error encoding '+feature)
        return df

EDIT: A comparison to make things clearer:

One-hot encoding: convert n levels to n-1 columns.

Index  Animal         Index  cat  mouse
  1     dog             1     0     0
  2     cat       -->   2     1     0
  3    mouse            3     0     1

You can see how this will explode your memory if you have many different types (or levels) in your categorical feature. Keep in mind, this is just ONE column.

Dummy Coding:

Index  Animal         Index  Animal
  1     dog             1      0   
  2     cat       -->   2      1 
  3    mouse            3      2

Convert to numerical representations instead. Greatly saves feature space, at the cost of a bit of accuracy.


Answer 4

One hot encoding with pandas is very easy:

def one_hot(df, cols):
    """
    @param df pandas DataFrame
    @param cols a list of columns to encode 
    @return a DataFrame with one-hot encoding
    """
    for each in cols:
        dummies = pd.get_dummies(df[each], prefix=each, drop_first=False)
        df = pd.concat([df, dummies], axis=1)
    return df

EDIT:

Another way to one-hot encode, using sklearn's LabelBinarizer:

from sklearn.preprocessing import LabelBinarizer 
label_binarizer = LabelBinarizer()
label_binarizer.fit(all_your_labels_list) # need to be global or remembered to use it later

def one_hot_encode(x):
    """
    One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
    : x: List of sample Labels
    : return: Numpy array of one-hot encoded labels
    """
    return label_binarizer.transform(x)
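
A small usage sketch with a hypothetical label set, showing the fit-once, transform-later pattern:

from sklearn.preprocessing import LabelBinarizer

label_binarizer = LabelBinarizer()
label_binarizer.fit(['dog', 'cat', 'mouse'])      # hypothetical label list
print(label_binarizer.transform(['cat', 'dog']))  # one 0/1 row per input label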

Answer 5

You can use numpy.eye function.

import numpy as np

def one_hot_encode(x, n_classes):
    """
    One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
    : x: List of sample Labels
    : return: Numpy array of one-hot encoded labels
     """
    return np.eye(n_classes)[x]

def main():
    list = [0,1,2,3,4,3,2,1,0]
    n_classes = 5
    one_hot_list = one_hot_encode(list, n_classes)
    print(one_hot_list)

if __name__ == "__main__":
    main()

Result

D:\Desktop>python test.py
[[ 1.  0.  0.  0.  0.]
 [ 0.  1.  0.  0.  0.]
 [ 0.  0.  1.  0.  0.]
 [ 0.  0.  0.  1.  0.]
 [ 0.  0.  0.  0.  1.]
 [ 0.  0.  0.  1.  0.]
 [ 0.  0.  1.  0.  0.]
 [ 0.  1.  0.  0.  0.]
 [ 1.  0.  0.  0.  0.]]

Answer 6

pandas has the inbuilt function "get_dummies" to get a one-hot encoding of a particular column (or columns).

One-line code for one-hot encoding:

df=pd.concat([df,pd.get_dummies(df['column name'],prefix='column name')],axis=1).drop(['column name'],axis=1)
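
The same one-liner, unpacked into steps for readability (a sketch with a hypothetical 'column name'):

dummies = pd.get_dummies(df['column name'], prefix='column name')
df = df.drop(['column name'], axis=1)
df = pd.concat([df, dummies], axis=1)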

Answer 7

Here is a solution using DictVectorizer and the Pandas DataFrame.to_dict('records') method.

>>> import pandas as pd
>>> X = pd.DataFrame({'income': [100000,110000,90000,30000,14000,50000],
                      'country':['US', 'CAN', 'US', 'CAN', 'MEX', 'US'],
                      'race':['White', 'Black', 'Latino', 'White', 'White', 'Black']
                     })

>>> from sklearn.feature_extraction import DictVectorizer
>>> v = DictVectorizer()
>>> qualitative_features = ['country','race']
>>> X_qual = v.fit_transform(X[qualitative_features].to_dict('records'))
>>> v.vocabulary_
{'country=CAN': 0,
 'country=MEX': 1,
 'country=US': 2,
 'race=Black': 3,
 'race=Latino': 4,
 'race=White': 5}

>>> X_qual.toarray()
array([[ 0.,  0.,  1.,  0.,  0.,  1.],
       [ 1.,  0.,  0.,  1.,  0.,  0.],
       [ 0.,  0.,  1.,  0.,  1.,  0.],
       [ 1.,  0.,  0.,  0.,  0.,  1.],
       [ 0.,  1.,  0.,  0.,  0.,  1.],
       [ 0.,  0.,  1.,  1.,  0.,  0.]])

Answer 8

One-hot encoding requires a bit more than converting the values to indicator variables. Typically an ML process requires you to apply this coding several times to validation or test data sets, and to apply the model you construct to real-time observed data. You should store the mapping (transform) that was used to construct the model. A good solution would use DictVectorizer or LabelEncoder (followed by get_dummies). Here is a function that you can use:

import pandas as pd
from sklearn.preprocessing import LabelEncoder

def oneHotEncode2(df, le_dict = {}):
    if not le_dict:
        columnsToEncode = list(df.select_dtypes(include=['category','object']))
        train = True;
    else:
        columnsToEncode = le_dict.keys()   
        train = False;

    for feature in columnsToEncode:
        if train:
            le_dict[feature] = LabelEncoder()
        try:
            if train:
                df[feature] = le_dict[feature].fit_transform(df[feature])
            else:
                df[feature] = le_dict[feature].transform(df[feature])

            df = pd.concat([df, 
                              pd.get_dummies(df[feature]).rename(columns=lambda x: feature + '_' + str(x))], axis=1)
            df = df.drop(feature, axis=1)
        except:
            print('Error encoding '+feature)
            #df[feature]  = df[feature].convert_objects(convert_numeric='force')
            df[feature]  = df[feature].apply(pd.to_numeric, errors='coerce')
    return (df, le_dict)

This works on a pandas dataframe and for each column of the dataframe it creates and returns a mapping back. So you would call it like this:

train_data, le_dict = oneHotEncode2(train_data)

Then on the test data, the call is made by passing the dictionary returned back from training:

test_data, _ = oneHotEncode2(test_data, le_dict)

An equivalent method is to use DictVectorizer. A related post on the same is on my blog. I mention it here since it provides some reasoning behind this approach over simply using get_dummies post (disclosure: this is my own blog).


Answer 9

You can pass the data to the catboost classifier without encoding. Catboost handles categorical variables itself by performing one-hot encoding and expanding-mean target encoding.


Answer 10

You can do the following as well. Note that for the below you don't have to use pd.concat.

import pandas as pd 
# initialise data of lists. 
data = {'Color':['Red', 'Yellow', 'Red', 'Yellow'], 'Length':[20.1, 21.1, 19.1, 18.1],
       'Group':[1,2,1,2]} 

# Create DataFrame 
df = pd.DataFrame(data) 

for _c in df.select_dtypes(include=['object']).columns:
    print(_c)
    df[_c]  = pd.Categorical(df[_c])
df_transformed = pd.get_dummies(df)
df_transformed

You can also change explicit columns to categorical. For example, here I am changing the Color and Group columns:

import pandas as pd 
# initialise data of lists. 
data = {'Color':['Red', 'Yellow', 'Red', 'Yellow'], 'Length':[20.1, 21.1, 19.1, 18.1],
       'Group':[1,2,1,2]} 

# Create DataFrame 
df = pd.DataFrame(data) 
columns_to_change = list(df.select_dtypes(include=['object']).columns)
columns_to_change.append('Group')
for _c in columns_to_change:
    print(_c)
    df[_c]  = pd.Categorical(df[_c])
df_transformed = pd.get_dummies(df)
df_transformed

Answer 11

I know I’m late to this party, but the simplest way to hot encode a dataframe in an automated way is to use this function:

def hot_encode(df):
    obj_df = df.select_dtypes(include=['object'])
    return pd.get_dummies(df, columns=obj_df.columns).values

Answer 12

I used this in my acoustic model; perhaps it helps with your model.

import numpy as np

def one_hot_encoding(x, n_out):
    x = x.astype(int)  
    shape = x.shape
    x = x.flatten()
    N = len(x)
    x_categ = np.zeros((N,n_out))
    x_categ[np.arange(N), x] = 1
    return x_categ.reshape((shape)+(n_out,))

Answer 13

To add to the other answers, let me show how I did it with a Python 2 function using Numpy:

def one_hot(y_):
    # Function to encode output labels from number indexes 
    # e.g.: [[5], [0], [3]] --> [[0, 0, 0, 0, 0, 1], [1, 0, 0, 0, 0, 0], [0, 0, 0, 1, 0, 0]]

    y_ = y_.reshape(len(y_))
    n_values = np.max(y_) + 1
    return np.eye(n_values)[np.array(y_, dtype=np.int32)]  # Returns FLOATS

The line n_values = np.max(y_) + 1 could be hard-coded so that you use the right number of neurons, in case you use mini-batches for example.

Demo project/tutorial where this function has been used: https://github.com/guillaume-chevalier/LSTM-Human-Activity-Recognition


Answer 14

This works for me:

pandas.factorize( ['B', 'C', 'D', 'B'] )[0]

Output:

[0, 1, 2, 0]
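
Note that factorize by itself is a label encoding (integer codes); to turn those codes into one-hot vectors you can combine it with np.eye, as in the numpy answers above. A short sketch:

import numpy as np
import pandas as pd

codes, uniques = pd.factorize(['B', 'C', 'D', 'B'])
one_hot = np.eye(len(uniques))[codes]  # rows: [1,0,0], [0,1,0], [0,0,1], [1,0,0]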

Answer 15

It can and should be as easy as:

class OneHotEncoder:
    def __init__(self,optionKeys):
        length=len(optionKeys)
        self.__dict__={optionKeys[j]:[0 if i!=j else 1 for i in range(length)] for j in range(length)}

Usage :

ohe=OneHotEncoder(["A","B","C","D"])
print(ohe.A)
print(ohe.D)
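
For the usage above, ohe.A prints [1, 0, 0, 0] and ohe.D prints [0, 0, 0, 1]: each attribute holds the one-hot vector for that key.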

回答 16

扩展@Martin Thoma的答案

def one_hot_encode(y):
    """Convert an iterable of indices to one-hot encoded labels."""
    y = y.flatten() # Sometimes not flattened vector is passed e.g (118,1) in these cases
    # the function ends up creating a tensor e.g. (118, 2, 1). flatten removes this issue
    nb_classes = len(np.unique(y)) # get the number of unique classes
    standardised_labels = dict(zip(np.unique(y), np.arange(nb_classes))) # get the class labels as a dictionary
    # which then is standardised. E.g imagine class labels are (4,7,9) if a vector of y containing 4,7 and 9 is
    # directly passed then np.eye(nb_classes)[4] or 7,9 throws an out of index error.
    # standardised labels fixes this issue by returning a dictionary;
    # standardised_labels = {4:0, 7:1, 9:2}. The values of the dictionary are mapped to keys in y array.
    # standardised_labels also removes the error that is raised if the labels are floats. E.g. 1.0; element
    # cannot be called by an integer index e.g y[1.0] - throws an index error.
    targets = np.vectorize(standardised_labels.get)(y) # map the dictionary values to array.
    return np.eye(nb_classes)[targets]

Expanding @Martin Thoma’s answer

def one_hot_encode(y):
    """Convert an iterable of indices to one-hot encoded labels."""
    y = y.flatten() # Sometimes not flattened vector is passed e.g (118,1) in these cases
    # the function ends up creating a tensor e.g. (118, 2, 1). flatten removes this issue
    nb_classes = len(np.unique(y)) # get the number of unique classes
    standardised_labels = dict(zip(np.unique(y), np.arange(nb_classes))) # get the class labels as a dictionary
    # which then is standardised. E.g imagine class labels are (4,7,9) if a vector of y containing 4,7 and 9 is
    # directly passed then np.eye(nb_classes)[4] or 7,9 throws an out of index error.
    # standardised labels fixes this issue by returning a dictionary;
    # standardised_labels = {4:0, 7:1, 9:2}. The values of the dictionary are mapped to keys in y array.
    # standardised_labels also removes the error that is raised if the labels are floats. E.g. 1.0; element
    # cannot be called by an integer index e.g y[1.0] - throws an index error.
    targets = np.vectorize(standardised_labels.get)(y) # map the dictionary values to array.
    return np.eye(nb_classes)[targets]
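
A quick check with the non-contiguous labels (4, 7, 9) mentioned in the comments:

import numpy as np

y = np.array([4, 7, 9, 4])
print(one_hot_encode(y))
# [[1. 0. 0.]
#  [0. 1. 0.]
#  [0. 0. 1.]
#  [1. 0. 0.]]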

回答 17

简短答案

这是一个无需使用 numpy、pandas 或其他软件包即可进行独热编码的函数。它接受一个由整数、布尔值或字符串(可能还有其他类型)组成的列表。

import typing


def one_hot_encode(items: list) -> typing.List[list]:
    results = []
    # find the unique items (we want the unique items b/c duplicate items will have the same encoding)
    unique_items = list(set(items))
    # sort the unique items
    sorted_items = sorted(unique_items)
    # find how long the list of each item should be
    max_index = len(unique_items)

    for item in items:
        # create a list of zeros the appropriate length
        one_hot_encoded_result = [0 for i in range(0, max_index)]
        # find the index of the item
        one_hot_index = sorted_items.index(item)
        # change the zero at the index from the previous line to a one
        one_hot_encoded_result[one_hot_index] = 1
        # add the result
        results.append(one_hot_encoded_result)

    return results

例:

one_hot_encode([2, 1, 1, 2, 5, 3])

# [[0, 1, 0, 0],
#  [1, 0, 0, 0],
#  [1, 0, 0, 0],
#  [0, 1, 0, 0],
#  [0, 0, 0, 1],
#  [0, 0, 1, 0]]
one_hot_encode([True, False, True])

# [[0, 1], [1, 0], [0, 1]]
one_hot_encode(['a', 'b', 'c', 'a', 'e'])

# [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [1, 0, 0, 0], [0, 0, 0, 1]]

长(长)答案

我知道这个问题已经有很多答案了,但是我注意到两点。首先,大多数答案都使用 numpy 和/或 pandas 之类的软件包。这是一件好事。如果要编写生产代码,您可能应该使用健壮、快速的算法,例如 numpy/pandas 软件包中提供的那些。但是,出于教育的目的,我认为应该有人提供一个包含透明算法的答案,而不仅仅是对别人算法的一种调用。其次,我注意到许多答案没有提供健壮的独热编码实现,因为它们不满足下面的某个要求。以下是我认为一个有用、准确且健壮的独热编码函数应满足的一些要求:

独热编码函数必须:

  • 处理各种类型的列表(例如,整数,字符串,浮点数等)作为输入
  • 处理重复的输入列表
  • 返回与输入相对应的列表列表(顺序相同)
  • 返回列表列表,其中每个列表都尽可能短

我测试了这个问题的许多答案,但大多数都无法满足上述要求之一。

Short Answer

Here is a function to do one-hot-encoding without using numpy, pandas, or other packages. It takes a list of integers, booleans, or strings (and perhaps other types too).

import typing


def one_hot_encode(items: list) -> typing.List[list]:
    results = []
    # find the unique items (we want the unique items b/c duplicate items will have the same encoding)
    unique_items = list(set(items))
    # sort the unique items
    sorted_items = sorted(unique_items)
    # find how long the list of each item should be
    max_index = len(unique_items)

    for item in items:
        # create a list of zeros the appropriate length
        one_hot_encoded_result = [0 for i in range(0, max_index)]
        # find the index of the item
        one_hot_index = sorted_items.index(item)
        # change the zero at the index from the previous line to a one
        one_hot_encoded_result[one_hot_index] = 1
        # add the result
        results.append(one_hot_encoded_result)

    return results

Example:

one_hot_encode([2, 1, 1, 2, 5, 3])

# [[0, 1, 0, 0],
#  [1, 0, 0, 0],
#  [1, 0, 0, 0],
#  [0, 1, 0, 0],
#  [0, 0, 0, 1],
#  [0, 0, 1, 0]]
one_hot_encode([True, False, True])

# [[0, 1], [1, 0], [0, 1]]
one_hot_encode(['a', 'b', 'c', 'a', 'e'])

# [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [1, 0, 0, 0], [0, 0, 0, 1]]

Long(er) Answer

I know there are already a lot of answers to this question, but I noticed two things. First, most of the answers use packages like numpy and/or pandas. And this is a good thing. If you are writing production code, you should probably be using robust, fast algorithms like those provided in the numpy/pandas packages. But, for the sake of education, I think someone should provide an answer which has a transparent algorithm and not just an implementation of someone else’s algorithm. Second, I noticed that many of the answers do not provide a robust implementation of one-hot encoding because they do not meet one of the requirements below. Below are some of the requirements (as I see them) for a useful, accurate, and robust one-hot encoding function:

A one-hot encoding function must:

  • handle list of various types (e.g. integers, strings, floats, etc.) as input
  • handle an input list with duplicates
  • return a list of lists corresponding (in the same order as) to the inputs
  • return a list of lists where each list is as short as possible

I tested many of the answers to this question and most of them fail on one of the requirements above.


回答 18

试试这个:

!pip install category_encoders
import category_encoders as ce

categorical_columns = [...the list of names of the columns you want to one-hot-encode ...]
encoder = ce.OneHotEncoder(cols=categorical_columns, use_cat_names=True)
df_train_encoded = encoder.fit_transform(df_train_small)

df_train_encoded.head()

生成的数据框 df_train_encoded 与原始数据框相同,只是分类特征现在被替换为它们的独热编码版本。

有关 category_encoders 的更多信息,请参见此处。

Try this:

!pip install category_encoders
import category_encoders as ce

categorical_columns = [...the list of names of the columns you want to one-hot-encode ...]
encoder = ce.OneHotEncoder(cols=categorical_columns, use_cat_names=True)
df_train_encoded = encoder.fit_transform(df_train_small)

df_train_encoded.head()

The resulting dataframe df_train_encoded is the same as the original, but the categorical features are now replaced with their one-hot-encoded versions.

More information on category_encoders here.
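
A self-contained sketch of the same idea (the column name and toy data here are made up for illustration):

import pandas as pd
import category_encoders as ce

df_train_small = pd.DataFrame({'Color': ['Red', 'Yellow', 'Red'], 'Length': [20.1, 21.1, 19.1]})
encoder = ce.OneHotEncoder(cols=['Color'], use_cat_names=True)
df_train_encoded = encoder.fit_transform(df_train_small)
print(df_train_encoded.head())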


回答 19

在这里,我尝试了这种方法:

import numpy as np
# converting to one_hot


def one_hot_encoder(value, datal):
    datal[value] = 1
    return datal


def _one_hot_values(labels_data):
    encoded = [0] * len(labels_data)

    for j, i in enumerate(labels_data):
        max_value = [0] * (np.max(labels_data) + 1)

        encoded[j] = one_hot_encoder(i, max_value)

    return np.array(encoded)

Here I tried this approach:

import numpy as np
# converting to one_hot


def one_hot_encoder(value, datal):
    datal[value] = 1
    return datal


def _one_hot_values(labels_data):
    encoded = [0] * len(labels_data)

    for j, i in enumerate(labels_data):
        max_value = [0] * (np.max(labels_data) + 1)

        encoded[j] = one_hot_encoder(i, max_value)

    return np.array(encoded)
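
A quick check of the approach above:

print(_one_hot_values([0, 2, 1]))
# [[1 0 0]
#  [0 0 1]
#  [0 1 0]]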

如何使用matplotlib颜色图将NumPy数组转换为PIL图像

问题:如何使用matplotlib颜色图将NumPy数组转换为PIL图像

我有一个简单的问题,但找不到很好的解决方案。

我想获取一个代表灰度图像的NumPy 2D数组,并在应用一些matplotlib颜色图时将其转换为RGB PIL图像。

我可以使用 pyplot.figure.figimage 命令获得合理的 PNG 输出:

dpi = 100.0
w, h = myarray.shape[1]/dpi, myarray.shape[0]/dpi
fig = plt.figure(figsize=(w,h), dpi=dpi)
fig.figimage(sub, cmap=cm.gist_earth)
plt.savefig('out.png')

尽管我可以修改它来得到我想要的东西(可能可以使用 StringIO 获取 PIL 图像),但我想知道有没有更简单的方法,因为这似乎是图像可视化中一个非常自然的问题。比如,类似这样:

colored_PIL_image = magic_function(array, cmap)

I have a simple problem, but I cannot find a good solution to it.

I want to take a NumPy 2D array which represents a grayscale image, and convert it to an RGB PIL image while applying some of the matplotlib colormaps.

I can get a reasonable PNG output by using the pyplot.figure.figimage command:

dpi = 100.0
w, h = myarray.shape[1]/dpi, myarray.shape[0]/dpi
fig = plt.figure(figsize=(w,h), dpi=dpi)
fig.figimage(sub, cmap=cm.gist_earth)
plt.savefig('out.png')

Although I could adapt this to get what I want (probably using StringIO to get the PIL image), I wonder if there is not a simpler way to do that, since it seems to be a very natural problem of image visualization. Let’s say, something like this:

colored_PIL_image = magic_function(array, cmap)

回答 0

一行代码很忙,但是这里是:

  1. 首先,请确保您的 NumPy 数组 myarray 已归一化,最大值为 1.0。
  2. 将颜色表直接应用于 myarray。
  3. 重新缩放到 0-255 范围。
  4. 使用 np.uint8() 转换为整数。
  5. 使用 Image.fromarray()。

这样就完成了:

from PIL import Image
from matplotlib import cm
import numpy as np  # needed for np.uint8 below
im = Image.fromarray(np.uint8(cm.gist_earth(myarray)*255))

(原回答在此处展示了 plt.savefig() 与 im.save() 输出的对比图像。)

Quite a busy one-liner, but here it is:

  1. First ensure your NumPy array, myarray, is normalised with the max value at 1.0.
  2. Apply the colormap directly to myarray.
  3. Rescale to the 0-255 range.
  4. Convert to integers, using np.uint8().
  5. Use Image.fromarray().

And you’re done:

from PIL import Image
from matplotlib import cm
import numpy as np  # needed for np.uint8 below
im = Image.fromarray(np.uint8(cm.gist_earth(myarray)*255))

(The original answer shows the output images from plt.savefig() and im.save() here for comparison.)
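
Putting the five steps together into a self-contained sketch (the random input array is just for illustration):

import numpy as np
from PIL import Image
from matplotlib import cm

myarray = np.random.rand(64, 64)             # toy grayscale data
myarray = myarray / myarray.max()            # step 1: normalise so the max is 1.0
rgba = cm.gist_earth(myarray)                # step 2: apply the colormap (returns RGBA floats)
im = Image.fromarray(np.uint8(rgba * 255))   # steps 3-5: rescale, cast, convert
im.save('out.png')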


回答 1

  • 输入 = numpy_image
  • np.uint8 -> 转换为整数
  • convert('RGB') -> 转换为 RGB
  • Image.fromarray -> 返回图像对象

    from PIL import Image
    import numpy as np
    
    PIL_image = Image.fromarray(np.uint8(numpy_image)).convert('RGB')
    
    PIL_image = Image.fromarray(numpy_image.astype('uint8'), 'RGB')
  • input = numpy_image
  • np.uint8 -> converts to integers
  • convert(‘RGB’) -> converts to RGB
  • Image.fromarray -> returns an image object

    from PIL import Image
    import numpy as np
    
    PIL_image = Image.fromarray(np.uint8(numpy_image)).convert('RGB')
    
    PIL_image = Image.fromarray(numpy_image.astype('uint8'), 'RGB')
    

回答 2

即使应用了注释中提到的更改,接受的答案中描述的方法对我也不起作用。但是下面的简单代码有效:

import matplotlib.pyplot as plt
plt.imsave(filename, np_array, cmap='Greys')

np_array 可以是值为 0..1 浮点数或 0..255 uint8 的 2D 数组,在这种情况下需要 cmap。对于 3D 数组,cmap 将被忽略。

The method described in the accepted answer didn’t work for me even after applying changes mentioned in its comments. But the below simple code worked:

import matplotlib.pyplot as plt
plt.imsave(filename, np_array, cmap='Greys')

np_array could be either a 2D array with values from 0..1 floats or 0..255 uint8, and in that case it needs cmap. For 3D arrays, cmap will be ignored.
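
For example, a minimal sketch with a toy array:

import numpy as np
import matplotlib.pyplot as plt

np_array = np.random.rand(64, 64)             # 2D floats in 0..1
plt.imsave('out.png', np_array, cmap='Greys')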


Pandas DataFrame:根据条件替换列中的所有值

问题:Pandas DataFrame:根据条件替换列中的所有值

我有一个简单的DataFrame如下所示:

我想从“第一季”列中选择所有值,然后将1990年以后的值替换为1。在此示例中,只有巴尔的摩乌鸦将1996年替换为1(其余数据保持不变)。

我使用了以下内容:

df.loc[(df['First Season'] > 1990)] = 1

但是,它将行中的所有值替换为1,而不仅仅是“第一季”列中的值。

如何仅替换该列中的值?

I have a simple DataFrame like the following:

I want to select all values from the ‘First Season’ column and replace those that are over 1990 by 1. In this example, only Baltimore Ravens would have the 1996 replaced by 1 (keeping the rest of the data intact).

I have used the following:

df.loc[(df['First Season'] > 1990)] = 1

But, it replaces all the values in that row by 1, and not just the values in the ‘First Season’ column.

How can I replace just the values from that column?


回答 0

您需要选择该列:

In [41]:
df.loc[df['First Season'] > 1990, 'First Season'] = 1
df

Out[41]:
                 Team  First Season  Total Games
0      Dallas Cowboys          1960          894
1       Chicago Bears          1920         1357
2   Green Bay Packers          1921         1339
3      Miami Dolphins          1966          792
4    Baltimore Ravens             1          326
5  San Franciso 49ers          1950         1003

所以这里的语法是:

df.loc[<mask>(here mask is generating the labels to index) , <optional column(s)> ]

您可以查看文档以及演示这些语义的《10 minutes to pandas》。

编辑

如果你想生成一个布尔指示值,可以直接用布尔条件生成一个布尔 Series,再把 dtype 转换为 int,这会把 True 和 False 分别转换为 1 和 0:

In [43]:
df['First Season'] = (df['First Season'] > 1990).astype(int)
df

Out[43]:
                 Team  First Season  Total Games
0      Dallas Cowboys             0          894
1       Chicago Bears             0         1357
2   Green Bay Packers             0         1339
3      Miami Dolphins             0          792
4    Baltimore Ravens             1          326
5  San Franciso 49ers             0         1003

You need to select that column:

In [41]:
df.loc[df['First Season'] > 1990, 'First Season'] = 1
df

Out[41]:
                 Team  First Season  Total Games
0      Dallas Cowboys          1960          894
1       Chicago Bears          1920         1357
2   Green Bay Packers          1921         1339
3      Miami Dolphins          1966          792
4    Baltimore Ravens             1          326
5  San Franciso 49ers          1950         1003

So the syntax here is:

df.loc[<mask>(here mask is generating the labels to index) , <optional column(s)> ]

You can check the docs and also the 10 minutes to pandas which shows the semantics

EDIT

If you want to generate a boolean indicator then you can just use the boolean condition to generate a boolean Series and cast the dtype to int this will convert True and False to 1 and 0 respectively:

In [43]:
df['First Season'] = (df['First Season'] > 1990).astype(int)
df

Out[43]:
                 Team  First Season  Total Games
0      Dallas Cowboys             0          894
1       Chicago Bears             0         1357
2   Green Bay Packers             0         1339
3      Miami Dolphins             0          792
4    Baltimore Ravens             1          326
5  San Franciso 49ers             0         1003

回答 1

聚会来晚了一点,不过我还是更喜欢使用 numpy 的 where:

import numpy as np
df['First Season'] = np.where(df['First Season'] > 1990, 1, df['First Season'])

A bit late to the party but still – I prefer using numpy where:

import numpy as np
df['First Season'] = np.where(df['First Season'] > 1990, 1, df['First Season'])

回答 2

df['First Season'].loc[(df['First Season'] > 1990)] = 1

奇怪的是没有人给出这个答案;您的代码唯一缺少的部分就是 df 之后的 ['First Season'](并去掉里面的括号即可)。

df['First Season'].loc[(df['First Season'] > 1990)] = 1

Strange that nobody has this answer; the only missing part of your code is the ['First Season'] right after df (and just remove the brackets inside).


回答 3

对于单一条件,即 (df['employrate'] > 70):

       country        employrate alcconsumption
0  Afghanistan  55.7000007629394            .03
1      Albania  51.4000015258789           7.29
2      Algeria              50.5            .69
3      Andorra                            10.17
4       Angola  75.6999969482422           5.57

用这个:

df.loc[df['employrate'] > 70, 'employrate'] = 7

       country  employrate alcconsumption
0  Afghanistan   55.700001            .03
1      Albania   51.400002           7.29
2      Algeria   50.500000            .69
3      Andorra         nan          10.17
4       Angola    7.000000           5.57

因此,语法如下:

df.loc[<mask>(here mask is generating the labels to index) , <optional column(s)> ]

对于多个条件,即。 (df['employrate'] <=55) & (df['employrate'] > 50)

用这个:

df['employrate'] = np.where(
   (df['employrate'] <=55) & (df['employrate'] > 50) , 11, df['employrate']
   )

out[108]:
       country  employrate alcconsumption
0  Afghanistan   55.700001            .03
1      Albania   11.000000           7.29
2      Algeria   11.000000            .69
3      Andorra         nan          10.17
4       Angola   75.699997           5.57

因此,语法如下:

 df['<column_name>'] = np.where((<filter 1> ) & (<filter 2>) , <new value>, df['column_name'])

for a single condition, i.e. (df['employrate'] > 70)

       country        employrate alcconsumption
0  Afghanistan  55.7000007629394            .03
1      Albania  51.4000015258789           7.29
2      Algeria              50.5            .69
3      Andorra                            10.17
4       Angola  75.6999969482422           5.57

use this:

df.loc[df['employrate'] > 70, 'employrate'] = 7

       country  employrate alcconsumption
0  Afghanistan   55.700001            .03
1      Albania   51.400002           7.29
2      Algeria   50.500000            .69
3      Andorra         nan          10.17
4       Angola    7.000000           5.57

therefore syntax here is:

df.loc[<mask>(here mask is generating the labels to index) , <optional column(s)> ]

For multiple conditions ie. (df['employrate'] <=55) & (df['employrate'] > 50)

use this:

df['employrate'] = np.where(
   (df['employrate'] <=55) & (df['employrate'] > 50) , 11, df['employrate']
   )

out[108]:
       country  employrate alcconsumption
0  Afghanistan   55.700001            .03
1      Albania   11.000000           7.29
2      Algeria   11.000000            .69
3      Andorra         nan          10.17
4       Angola   75.699997           5.57

therefore syntax here is:

 df['<column_name>'] = np.where((<filter 1> ) & (<filter 2>) , <new value>, df['column_name'])

回答 4

df.loc[df['First Season'] > 1990, 'First Season'] = 1

说明:

df.loc 接受两个参数:“行索引”和“列索引”。我们检查 “First Season” 列下每一行的值是否大于 1990,然后将其替换为 1。

df.loc[df['First Season'] > 1990, 'First Season'] = 1

Explanation:

df.loc takes two arguments, ‘row index’ and ‘column index’. We are checking whether each row’s value under the “First Season” column is greater than 1990, and then replacing it with 1.


Python-提取和保存视频帧

问题:Python-提取和保存视频帧

因此,我已按照本教程进行操作,但似乎没有任何作用。根本没有。等待几秒钟,然后关闭程序。此代码有什么问题?

import cv2
vidcap = cv2.VideoCapture('Compton.mp4')
success,image = vidcap.read()
count = 0
success = True
while success:
  success,image = vidcap.read()
  cv2.imwrite("frame%d.jpg" % count, image)     # save frame as JPEG file
  if cv2.waitKey(10) == 27:                     # exit if Escape is hit
      break
  count += 1

另外,在评论中说这将帧数限制为1000?为什么?

编辑:我尝试先设置 success = True,但这没有帮助。它只创建了一个 0 字节的图像。

So I’ve followed this tutorial but it doesn’t seem to do anything. Simply nothing. It waits a few seconds and closes the program. What is wrong with this code?

import cv2
vidcap = cv2.VideoCapture('Compton.mp4')
success,image = vidcap.read()
count = 0
success = True
while success:
  success,image = vidcap.read()
  cv2.imwrite("frame%d.jpg" % count, image)     # save frame as JPEG file
  if cv2.waitKey(10) == 27:                     # exit if Escape is hit
      break
  count += 1

Also, in the comments it says that this limits the frames to 1000? Why?

EDIT: I tried doing success = True first but that didn’t help. It only created one image that was 0 bytes.


回答 0

从这里下载此视频,这样我们就有相同的视频文件用于测试。确保将该 mp4 文件与 Python 代码放在同一目录中,并从同一目录运行 Python 解释器。

然后修改代码,去掉 waitKey:它既浪费时间,而且在没有窗口的情况下也无法捕获键盘事件。另外,我们打印 success 的值,以确保帧读取成功。

import cv2
vidcap = cv2.VideoCapture('big_buck_bunny_720p_5mb.mp4')
success,image = vidcap.read()
count = 0
while success:
  cv2.imwrite("frame%d.jpg" % count, image)     # save frame as JPEG file      
  success,image = vidcap.read()
  print('Read a new frame: ', success)
  count += 1

怎么样了?

From here download this video so we have the same video file for the test. Make sure to have that mp4 file in the same directory of your python code. Then also make sure to run the python interpreter from the same directory.

Then modify the code, ditch waitKey that’s wasting time also without a window it cannot capture the keyboard events. Also we print the success value to make sure it’s reading the frames successfully.

import cv2
vidcap = cv2.VideoCapture('big_buck_bunny_720p_5mb.mp4')
success,image = vidcap.read()
count = 0
while success:
  cv2.imwrite("frame%d.jpg" % count, image)     # save frame as JPEG file      
  success,image = vidcap.read()
  print('Read a new frame: ', success)
  count += 1

How does that go?


回答 1

针对稍有不同的情况扩展这个问题(以及 @user2700065 的答案):如果有人不想提取每一帧,而是想每秒提取一帧。这样一分钟的视频将得到 60 帧(图像)。

import sys
import argparse

import cv2
print(cv2.__version__)

def extractImages(pathIn, pathOut):
    count = 0
    vidcap = cv2.VideoCapture(pathIn)
    success,image = vidcap.read()
    success = True
    while success:
        vidcap.set(cv2.CAP_PROP_POS_MSEC,(count*1000))    # added this line 
        success,image = vidcap.read()
        print ('Read a new frame: ', success)
        cv2.imwrite( pathOut + "\\frame%d.jpg" % count, image)     # save frame as JPEG file
        count = count + 1

if __name__=="__main__":
    a = argparse.ArgumentParser()
    a.add_argument("--pathIn", help="path to video")
    a.add_argument("--pathOut", help="path to images")
    args = a.parse_args()
    print(args)
    extractImages(args.pathIn, args.pathOut)

To extend on this question (& answer by @user2700065) for a slightly different cases, if anyone does not want to extract every frame but wants to extract frame every one second. So a 1-minute video will give 60 frames(images).

import sys
import argparse

import cv2
print(cv2.__version__)

def extractImages(pathIn, pathOut):
    count = 0
    vidcap = cv2.VideoCapture(pathIn)
    success,image = vidcap.read()
    success = True
    while success:
        vidcap.set(cv2.CAP_PROP_POS_MSEC,(count*1000))    # added this line 
        success,image = vidcap.read()
        print ('Read a new frame: ', success)
        cv2.imwrite( pathOut + "\\frame%d.jpg" % count, image)     # save frame as JPEG file
        count = count + 1

if __name__=="__main__":
    a = argparse.ArgumentParser()
    a.add_argument("--pathIn", help="path to video")
    a.add_argument("--pathOut", help="path to images")
    args = a.parse_args()
    print(args)
    extractImages(args.pathIn, args.pathOut)

回答 2

这是对 @GShocked 之前答案针对 Python 3.x 的调整。我本想发布到评论里,但声望不够。

import sys
import argparse

import cv2
print(cv2.__version__)

def extractImages(pathIn, pathOut):
    vidcap = cv2.VideoCapture(pathIn)
    success,image = vidcap.read()
    count = 0
    success = True
    while success:
      success,image = vidcap.read()
      print ('Read a new frame: ', success)
      cv2.imwrite( pathOut + "\\frame%d.jpg" % count, image)     # save frame as JPEG file
      count += 1

if __name__=="__main__":
    print("aba")
    a = argparse.ArgumentParser()
    a.add_argument("--pathIn", help="path to video")
    a.add_argument("--pathOut", help="path to images")
    args = a.parse_args()
    print(args)
    extractImages(args.pathIn, args.pathOut)

This is a tweak of the previous answer for Python 3.x from @GShocked. I would post it as a comment, but I don’t have enough reputation.

import sys
import argparse

import cv2
print(cv2.__version__)

def extractImages(pathIn, pathOut):
    vidcap = cv2.VideoCapture(pathIn)
    success,image = vidcap.read()
    count = 0
    success = True
    while success:
      success,image = vidcap.read()
      print ('Read a new frame: ', success)
      cv2.imwrite( pathOut + "\\frame%d.jpg" % count, image)     # save frame as JPEG file
      count += 1

if __name__=="__main__":
    print("aba")
    a = argparse.ArgumentParser()
    a.add_argument("--pathIn", help="path to video")
    a.add_argument("--pathOut", help="path to images")
    args = a.parse_args()
    print(args)
    extractImages(args.pathIn, args.pathOut)

回答 3

这个函数可以把大多数格式的视频转换为视频中包含的全部帧。它可以在 Python 3 和 OpenCV 3+ 上运行。

import cv2
import time
import os

def video_to_frames(input_loc, output_loc):
    """Function to extract frames from input video file
    and save them as separate frames in an output directory.
    Args:
        input_loc: Input video file.
        output_loc: Output directory to save the frames.
    Returns:
        None
    """
    try:
        os.mkdir(output_loc)
    except OSError:
        pass
    # Log the time
    time_start = time.time()
    # Start capturing the feed
    cap = cv2.VideoCapture(input_loc)
    # Find the number of frames
    video_length = int(cap.get(cv2.CAP_PROP_FRAME_COUNT)) - 1
    print ("Number of frames: ", video_length)
    count = 0
    print ("Converting video..\n")
    # Start converting the video
    while cap.isOpened():
        # Extract the frame
        ret, frame = cap.read()
        # Write the results back to output location.
        cv2.imwrite(output_loc + "/%#05d.jpg" % (count+1), frame)
        count = count + 1
        # If there are no more frames left
        if (count > (video_length-1)):
            # Log the time again
            time_end = time.time()
            # Release the feed
            cap.release()
            # Print stats
            print ("Done extracting frames.\n%d frames extracted" % count)
            print ("It took %d seconds forconversion." % (time_end-time_start))
            break

if __name__=="__main__":

    input_loc = '/path/to/video/00009.MTS'
    output_loc = '/path/to/output/frames/'
    video_to_frames(input_loc, output_loc)

它支持 .mts 以及 .mp4、.avi 等普通文件。已在 .mts 文件上试验和测试,效果极佳。

This is a function which will convert most of the video formats into the number of frames there are in the video. It works on Python 3 with OpenCV 3+.

import cv2
import time
import os

def video_to_frames(input_loc, output_loc):
    """Function to extract frames from input video file
    and save them as separate frames in an output directory.
    Args:
        input_loc: Input video file.
        output_loc: Output directory to save the frames.
    Returns:
        None
    """
    try:
        os.mkdir(output_loc)
    except OSError:
        pass
    # Log the time
    time_start = time.time()
    # Start capturing the feed
    cap = cv2.VideoCapture(input_loc)
    # Find the number of frames
    video_length = int(cap.get(cv2.CAP_PROP_FRAME_COUNT)) - 1
    print ("Number of frames: ", video_length)
    count = 0
    print ("Converting video..\n")
    # Start converting the video
    while cap.isOpened():
        # Extract the frame
        ret, frame = cap.read()
        # Write the results back to output location.
        cv2.imwrite(output_loc + "/%#05d.jpg" % (count+1), frame)
        count = count + 1
        # If there are no more frames left
        if (count > (video_length-1)):
            # Log the time again
            time_end = time.time()
            # Release the feed
            cap.release()
            # Print stats
            print ("Done extracting frames.\n%d frames extracted" % count)
            print ("It took %d seconds forconversion." % (time_end-time_start))
            break

if __name__=="__main__":

    input_loc = '/path/to/video/00009.MTS'
    output_loc = '/path/to/output/frames/'
    video_to_frames(input_loc, output_loc)

It supports .mts and normal files like .mp4 and .avi. Tried and Tested on .mts files. Works like a Charm.


回答 4

经过大量有关如何将帧转换为视频的研究,我创建了此功能,希望对您有所帮助。为此,我们需要opencv:

import cv2
import numpy as np
import os
from os.path import isfile, join  # used by the listdir filter below

def frames_to_video(inputpath,outputpath,fps):
   image_array = []
   files = [f for f in os.listdir(inputpath) if isfile(join(inputpath, f))]
   files.sort(key = lambda x: int(x[5:-4]))
   for i in range(len(files)):
       img = cv2.imread(inputpath + files[i])
       size =  (img.shape[1],img.shape[0])
       img = cv2.resize(img,size)
       image_array.append(img)
   fourcc = cv2.VideoWriter_fourcc('D', 'I', 'V', 'X')
   out = cv2.VideoWriter(outputpath,fourcc, fps, size)
   for i in range(len(image_array)):
       out.write(image_array[i])
   out.release()


inputpath = 'folder path'
outpath =  'video file path/video.mp4'
fps = 29
frames_to_video(inputpath,outpath,fps)

根据您自己的本地位置,修改 fps(帧/秒)、输入文件夹路径和输出文件夹路径的值。

After a lot of research on how to convert frames to video I have created this function hope this helps. We require opencv for this:

import cv2
import numpy as np
import os
from os.path import isfile, join  # used by the listdir filter below

def frames_to_video(inputpath,outputpath,fps):
   image_array = []
   files = [f for f in os.listdir(inputpath) if isfile(join(inputpath, f))]
   files.sort(key = lambda x: int(x[5:-4]))
   for i in range(len(files)):
       img = cv2.imread(inputpath + files[i])
       size =  (img.shape[1],img.shape[0])
       img = cv2.resize(img,size)
       image_array.append(img)
   fourcc = cv2.VideoWriter_fourcc('D', 'I', 'V', 'X')
   out = cv2.VideoWriter(outputpath,fourcc, fps, size)
   for i in range(len(image_array)):
       out.write(image_array[i])
   out.release()


inputpath = 'folder path'
outpath =  'video file path/video.mp4'
fps = 29
frames_to_video(inputpath,outpath,fps)

change the value of fps(frames per second),input folder path and output folder path according to your own local locations


回答 5

先前的答案丢失了第一帧。而且最好将图像存储在文件夹中。

# create a folder to store extracted images
import os
folder = 'test'  
os.mkdir(folder)
# use opencv to do the job
import cv2
print(cv2.__version__)  # my version is 3.1.0
vidcap = cv2.VideoCapture('test_video.mp4')
count = 0
while True:
    success,image = vidcap.read()
    if not success:
        break
    cv2.imwrite(os.path.join(folder,"frame{:d}.jpg".format(count)), image)     # save frame as JPEG file
    count += 1
print("{} images are extacted in {}.".format(count,folder))

顺便说一下,您可以通过 VLC 检查帧率:转到 窗口 -> 媒体信息 -> 编解码器详细信息。

The previous answers have lost the first frame. And it will be nice to store the images in a folder.

# create a folder to store extracted images
import os
folder = 'test'  
os.mkdir(folder)
# use opencv to do the job
import cv2
print(cv2.__version__)  # my version is 3.1.0
vidcap = cv2.VideoCapture('test_video.mp4')
count = 0
while True:
    success,image = vidcap.read()
    if not success:
        break
    cv2.imwrite(os.path.join(folder,"frame{:d}.jpg".format(count)), image)     # save frame as JPEG file
    count += 1
print("{} images are extacted in {}.".format(count,folder))

By the way, you can check the frame rate by VLC. Go to windows -> media information -> codec details
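
You can also query the frame rate programmatically; a minimal sketch using OpenCV:

import cv2

vidcap = cv2.VideoCapture('test_video.mp4')
print('Frame rate:', vidcap.get(cv2.CAP_PROP_FPS))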


回答 6

此代码从视频中提取帧,并将帧保存为 .jpg 格式

import cv2
import numpy as np
import os

# set video file path of input video with name and extension
vid = cv2.VideoCapture('VideoPath')


if not os.path.exists('images'):
    os.makedirs('images')

#for frame identity
index = 0
while(True):
    # Extract images
    ret, frame = vid.read()
    # end of frames
    if not ret: 
        break
    # Saves images
    name = './images/frame' + str(index) + '.jpg'
    print ('Creating...' + name)
    cv2.imwrite(name, frame)

    # next frame
    index += 1

This code extracts frames from the video and saves the frames in .jpg format

import cv2
import numpy as np
import os

# set video file path of input video with name and extension
vid = cv2.VideoCapture('VideoPath')


if not os.path.exists('images'):
    os.makedirs('images')

#for frame identity
index = 0
while(True):
    # Extract images
    ret, frame = vid.read()
    # end of frames
    if not ret: 
        break
    # Saves images
    name = './images/frame' + str(index) + '.jpg'
    print ('Creating...' + name)
    cv2.imwrite(name, frame)

    # next frame
    index += 1

回答 7

我正在通过Anaconda的Spyder软件使用Python。使用@Gshocked在此线程问题中列出的原始代码,该代码不起作用(Python无法读取mp4文件)。因此,我下载了OpenCV 3.2,并从“ bin”文件夹中复制了“ opencv_ffmpeg320.dll”和“ opencv_ffmpeg320_64.dll”。我将这两个dll文件都粘贴到了Anaconda的“ Dlls”文件夹中。

Anaconda也有一个“ pckgs”文件夹…我复制并粘贴了我下载到Anaconda“ pckgs”文件夹中的整个“ OpenCV 3.2”文件夹。

最后,Anaconda 有一个 “Library” 文件夹,其中有一个 “bin” 子文件夹。我将 “opencv_ffmpeg320.dll” 和 “opencv_ffmpeg320_64.dll” 文件粘贴到了该文件夹中。

关闭并重新启动Spyder之后,代码即可正常工作。我不确定这三种方法中的哪一种有效,而且我懒得回头再去弄清楚。但这很奏效,欢呼!

I am using Python via Anaconda’s Spyder software. Using the original code listed in the question of this thread by @Gshocked, the code does not work (the python won’t read the mp4 file). So I downloaded OpenCV 3.2 and copied “opencv_ffmpeg320.dll” and “opencv_ffmpeg320_64.dll” from the “bin” folder. I pasted both of these dll files to Anaconda’s “Dlls” folder.

Anaconda also has a “pckgs” folder…I copied and pasted the entire “OpenCV 3.2” folder that I downloaded to the Anaconda “pckgs” folder.

Finally, Anaconda has a “Library” folder which has a “bin” subfolder. I pasted the “opencv_ffmpeg320.dll” and “opencv_ffmpeg320_64.dll” files to that folder.

After closing and restarting Spyder, the code worked. I’m not sure which of the three methods worked, and I’m too lazy to go back and figure it out. But it works so, cheers!


回答 8

此功能以1 fps的速度从视频中提取图像,此外它还标识最后一帧并停止读取:

import cv2
import numpy as np

def extract_image_one_fps(video_source_path):

    vidcap = cv2.VideoCapture(video_source_path)
    count = 0
    success = True
    while success:
      vidcap.set(cv2.CAP_PROP_POS_MSEC,(count*1000))      
      success,image = vidcap.read()

      ## Stop when last frame is identified
      image_last = cv2.imread("frame{}.png".format(count-1))
      if np.array_equal(image,image_last):
          break

      cv2.imwrite("frame%d.png" % count, image)     # save frame as PNG file
      print '{}.sec reading a new frame: {} '.format(count,success)
      count += 1

This function extracts images from video with 1 fps, IN ADDITION it identifies the last frame and stops reading also:

import cv2
import numpy as np

def extract_image_one_fps(video_source_path):

    vidcap = cv2.VideoCapture(video_source_path)
    count = 0
    success = True
    while success:
      vidcap.set(cv2.CAP_PROP_POS_MSEC,(count*1000))      
      success,image = vidcap.read()

      ## Stop when last frame is identified
      image_last = cv2.imread("frame{}.png".format(count-1))
      if np.array_equal(image,image_last):
          break

      cv2.imwrite("frame%d.png" % count, image)     # save frame as PNG file
      print '{}.sec reading a new frame: {} '.format(count,success)
      count += 1

回答 9

以下脚本将每隔半秒提取一次文件夹中所有视频的帧。(适用于python 3.7)

import cv2
import os
listing = os.listdir(r'D:/Images/AllVideos')
count=1
for vid in listing:
    vid = r"D:/Images/AllVideos/"+vid
    vidcap = cv2.VideoCapture(vid)
    def getFrame(sec):
        vidcap.set(cv2.CAP_PROP_POS_MSEC,sec*1000)
        hasFrames,image = vidcap.read()
        if hasFrames:
            cv2.imwrite("D:/Images/Frames/image"+str(count)+".jpg", image) # Save frame as JPG file
        return hasFrames
    sec = 0
    frameRate = 0.5 # Change this number to 1 for each 1 second
    
    success = getFrame(sec)
    while success:
        count = count + 1
        sec = sec + frameRate
        sec = round(sec, 2)
        success = getFrame(sec)

Following script will extract frames every half a second of all videos in folder. (Works on python 3.7)

import cv2
import os
listing = os.listdir(r'D:/Images/AllVideos')
count=1
for vid in listing:
    vid = r"D:/Images/AllVideos/"+vid
    vidcap = cv2.VideoCapture(vid)
    def getFrame(sec):
        vidcap.set(cv2.CAP_PROP_POS_MSEC,sec*1000)
        hasFrames,image = vidcap.read()
        if hasFrames:
            cv2.imwrite("D:/Images/Frames/image"+str(count)+".jpg", image) # Save frame as JPG file
        return hasFrames
    sec = 0
    frameRate = 0.5 # Change this number to 1 for each 1 second
    
    success = getFrame(sec)
    while success:
        count = count + 1
        sec = sec + frameRate
        sec = round(sec, 2)
        success = getFrame(sec)

标准化大熊猫中的数据

问题:标准化大熊猫中的数据

假设我有一个熊猫数据框df

我想计算数据框的列均值。

这很简单:

df.apply(average) 

然后按列范围max(col)-min(col)。这又很容易:

df.apply(max) - df.apply(min)

现在,对于每个元素,我要减去其列的均值并除以其列的范围。我不确定该怎么做

非常感谢任何帮助/指针。

Suppose I have a pandas data frame df:

I want to calculate the column wise mean of a data frame.

This is easy:

df.apply(average) 

then the column wise range max(col) – min(col). This is easy again:

df.apply(max) - df.apply(min)

Now for each element I want to subtract its column’s mean and divide by its column’s range. I am not sure how to do that

Any help/pointers are much appreciated.


回答 0

In [92]: df
Out[92]:
           a         b          c         d
A  -0.488816  0.863769   4.325608 -4.721202
B -11.937097  2.993993 -12.916784 -1.086236
C  -5.569493  4.672679  -2.168464 -9.315900
D   8.892368  0.932785   4.535396  0.598124

In [93]: df_norm = (df - df.mean()) / (df.max() - df.min())

In [94]: df_norm
Out[94]:
          a         b         c         d
A  0.085789 -0.394348  0.337016 -0.109935
B -0.463830  0.164926 -0.650963  0.256714
C -0.158129  0.605652 -0.035090 -0.573389
D  0.536170 -0.376229  0.349037  0.426611

In [95]: df_norm.mean()
Out[95]:
a   -2.081668e-17
b    4.857226e-17
c    1.734723e-17
d   -1.040834e-17

In [96]: df_norm.max() - df_norm.min()
Out[96]:
a    1
b    1
c    1
d    1
In [92]: df
Out[92]:
           a         b          c         d
A  -0.488816  0.863769   4.325608 -4.721202
B -11.937097  2.993993 -12.916784 -1.086236
C  -5.569493  4.672679  -2.168464 -9.315900
D   8.892368  0.932785   4.535396  0.598124

In [93]: df_norm = (df - df.mean()) / (df.max() - df.min())

In [94]: df_norm
Out[94]:
          a         b         c         d
A  0.085789 -0.394348  0.337016 -0.109935
B -0.463830  0.164926 -0.650963  0.256714
C -0.158129  0.605652 -0.035090 -0.573389
D  0.536170 -0.376229  0.349037  0.426611

In [95]: df_norm.mean()
Out[95]:
a   -2.081668e-17
b    4.857226e-17
c    1.734723e-17
d   -1.040834e-17

In [96]: df_norm.max() - df_norm.min()
Out[96]:
a    1
b    1
c    1
d    1

回答 1

如果您不介意导入sklearn库,我建议您使用博客上介绍的方法。

import pandas as pd
from sklearn import preprocessing

data = {'score': [234,24,14,27,-74,46,73,-18,59,160]}
df = pd.DataFrame(data)
cols = df.columns  # a plain dict has no .columns; take them from the DataFrame
df

min_max_scaler = preprocessing.MinMaxScaler()
np_scaled = min_max_scaler.fit_transform(df)
df_normalized = pd.DataFrame(np_scaled, columns = cols)
df_normalized

If you don’t mind importing the sklearn library, I would recommend the method discussed on this blog.

import pandas as pd
from sklearn import preprocessing

data = {'score': [234,24,14,27,-74,46,73,-18,59,160]}
df = pd.DataFrame(data)
cols = df.columns  # a plain dict has no .columns; take them from the DataFrame
df

min_max_scaler = preprocessing.MinMaxScaler()
np_scaled = min_max_scaler.fit_transform(df)
df_normalized = pd.DataFrame(np_scaled, columns = cols)
df_normalized

回答 2

您可以使用 apply 来实现,这样更整洁一些:

import numpy as np
import pandas as pd

np.random.seed(1)

df = pd.DataFrame(np.random.randn(4,4)* 4 + 3)

          0         1         2         3
0  9.497381  0.552974  0.887313 -1.291874
1  6.461631 -6.206155  9.979247 -0.044828
2  4.276156  2.002518  8.848432 -5.240563
3  1.710331  1.463783  7.535078 -1.399565

df.apply(lambda x: (x - np.mean(x)) / (np.max(x) - np.min(x)))

          0         1         2         3
0  0.515087  0.133967 -0.651699  0.135175
1  0.125241 -0.689446  0.348301  0.375188
2 -0.155414  0.310554  0.223925 -0.624812
3 -0.484913  0.244924  0.079473  0.114448

此外,如果您选择相关列,它也可以与 groupby 配合使用:

df['grp'] = ['A', 'A', 'B', 'B']

          0         1         2         3 grp
0  9.497381  0.552974  0.887313 -1.291874   A
1  6.461631 -6.206155  9.979247 -0.044828   A
2  4.276156  2.002518  8.848432 -5.240563   B
3  1.710331  1.463783  7.535078 -1.399565   B


df.groupby(['grp'])[[0,1,2,3]].apply(lambda x: (x - np.mean(x)) / (np.max(x) - np.min(x)))

     0    1    2    3
0  0.5  0.5 -0.5 -0.5
1 -0.5 -0.5  0.5  0.5
2  0.5  0.5  0.5 -0.5
3 -0.5 -0.5 -0.5  0.5

You can use apply for this, and it’s a bit neater:

import numpy as np
import pandas as pd

np.random.seed(1)

df = pd.DataFrame(np.random.randn(4,4)* 4 + 3)

          0         1         2         3
0  9.497381  0.552974  0.887313 -1.291874
1  6.461631 -6.206155  9.979247 -0.044828
2  4.276156  2.002518  8.848432 -5.240563
3  1.710331  1.463783  7.535078 -1.399565

df.apply(lambda x: (x - np.mean(x)) / (np.max(x) - np.min(x)))

          0         1         2         3
0  0.515087  0.133967 -0.651699  0.135175
1  0.125241 -0.689446  0.348301  0.375188
2 -0.155414  0.310554  0.223925 -0.624812
3 -0.484913  0.244924  0.079473  0.114448

Also, it works nicely with groupby, if you select the relevant columns:

df['grp'] = ['A', 'A', 'B', 'B']

          0         1         2         3 grp
0  9.497381  0.552974  0.887313 -1.291874   A
1  6.461631 -6.206155  9.979247 -0.044828   A
2  4.276156  2.002518  8.848432 -5.240563   B
3  1.710331  1.463783  7.535078 -1.399565   B


df.groupby(['grp'])[[0,1,2,3]].apply(lambda x: (x - np.mean(x)) / (np.max(x) - np.min(x)))

     0    1    2    3
0  0.5  0.5 -0.5 -0.5
1 -0.5 -0.5  0.5  0.5
2  0.5  0.5  0.5 -0.5
3 -0.5 -0.5 -0.5  0.5

回答 3

稍作修改自:《Python Pandas Dataframe: Normalize data between 0.01 and 0.99?》,不过从一些评论来看,它与这里相关(如果算重复发帖,先说声抱歉……)

我想要自定义的归一化,因为常规的百分位数或 z 分数并不够用。有时我知道总体可行的最大值和最小值是多少,因此想用它们(而不是我的样本)来定义范围,或者使用不同的中点,等等!这通常对重新缩放和归一化神经网络的数据很有用,因为您可能希望所有输入都在 0 到 1 之间,但某些数据可能需要以更自定义的方式缩放……因为百分位数和标准差假设样本覆盖了总体,但有时我们知道事实并非如此。在用热图可视化数据时,这对我也非常有用。因此,我构建了一个自定义函数(这里的代码用了额外的步骤,以使其尽可能可读):

from statistics import mean, median  # mean/median are not Python builtins

def NormData(s,low='min',center='mid',hi='max',insideout=False,shrinkfactor=0.):    
    if low=='min':
        low=min(s)
    elif low=='abs':
        low=max(abs(min(s)),abs(max(s)))*-1.#sign(min(s))
    if hi=='max':
        hi=max(s)
    elif hi=='abs':
        hi=max(abs(min(s)),abs(max(s)))*1.#sign(max(s))

    if center=='mid':
        center=(max(s)+min(s))/2
    elif center=='avg':
        center=mean(s)
    elif center=='median':
        center=median(s)

    s2=[x-center for x in s]
    hi=hi-center
    low=low-center
    center=0.

    r=[]

    for x in s2:
        if x<low:
            r.append(0.)
        elif x>hi:
            r.append(1.)
        else:
            if x>=center:
                r.append((x-center)/(hi-center)*0.5+0.5)
            else:
                r.append((x-low)/(center-low)*0.5+0.)

    if insideout==True:
        ir=[(1.-abs(z-0.5)*2.) for z in r]
        r=ir

    rr =[x-(x-0.5)*shrinkfactor for x in r]    
    return rr

这个函数接受一个 pandas Series,甚至只是一个列表,并将其归一化到您指定的低点、中点和高点。还有一个收缩因子!它允许您把数据从端点 0 和 1 向内收缩(在 matplotlib 中组合颜色图时我必须这样做:Single pcolormesh with more than one colormap using Matplotlib)。您大概能看懂代码的工作方式,但基本的意思是:假设样本中有值 [-5,2,10],但想基于 -7 到 7 的范围(因此任何大于 7 的值,比如我们的 “10”,实际上被当作 7 处理)、以 1 为中点进行归一化,再收缩以适配 256 色 RGB 颜色图:

#In[1]
NormData([-5,2,10],low=-7,center=1,hi=7,shrinkfactor=2./256)
#Out[1]
[0.1279296875, 0.5826822916666667, 0.99609375]

它也可以把您的数据“由内向外”翻转……这看起来很奇怪,但我发现它对热图很有用。假设您想用深色表示接近 0 的值,而不是高/低两端的值。您可以基于 insideout=True 的归一化数据绘制热图:

#In[2]
NormData([-5,2,10],low=-7,center=1,hi=7,insideout=True,shrinkfactor=2./256)
#Out[2]
[0.251953125, 0.8307291666666666, 0.00390625]

因此,现在最接近中心(定义为 1)的 “2” 具有最高的值。

无论如何,如果您想用其他方式重新缩放数据,我觉得我的这个用法也许对您有参考价值。

Slightly modified from: Python Pandas Dataframe: Normalize data between 0.01 and 0.99? but from some of the comments thought it was relevant (sorry if considered a repost though…)

I wanted customized normalization in that regular percentile of datum or z-score was not adequate. Sometimes I knew what the feasible max and min of the population were, and therefore wanted to define it other than my sample, or a different midpoint, or whatever! This can often be useful for rescaling and normalizing data for neural nets where you may want all inputs between 0 and 1, but some of your data may need to be scaled in a more customized way… because percentiles and stdevs assumes your sample covers the population, but sometimes we know this isn’t true. It was also very useful for me when visualizing data in heatmaps. So i built a custom function (used extra steps in the code here to make it as readable as possible):

from statistics import mean, median  # mean/median are not Python builtins

def NormData(s,low='min',center='mid',hi='max',insideout=False,shrinkfactor=0.):    
    if low=='min':
        low=min(s)
    elif low=='abs':
        low=max(abs(min(s)),abs(max(s)))*-1.#sign(min(s))
    if hi=='max':
        hi=max(s)
    elif hi=='abs':
        hi=max(abs(min(s)),abs(max(s)))*1.#sign(max(s))

    if center=='mid':
        center=(max(s)+min(s))/2
    elif center=='avg':
        center=mean(s)
    elif center=='median':
        center=median(s)

    s2=[x-center for x in s]
    hi=hi-center
    low=low-center
    center=0.

    r=[]

    for x in s2:
        if x<low:
            r.append(0.)
        elif x>hi:
            r.append(1.)
        else:
            if x>=center:
                r.append((x-center)/(hi-center)*0.5+0.5)
            else:
                r.append((x-low)/(center-low)*0.5+0.)

    if insideout==True:
        ir=[(1.-abs(z-0.5)*2.) for z in r]
        r=ir

    rr =[x-(x-0.5)*shrinkfactor for x in r]    
    return rr

This will take in a pandas series, or even just a list, and normalize it to your specified low, center, and high points. There is also a shrink factor, to allow you to scale the data away from the endpoints 0 and 1 (I had to do this when combining colormaps in matplotlib: Single pcolormesh with more than one colormap using Matplotlib). So you can likely see how the code works, but basically say you have the values [-5,2,10] in a sample, but want to normalize based on a range of -7 to 7 (so anything above 7, our “10”, is treated as a 7 effectively) with a midpoint of 1, but shrink it to fit a 256 RGB colormap:

#In[1]
NormData([-5,2,10],low=-7,center=1,hi=7,shrinkfactor=2./256)
#Out[1]
[0.1279296875, 0.5826822916666667, 0.99609375]

It can also turn your data inside out… this may seem odd, but I found it useful for heatmapping. Say you want a darker color for values closer to 0 rather than hi/low. You could heatmap based on normalized data where insideout=True:

#In[2]
NormData([-5,2,10],low=-7,center=1,hi=7,insideout=True,shrinkfactor=2./256)
#Out[2]
[0.251953125, 0.8307291666666666, 0.00390625]

So now “2” which is closest to the center, defined as “1” is the highest value.

Anyways, I thought my application was relevant if you’re looking to rescale data in other ways that could have useful applications to you.


回答 4

这是按列进行的方式:

[df[col].update((df[col] - df[col].min()) / (df[col].max() - df[col].min())) for col in df.columns]

This is how you do it column-wise:

[df[col].update((df[col] - df[col].min()) / (df[col].max() - df[col].min())) for col in df.columns]

无论内容类型标头如何,都可以在Python Flask中获取原始POST正文

问题:无论内容类型标头如何,都可以在Python Flask中获取原始POST正文

以前,我问过《How to get data received in Flask request》,因为 request.data 是空的。答案解释说 request.data 是原始的 POST 正文,但如果解析了表单数据,它就会为空。我如何无条件地获取原始 POST 正文?

@app.route('/', methods=['POST'])
def parse_request():
    data = request.data  # empty in some cases
    # always need raw data here, not parsed form data

Previously, I asked How to get data received in Flask request because request.data was empty. The answer explained that request.data is the raw post body, but will be empty if form data is parsed. How can I get the raw post body unconditionally?

@app.route('/', methods=['POST'])
def parse_request():
    data = request.data  # empty in some cases
    # always need raw data here, not parsed form data

回答 0

使用 request.get_data() 可以获取原始数据,而不管内容类型是什么。该数据会被缓存,随后您可以随意访问 request.data、request.json、request.form。

如果您先访问 request.data,它会带参数调用 get_data 以先解析表单数据。如果请求具有表单类型的内容类型(multipart/form-data、application/x-www-form-urlencoded 或 application/x-url-encoded),那么原始数据将被消耗;在这种情况下,request.data 和 request.json 将显示为空。

Use request.get_data() to get the raw data, regardless of content type. The data is cached and you can subsequently access request.data, request.json, request.form at will.

If you access request.data first, it will call get_data with an argument to parse form data first. If the request has a form content type (multipart/form-data, application/x-www-form-urlencoded, or application/x-url-encoded) then the raw data will be consumed. request.data and request.json will appear empty in this case.
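
Applied to the route from the question, a minimal sketch:

from flask import Flask, request

app = Flask(__name__)

@app.route('/', methods=['POST'])
def parse_request():
    raw = request.get_data()  # bytes, regardless of the Content-Type header
    return raw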


回答 1

request.stream 是 WSGI 服务器传递给应用程序的原始数据流。读取它时不会进行任何解析,不过通常您更应该使用 request.get_data()。

data = request.stream.read()

如果该流先前被request.data或其他属性读取,则将为空。

request.stream is the stream of raw data passed to the application by the WSGI server. No parsing is done when reading it, although you usually want request.get_data() instead.

data = request.stream.read()

The stream will be empty if it was previously read by request.data or another attribute.


回答 2

我创建了一个 WSGI 中间件,用于保存 environ['wsgi.input'] 流中的原始内容。我把该值保存在 WSGI 环境中,因此可以在我的应用程序里通过 request.environ['body_copy'] 访问它。

在 Werkzeug 或 Flask 中这并不是必需的,因为 request.get_data() 无论内容类型如何都会获取原始数据,而且对 HTTP 和 WSGI 行为的处理更好。

这会把整个正文读入内存,如果上传了大文件,这会成为问题。如果缺少 Content-Length 标头,它将不会读取任何内容,因此无法处理流式请求。

from io import BytesIO

class WSGICopyBody(object):
    def __init__(self, application):
        self.application = application

    def __call__(self, environ, start_response):
        length = int(environ.get('CONTENT_LENGTH') or 0)
        body = environ['wsgi.input'].read(length)
        environ['body_copy'] = body
        # replace the stream since it was exhausted by read()
        environ['wsgi.input'] = BytesIO(body)
        return self.application(environ, start_response)

app.wsgi_app = WSGICopyBody(app.wsgi_app)
request.environ['body_copy']

I created a WSGI middleware that stores the raw body from the environ['wsgi.input'] stream. I saved the value in the WSGI environ so I could access it from request.environ['body_copy'] within my app.

This isn’t necessary in Werkzeug or Flask, as request.get_data() will get the raw data regardless of content type, but with better handling of HTTP and WSGI behavior.

This reads the entire body into memory, which will be an issue if for example a large file is posted. This won’t read anything if the Content-Length header is missing, so it won’t handle streaming requests.

from io import BytesIO

class WSGICopyBody(object):
    def __init__(self, application):
        self.application = application

    def __call__(self, environ, start_response):
        length = int(environ.get('CONTENT_LENGTH') or 0)
        body = environ['wsgi.input'].read(length)
        environ['body_copy'] = body
        # replace the stream since it was exhausted by read()
        environ['wsgi.input'] = BytesIO(body)
        return self.application(environ, start_response)

app.wsgi_app = WSGICopyBody(app.wsgi_app)
request.environ['body_copy']

回答 3

如果 request.headers["Content-Type"] 被识别为表单数据,request.data 将为空,数据会被解析到 request.form 中。要获取原始数据而不管内容类型如何,请使用 request.get_data()。

request.data 会调用 request.get_data(parse_form_data=True),这导致它对表单数据有不同的行为。

request.data will be empty if request.headers["Content-Type"] is recognized as form data, which will be parsed into request.form. To get the raw data regardless of content type, use request.get_data().

request.data calls request.get_data(parse_form_data=True), which results in the different behavior for form data.