The difference between subprocess.Popen and os.system

Question: The difference between subprocess.Popen and os.system

What is the difference between subprocess.Popen() and os.system()?


Answer 0

If you check out the subprocess section of the Python docs, you’ll notice there is an example of how to replace os.system() with subprocess.Popen():

sts = os.system("mycmd" + " myarg")

…does the same thing as…

sts = Popen("mycmd" + " myarg", shell=True).wait()

The “improved” code looks more complicated, but it’s better because once you know subprocess.Popen(), you don’t need anything else. subprocess.Popen() replaces several other tools (os.system() is just one of those) that were scattered throughout three other Python modules.

If it helps, think of subprocess.Popen() as a very flexible os.system().
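
For example, a minimal sketch (reusing the placeholder command from above) of the usual next step is to drop shell=True and pass the arguments as a list, so no shell parsing is involved:

from subprocess import Popen

# same call as above, but the command and its argument are
# passed as a list, so no shell is spawned
sts = Popen(["mycmd", "myarg"]).wait()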


Answer 1

subprocess.Popen() is a strict super-set of os.system().


Answer 2

os.system is equivalent to the Unix system command, while subprocess is a helper module created to provide many of the facilities offered by Popen with an easier and more controllable interface. These were designed along the lines of the Unix popen command.

system() executes a command specified in command by calling /bin/sh -c command, and returns after the command has been completed.

Whereas:

The popen() function opens a process by creating a pipe, forking, and invoking the shell.

If you are wondering which one to use, then definitely use subprocess, because you have all the facilities for execution, plus additional control over the process.
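
As a small illustration of that extra control (this sketch is my own, not part of the original answer; subprocess.run is available from Python 3.5 on), you can capture the output and inspect the exit status, neither of which os.system offers directly:

import subprocess

# run a command without a shell, capturing its standard output
result = subprocess.run(["echo", "hello"], stdout=subprocess.PIPE)
print(result.returncode)       # exit status of the process
print(result.stdout.decode())  # captured output: 'hello'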


Answer 3

Subprocess is based on popen2, and as such has a number of advantages – there’s a full list in the PEP here, but some are:

  • using pipe in the shell
  • better newline support
  • better handling of exceptions

Answer 4

When running Python (CPython) on Windows, os.system (the <built-in function system>) will, under the covers, execute _wsystem, while if you're using a non-Windows OS, it'll use system.

In contrast, Popen should use CreateProcess on Windows and _posixsubprocess.fork_exec on POSIX-based operating systems.

That said, an important piece of advice comes from os.system docs, which says:

The subprocess module provides more powerful facilities for spawning new processes and retrieving their results; using that module is preferable to using this function. See the Replacing Older Functions with the subprocess Module section in the subprocess documentation for some helpful recipes.


What is the pythonic way to avoid default parameters that are empty lists?

Question: What is the pythonic way to avoid default parameters that are empty lists?

Sometimes it seems natural to have a default parameter which is an empty list. Yet Python gives unexpected behavior in these situations.

If for example, I have a function:

def my_func(working_list = []):
    working_list.append("a")
    print(working_list)

The first time it is called the default will work, but calls after that will update the existing list (with one “a” each call) and print the updated version.

So, what is the pythonic way to get the behavior I desire (a fresh list on each call)?


Answer 0

def my_func(working_list=None):
    if working_list is None: 
        working_list = []

    working_list.append("a")
    print(working_list)

The docs say you should use None as the default and explicitly test for it in the body of the function.


Answer 1

Other answers have already provided the direct solutions as asked for; however, since this is a very common pitfall for new Python programmers, it's worth adding the explanation of why Python behaves this way, which is nicely summarized in The Hitchhiker's Guide to Python under "Mutable Default Arguments": http://docs.python-guide.org/en/latest/writing/gotchas/

Python’s default arguments are evaluated once when the function is defined, not each time the function is called (like it is in say, Ruby). This means that if you use a mutable default argument and mutate it, you will and have mutated that object for all future calls to the function as well.
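
Sample code that implements it:

def foo(element, to=None):
    if to is None:
        to = []
    to.append(element)
    return to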


Answer 2

Not that it matters in this case, but you can use object identity to test for None:

if working_list is None: working_list = []

You could also take advantage of how the boolean operator or is defined in python:

working_list = working_list or []

Though this will behave unexpectedly if the caller gives you an empty list (which counts as false) as working_list and expects your function to modify the list he gave it.
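
A minimal sketch of that surprise (the function name is just for illustration):

def append_a(working_list=None):
    working_list = working_list or []  # an empty list is falsy, so it gets replaced
    working_list.append("a")
    return working_list

shared = []
result = append_a(shared)
print(shared)  # [] -- the caller's empty list was silently left untouched
print(result)  # ['a'] -- the append went to a brand-new list instead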


Answer 3

If the intent of the function is to modify the parameter passed as working_list, see HenryR’s answer (=None, check for None inside).

But if you don't intend to mutate the argument and only use it as a starting point for a list, you can simply copy it:

def myFunc(starting_list=[]):
    starting_list = list(starting_list)
    starting_list.append("a")
    print(starting_list)

(or in this simple case just print(starting_list + ["a"]), but I guess that was just a toy example)

In general, mutating your arguments is bad style in Python. The only functions that are fully expected to mutate an object are methods of the object. It’s even rarer to mutate an optional argument — is a side effect that happens only in some calls really the best interface?

  • If you do it from the C habit of “output arguments”, that’s completely unnecessary – you can always return multiple values as a tuple.

  • If you do this to efficiently build a long list of results without building intermediate lists, consider writing it as a generator and using result_list.extend(myFunc()) when you are calling it (see the sketch after this list). This way your calling convention remains very clean.
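
A minimal sketch of that generator pattern (the names are illustrative):

def squares(n):
    # yield results one at a time instead of accumulating an internal list
    for i in range(n):
        yield i * i

result_list = []
result_list.extend(squares(5))  # the caller owns and extends its own list
print(result_list)              # [0, 1, 4, 9, 16]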

One pattern where mutating an optional arg is frequently done is a hidden “memo” arg in recursive functions:

def depth_first_walk_graph(graph, node, _visited=None):
    if _visited is None:
        _visited = set()  # create memo once in top-level call

    if node in _visited:
        return
    _visited.add(node)
    for neighbour in graph[node]:
        depth_first_walk_graph(graph, neighbour, _visited)

Answer 4

I might be off-topic, but remember that if you just want to pass a variable number of arguments, the pythonic way is to pass a tuple *args or a dictionary **kargs. These are optional and are better than the syntax myFunc([1, 2, 3]).

If you want to pass a tuple:

def myFunc(arg1, *args):
  print(args)
  w = []
  w += args
  print(w)
>>>myFunc(1, 2, 3, 4, 5, 6, 7)
(2, 3, 4, 5, 6, 7)
[2, 3, 4, 5, 6, 7]

If you want to pass a dictionary:

def myFunc(arg1, **kargs):
   print(kargs)
>>>myFunc(1, option1=2, option2=3)
{'option1': 2, 'option2': 3}

Answer 5

There have already been good and correct answers provided. I just wanted to give another syntax for writing what you want to do, which I find more beautiful when you, for instance, want to create a class with default empty lists:

class Node(object):
    def __init__(self, _id, val, parents=None, children=None):
        self.id = _id
        self.val = val
        self.parents = parents if parents is not None else []
        self.children = children if children is not None else []

This snippet makes use of the conditional expression (if/else) syntax. I like it especially because it's a neat little one-liner without colons, etc. involved, and it nearly reads like a normal English sentence. :)

In your case you could write

def myFunc(working_list=None):
    working_list = [] if working_list is None else working_list
    working_list.append("a")
    print(working_list)

Answer 6

I took the UCSC extension class "Python for Programmers".

Which is true of: def Fn(data = []):

a) is a good idea so that your data lists start empty with every call.

b) is a good idea so that all calls to the function that do not provide any arguments on the call will get the empty list as data.

c) is a reasonable idea as long as your data is a list of strings.

d) is a bad idea because the default [] will accumulate data and the default [] will change with subsequent calls.

Answer:

d) is a bad idea because the default [] will accumulate data and the default [] will change with subsequent calls.


Python coding standards/best practices

Question: Python coding standards/best practices

In Python, do you generally use PEP 8 — Style Guide for Python Code as your coding standards/guidelines? Are there any other formalized standards that you prefer?


Answer 0

“In python do you generally use PEP 8 — Style Guide for Python Code as your coding standards/guidelines? Are there any other formalized standards that you prefer?”

As you mentioned, follow PEP 8 for the main text, and PEP 257 for docstring conventions.

Along with the Python style guides, I suggest that you refer to the following:

  1. Code Like a Pythonista: Idiomatic Python
  2. Common mistakes and Warts
  3. How not to write Python code
  4. Python gotcha

Answer 1

I follow the Python Idioms and Efficiency guidelines, by Rob Knight. I think they are exactly the same as PEP 8, but are more synthetic and based on examples.

If you are using wxPython you might also want to check Style Guide for wxPython code, by Chris Barker, as well.


Answer 2

I stick to PEP-8 very closely.

There are three specific things that I can’t be bothered to change to PEP-8.

  • Avoid extraneous whitespace immediately inside parentheses, brackets or braces.

    Suggested: spam(ham[1], {eggs: 2})

    I do this anyway: spam( ham[ 1 ], { eggs: 2 } )

    Why? 30+ years of ingrained habit of snuggling ()'s up against function names or (in C) statement keywords. Starting with Fortran IV in the 70's.

  • Use spaces around arithmetic operators:

    Suggested: x = x * 2 - 1

    I do this anyway: x= x * 2 - 1

    Why? Gries' The Science of Programming suggested this as a way to emphasize the connection between assignment and the variable whose state is being changed.

    It doesn't work well for multiple assignment or augmented assignment; for those I use lots of spaces.

  • For function names, method names and instance variable names

    Suggested: lowercase, with words separated by underscores as necessary to improve readability.

    I do this anyway: camelCase

    Why? 20+ years of ingrained habit of camelCase, starting with Pascal in the 80’s.


Answer 3

PEP 8 is good; the only thing I wish it came down harder on is the tabs-vs-spaces holy war.

Basically if you are starting a project in python, you need to choose Tabs or Spaces and then shoot all offenders on sight.


Answer 4

To add to bhadra’s list of idiomatic guides:

Check out Anthony Baxter's presentation on Effective Python Programming (from OSCON 2005).

An excerpt:

# dict's setdefault method turns this:
if key in dictobj:
    dictobj[key].append(val)
else:
    dictobj[key] = [val]
# into this:
dictobj.setdefault(key,[]).append(val)
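
As a side note (my addition, not from the presentation), collections.defaultdict achieves the same effect without repeating the default on every call:

from collections import defaultdict

dictobj = defaultdict(list)  # missing keys start out as empty lists
dictobj["key"].append("val")
print(dictobj["key"])        # ['val']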

Answer 5

I follow it extremely rigorously. The only god before PEP-8 is existing code bases.


Answer 6

Yes, I try to follow it as closely as possible.

I don’t follow any other coding standards.


Answer 7

I follow PEP 8; it is a great piece of coding style.


Why is str.translate faster in Python 3.5 than in Python 3.4?

Question: Why is str.translate faster in Python 3.5 than in Python 3.4?

I was trying to remove unwanted characters from a given string using text.translate() in Python 3.4.

The minimal code is:

import sys 
s = 'abcde12345@#@$#%$'
mapper = dict.fromkeys(i for i in range(sys.maxunicode) if chr(i) in '@#$')
print(s.translate(mapper))

It works as expected. However, the same program gives very different timings when executed under Python 3.4 and Python 3.5.

The code to calculate timings is

python3 -m timeit -s "import sys;s = 'abcde12345@#@$#%$'*1000 ; mapper = dict.fromkeys(i for i in range(sys.maxunicode) if chr(i) in '@#$'); "   "s.translate(mapper)"

The Python 3.4 program takes 1.3ms whereas the same program in Python 3.5 takes only 26.4μs.

What has improved in Python 3.5 that makes it faster compared to Python 3.4?


Answer 0

TL;DR – ISSUE 21118


The long Story

Josh Rosenberg found out that the str.translate() function is very slow compared to bytes.translate, so he raised an issue, stating that:

In Python 3, str.translate() is usually a performance pessimization, not optimization.

Why was str.translate() slow?

The main reason for str.translate() to be very slow was that the lookup used to be in a Python dictionary.

The usage of maketrans made this problem worse. The similar approach using bytes builds a C array of 256 items for fast table lookup. Hence the use of the higher-level Python dict makes str.translate() in Python 3.4 very slow.

What happened now?

The first approach was to add a small patch, translate_writer; however, the speed increase was not that pleasing. Soon another patch, fast_translate, was tested, and it yielded very nice results of up to a 55% speedup.

The main change as can be seen from the file is that the Python dictionary lookup is changed into a C level lookup.

The speeds now are almost the same as for bytes:

                                unpatched           patched

str.translate                   4.55125927699919    0.7898181750006188
str.translate from bytes trans  1.8910855210015143  0.779950579000797

A small note here is that the performance enhancement is only prominent in ASCII strings.

As J.F. Sebastian mentions in a comment below, before 3.5, translate used to work in the same way for both the ASCII and non-ASCII cases. However, from 3.5 on, the ASCII case is much faster.

Earlier, ASCII vs non-ASCII used to be almost the same; however, now we can see a great change in performance.

It can be an improvement from 71.6μs to 2.33μs as seen in this answer.

The following code demonstrates this

python3.5 -m timeit -s "text = 'mJssissippi'*100; d=dict(J='i')" "text.translate(d)"
100000 loops, best of 3: 2.3 usec per loop
python3.5 -m timeit -s "text = 'm\U0001F602ssissippi'*100; d={'\U0001F602': 'i'}" "text.translate(d)"
10000 loops, best of 3: 117 usec per loop

python3 -m timeit -s "text = 'm\U0001F602ssissippi'*100; d={'\U0001F602': 'i'}" "text.translate(d)"
10000 loops, best of 3: 91.2 usec per loop
python3 -m timeit -s "text = 'mJssissippi'*100; d=dict(J='i')" "text.translate(d)"
10000 loops, best of 3: 101 usec per loop

Tabulation of the results (times in μs):

          Python 3.4    Python 3.5
ASCII        91.2           2.3
Unicode     101            117

Scope of nested classes?

Question: Scope of nested classes?

I’m trying to understand scope in nested classes in Python. Here is my example code:

class OuterClass:
    outer_var = 1
    class InnerClass:
        inner_var = outer_var

The creation of class does not complete and I get the error:

<type 'exceptions.NameError'>: name 'outer_var' is not defined

Trying inner_var = Outerclass.outer_var doesn’t work. I get:

<type 'exceptions.NameError'>: name 'OuterClass' is not defined

I am trying to access the static outer_var from InnerClass.

Is there a way to do this?


Answer 0

class Outer(object):
    outer_var = 1

    class Inner(object):
        @property
        def inner_var(self):
            return Outer.outer_var

This isn't quite the same as how similar things work in other languages; it uses a global lookup instead of scoping the access to outer_var. (If you change which object the name Outer is bound to, then this code will use that object the next time it is executed.)

If you instead want all Inner objects to have a reference to an Outer because outer_var is really an instance attribute:

class Outer(object):
    def __init__(self):
        self.outer_var = 1

    def get_inner(self):
        return self.Inner(self)
        # "self.Inner" is because Inner is a class attribute of this class
        # "Outer.Inner" would also work, or move Inner to global scope
        # and then just use "Inner"

    class Inner(object):
        def __init__(self, outer):
            self.outer = outer

        @property
        def inner_var(self):
            return self.outer.outer_var

Note that nesting classes is somewhat uncommon in Python, and doesn’t automatically imply any sort of special relationship between the classes. You’re better off not nesting. (You can still set a class attribute on Outer to Inner, if you want.)


Answer 1

I think you can simply do:

class OuterClass:
    outer_var = 1

    class InnerClass:
        pass
    InnerClass.inner_var = outer_var

The problem you encountered is due to this:

A block is a piece of Python program text that is executed as a unit. The following are blocks: a module, a function body, and a class definition.
(…)
A scope defines the visibility of a name within a block.
(…)
The scope of names defined in a class block is limited to the class block; it does not extend to the code blocks of methods – this includes generator expressions since they are implemented using a function scope. This means that the following will fail:

class A:
    a = 42
    b = list(a + i for i in range(10))

http://docs.python.org/reference/executionmodel.html#naming-and-binding

The above means: a function body is a code block and a method is a function, so names defined in the class definition outside the function body do not extend to the function body.

Paraphrasing this for your case: a class definition is a code block, so names defined in the outer class definition outside the inner class definition do not extend to the inner class definition.


Answer 2

You might be better off if you just don’t use nested classes. If you must nest, try this:

x = 1
class OuterClass:
    outer_var = x
    class InnerClass:
        inner_var = x

Or declare both classes before nesting them:

class OuterClass:
    outer_var = 1

class InnerClass:
    inner_var = OuterClass.outer_var

OuterClass.InnerClass = InnerClass

(After this you can del InnerClass if you need to.)


Answer 3

Easiest solution:

class OuterClass:
    outer_var = 1
    class InnerClass:
        def __init__(self):
            self.inner_var = OuterClass.outer_var

It requires you to be explicit, but doesn’t take much effort.


Answer 4

In Python, objects are passed by reference, so you can pass a reference to the outer class to the inner class.

class OuterClass:
    def __init__(self):
        self.outer_var = 1
        self.inner_class = OuterClass.InnerClass(self)
        print('Inner variable in OuterClass = %d' % self.inner_class.inner_var)

    class InnerClass:
        def __init__(self, outer_class):
            self.outer_class = outer_class
            self.inner_var = 2
            print('Outer variable in InnerClass = %d' % self.outer_class.outer_var)

Answer 5

All the explanations can be found in the Python documentation, The Python Tutorial.

For your first error <type 'exceptions.NameError'>: name 'outer_var' is not defined. The explanation is:

There is no shorthand for referencing data attributes (or other methods!) from within methods. I find that this actually increases the readability of methods: there is no chance of confusing local variables and instance variables when glancing through a method.

quoted from The Python Tutorial 9.4

For your second error <type 'exceptions.NameError'>: name 'OuterClass' is not defined

When a class definition is left normally (via the end), a class object is created.

quoted from The Python Tutorial 9.3.1

So when you try inner_var = Outerclass.outer_var, the OuterClass hasn't been created yet, which is why name 'OuterClass' is not defined.

A more detailed but tedious explanation for your first error:

Although classes have access to enclosing functions’ scopes, though, they do not act as enclosing scopes to code nested within the class: Python searches enclosing functions for referenced names, but never any enclosing classes. That is, a class is a local scope and has access to enclosing local scopes, but it does not serve as an enclosing local scope to further nested code.

quoted from Learning.Python(5th).Mark.Lutz


"Pretty" continuous integration for Python

Question: "Pretty" continuous integration for Python

This is a slightly.. vain question, but BuildBot’s output isn’t particularly nice to look at..

For example, compared to..

..and others, BuildBot looks rather.. archaic

I'm currently playing with Hudson, but it is very Java-centric (although with this guide, I found it easier to set up than BuildBot, and it produced more info).

Basically: are there any continuous integration systems aimed at Python that produce lots of shiny graphs and the like?


Update: Since this was written, the Jenkins project has replaced Hudson as the community version of the package. The original authors have moved to this project as well. Jenkins is now a standard package on Ubuntu/Debian, RedHat/Fedora/CentOS, and others. The following update is still essentially correct; the starting point for doing this with Jenkins is different.

Update: After trying a few alternatives, I think I’ll stick with Hudson. Integrity was nice and simple, but quite limited. I think Buildbot is better suited to having numerous build-slaves, rather than everything running on a single machine like I was using it.

Setting Hudson up for a Python project was pretty simple:

  • Download Hudson from http://hudson-ci.org/
  • Run it with java -jar hudson.war
  • Open the web interface on the default address of http://localhost:8080
  • Go to Manage Hudson, Plugins, click “Update” or similar
  • Install the Git plugin (I had to set the git path in the Hudson global preferences)
  • Create a new project, enter the repository, SCM polling intervals and so on
  • Install nosetests via easy_install if it’s not already
  • In the build step, add nosetests --with-xunit --verbose
  • Check “Publish JUnit test result report” and set “Test report XMLs” to **/nosetests.xml

That’s all that’s required. You can setup email notifications, and the plugins are worth a look. A few I’m currently using for Python projects:

  • SLOCCount plugin to count lines of code (and graph it!) – you need to install sloccount separately
  • Violations to parse the PyLint output (you can setup warning thresholds, graph the number of violations over each build)
  • Cobertura can parse the coverage.py output. Nosetest can gather coverage while running your tests, using nosetests --with-coverage (this writes the output to **/coverage.xml)

Answer 0

You might want to check out Nose and the Xunit output plugin. You can have it run your unit tests, and coverage checks with this command:

nosetests --with-xunit --enable-cover

That’ll be helpful if you want to go the Jenkins route, or if you want to use another CI server that has support for JUnit test reporting.

Similarly, you can capture the output of pylint using the Violations plugin for Jenkins.


Answer 1

Don't know if it would do: Bitten is made by the guys who write Trac and is integrated with Trac. Apache Gump is the CI tool used by Apache. It is written in Python.


Answer 2

We've had great success with TeamCity as our CI server and using nose as our test runner. The TeamCity plugin for nosetests gives you a pass/fail count and a readable display for failed tests (which can be e-mailed). You can even see the details of the test failures while your stack is running.

It of course supports things like running on multiple machines, and it's much simpler to set up and maintain than BuildBot.


Answer 3

Buildbot’s waterfall page can be considerably prettified. Here’s a nice example http://build.chromium.org/buildbot/waterfall/waterfall


Answer 4

Atlassian’s Bamboo is also definitely worth checking out. The entire Atlassian suite (JIRA, Confluence, FishEye, etc) is pretty sweet.


Answer 5

I guess this thread is quite old, but here is my take on it with Hudson:

I decided to go with pip and set up a repo (the painful-to-get-working but nice-looking eggbasket), which Hudson auto-uploads to on successful tests. Here is my rough-and-ready script for use with a Hudson config execute-script step like /var/lib/hudson/venv/main/bin/hudson_script.py -w $WORKSPACE -p my.package -v $BUILD_NUMBER; just put **/coverage.xml, pylint.txt and nosetests.xml in the config bits:

#!/var/lib/hudson/venv/main/bin/python
import os
import re
import subprocess
import logging
import optparse

logging.basicConfig(level=logging.INFO,
                    format='%(asctime)s %(levelname)s %(message)s')

#venvDir = "/var/lib/hudson/venv/main/bin/"

UPLOAD_REPO = "http://ldndev01:3442"

def call_command(command, cwd, ignore_error_code=False):
    try:
        logging.info("Running: %s" % command)
        status = subprocess.call(command, cwd=cwd, shell=True)
        if not ignore_error_code and status != 0:
            raise Exception("Last command failed")

        return status

    except:
        logging.exception("Could not run command %s" % command)
        raise

def main():
    usage = "usage: %prog [options]"
    parser = optparse.OptionParser(usage)
    parser.add_option("-w", "--workspace", dest="workspace",
                      help="workspace folder for the job")
    parser.add_option("-p", "--package", dest="package",
                      help="the package name i.e., back_office.reconciler")
    parser.add_option("-v", "--build_number", dest="build_number",
                      help="the build number, which will get put at the end of the package version")
    options, args = parser.parse_args()

    if not options.workspace or not options.package:
        raise Exception("Need both args, do --help for info")

    venvDir = options.package + "_venv/"

    #find out if venv is there
    if not os.path.exists(venvDir):
        #make it
        call_command("virtualenv %s --no-site-packages" % venvDir,
                     options.workspace)

    #install the venv/make sure its there plus install the local package
    call_command("%sbin/pip install -e ./ --extra-index %s" % (venvDir, UPLOAD_REPO),
                 options.workspace)

    #make sure pylint, nose and coverage are installed
    call_command("%sbin/pip install nose pylint coverage epydoc" % venvDir,
                 options.workspace)

    #make sure we have an __init__.py
    #this shouldn't be needed if the packages are set up correctly
    #modules = options.package.split(".")
    #if len(modules) > 1: 
    #    call_command("touch '%s/__init__.py'" % modules[0], 
    #                 options.workspace)
    #do the nosetests
    test_status = call_command("%sbin/nosetests %s --with-xunit --with-coverage --cover-package %s --cover-erase" % (venvDir,
                                                                                     options.package.replace(".", "/"),
                                                                                     options.package),
                 options.workspace, True)
    #produce coverage report -i for ignore weird missing file errors
    call_command("%sbin/coverage xml -i" % venvDir,
                 options.workspace)
    #move it so that the code coverage plugin can find it
    call_command("mv coverage.xml %s" % (options.package.replace(".", "/")),
                 options.workspace)
    #run pylint
    call_command("%sbin/pylint --rcfile ~/pylint.rc -f parseable %s > pylint.txt" % (venvDir, 
                                                                                     options.package),
                 options.workspace, True)

    #remove old dists so we only have the newest at the end
    call_command("rm -rfv %s" % (options.workspace + "/dist"),
                 options.workspace)

    #if the build passes upload the result to the egg_basket
    if test_status == 0:
        logging.info("Success - uploading egg")
        upload_bit = "upload -r %s/upload" % UPLOAD_REPO
    else:
        logging.info("Failure - not uploading egg")
        upload_bit = ""

    #create egg
    call_command("%sbin/python setup.py egg_info --tag-build=.0.%s --tag-svn-revision --tag-date sdist %s" % (venvDir,
                                                                                                              options.build_number,
                                                                                                              upload_bit),
                 options.workspace)

    call_command("%sbin/epydoc --html --graph all %s" % (venvDir, options.package),
                 options.workspace)

    logging.info("Complete")

if __name__ == "__main__":
    main()

When it comes to deploying stuff you can do something like:

pip -E /location/of/my/venv/ install my_package==X.Y.Z --extra-index http://my_repo

And then people can develop stuff using:

pip -E /location/of/my/venv/ install -e ./ --extra-index http://my_repo

This stuff assumes you have a repo structure per package with a setup.py and dependencies all set up then you can just check out the trunk and run this stuff on it.

I hope this helps someone out.

——update———

I've added epydoc, which fits in really nicely with Hudson. Just add javadoc to your config with the html folder.

Note that pip doesn’t support the -E flag properly these days, so you have to create your venv separately


Answer 6

Another one: Shining Panda is a hosted tool for Python.


Answer 7

If you’re considering hosted CI solution, and doing open source, you should look into Travis CI as well – it has very nice integration with GitHub. While it started as a Ruby tool, they have added Python support a while ago.


Answer 8

Signal is another option. You can learn more about it and also watch a video here.


Answer 9

I would consider CircleCi – it has great Python support, and very pretty output.


Answer 10

Continuousumbinstar现在可以触发来自github的构建,并且可以针对linux,osx和Windows(32/64)进行编译。整洁的是,它确实允许您紧密耦合分发和持续集成。这是跨越t并整合I的点。该站点,工作流和工具确实经过了完善,并且AFAIK conda是分发复杂的python模块的最可靠,最pythonic的方式,您需要在其中包装分发C / C ++ / Fotran库。

Continuum's binstar is now able to trigger builds from GitHub and can compile for Linux, OSX and Windows (32/64). The neat thing is that it really allows you to closely couple distribution and continuous integration. That's crossing the t's and dotting the i's of integration. The site, workflow and tools are really polished, and AFAIK conda is the most robust and pythonic way of distributing complex Python modules, where you need to wrap and distribute C/C++/Fortran libraries.


Answer 11

We have used bitten quite a bit. It is pretty and integrates well with Trac, but it is a pain in the butt to customize if you have any nonstandard workflow. Also there just aren’t as many plugins as there are for the more popular tools. Currently we are evaluating Hudson as a replacement.


Answer 12

Check rultor.com. As this article explains, it uses Docker for every build. Thanks to that, you can configure whatever you like inside your Docker image, including Python.


Answer 13

Little disclaimer: I've actually had to build a solution like this for a client that wanted a way to automatically test and deploy any code on a git push, plus manage the issue tickets via git notes. This also led to my work on the AIMS project.

One could easily just set up a bare-node system that has a build user and manage its builds through make(1), expect(1), crontab(1)/systemd.unit(5), and incrontab(1). One could even go a step further and use ansible and celery for distributed builds with a gridfs/nfs file store.

Although, I would not expect anyone other than a graybeard UNIX guy or a principal-level engineer/architect to actually go this far. It just makes for a nice idea and a potential learning experience, since a build server is nothing more than a way to arbitrarily execute scripted tasks in an automated fashion.


A mathematical marvel! Solving equations and calculus with the SymPy module

SymPy is a Python library focused on symbolic mathematics. Its goal is to become a full-featured computer algebra system while keeping the code simple, easy to understand, and extensible.

As a simple example, let's expand a quadratic expression:

from sympy import *
x = Symbol('x')
y = Symbol('y')
d = ((x+y)**2).expand()
print(d)
# Result: x**2 + 2*x*y + y**2

You can feed it any expression you like; even a tenth power expands effortlessly, which is very convenient:

from sympy import *
x = Symbol('x')
y = Symbol('y')
d = ((x+y)**10).expand()
print(d)
# Result: x**10 + 10*x**9*y + 45*x**8*y**2 + 120*x**7*y**3 + 210*x**6*y**4 + 252*x**5*y**5 + 210*x**4*y**6 + 120*x**3*y**7 + 45*x**2*y**8 + 10*x*y**9 + y**10

Below is a walkthrough of how to use this module, with concrete examples.

1. Preparation

Before you start, make sure Python and pip are successfully installed on your computer; if not, see this article: "A Super-Detailed Python Installation Guide".

(Optional 1) If your goal with Python is data analysis, you can install Anaconda directly: "Anaconda, a Great Helper for Python Data Analysis and Mining". It bundles Python and pip.

(Optional 2) In addition, the VSCode editor is recommended for writing small Python projects: "VSCode, the Best Companion for Python Programming: A Detailed Guide".

On Windows, open Cmd (Start - Run - CMD); on macOS, open Terminal (Command+Space, then type Terminal). Enter the following command to install the dependency:

pip install Sympy

2. Basic usage

Simplifying expressions

SymPy supports three kinds of simplification: general simplification, trigonometric simplification, and power simplification.

General simplification with simplify():

from sympy import *
x = Symbol('x')
d = simplify((x**3 + x**2 - x - 1)/(x**2 + 2*x + 1))
print(d)
# Result: x - 1

Trigonometric simplification with trigsimp():

from sympy import *
x = Symbol('x')
d = trigsimp(sin(x)/cos(x))
print(d)
# Result: tan(x)

Power simplification with powsimp():

from sympy import *
x = Symbol('x')
a = Symbol('a')
b = Symbol('b')
d = powsimp(x**a*x**b)
print(d)
# Result: x**(a + b)

Solving equations with solve()

The first argument is the equation to solve, written so that its right-hand side equals 0; the second argument is the unknown to solve for.

For example, a linear equation in one unknown:

from sympy import *
x = Symbol('x')
d = solve(x * 3 - 6, x)
print(d)
# Result: [2]

A system of two linear equations:

from sympy import *
x = Symbol('x')
y = Symbol('y')
d = solve([2 * x - y - 3, 3 * x + y - 7],[x, y])
print(d)
# Result: {x: 2, y: 1}
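
As an extra illustration (my own addition, not from the original article), solve returns every root of a polynomial equation:

from sympy import *
x = Symbol('x')
d = solve(x**2 - 5*x + 6, x)
print(d)
# Result: [2, 3]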

Computing limits with limit()

dir='+' computes the limit from the right, and dir='-' computes the limit from the left:

from sympy import *
x = Symbol('x')
d = limit(1/x,x,oo,dir='+')
print(d)
# Result: 0
d = limit(1/x,x,oo,dir='-')
print(d)
# Result: 0

Integration with integrate()

First, try an indefinite integral:

from sympy import *
x = Symbol('x')
d = integrate(sin(x),x)
print(d)
# Result: -cos(x)

Now try a definite integral:

from sympy import *
x = Symbol('x')
d = integrate(sin(x),(x,0,pi/2))
print(d)
# Result: 1

Differentiation with diff()

The diff function differentiates an expression:

from sympy import *
x = Symbol('x')
d = diff(x**3,x)
print(d)
# Result: 3*x**2

d = diff(x**3,x,2)
print(d)
# Result: 6*x

Solving differential equations with dsolve()

Take y′ = 2xy as an example:

from sympy import *
x = Symbol('x')
f = Function('f')
d = dsolve(diff(f(x),x) - 2*f(x)*x,f(x))
print(d)
# Result: Eq(f(x), C1*exp(x**2))

3. A practical example

Today someone in our group asked: "Could anyone tell me how to write this integral in Python? Thanks!":

from sympy import *
x = Symbol('x')
y = Symbol('y')
d = integrate(x-y, (y, 0, 1))
print(d)
# Result: x - 1/2

To compute this result, the first argument to integrate is the expression, and the second is the integration variable together with the lower and upper limits of integration.

Running it gives x - 1/2, which matches the expected result.

If you also need to solve calculus problems or complicated equations, give SymPy a try; it is close to perfect for the job.


Finding local maxima/minima with Numpy in a 1D numpy array

Question: Finding local maxima/minima with Numpy in a 1D numpy array

Can you suggest a module function from numpy/scipy that can find local maxima/minima in a 1D numpy array? Obviously the simplest approach ever is to have a look at the nearest neighbours, but I would like to have an accepted solution that is part of the numpy distro.


Answer 0

If you are looking for all entries in the 1d array a smaller than their neighbors, you can try

numpy.r_[True, a[1:] < a[:-1]] & numpy.r_[a[:-1] < a[1:], True]

You could also smooth your array before this step using numpy.convolve().

I don’t think there is a dedicated function for this.
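
For example, a quick check of that expression on a small made-up array:

import numpy as np

a = np.array([3, 1, 4, 1, 5, 2])
is_min = np.r_[True, a[1:] < a[:-1]] & np.r_[a[:-1] < a[1:], True]
print(np.nonzero(is_min)[0])  # [1 3 5] -- boundary entries only compare to one neighbor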


Answer 1

In SciPy >= 0.11

import numpy as np
from scipy.signal import argrelextrema

x = np.random.random(12)

# for local maxima
argrelextrema(x, np.greater)

# for local minima
argrelextrema(x, np.less)

Produces

>>> x
array([ 0.56660112,  0.76309473,  0.69597908,  0.38260156,  0.24346445,
    0.56021785,  0.24109326,  0.41884061,  0.35461957,  0.54398472,
    0.59572658,  0.92377974])
>>> argrelextrema(x, np.greater)
(array([1, 5, 7]),)
>>> argrelextrema(x, np.less)
(array([4, 6, 8]),)

Note, these are the indices of x that are local max/min. To get the values, try:

>>> x[argrelextrema(x, np.greater)[0]]

scipy.signal also provides argrelmax and argrelmin for finding maxima and minima respectively.


Answer 2

For curves with not too much noise, I recommend the following small code snippet:

from numpy import *

# example data with some peaks:
x = linspace(0, 4, 1000)
data = .2*sin(10*x)+ exp(-abs(2-x)**2)

# that's the line, you need:
a = diff(sign(diff(data))).nonzero()[0] + 1 # local min+max
b = (diff(sign(diff(data))) > 0).nonzero()[0] + 1 # local min
c = (diff(sign(diff(data))) < 0).nonzero()[0] + 1 # local max


# graphical output...
from pylab import *
plot(x,data)
plot(x[b], data[b], "o", label="min")
plot(x[c], data[c], "o", label="max")
legend()
show()

The +1 is important, because diff shifts the original indices down by one.


Answer 3

Another approach (more words, less code) that may help:

The locations of local maxima and minima are also the locations of the zero crossings of the first derivative. It is generally much easier to find zero crossings than it is to directly find local maxima and minima.

Unfortunately, the first derivative tends to “amplify” noise, so when significant noise is present in the original data, the first derivative is best used only after the original data has had some degree of smoothing applied.

Since smoothing is, in the simplest sense, a low pass filter, the smoothing is often best (well, most easily) done by using a convolution kernel, and “shaping” that kernel can provide a surprising amount of feature-preserving/enhancing capability. The process of finding an optimal kernel can be automated using a variety of means, but the best may be simple brute force (plenty fast for finding small kernels). A good kernel will (as intended) massively distort the original data, but it will NOT affect the location of the peaks/valleys of interest.

Fortunately, quite often a suitable kernel can be created via a simple SWAG (“educated guess”). The width of the smoothing kernel should be a little wider than the widest expected “interesting” peak in the original data, and its shape will resemble that peak (a single-scaled wavelet). For mean-preserving kernels (what any good smoothing filter should be) the sum of the kernel elements should be precisely equal to 1.00, and the kernel should be symmetric about its center (meaning it will have an odd number of elements).

Given an optimal smoothing kernel (or a small number of kernels optimized for different data content), the degree of smoothing becomes a scaling factor for (the “gain” of) the convolution kernel.

Determining the “correct” (optimal) degree of smoothing (convolution kernel gain) can even be automated: Compare the standard deviation of the first derivative data with the standard deviation of the smoothed data. How the ratio of the two standard deviations changes with changes in the degree of smoothing can be used to predict effective smoothing values. A few manual data runs (that are truly representative) should be all that’s needed.

All the solutions posted above compute the first derivative, but they don’t treat it as a statistical measure, nor do they attempt to perform feature-preserving/enhancing smoothing (to help subtle peaks “leap above” the noise).

Finally, the bad news: Finding “real” peaks becomes a royal pain when the noise also has features that look like real peaks (overlapping bandwidth). The next more-complex solution is generally to use a longer convolution kernel (a “wider kernel aperture”) that takes into account the relationship between adjacent “real” peaks (such as minimum or maximum rates for peak occurrence), or to use multiple convolution passes using kernels having different widths (but only if it is faster: it is a fundamental mathematical truth that linear convolutions performed in sequence can always be convolved together into a single convolution). But it is often far easier to first find a sequence of useful kernels (of varying widths) and convolve them together than it is to directly find the final kernel in a single step.

Hopefully this provides enough info to let Google (and perhaps a good stats text) fill in the gaps. I really wish I had the time to provide a worked example, or a link to one. If anyone comes across one online, please post it here!
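As a rough, runnable illustration of the recipe above (not an optimized kernel search), here is a minimal sketch; the triangular kernel and its width are assumptions you would tune to your own data:

import numpy as np

def smoothed_extrema(data, kernel=(0.25, 0.5, 0.25)):
    """Smooth with a mean-preserving symmetric kernel, then locate
    local maxima/minima as sign changes of the first derivative."""
    kernel = np.asarray(kernel)
    assert np.isclose(kernel.sum(), 1.0), "kernel must be mean-preserving"
    smooth = np.convolve(data, kernel, mode='same')
    d = np.diff(smooth)
    sign_change = np.diff(np.sign(d))
    maxima = (sign_change < 0).nonzero()[0] + 1
    minima = (sign_change > 0).nonzero()[0] + 1
    return maxima, minima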


Answer 4

As of SciPy version 1.1, you can also use find_peaks. Below are two examples taken from the documentation itself.

Using the height argument, one can select all maxima above a certain threshold (in this example, all non-negative maxima; this can be very useful if one has to deal with a noisy baseline; if you want to find minima, just multiply your input by -1):

import matplotlib.pyplot as plt
from scipy.misc import electrocardiogram
from scipy.signal import find_peaks
import numpy as np

x = electrocardiogram()[2000:4000]
peaks, _ = find_peaks(x, height=0)
plt.plot(x)
plt.plot(peaks, x[peaks], "x")
plt.plot(np.zeros_like(x), "--", color="gray")
plt.show()

Another extremely helpful argument is distance, which defines the minimum distance between two peaks:

peaks, _ = find_peaks(x, distance=150)
# difference between peaks is >= 150
print(np.diff(peaks))
# prints [186 180 177 171 177 169 167 164 158 162 172]

plt.plot(x)
plt.plot(peaks, x[peaks], "x")
plt.show()
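Continuing the snippet, minima can be found the same way by negating the signal, as the note above suggests (the distance value is kept from the example):

# minima of x are the maxima of -x
minima, _ = find_peaks(-x, distance=150)
plt.plot(x)
plt.plot(minima, x[minima], "x")
plt.show()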


Answer 5

Why not use the SciPy built-in function signal.find_peaks_cwt to do the job?

from scipy import signal
import numpy as np

# generate junk data (numpy 1D arr)
xs = np.arange(0, np.pi, 0.05)
data = np.sin(xs)

# maxima : use builtin function to find (max) peaks
max_peakind = signal.find_peaks_cwt(data, np.arange(1, 10))

# inverse (in order to find minima)
# note: 1/data produces inf where data == 0 (here, at xs = 0)
inv_data = 1/data
# minima : use builtin function to find (min) peaks (use inversed data)
min_peakind = signal.find_peaks_cwt(inv_data, np.arange(1, 10))

# show results
print("maxima", data[max_peakind])
print("minima", data[min_peakind])

results:

maxima [ 0.9995736]
minima [ 0.09146464]

Regards


Answer 6

Update: I wasn’t happy with gradient so I found it more reliable to use numpy.diff. Please let me know if it does what you want.

Regarding the issue of noise: the mathematical problem here is to locate maxima/minima; if we need to deal with noise, we can use something like convolve, which was mentioned earlier.

import numpy as np
from matplotlib import pyplot

a = np.array([10.3, 2, 0.9, 4, 5, 6, 7, 34, 2, 5, 25, 3, -26, -20, -29], dtype=float)

gradients = np.diff(a)
print(gradients)

maxima_num = 0
minima_num = 0
max_locations = []
min_locations = []
count = 0
for i in gradients[:-1]:
    count += 1
    # sign change + -> -  => local maximum at index `count`
    if (i > 0) and (gradients[count] < 0) and (i != gradients[count]):
        maxima_num += 1
        max_locations.append(count)
    # sign change - -> +  => local minimum at index `count`
    if (i < 0) and (gradients[count] > 0) and (i != gradients[count]):
        minima_num += 1
        min_locations.append(count)

turning_points = {'maxima_number': maxima_num,
                  'minima_number': minima_num,
                  'maxima_locations': max_locations,
                  'minima_locations': min_locations}

print(turning_points)

pyplot.plot(a)
pyplot.show()

Answer 7

While this question is really old, I believe there is a much simpler approach in numpy (a one-liner).

import numpy as np

data = [1, 3, 9, 5, 2, 5, 6, 9, 7]  # avoid shadowing the builtin `list`

np.diff(np.sign(np.diff(data)))  # the one liner

# output
array([ 0, -2,  0,  2,  0,  0, -2])

To find a local max or min we essentially want to find when the difference between the values in the list (3-1, 9-3…) changes from positive to negative (max) or negative to positive (min). Therefore, first we find the difference. Then we find the sign, and then we find the changes in sign by taking the difference again. (Sort of like a first and second derivative in calculus, only we have discrete data and don’t have a continuous function.)

The output in my example does not contain the extrema (the first and last values in the list). Also, just like calculus, if the second derivative is negative, you have a max, and if it is positive you have a min.

Thus we have the following matchup:

[1,  3,  9,  5,  2,  5,  6,  9,  7]
    [0, -2,  0,  2,  0,  0, -2]
        Max     Min         Max
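To turn that matchup into usable indices, one possible follow-up (the +1 compensates for the shift introduced by the inner diff, as in the earlier answers):

import numpy as np

data = [1, 3, 9, 5, 2, 5, 6, 9, 7]
dd = np.diff(np.sign(np.diff(data)))

local_max = np.where(dd < 0)[0] + 1  # -> indices [2, 7], values 9 and 9
local_min = np.where(dd > 0)[0] + 1  # -> index [4], value 2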

Answer 8

None of these solutions worked for me since I wanted to find peaks in the center of repeating values as well. For example, in

ar = np.array([0,1,2,2,2,1,3,3,3,2,5,0])

the answer should be

array([ 3,  7, 10], dtype=int64)

I did this using a loop. I know it’s not super clean, but it gets the job done.

import numpy as np

def findLocalMaxima(ar):
    # find local maxima of array, including centers of repeating elements
    maxInd = np.zeros_like(ar)
    peakVar = -np.inf
    i = -1
    while i < len(ar) - 1:
        i += 1
        if peakVar < ar[i]:
            peakVar = ar[i]
            for j in range(i, len(ar)):
                if peakVar < ar[j]:
                    break
                elif peakVar == ar[j]:
                    continue
                elif peakVar > ar[j]:
                    # mark the center of the plateau that started at i
                    peakInd = i + np.floor(abs(i - j) / 2)
                    maxInd[peakInd.astype(int)] = 1
                    i = j
                    break
        peakVar = ar[i]
    maxInd = np.where(maxInd)[0]
    return maxInd
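Running it on the example array from above reproduces the expected plateau centers:

ar = np.array([0, 1, 2, 2, 2, 1, 3, 3, 3, 2, 5, 0])
print(findLocalMaxima(ar))  # [ 3  7 10]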

Answer 9

import numpy as np
x = np.array([6, 3, 5, 2, 1, 4, 9, 7, 8])
y = np.array([2, 1, 3, 5, 3, 9, 8, 10, 7])
sortId = np.argsort(x)
x = x[sortId]
y = y[sortId]
length = len(y)  # number of samples
minm = np.array([])
maxm = np.array([])
i = 0
while i < length - 1:
    if i < length - 1:
        # walk uphill; the point before the descent is a maximum
        while i < length - 1 and y[i+1] >= y[i]:
            i += 1

        if i != 0 and i < length - 1:
            maxm = np.append(maxm, i)

        i += 1

    if i < length - 1:
        # walk downhill; the point before the ascent is a minimum
        while i < length - 1 and y[i+1] <= y[i]:
            i += 1

        if i < length - 1:
            minm = np.append(minm, i)
        i += 1


print(minm)
print(maxm)

minm and maxm contain indices of minima and maxima, respectively. For a huge data set this will report lots of maxima/minima, so in that case smooth the curve first and then apply this algorithm; a minimal smoothing helper is sketched below.
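For that smoothing step, a simple moving average is one option (the window length is an assumption you would tune):

import numpy as np

def moving_average(y, window=5):
    # mean-preserving box filter; use an odd window so the output stays centered
    kernel = np.ones(window) / window
    return np.convolve(y, kernel, mode='same')

y_smooth = moving_average(y, window=5)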


Answer 10

Another solution using essentially a dilate operator:

import numpy as np
from scipy.ndimage import rank_filter

def find_local_maxima(x):
   x_dilate = rank_filter(x, -1, size=3)
   return x_dilate == x

and for the minima:

def find_local_minima(x):
   x_erode = rank_filter(x, 0, size=3)
   return x_erode == x

Also, from scipy.ndimage you can replace rank_filter(x, -1, size=3) with grey_dilation and rank_filter(x, 0, size=3) with grey_erosion. This won’t require a local sort, so it is slightly faster.
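A sketch of that grey_dilation/grey_erosion variant (the _fast names are just for illustration):

from scipy.ndimage import grey_dilation, grey_erosion

def find_local_maxima_fast(x):
    return grey_dilation(x, size=3) == x

def find_local_minima_fast(x):
    return grey_erosion(x, size=3) == x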


Answer 11

Another one:


import numpy as np

def local_maxima_mask(vec):
    """
    Get a mask of all points in vec which are local maxima
    :param vec: A real-valued vector
    :return: A boolean mask of the same size where True elements correspond to maxima.
    """
    mask = np.zeros(vec.shape, dtype=bool)  # plain bool (np.bool was removed from NumPy)
    greater_than_the_last = np.diff(vec) > 0  # N-1 elements
    mask[1:] = greater_than_the_last
    mask[:-1] &= ~greater_than_the_last
    return mask
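For example, continuing from the block above, converting the mask to indices:

vec = np.array([1, 3, 2, 4, 4, 1])
print(np.where(local_maxima_mask(vec))[0])  # [1 3]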

Where can I find the win32api module for Python? [closed]

Question: Where can I find the win32api module for Python? [closed]

I need to download it for Python 2.7, but can’t seem to find it…


Answer 0

“pywin32” is its canonical name.

http://sourceforge.net/projects/pywin32/


Answer 1

There is a new option as well: get it via pip! There is a package pypiwin32 with wheels available, so you can just install with: pip install pypiwin32!

Edit: Per comment from @movermeyer, the main project now publishes wheels at pywin32, and so can be installed with pip install pywin32
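Once installed, a minimal sanity check (assuming a Windows machine) might be:

import win32api

# width of the primary display in pixels (SM_CXSCREEN)
print(win32api.GetSystemMetrics(0))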


Answer 2

I’ve found that UC Irvine has a great collection of python modules, pywin32 (win32api) being one of many listed there. I’m not sure how they do with keeping up with the latest versions of these modules but it hasn’t let me down yet.

UC Irvine Python Extension Repository – http://www.lfd.uci.edu/~gohlke/pythonlibs

pywin32 module – http://www.lfd.uci.edu/~gohlke/pythonlibs/#pywin32


Selecting between two dates with Django

Question: Selecting between two dates with Django

I am looking to make a query that selects between dates with Django.

I know how to do this with raw SQL pretty easily, but how could this be achieved using the Django ORM?

This is where I want to add a "within the last 30 days" filter to my query:

start_date = datetime.datetime.now() + datetime.timedelta(-30)
context[self.varname] = self.model._default_manager.filter(
    current_issue__isnull=True
    ).live().order_by('-created_at')

Answer 0

Use the __range operator:

...filter(current_issue__isnull=True, created_at__range=(start_date, end_date))
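Applied to the query from the question, a sketch in the same context (keeping the question's custom .live() manager method) could look like:

import datetime

end_date = datetime.datetime.now()
start_date = end_date + datetime.timedelta(-30)

context[self.varname] = self.model._default_manager.filter(
    current_issue__isnull=True,
    created_at__range=(start_date, end_date),
).live().order_by('-created_at')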

Answer 2

Two methods:

.filter(created_at__range=[from_date, to_date])

Another method:

# requires: from django.db.models import Q
.filter(Q(created_at__gte=from_date) & Q(created_at__lte=to_date))
  • gte means greater than or equal
  • lte means less than or equal

Answer 3

If you are using a DateTimeField, filtering with dates won’t include items on the last day.

You need to cast the value as a date:

...filter(created_at__date__range=(start_date, end_date))
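To see the difference, a sketch with a hypothetical Entry model whose created_at is a DateTimeField:

import datetime

start_date = datetime.date(2021, 1, 1)
end_date = datetime.date(2021, 1, 31)

# __range casts end_date to datetime(2021, 1, 31, 0, 0), so rows
# created later on Jan 31 are excluded:
Entry.objects.filter(created_at__range=(start_date, end_date))

# __date__range compares calendar dates, so all of Jan 31 is included:
Entry.objects.filter(created_at__date__range=(start_date, end_date))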
