# Is there a ceiling equivalent of the // operator in Python?

## Question: Is there a ceiling equivalent of the // operator in Python?

I found out about the // operator in Python which in Python 3 does division with floor.

Is there an operator which divides with ceil instead? (I know about the / operator which in Python 3 does floating point division.)

## Answer 0

There is no operator which divides with ceil. You need to import math and use math.ceil
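A minimal example:

```python
import math

# math.ceil() rounds up and returns an int in Python 3
print(math.ceil(7 / 2))   # 4
print(math.ceil(-7 / 2))  # -3
```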

## Answer 1

You can just do upside-down floor division:

def ceildiv(a, b):
    return -(-a // b)

This works because Python’s division operator does floor division (unlike in C, where integer division truncates the fractional part).

This also works with Python’s big integers, because there’s no (lossy) floating-point conversion.

Here’s a demonstration:

>>> from __future__ import division   # a/b is float division
>>> from math import ceil
>>> b = 3
>>> for a in range(-7, 8):
...     print(["%d/%d" % (a, b), int(ceil(a / b)), -(-a // b)])
...
['-7/3', -2, -2]
['-6/3', -2, -2]
['-5/3', -1, -1]
['-4/3', -1, -1]
['-3/3', -1, -1]
['-2/3', 0, 0]
['-1/3', 0, 0]
['0/3', 0, 0]
['1/3', 1, 1]
['2/3', 1, 1]
['3/3', 1, 1]
['4/3', 2, 2]
['5/3', 2, 2]
['6/3', 2, 2]
['7/3', 3, 3]
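Since floor division is exact for Python's arbitrary-precision integers, the trick stays exact even where the float route loses precision; a quick check (the specific numbers are just illustrative):

```python
import math

def ceildiv(a, b):
    return -(-a // b)

big = 10**30 + 1
print(ceildiv(big, 10))  # exact: 10**29 + 1

# The float route first converts big to a float, which loses precision
# for integers this large, so math.ceil(big / 10) gives a different value.
print(math.ceil(big / 10) == 10**29 + 1)  # False
```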

## Answer 2

You could do (x + (d-1)) // d when dividing x by d, i.e. (x + 4) // 5.

## Answer 3

## Solution 1: Convert floor to ceiling with negation

def ceiling_division(n, d):
    return -(n // -d)

Reminiscent of the Penn & Teller levitation trick, this "turns the world upside down (with negation), uses plain floor division (where the ceiling and floor have been swapped), and then turns the world right-side up (with negation again)."

## Solution 2: Let divmod() do the work

def ceiling_division(n, d):
    q, r = divmod(n, d)
    return q + bool(r)

The divmod() function gives (a // b, a % b) for integers (this may be less reliable with floats due to round-off error). The step with bool(r) adds one to the quotient whenever there is a non-zero remainder.

## Solution 3: Adjust the numerator before the division

def ceiling_division(n, d):
    return (n + d - 1) // d

Translate the numerator upwards so that floor division rounds down to the intended ceiling. Note, this only works for integers.
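A quick sanity check (note that this variant also assumes a positive divisor; with a negative d the adjustment goes the wrong way):

```python
def ceiling_division(n, d):
    return (n + d - 1) // d

print(ceiling_division(7, 2))   # 4
print(ceiling_division(6, 2))   # 3
print(ceiling_division(-7, 2))  # -3
```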

## Solution 4: Convert to floats to use math.ceil()

import math

def ceiling_division(n, d):
    return math.ceil(n / d)

The math.ceil() code is easy to understand, but it converts from ints to floats and back. This isn’t very fast and it may have rounding issues. Also, it relies on Python 3 semantics where “true division” produces a float and where the ceil() function returns an integer.

## Answer 4

You can always just do it inline as well

((foo - 1) // bar) + 1

In Python 3, this is just shy of an order of magnitude faster than forcing the float division and calling ceil(), provided you care about the speed. Which you shouldn't, unless you've proven through usage that you need to.

>>> timeit.timeit("((5 - 1) // 4) + 1", number = 100000000)
1.7249219375662506
>>> timeit.timeit("ceil(5/4)", setup="from math import ceil", number = 100000000)
12.096064013894647

## Answer 5

Note that math.ceil is limited to 53 bits of precision. If you are working with large integers, you may not get exact results.

The gmpy2 library provides a c_div function which uses ceiling rounding.

Disclaimer: I maintain gmpy2.

## Answer 6

Simple solution: a // b + 1. Note that this over-counts by one whenever b divides a exactly.
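This is off by one whenever b divides a exactly, as a quick check shows:

```python
import math

# a // b + 1 as a "ceiling division" candidate
print(7 // 3 + 1)  # 3 -- happens to match math.ceil(7 / 3)
print(6 // 3 + 1)  # 3 -- wrong: math.ceil(6 / 3) is 2
```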

# Type hints in namedtuple

## Question: Type hints in namedtuple

Consider following piece of code:

from collections import namedtuple
point = namedtuple("Point", ("x:int", "y:int"))

The code above is just a way to demonstrate what I am trying to achieve. I would like to make a namedtuple with type hints.

Do you know an elegant way to achieve the intended result?

## Answer 0

The preferred syntax for a typed named tuple since Python 3.6 is:

from typing import NamedTuple

class Point(NamedTuple):
    x: int
    y: int = 1  # Set default value

Point(3)  # -> Point(x=3, y=1)

Edit: Starting with Python 3.7, consider using dataclasses (your IDE may not yet support them for static type checking):

from dataclasses import dataclass

@dataclass
class Point:
    x: int
    y: int = 1  # Set default value

Point(3)  # -> Point(x=3, y=1)

## Answer 1

You can use typing.NamedTuple.

From the docs:

Typed version of namedtuple.

>>> import typing
>>> Point = typing.NamedTuple("Point", [('x', int), ('y', int)])

This is present only in Python 3.5 onwards
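Either way, the resulting class behaves like a regular namedtuple and records the declared field types (the field names and values below are just for illustration):

```python
import typing

Point = typing.NamedTuple("Point", [('x', int), ('y', int)])

p = Point(x=1, y=2)
print(p)                      # Point(x=1, y=2)
print(Point.__annotations__)  # {'x': <class 'int'>, 'y': <class 'int'>}
```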

# How do I convert a Python .py file to an .exe?

## Question: How do I convert a Python .py file to an .exe?

I'm trying to convert a fairly simple Python program to an executable and couldn't find what I was looking for, so I have a few questions (I'm running Python 3.6):

The methods of doing this that I have found so far are as follows:

1. downloading an old version of Python and using pyinstaller/py2exe
2. setting up a virtual environment in Python 3.6 that will allow me to do 1.
3. downloading a Python to C++ converter and using it.

Here is what I've tried / what problems I've run into:

• I installed pyinstaller before its required download (pypi-something), so it did not work. After downloading the prerequisite file, pyinstaller still did not recognize it.
• If I'm setting up a virtualenv in Python 2.7, do I actually need to have Python 2.7 installed?
• similarly, the only python to C++ converters I see work only up until Python 3.5 – do I need to download and use this version if attempting this?

## Answer 0

Steps to convert .py to .exe in Python 3.6

1. Install Python 3.6.
2. Install cx_Freeze (open your command prompt and type pip install cx_Freeze).
3. Install idna (open your command prompt and type pip install idna).
4. Write a .py program named myfirstprog.py.
5. Create a new Python file named setup.py in the current directory of your script.
6. In the setup.py file, copy the code below and save it.
7. With Shift pressed, right click on the same directory, so you are able to open a command prompt window.
8. In the prompt, type python setup.py build.
9. If your script is free of errors, then there will be no problem creating the application.
10. Check the newly created folder build. It has another folder in it. Within that folder you can find your application. Run it. Make yourself happy.

See the original script in my blog.

setup.py:

from cx_Freeze import setup, Executable

base = None

executables = [Executable("myfirstprog.py", base=base)]

packages = ["idna"]
options = {
    'build_exe': {
        'packages': packages,
    },
}

setup(
    name="<any name>",
    options=options,
    version="<any number>",
    description='<any description>',
    executables=executables
)

EDIT:

• be sure that instead of myfirstprog.py you put your own .py file name as created in step 4;
• you should include each imported package from your .py in the packages list (ex: packages = ["idna", "os", "sys"])
• any name, any number, any description in the setup.py file should not remain the same; you should change them accordingly (ex: name = "<first_ever>", version = "0.11", description = '')
• the imported packages must be installed before you start step 8.

## Answer 1

Python 3.6 is supported by PyInstaller.

Open a cmd window in your Python folder (open a command window and use cd or while holding shift, right click it on Windows Explorer and choose ‘Open command window here’). Then just enter

pip install pyinstaller

And that’s it.

The simplest way to use it is by entering on your command prompt

pyinstaller file_name.py

For more details on how to use it, take a look at this question.

## Answer 2

There is an open source project called auto-py-to-exe on GitHub. It also just uses PyInstaller internally, but since it has a simple GUI that controls PyInstaller it may be a comfortable alternative. It can also output a standalone file, in contrast to other solutions. They also provide a video showing how to set it up.


## Answer 3

I can’t tell you what’s best, but a tool I have used with success in the past was cx_Freeze. They recently updated (on Jan. 7, ’17) to version 5.0.1 and it supports Python 3.6.

Here’s the pypi https://pypi.python.org/pypi/cx_Freeze

The documentation shows that there is more than one way to do it, depending on your needs. http://cx-freeze.readthedocs.io/en/latest/overview.html

I have not tried it out yet, so I’m going to point to a post where the simple way of doing it was discussed. Some things may or may not have changed though.

How do I use cx_freeze?

## Answer 4

I’ve been using Nuitka and PyInstaller with my package, PySimpleGUI.

Nuitka: There were issues getting tkinter to compile with Nuitka. One of the project contributors developed a script that fixed the problem.

If you’re not using tkinter it may “just work” for you. If you are using tkinter say so and I’ll try to get the script and instructions published.

PyInstaller I’m running 3.6 and PyInstaller is working great! The command I use to create my exe file is:

pyinstaller -wF myfile.py

The -wF will create a single EXE file. Because all of my programs have a GUI and I do not want the command window to show, the -w option will hide the command window.

This is as close to getting what looks like a Winforms program to run that was written in Python.

[Update 20-Jul-2019]

There is a PySimpleGUI-based solution that uses PyInstaller. It's called pysimplegui-exemaker and can be pip installed.

pip install PySimpleGUI-exemaker

To run it after installing:

python -m pysimplegui-exemaker.pysimplegui-exemaker

## Answer 5

Now you can convert it by using PyInstaller. It works even with Python 3.

Steps:

1. Start your computer.
2. Open the command prompt.
3. Enter the command pip install pyinstaller.
4. Once it is installed, use the command 'cd' to go to the working directory.
5. Run the command pyinstaller <filename>.

# How to get the latest file in a folder using Python

## Question: How to get the latest file in a folder using Python

I need to get the latest file of a folder using python. While using the code:

max(files, key = os.path.getctime)

I am getting the below error:

FileNotFoundError: [WinError 2] The system cannot find the file specified: 'a'

## Answer 0

Whatever is assigned to the files variable is incorrect. Use the following code.

import glob
import os

list_of_files = glob.glob('/path/to/folder/*') # * means all if need specific format then *.csv
latest_file = max(list_of_files, key=os.path.getctime)
print(latest_file)

## Answer 1
max(files, key = os.path.getctime)

is quite incomplete code. What is files? It probably is a list of file names, coming out of os.listdir().

But this list contains only the filename parts (a.k.a. "basenames"), because their path is common. In order to use them correctly, you have to combine each with the path leading to it (which was used to obtain it).

Such as (untested):

files = os.listdir(path)
paths = [os.path.join(path, basename) for basename in files]
return max(paths, key=os.path.getctime)

## Answer 2

I would suggest using glob.iglob() instead of glob.glob(), as it is more efficient. From the docs:

glob.iglob(): Return an iterator which yields the same values as glob() without actually storing them all simultaneously.

I mostly use below code to find the latest file matching to my pattern:

LatestFile = max(glob.iglob(fileNamePattern),key=os.path.getctime)

NOTE: There are variants of the max() function. For finding the latest file we use the variant max(iterable, *[, key, default]), which needs an iterable as its first parameter. For finding the max of plain numbers we can use the variant max(num1, num2, *args[, key]).
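The key= form is the one that matters here; a small illustration using len as the key (the file names are made up):

```python
# max() with a key function picks the element whose key value is largest
names = ["a.txt", "bb.txt", "ccc.txt"]
print(max(names, key=len))  # ccc.txt
```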

## Answer 3

Try to sort items by creation time. The example below sorts the files in a folder and gets the first element, which is the latest.

import glob
import os

files_path = os.path.join(folder, '*')
files = sorted(glob.iglob(files_path), key=os.path.getctime, reverse=True)
print(files[0])

## Answer 4

I lack the reputation to comment, but ctime from Marlon Abeykoon's response did not give the correct result for me. Using mtime does the trick though (key=os.path.getmtime).

import glob
import os

list_of_files = glob.glob('/path/to/folder/*') # * means all if need specific format then *.csv
latest_file = max(list_of_files, key=os.path.getmtime)
print(latest_file)


## Answer 5

First define a function get_latest_file

def get_latest_file(path, *paths):
    fullpath = os.path.join(path, *paths)
    ...

get_latest_file('example', 'files', 'randomtext011.*.txt')

You may also use a docstring!

def get_latest_file(path, *paths):
    """Returns the name of the latest (most recent) file
    of the joined path(s)"""
    fullpath = os.path.join(path, *paths)

If you use Python 3, you can use iglob instead.

Complete code to return the name of latest file:

import glob
import os

def get_latest_file(path, *paths):
    """Returns the name of the latest (most recent) file
    of the joined path(s)"""
    fullpath = os.path.join(path, *paths)
    files = glob.glob(fullpath)  # You may use iglob in Python3
    if not files:                # I prefer using the negation
        return None              # because it behaves like a shortcut
    latest_file = max(files, key=os.path.getctime)
    _, filename = os.path.split(latest_file)
    return filename

## Answer 6

I tried to use the above suggestions and my program crashed. Then I figured out that the file I was trying to identify was still in use, and 'os.path.getctime' crashed on it. What finally worked for me was:

files_before = glob.glob(os.path.join(my_path, '*'))
# ... code where the new file is created ...
new_file = set(files_before).symmetric_difference(set(glob.glob(os.path.join(my_path, '*'))))

This code gets the uncommon entries between the two sets of file lists. It's not the most elegant approach, and if multiple files are created at the same time it probably won't be stable.

## Answer 7

A much faster method on Windows (0.05 s) is to call a bat script that does this:

get_latest.bat

@echo off
for /f %%i in ('dir \\directory\in\question /b/a-d/od/t:c') do set LAST=%%i
%LAST%

where \\directory\in\question is the directory you want to investigate.

get_latest.py

from subprocess import Popen, PIPE
p = Popen("get_latest.bat", shell=True, stdout=PIPE,)
stdout, stderr = p.communicate()
print(stdout, stderr)

If it finds a file, stdout holds the path and stderr is None.

Use stdout.decode("utf-8").rstrip() to get the usable string representation of the file name.

## Answer 8

I’ve been using this in Python 3, including pattern matching on the filename.

from pathlib import Path

def latest_file(path: Path, pattern: str = "*"):
    files = path.glob(pattern)
    return max(files, key=lambda x: x.stat().st_ctime)
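A quick way to try the function out is with a throwaway directory (the file names here are made up):

```python
import tempfile
import time
from pathlib import Path

def latest_file(path: Path, pattern: str = "*"):
    files = path.glob(pattern)
    return max(files, key=lambda x: x.stat().st_ctime)

with tempfile.TemporaryDirectory() as d:
    root = Path(d)
    (root / "old.txt").write_text("first")
    time.sleep(0.1)  # ensure distinct timestamps
    (root / "new.txt").write_text("second")
    print(latest_file(root).name)  # new.txt
```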

# How can I print like printf in Python 3?

## Question: How can I print like printf in Python 3?

In Python 2 I used:

print "a=%d,b=%d" % (f(x,n),g(x,n))

I’ve tried:

print("a=%d,b=%d") % (f(x,n),g(x,n))

## Answer 0

In Python 2, print was a keyword which introduced a statement:

print "Hi"

In Python 3, print is a function which may be invoked:

print ("Hi")

In both versions, % is an operator which requires a string on the left-hand side and a value or a tuple of values or a mapping object (like dict) on the right-hand side.

So, your line ought to look like this:

print("a=%d,b=%d" % (f(x,n),g(x,n)))

Also, the recommendation for Python 3 and newer is to use {}-style formatting instead of %-style formatting:

print('a={:d}, b={:d}'.format(f(x,n),g(x,n)))

Python 3.6 introduces yet another string-formatting paradigm: f-strings.

print(f'a={f(x,n):d}, b={g(x,n):d}')

## Answer 1

The most recommended way is to use the format() method. Read more about it here.

a, b = 1, 2

print("a={0},b={1}".format(a, b))

## Answer 2

Simple printf() function from O’Reilly’s Python Cookbook.

import sys

def printf(format, *args):
    sys.stdout.write(format % args)

Example output:

i = 7
pi = 3.14159265359
printf("hi there, i=%d, pi=%.2f\n", i, pi)
# hi there, i=7, pi=3.14

## Answer 3

Python 3.6 introduced f-strings for inline interpolation. What's even nicer is that it extended the syntax to also allow format specifiers with interpolation. Something I was working on when I googled this (and came across this old question!):

print(f'{account:40s} ({ratio:3.2f}) -> AUD {splitAmount}')

PEP 498 has the details. And… it sorted my pet peeve with format specifiers in other langs — allows for specifiers that themselves can be expressions! Yay! See: Format Specifiers.

## Answer 4

Simple Example:

print("foo %d, bar %d" % (1,2))

## Answer 5

A simpler one.

def printf(format, *values):
    print(format % values)

Then:

printf("Hello, this is my name %s and my age %d", "Martin", 20)

## Answer 6

Because your % is outside the print(...) parentheses, you’re trying to insert your variables into the result of your print call. print(...) returns None, so this won’t work, and there’s also the small matter of you already having printed your template by this time and time travel being prohibited by the laws of the universe we inhabit.

The whole thing you want to print, including the % and its operand, needs to be inside your print(...) call, so that the string can be built before it is printed.

print( "a=%d,b=%d" % (f(x,n), g(x,n)) )

I have added a few extra spaces to make it clearer (though they are not necessary and are generally not considered good style).

## Answer 7

In other words, printf is absent in Python… I'm surprised! The best code is:

import sys

def printf(format, *args):
    sys.stdout.write(format % args)

This form allows you to avoid printing \n; the others do not. That's one drawback of print. You also need to write the args in a special form. The function above has no such disadvantages: it is the standard, usual form of a printf function.

## Answer 8

print("Name={}, balance={}".format(var_name, var_balance))

# How do I install Python 3 on an AWS EC2 instance?

## Question: How do I install Python 3 on an AWS EC2 instance?

I’m trying to install python 3.x on an AWS EC2 instance and:

sudo yum install python3

doesn’t work:

No package python3 available.

I’ve googled around and I can’t find anyone else who has this problem so I’m asking here. Do I have to manually download and install it?

## Answer 0

If you do a

sudo yum list | grep python3

you will see that while they don’t have a “python3” package, they do have a “python34” package, or a more recent release, such as “python36”. Installing it is as easy as:

sudo yum install python34 python34-pip

## Answer 1

Note: This may be obsolete for current versions of Amazon Linux 2 since late 2018 (see comments); you can now directly install it via yum install python3.

In Amazon Linux 2, there isn't a python3[4-6] in the default yum repos; instead there's the Amazon Extras Library:

sudo amazon-linux-extras install python3

If you want to set up isolated virtual environments with it, yum install'd virtualenv tools don't seem to work reliably:

virtualenv --python=python3 my_venv

Calling the venv module/tool is less finicky, and you can double-check it's what you want/expect with python3 --version beforehand:

python3 -m venv my_venv

Other things it can install (versions as of 18 Jan 18):

[ec2-user@x ~]$ amazon-linux-extras list
0  ansible2   disabled  [ =2.4.2 ]
1  emacs   disabled  [ =25.3 ]
2  memcached1.5   disabled  [ =1.5.1 ]
3  nginx1.12   disabled  [ =1.12.2 ]
4  postgresql9.6   disabled  [ =9.6.6 ]
5  python3=latest  enabled  [ =3.6.2 ]
6  redis4.0   disabled  [ =4.0.5 ]
7  R3.4   disabled  [ =3.4.3 ]
8  rust1   disabled  [ =1.22.1 ]
9  vim   disabled  [ =8.0 ]
10  golang1.9   disabled  [ =1.9.2 ]
11  ruby2.4   disabled  [ =2.4.2 ]
12  nano   disabled  [ =2.9.1 ]
13  php7.2   disabled  [ =7.2.0 ]
14  lamp-mariadb10.2-php7.2   disabled  [ =10.2.10_7.2.0 ]

## Answer 2

Here are the steps I used to manually install python3, for anyone else who wants to do it, as it's not super straightforward. EDIT: It's almost certainly easier to use the yum package manager (see the other answers).

Note, you’ll probably want to do sudo yum groupinstall 'Development Tools' before doing this otherwise pip won’t install.

wget https://www.python.org/ftp/python/3.4.2/Python-3.4.2.tgz
tar zxvf Python-3.4.2.tgz
cd Python-3.4.2
sudo yum install gcc
./configure --prefix=/opt/python3
make
sudo yum install openssl-devel
sudo make install
sudo ln -s /opt/python3/bin/python3 /usr/bin/python3
python3   # should start the interpreter if it worked (quit() to exit)

## Answer 3

EC2 (on the Amazon Linux AMI) currently supports python3.4 and python3.5.

sudo yum install python35
sudo yum install python35-pip

## Answer 4

As of Amazon Linux version 2017.09 python 3.6 is now available:

sudo yum install python36 python36-virtualenv python36-pip

## Answer 5

Amazon Linux now supports python36.

python36-pip is not available, so we need to follow a different route.

sudo yum install python36 python36-devel python36-libs python36-tools

# If you like to have pip3.6:
curl -O https://bootstrap.pypa.io/get-pip.py
sudo python3 get-pip.py

## Answer 6

As @NickT said, there's no python3[4-6] in the default yum repos in Amazon Linux 2; as of today it uses 3.7, and looking at all the answers here we can say this will change over time.

I was looking for python3.6 on Amazon Linux 2, but amazon-linux-extras shows a lot of options and no Python at all. In fact, you can try to find the version you know in the epel repo:

sudo amazon-linux-extras install epel

yum search python | grep "^python3..x8"

python34.x86_64 : Version 3 of the Python programming language aka Python 3000
python36.x86_64 : Interpreter of the Python programming language

## Answer 7

Adding to all the answers already available for this question, I would like to add the steps I followed to install Python3 on AWS EC2 instance running CentOS 7. You can find the entire details at this link.

https://aws-labs.com/install-python-3-centos-7-2/

First, we need to enable SCL. SCL is a community project that allows you to build, install, and use multiple versions of software on the same system, without affecting system default packages.

sudo yum install centos-release-scl

Now that we have SCL repository, we can install the python3

sudo yum install rh-python36

To access Python 3.6 you need to launch a new shell instance using the Software Collection scl tool:

scl enable rh-python36 bash

If you check the Python version now you’ll notice that Python 3.6 is the default version

python --version

It is important to point out that Python 3.6 is the default Python version only in this shell session. If you exit the session or open a new session from another terminal Python 2.7 will be the default Python version.

Now, Install the python development tools by typing:

sudo yum groupinstall 'Development Tools'

Now create a virtual environment so that the default python packages don’t get messed up.

mkdir ~/my_new_project
cd ~/my_new_project
python -m venv my_project_venv

To use this virtual environment,

source my_project_venv/bin/activate

Now, you have your virtual environment set up with python3.

## Answer 8

On Debian derivatives such as Ubuntu, use apt. Check the apt repository for the versions of Python available to you. Then, run a command similar to the following, substituting the correct package name:

sudo apt-get install python3

On Red Hat and derivatives, use yum. Check the yum repository for the versions of Python available to you. Then, run a command similar to the following, substituting the correct package name:

sudo yum install python36

On SUSE and derivatives, use zypper. Check the repository for the versions of Python available to you. Then, run a command similar to the following, substituting the correct package name:

sudo zypper install python3

# How can I make Firefox headless programmatically in Selenium with Python?

## Question: How can I make Firefox headless programmatically in Selenium with Python?

I am running this code with Python, Selenium, and Firefox but still get the headed (non-headless) version of Firefox:

binary = FirefoxBinary('C:\\Program Files (x86)\\Mozilla Firefox\\firefox.exe', log_file=sys.stdout)
self.driver = webdriver.Firefox(firefox_binary=binary)

I also tried some variations of binary:

binary = FirefoxBinary('C:\\Program Files\\Nightly\\firefox.exe', log_file=sys.stdout)

## Answer 0


To invoke Firefox Browser headlessly, you can set the headless property through the Options() class as follows:

from selenium import webdriver
from selenium.webdriver.firefox.options import Options

options = Options()
options.headless = True
driver = webdriver.Firefox(options=options, executable_path=r'C:\Utility\BrowserDrivers\geckodriver.exe')
driver.quit()

There’s another way to accomplish headless mode. If you need to disable or enable the headless mode in Firefox without changing the code, you can set the environment variable MOZ_HEADLESS to any value when you want Firefox to run headless, or leave it unset otherwise.

This is very useful when you are using for example continuous integration and you want to run the functional tests in the server but still be able to run the tests in normal mode in your PC.

$ MOZ_HEADLESS=1 python manage.py test  # testing example in Django with headless Firefox

or

$ export MOZ_HEADLESS=1   # this way you only have to set it once
$ python manage.py test functional/tests/directory
$ unset MOZ_HEADLESS      # if you want to disable headless mode
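The same environment toggle can also be applied from Python itself, as long as it happens before Selenium launches the Firefox process. A small sketch (the helper name set_firefox_headless is made up for illustration):

```python
import os

# Hypothetical helper: toggle MOZ_HEADLESS from Python instead of the shell.
# Must run before webdriver.Firefox() launches the browser process.
def set_firefox_headless(enabled: bool) -> None:
    if enabled:
        os.environ["MOZ_HEADLESS"] = "1"      # any value enables headless mode
    else:
        os.environ.pop("MOZ_HEADLESS", None)  # unset to run with a visible window

set_firefox_headless(True)
print(os.environ.get("MOZ_HEADLESS"))
```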

## Outro

How to configure ChromeDriver to initiate Chrome browser in Headless mode through Selenium?

## Answer 1


The first answer doesn’t work anymore.

This worked for me:

from selenium.webdriver.firefox.options import Options as FirefoxOptions
from selenium import webdriver

options = FirefoxOptions()
options.add_argument("--headless")
driver = webdriver.Firefox(options=options)

## Answer 2

https://seleniumhq.github.io/selenium/docs/api/py/webdriver_firefox/selenium.webdriver.firefox.options.html


works for me

## Answer 3


Just a note for people who may have found this later (and want java way of achieving this); FirefoxOptions is also capable of enabling the headless mode:

FirefoxOptions firefoxOptions = new FirefoxOptions();
firefoxOptions.setHeadless(true);

## Answer 4

Used the code below to choose the driver type, headless or headed, for both Firefox and Chrome (browser and headless are assumed to be passed in):

from selenium import webdriver
from selenium.webdriver.chrome.options import Options as ChromeOptions
from selenium.webdriver.firefox.options import Options as FFOption

# Can pass browser type and a headless flag
if browser.lower() == 'chrome':
    ch_options = ChromeOptions()
    ch_options.headless = headless
    driver = webdriver.Chrome(r'..\drivers\chromedriver', options=ch_options)
elif browser.lower() == 'firefox':
    ff_option = FFOption()
    ff_option.headless = headless
    driver = webdriver.Firefox(executable_path=r'..\drivers\geckodriver.exe', options=ff_option)
elif browser.lower() == 'ie':
    driver = webdriver.Ie(r'..\drivers\IEDriverServer')
else:
    raise Exception('Invalid Browser Type')

# How does asyncio actually work?

## Question: How does asyncio actually work?


This question is motivated by another question of mine: How to await in cdef?

There are tons of articles and blog posts on the web about asyncio, but they are all very superficial. I couldn’t find any information about how asyncio is actually implemented, and what makes I/O asynchronous. I was trying to read the source code, but it’s thousands of lines of not the highest grade C code, a lot of which deals with auxiliary objects, but most crucially, it is hard to connect between Python syntax and what C code it would translate into.

Asyncio’s own documentation is even less helpful. There’s no information there about how it works, only some guidelines about how to use it, which are also sometimes misleading / very poorly written.

I’m familiar with Go’s implementation of coroutines, and was kind of hoping that Python did the same thing. If that was the case, the code I came up with in the post linked above would have worked. Since it didn’t, I’m now trying to figure out why. My best guess so far is as follows, please correct me where I’m wrong:

1. Procedure definitions of the form async def foo(): ... are actually interpreted as methods of a class inheriting coroutine.
2. Perhaps, async def is actually split into multiple methods by await statements, where the object, on which these methods are called is able to keep track of the progress it made through the execution so far.
3. If the above is true, then, essentially, execution of a coroutine boils down to calling methods of coroutine object by some global manager (loop?).
4. The global manager is somehow (how?) aware of when I/O operations are performed by Python (only?) code and is able to choose one of the pending coroutine methods to execute after the current executing method relinquished control (hit on the await statement).

In other words, here’s my attempt at “desugaring” of some asyncio syntax into something more understandable:

async def coro(name):
    print('before', name)
    await asyncio.sleep()
    print('after', name)

asyncio.gather(coro('first'), coro('second'))

# translated from async def coro(name)
class Coro(coroutine):
    def before(self, name):
        print('before', name)

    def after(self, name):
        print('after', name)

    def __init__(self, name):
        self.name = name
        self.parts = self.before, self.after
        self.pos = 0

    def __call__(self):
        self.parts[self.pos](self.name)
        self.pos += 1

    def done(self):
        return self.pos == len(self.parts)

# translated from asyncio.gather()
class AsyncIOManager:

    def gather(*coros):
        while not all(c.done() for c in coros):
            coro = random.choice(coros)
            coro()

Should my guess prove correct: then I have a problem. How does I/O actually happen in this scenario? In a separate thread? Is the whole interpreter suspended and I/O happens outside the interpreter? What exactly is meant by I/O? If my Python procedure called the C open() procedure, and it in turn sent an interrupt to the kernel, relinquishing control to it, how does the Python interpreter know about this and how is it able to continue running some other code, while kernel code does the actual I/O and until it wakes up the Python procedure which sent the interrupt originally? How can the Python interpreter, in principle, be aware of this happening?


# How does asyncio work?

Before answering this question we need to understand a few base terms, skip these if you already know any of them.

## Generators

Generators are objects that allow us to suspend the execution of a Python function. User-defined generators are implemented using the keyword yield. By creating a normal function containing the yield keyword, we turn that function into a generator:

>>> def test():
...     yield 1
...     yield 2
...
>>> gen = test()
>>> next(gen)
1
>>> next(gen)
2
>>> next(gen)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
StopIteration

As you can see, calling next() on the generator causes the interpreter to load test’s frame and return the yielded value. Calling next() again causes the frame to be loaded into the interpreter stack once more, resuming execution and yielding another value.

By the third time next() is called, our generator is finished, and StopIteration is raised.

### Communicating with a generator

A lesser-known feature of generators is that you can communicate with them using two methods: send() and throw().

>>> def test():
...     val = yield 1
...     print(val)
...     yield 2
...     yield 3
...
>>> gen = test()
>>> next(gen)
1
>>> gen.send("abc")
abc
2
>>> gen.throw(Exception())
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 4, in test
Exception

Upon calling gen.send(), the value is passed as a return value from the yield keyword.

gen.throw() on the other hand, allows throwing Exceptions inside generators, with the exception raised at the same spot yield was called.

### Returning values from generators

Returning a value from a generator results in the value being put inside the StopIteration exception. We can later recover the value from the exception and use it for our needs.

>>> def test():
...     yield 1
...     return "abc"
...
>>> gen = test()
>>> next(gen)
1
>>> try:
...     next(gen)
... except StopIteration as exc:
...     print(exc.value)
...
abc

## Behold, a new keyword: yield from

Python 3.3 came with the addition of a new keyword: yield from. What that keyword allows us to do is pass on any next(), send() and throw() into an inner-most nested generator. If the inner generator returns a value, it is also the return value of yield from:

>>> def inner():
...     inner_result = yield 2
...     print('inner', inner_result)
...     return 3
...
>>> def outer():
...     yield 1
...     val = yield from inner()
...     print('outer', val)
...     yield 4
...
>>> gen = outer()
>>> next(gen)
1
>>> next(gen) # Goes inside inner() automatically
2
>>> gen.send("abc")
inner abc
outer 3
4

I’ve written an article to further elaborate on this topic.

## Putting it all together

Upon the introduction of the new keyword yield from in Python 3.3, we were able to create generators inside generators that, just like a tunnel, pass the data back and forth from the inner-most to the outer-most generators. This has spawned a new meaning for generators – coroutines.

Coroutines are functions that can be stopped and resumed while being run. In Python, they are defined using the async def keyword. Much like generators, they too use their own form of yield from which is await. Before async and await were introduced in Python 3.5, we created coroutines in the exact same way generators were created (with yield from instead of await).

async def inner():
    return 1

async def outer():
    await inner()
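Before async/await, the same chain could be spelled with generator-based coroutines. As an illustrative sketch (using types.coroutine, which makes a generator awaitable), we can drive the whole stack by hand with send():

```python
import types

# Pre-3.5 style: a generator decorated with types.coroutine is awaitable.
@types.coroutine
def step():
    value = yield "suspended"   # travels all the way out to whoever drives coro
    return value

async def coro():
    result = await step()       # equivalent to `yield from step()` pre-3.5
    return result * 2

c = coro()
print(c.send(None))             # the inner yield surfaces here: "suspended"
try:
    c.send(21)                  # resumes step(), which returns 21 into coro()
except StopIteration as exc:
    print(exc.value)            # coro()'s return value: 42
```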

Like every iterator or generator that implements the __iter__() method, coroutines implement __await__(), which allows them to continue every time await coro is called.

There’s a nice sequence diagram inside the Python docs that you should check out.

In asyncio, apart from coroutine functions, we have 2 important objects: tasks and futures.

### Futures

Futures are objects that have the __await__() method implemented, and their job is to hold a certain state and result. The state can be one of the following:

1. PENDING – future does not have any result or exception set.
2. CANCELLED – future was cancelled using fut.cancel()
3. FINISHED – future was finished, either by a result set using fut.set_result() or by an exception set using fut.set_exception()

The result, just like you have guessed, can either be a Python object, that will be returned, or an exception which may be raised.

Another important feature of future objects, is that they contain a method called add_done_callback(). This method allows functions to be called as soon as the task is done – whether it raised an exception or finished.
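A minimal, runnable sketch of these states and of add_done_callback(), using asyncio.Future (here created through the running loop):

```python
import asyncio

async def main():
    fut = asyncio.get_running_loop().create_future()
    print(fut.done())                       # False: the future is PENDING
    fut.add_done_callback(lambda f: print("callback got", f.result()))
    fut.set_result(42)                      # state moves to FINISHED
    print(fut.done())                       # True
    await asyncio.sleep(0)                  # let the loop invoke the callback
    print(await fut)                        # awaiting a finished future yields 42

asyncio.run(main())
```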

Task objects are special futures, which wrap around coroutines, and communicate with the inner-most and outer-most coroutines. Every time a coroutine awaits a future, the future is passed all the way back to the task (just like in yield from), and the task receives it.

Next, the task binds itself to the future. It does so by calling add_done_callback() on the future. From now on, if the future will ever be done, by either being cancelled, passed an exception or passed a Python object as a result, the task’s callback will be called, and it will rise back up to existence.
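In user code this binding is done for you when a coroutine is wrapped in a Task; a small sketch (asyncio.ensure_future was the usual spelling in that era):

```python
import asyncio

async def work():
    await asyncio.sleep(0)                # suspend once on a timer future
    return "result"

async def main():
    task = asyncio.ensure_future(work())  # Task wraps the coroutine
    print(task.done())                    # False: work() has not completed yet
    print(await task)                     # the loop drives work(); prints "result"

asyncio.run(main())
```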

# Asyncio

The final burning question we must answer is – how is the IO implemented?

Deep inside asyncio, we have an event loop. An event loop of tasks. The event loop’s job is to call tasks every time they are ready and coordinate all that effort into one single working machine.

The IO part of the event loop is built upon a single crucial function called select. Select is a blocking function, implemented by the operating system underneath, that allows waiting on sockets for incoming or outgoing data. Upon data being received it wakes up, and returns the sockets which received data, or the sockets that are ready for writing.

When you try to receive or send data over a socket through asyncio, what actually happens below is that the socket is first checked to see whether it has any data that can be immediately read or sent. If its .send() buffer is full, or the .recv() buffer is empty, the socket is registered with the select function (by simply adding it to one of the lists, rlist for recv and wlist for send) and the appropriate function awaits a newly created future object, tied to that socket.

When all available tasks are waiting for futures, the event loop calls select and waits. When one of the sockets has incoming data, or its send buffer has drained, asyncio checks for the future object tied to that socket, and sets it to done.

Now all the magic happens. The future is set to done, the task that added itself before with add_done_callback() rises up back to life, and calls .send() on the coroutine which resumes the inner-most coroutine (because of the await chain) and you read the newly received data from a nearby buffer it was spilled onto.

Method chain again, in case of recv():

1. select.select waits.
2. A ready socket, with data, is returned.
3. Data from the socket is moved into a buffer.
4. future.set_result() is called.
5. The task that added itself with add_done_callback() is now woken up.
6. Task calls .send() on the coroutine which goes all the way into the inner-most coroutine and wakes it up.
7. Data is read from the buffer and returned to our humble user.
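Step 1 of this chain can be demonstrated in isolation with a socketpair, no event loop required:

```python
import select
import socket

# A connected pair of sockets stands in for a real network connection.
reader, writer = socket.socketpair()
writer.send(b"ping")

# select blocks (up to the 1s timeout) until `reader` is readable,
# then returns it in the first list.
readable, writable, _ = select.select([reader], [], [], 1.0)
print(reader in readable)   # True
print(reader.recv(4))       # b'ping'

reader.close()
writer.close()
```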

In summary, asyncio uses generator capabilities, that allow pausing and resuming functions. It uses yield from capabilities that allow passing data back and forth from the inner-most generator to the outer-most. It uses all of those in order to halt function execution while it’s waiting for IO to complete (by using the OS select function).

And the best of all? While one function is paused, another may run and interleave with the delicate fabric, which is asyncio.



Talking about async/await and asyncio is not the same thing. The first is a fundamental, low-level construct (coroutines) while the latter is a library using these constructs. Consequently, there is no single ultimate answer.

The following is a general description of how async/await and asyncio-like libraries work. That is, there may be other tricks on top (there are…) but they are inconsequential unless you build them yourself. The difference should be negligible unless you already know enough to not have to ask such a question.

# 1. Coroutines versus subroutines in a nut shell

Just like subroutines (functions, procedures, …), coroutines (generators, …) are an abstraction of call stack and instruction pointer: there is a stack of executing code pieces, and each is at a specific instruction.

The distinction of def versus async def is merely for clarity. The actual difference is return versus yield. From this, await or yield from take the difference from individual calls to entire stacks.

## 1.1. Subroutines

A subroutine represents a new stack level to hold local variables, and a single traversal of its instructions to reach an end. Consider a subroutine like this:

def subfoo(bar):
    qux = 3
    return qux * bar

When you run it, that means

1. allocate stack space for bar and qux
2. recursively execute the first statement and jump to the next statement
3. once at a return, push its value to the calling stack
4. clear the stack (1.) and instruction pointer (2.)

Notably, 4. means that a subroutine always starts at the same state. Everything exclusive to the function itself is lost upon completion. A function cannot be resumed, even if there are instructions after return.

root -\
:    \- subfoo --\
:/--<---return --/
|
V

## 1.2. Coroutines as persistent subroutines

A coroutine is like a subroutine, but can exit without destroying its state. Consider a coroutine like this:

def cofoo(bar):
    qux = yield bar  # yield marks a break point
    return qux

When you run it, that means

1. allocate stack space for bar and qux
2. recursively execute the first statement and jump to the next statement
   1. once at a yield, push its value to the calling stack but store the stack and instruction pointer
   2. once calling into yield, restore stack and instruction pointer and push arguments to qux
3. once at a return, push its value to the calling stack
4. clear the stack (1.) and instruction pointer (2.)

Note the addition of 2.1 and 2.2 – a coroutine can be suspended and resumed at predefined points. This is similar to how a subroutine is suspended during calling another subroutine. The difference is that the active coroutine is not strictly bound to its calling stack. Instead, a suspended coroutine is part of a separate, isolated stack.

root -\
:    \- cofoo --\
:/--<+--yield --/
|    :
V    :

This means that suspended coroutines can be freely stored or moved between stacks. Any call stack that has access to a coroutine can decide to resume it.
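Using the cofoo coroutine above, a short sketch of this portability – the suspended frame is an ordinary object that any other call stack may resume:

```python
def cofoo(bar):
    qux = yield bar  # yield marks a break point
    return qux

gen = cofoo(10)
print(next(gen))              # runs up to the yield: prints 10

# The suspended coroutine is a first-class value - pass it anywhere
# and resume it from a completely different call stack.
def resume_elsewhere(suspended):
    try:
        suspended.send(5)     # restores the frame, binds 5 to qux
    except StopIteration as exc:
        return exc.value      # the coroutine's return value

print(resume_elsewhere(gen))  # prints 5
```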

## 1.3. Traversing the call stack

So far, our coroutine only goes down the call stack with yield. A subroutine can go down and up the call stack with return and (). For completeness, coroutines also need a mechanism to go up the call stack. Consider a coroutine like this:

def wrap():
    yield 'before'
    yield from cofoo()
    yield 'after'

When you run it, that means it still allocates the stack and instruction pointer like a subroutine. When it suspends, that still is like storing a subroutine.

However, yield from does both. It suspends stack and instruction pointer of wrap and runs cofoo. Note that wrap stays suspended until cofoo finishes completely. Whenever cofoo suspends or something is sent, cofoo is directly connected to the calling stack.
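Driving wrap by hand makes the hand-off visible. A sketch (this variant passes an argument to cofoo so the delegated yield can be seen):

```python
def cofoo(bar):
    qux = yield bar            # break point inside the inner coroutine
    return qux

def wrap():
    yield 'before'
    val = yield from cofoo(1)  # suspends wrap, runs cofoo
    print('wrap got', val)
    yield 'after'

gen = wrap()
print(next(gen))        # 'before' - yielded by wrap itself
print(next(gen))        # 1 - yielded by cofoo, tunnelled through wrap
print(gen.send('qux'))  # resumes cofoo; cofoo returns, wrap continues: 'after'
```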

## 1.4. Coroutines all the way down

As established, yield from allows us to connect two scopes across another intermediate one. When applied recursively, that means the top of the stack can be connected to the bottom of the stack.

root -\
:    \-> coro_a -yield-from-> coro_b --\
:/ <-+------------------------yield ---/
|    :
:\ --+-- coro_a.send----------yield ---\
:                             coro_b <-/

Note that root and coro_b do not know about each other. This makes coroutines much cleaner than callbacks: coroutines are still built on a 1:1 relation, like subroutines. Coroutines suspend and resume their entire existing execution stack up until a regular call point.

Notably, root could have an arbitrary number of coroutines to resume. Yet, it can never resume more than one at the same time. Coroutines of the same root are concurrent but not parallel!

## 1.5. Python’s async and await

The explanation has so far explicitly used the yield and yield from vocabulary of generators – the underlying functionality is the same. The new Python 3.5 syntax async and await exists mainly for clarity.

def foo():  # subroutine?
    return None

def foo():  # coroutine?
    yield from foofoo()  # generator? coroutine?

async def foo():  # coroutine!
    await foofoo()  # coroutine!
    return None

The async for and async with statements are needed because you would break the yield from/await chain with the bare for and with statements.

# 2. Anatomy of a simple event loop

By itself, a coroutine has no concept of yielding control to another coroutine. It can only yield control to the caller at the bottom of a coroutine stack. This caller can then switch to another coroutine and run it.

This root node of several coroutines is commonly an event loop: on suspension, a coroutine yields an event on which it wants to resume. In turn, the event loop is capable of efficiently waiting for these events to occur. This allows it to decide which coroutine to run next, or how to wait before resuming.

Such a design implies that there is a set of pre-defined events that the loop understands. Several coroutines await each other, until finally an event is awaited. This event can communicate directly with the event loop by yielding control.

loop -\
:    \-> coroutine --await--> event --\
:/ <-+----------------------- yield --/
|    :
|    :  # loop waits for event to happen
|    :
:\ --+-- send(reply) -------- yield --\
:        coroutine <--yield-- event <-/

The key is that coroutine suspension allows the event loop and events to directly communicate. The intermediate coroutine stack does not require any knowledge about which loop is running it, nor how events work.

## 2.1.1. Events in time

The simplest event to handle is reaching a point in time. This is a fundamental block of threaded code as well: a thread repeatedly sleeps until a condition is true. However, a regular sleep blocks execution by itself – we want other coroutines to not be blocked. Instead, we want to tell the event loop when it should resume the current coroutine stack.

## 2.1.2. Defining an Event

An event is simply a value we can identify – be it via an enum, a type or some other identity. We can define this with a simple class that stores our target time. In addition to storing the event information, we can allow awaiting the class directly.

class AsyncSleep:
    """Event to sleep until a point in time"""
    def __init__(self, until: float):
        self.until = until

    # used whenever someone awaits an instance of this Event
    def __await__(self):
        # yield this Event to the loop
        yield self

    def __repr__(self):
        return '%s(until=%.1f)' % (self.__class__.__name__, self.until)

This class only stores the event – it does not say how to actually handle it.

The only special feature is __await__ – it is what the await keyword looks for. Practically, it is an iterator but not available for the regular iteration machinery.
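Both claims are easy to check by hand; a short sketch repeating the AsyncSleep definition from above:

```python
import time

class AsyncSleep:
    """Event to sleep until a point in time"""
    def __init__(self, until: float):
        self.until = until

    def __await__(self):
        yield self  # yield this Event to the loop

event = AsyncSleep(time.time())
awaitable = event.__await__()    # the iterator used by the await machinery
print(next(awaitable) is event)  # True - the event itself is yielded

try:
    iter(event)                  # not part of the regular iteration protocol
except TypeError as exc:
    print('not iterable:', exc)
```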

## 2.2.1. Awaiting an event

Now that we have an event, how do coroutines react to it? We should be able to express the equivalent of sleep by awaiting our event. To better see what is going on, we wait twice for half the time:

import time

async def asleep(duration: float):
    """await that duration seconds pass"""
    await AsyncSleep(time.time() + duration / 2)
    await AsyncSleep(time.time() + duration / 2)

We can directly instantiate and run this coroutine. Similar to a generator, using coroutine.send runs the coroutine until it yields a result.

coroutine = asleep(100)
while True:
    print(coroutine.send(None))
    time.sleep(0.1)

This gives us two AsyncSleep events and then a StopIteration when the coroutine is done. Notice that the only delay is from time.sleep in the loop! Each AsyncSleep only stores an offset from the current time.
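Wrapped in a try/except for the StopIteration, the same hand-driven loop terminates cleanly, and collecting the yielded events confirms exactly what a driver receives (a self-contained sketch repeating the definitions above):

```python
import time

class AsyncSleep:
    """Event to sleep until a point in time (as defined above)"""
    def __init__(self, until):
        self.until = until

    def __await__(self):
        yield self   # hand this event to whoever runs the coroutine

async def asleep(duration):
    await AsyncSleep(time.time() + duration / 2)
    await AsyncSleep(time.time() + duration / 2)

# drive the coroutine by hand, collecting every yielded event
coroutine = asleep(100)
events = []
while True:
    try:
        events.append(coroutine.send(None))
    except StopIteration:
        break
```

The driver gets back exactly two AsyncSleep events before the coroutine finishes – no time is spent sleeping unless the driver chooses to.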

## 2.2.2. Event + Sleep

At this point, we have two separate mechanisms at our disposal:

• AsyncSleep Events that can be yielded from inside a coroutine
• time.sleep that can wait without impacting coroutines

Notably, these two are orthogonal: neither one affects or triggers the other. As a result, we can come up with our own strategy to sleep to meet the delay of an AsyncSleep.

## 2.3. A naive event loop

If we have several coroutines, each can tell us when it wants to be woken up. We can then wait until the first of them wants to be resumed, then for the one after, and so on. Notably, at each point we only care about which one is next.

This makes for a straightforward scheduling:

1. sort coroutines by their desired wake up time
2. pick the first that wants to wake up
3. wait until this point in time
4. run this coroutine
5. repeat from 1.

A trivial implementation does not need any advanced concepts. A list allows sorting coroutines by date. Waiting is a regular time.sleep. Running coroutines works just like before with coroutine.send.

def run(*coroutines):
    """Cooperatively run all coroutines until completion"""
    # store wake-up-time and coroutines
    waiting = [(0, coroutine) for coroutine in coroutines]
    while waiting:
        # 2. pick the first coroutine that wants to wake up
        until, coroutine = waiting.pop(0)
        # 3. wait until this point in time
        time.sleep(max(0.0, until - time.time()))
        # 4. run this coroutine
        try:
            command = coroutine.send(None)
        except StopIteration:
            continue
        # 1. sort coroutines by their desired suspension
        if isinstance(command, AsyncSleep):
            waiting.append((command.until, coroutine))
            waiting.sort(key=lambda item: item[0])

Of course, this has ample room for improvement. We can use a heap for the wait queue or a dispatch table for events. We could also fetch return values from the StopIteration and assign them to the coroutine. However, the fundamental principle remains the same.
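As an illustration of the heap idea, here is a sketch of the same loop using heapq. The tie-breaking counter is an addition of this sketch, needed because coroutine objects are not comparable when two wake-up times are equal:

```python
import heapq
import time

class AsyncSleep:
    """Event to sleep until a point in time (as defined above)"""
    def __init__(self, until):
        self.until = until

    def __await__(self):
        yield self

def run(*coroutines):
    """Variant of ``run`` using a heap instead of repeated sorting"""
    # (wake-up time, tie-breaker, coroutine) entries; the counter keeps
    # entries with equal wake-up times comparable
    waiting = [(0, index, coroutine) for index, coroutine in enumerate(coroutines)]
    heapq.heapify(waiting)
    counter = len(waiting)
    while waiting:
        # pick and wait for the earliest wake-up in O(log n)
        until, _, coroutine = heapq.heappop(waiting)
        time.sleep(max(0.0, until - time.time()))
        try:
            command = coroutine.send(None)
        except StopIteration:
            continue
        if isinstance(command, AsyncSleep):
            heapq.heappush(waiting, (command.until, counter, coroutine))
            counter += 1

log = []

async def ticker(name):
    for i in range(2):
        await AsyncSleep(time.time() + 0.01)
        log.append(name)

run(ticker('a'), ticker('b'))
```

The observable behaviour is identical to the sorted-list version; only the cost per scheduling step changes.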

## 2.4. Cooperative Waiting

The AsyncSleep event and run event loop are a fully working implementation of timed events.

async def sleepy(identifier: str = "coroutine", count=5):
    for i in range(count):
        print(identifier, 'step', i + 1, 'at %.2f' % time.time())
        await asleep(0.1)

run(*(sleepy("coroutine %d" % j) for j in range(5)))

This cooperatively switches between each of the five coroutines, suspending each for 0.1 seconds. Even though the event loop is synchronous, it still executes the work in 0.5 seconds instead of 2.5 seconds. Each coroutine holds state and acts independently.
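The timing claim is easy to verify with a scaled-down copy of the loop (shorter delays so the check runs quickly; the 0.28 s bound is a margin below the 0.3 s a sequential run of the two coroutines would need):

```python
import time

class AsyncSleep:
    """Event to sleep until a point in time (as defined above)"""
    def __init__(self, until):
        self.until = until

    def __await__(self):
        yield self

def run(*coroutines):
    """The naive event loop from section 2.3"""
    waiting = [(0, coroutine) for coroutine in coroutines]
    while waiting:
        until, coroutine = waiting.pop(0)
        time.sleep(max(0.0, until - time.time()))
        try:
            command = coroutine.send(None)
        except StopIteration:
            continue
        if isinstance(command, AsyncSleep):
            waiting.append((command.until, coroutine))
            waiting.sort(key=lambda item: item[0])

log = []

async def sleepy(identifier, count=3):
    for i in range(count):
        log.append((identifier, i))
        await AsyncSleep(time.time() + 0.05)

start = time.time()
run(sleepy('first'), sleepy('second'))
elapsed = time.time() - start
```

Both coroutines sleep 3 × 0.05 s, yet the total run time is roughly 0.15 s because their sleeps overlap, and the log shows their steps interleaving.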

# 3. I/O event loop

An event loop that supports sleep is suitable for polling. However, waiting for I/O on a file handle can be done more efficiently: the operating system implements I/O and thus knows which handles are ready. Ideally, an event loop should support an explicit “ready for I/O” event.

## 3.1. The select call

Python already has an interface to query the OS for read I/O handles. When called with handles to read or write, it returns the handles ready to read or write:

readable, writeable, _ = select.select(rlist, wlist, xlist, timeout)

For example, we can open a file for writing and wait for it to be ready:

write_target = open('/tmp/foo', 'w')
readable, writeable, _ = select.select([], [write_target], [])

Once select returns, writeable contains our open file.
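The same readiness notification works for any file descriptor; on a POSIX system it can be observed with a plain pipe (a timeout of 0 turns select into a non-blocking poll):

```python
import os
import select

read_end, write_end = os.pipe()

# nothing written yet: with a zero timeout, select polls and the
# read end is reported as not ready
before, _, _ = select.select([read_end], [], [], 0)

os.write(write_end, b'hello')

# with data buffered in the pipe, the read end is now ready
after, _, _ = select.select([read_end], [], [], 0)

os.close(read_end)
os.close(write_end)
```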

## 3.2. Basic I/O event

Similar to the AsyncSleep request, we need to define an event for I/O. With the underlying select logic, the event must refer to a readable object – say an open file. In addition, we store how much data to read.

class AsyncRead:
    """Event to read ``amount`` bytes from ``file``"""
    def __init__(self, file, amount=1):
        self.file = file
        self.amount = amount
        self._buffer = ''

    def __await__(self):
        while len(self._buffer) < self.amount:
            yield self
            # we only get here if read should not block
            self._buffer += self.file.read(1)
        return self._buffer

    def __repr__(self):
        return '%s(file=%s, amount=%d, progress=%d)' % (
            self.__class__.__name__, self.file, self.amount, len(self._buffer)
        )

As with AsyncSleep we mostly just store the data required for the underlying system call. This time, __await__ is capable of being resumed multiple times – until our desired amount has been read. In addition, we return the I/O result instead of just resuming.
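Driving an AsyncRead by hand, with an in-memory stream standing in for a real file, makes this multi-step resumption visible (a self-contained sketch repeating the class from above):

```python
import io

class AsyncRead:
    """Event to read ``amount`` bytes from ``file`` (as defined above)"""
    def __init__(self, file, amount=1):
        self.file = file
        self.amount = amount
        self._buffer = ''

    def __await__(self):
        while len(self._buffer) < self.amount:
            yield self
            # we only get here if ``read`` should not block
            self._buffer += self.file.read(1)
        return self._buffer

async def read4(file):
    return await AsyncRead(file, 4)

coroutine = read4(io.StringIO('hello world'))
suspensions = 0
while True:
    try:
        coroutine.send(None)   # each resume reads one more character
        suspensions += 1
    except StopIteration as stop:
        result = stop.value    # the coroutine's return value
        break
```

The coroutine suspends once per missing character and finally delivers the accumulated buffer as its result.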

## 3.3. Augmenting an event loop with read I/O

The basis for our event loop is still the run defined previously. First, we need to track the read requests. This is no longer a sorted schedule: we simply map read requests to coroutines.

# new
waiting_read = {}  # type: Dict[file, coroutine]

Since select.select takes a timeout parameter, we can use it in place of time.sleep.

# old
time.sleep(max(0.0, until - time.time()))
# new
readable, _, _ = select.select(list(waiting_read), [], [], max(0.0, until - time.time()))
This gives us all readable files – if there are any, we run the corresponding coroutine. If there are none, we have waited long enough for our current coroutine to run.

# new - reschedule waiting coroutine, run readable coroutine
if readable:
    waiting.append((until, coroutine))
    waiting.sort()
    coroutine = waiting_read[readable[0]]

Finally, we have to actually listen for read requests.

# new
if isinstance(command, AsyncSleep):
    ...
elif isinstance(command, AsyncRead):
    waiting_read[command.file] = coroutine

## 3.4. Putting it together

The above was a bit of a simplification. We need to do some switching to not starve sleeping coroutines if we can always read. We need to handle having nothing to read or nothing to wait for. However, the end result still fits into 30 LOC.

def run(*coroutines):
    """Cooperatively run all coroutines until completion"""
    # map read events to coroutines waiting on them
    waiting_read = {}  # type: Dict[file, coroutine]
    # store wake-up-time and coroutines
    waiting = [(0, coroutine) for coroutine in coroutines]
    while waiting or waiting_read:
        # 2. wait until the next coroutine may run or read ...
        try:
            until, coroutine = waiting.pop(0)
        except IndexError:
            until, coroutine = float('inf'), None
            # ... or until a read request becomes ready
            readable, _, _ = select.select(list(waiting_read), [], [])
        else:
            readable, _, _ = select.select(list(waiting_read), [], [], max(0.0, until - time.time()))
        # ... and select the appropriate one
        if readable and time.time() < until:
            if until and coroutine:
                waiting.append((until, coroutine))
                waiting.sort()
            coroutine = waiting_read.pop(readable[0])
        # 3. run this coroutine
        try:
            command = coroutine.send(None)
        except StopIteration:
            continue
        # 1. sort coroutines by their desired suspension ...
        if isinstance(command, AsyncSleep):
            waiting.append((command.until, coroutine))
            waiting.sort(key=lambda item: item[0])
        # ... or register reads
        elif isinstance(command, AsyncRead):
            waiting_read[command.file] = coroutine

## 3.5. Cooperative I/O

The AsyncSleep, AsyncRead and run implementations are now fully functional to sleep and/or read. Same as for sleepy, we can define a helper to test reading:

async def reader(path, amount=1024):
    print('read', path, 'at', '%d' % time.time())
    with open(path, 'rb') as file:
        result = await AsyncRead(file, amount)
    print('done', path, 'at', '%d' % time.time())
    print('got', len(result), 'B')

run(sleepy('background', 5), reader('/dev/urandom', 1024))

Running this, we can see that our I/O is interleaved with the waiting task:

id background round 1
id background round 2
id background round 3
id background round 4
id background round 5
done /dev/urandom at 1530721148
got 1024 B

## 4. Non-Blocking I/O

While I/O on files gets the concept across, it is not really suitable for a library like asyncio: the select call always returns for files, and both open and read may block indefinitely. This blocks all coroutines of an event loop – which is bad. Libraries like aiofiles use threads and synchronization to fake non-blocking I/O and events on file.
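The "always ready" behaviour of regular files is easy to observe on a POSIX system (select on arbitrary files is not supported on Windows):

```python
import select
import tempfile

with tempfile.TemporaryFile() as file:
    # even an empty regular file counts as "ready": a read would return
    # EOF immediately instead of blocking, so select reports it readable
    readable, _, _ = select.select([file], [], [], 0)
    always_ready = readable == [file]
```

This is why a select-based loop over regular files degenerates into busy polling, and why real non-blocking behaviour needs sockets (or helper threads).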

However, sockets do allow for non-blocking I/O – and their inherent latency makes it much more critical. When used in an event loop, waiting for data and retrying can be wrapped without blocking anything.

## 4.1. Non-Blocking I/O event

Similar to our AsyncRead, we can define a suspend-and-read event for sockets. Instead of taking a file, we take a socket – which must be non-blocking. Also, our __await__ uses socket.recv instead of file.read.

class AsyncRecv:
    """Event to read ``amount`` bytes from a non-blocking ``connection``"""
    def __init__(self, connection, amount=1, read_buffer=1024):
        assert not connection.getblocking(), 'connection must be non-blocking for async recv'
        self.connection = connection
        self.amount = amount
        self.read_buffer = read_buffer
        self._buffer = b''

    def __await__(self):
        while len(self._buffer) < self.amount:
            try:
                self._buffer += self.connection.recv(self.read_buffer)
            except BlockingIOError:
                yield self
        return self._buffer

    def __repr__(self):
        return '%s(connection=%s, amount=%d, progress=%d)' % (
            self.__class__.__name__, self.connection, self.amount, len(self._buffer)
        )

In contrast to AsyncRead, __await__ performs truly non-blocking I/O. When data is available, it always reads. When no data is available, it always suspends. That means the event loop is only blocked while we perform useful work.
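The recv behaviour that AsyncRecv relies on can be seen with a plain socketpair (a sketch):

```python
import socket
import time

a, b = socket.socketpair()
b.setblocking(False)

# nothing has been sent yet: a non-blocking recv raises instead of blocking
try:
    b.recv(1024)
    raised = False
except BlockingIOError:
    raised = True

a.sendall(b'ping')
time.sleep(0.05)           # give the bytes time to arrive
received = b.recv(1024)    # data available: recv returns immediately

a.close()
b.close()
```

The BlockingIOError is precisely the signal AsyncRecv turns into a suspension, and the successful recv is the "useful work" case where it does not suspend.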

## 4.2. Un-Blocking the event loop

As far as the event loop is concerned, nothing changes much. The event to listen for is still the same as for files – a file descriptor marked ready by select.

# old
elif isinstance(command, AsyncRead):
    waiting_read[command.file] = coroutine
# new
elif isinstance(command, AsyncRead):
    waiting_read[command.file] = coroutine
elif isinstance(command, AsyncRecv):
    waiting_read[command.connection] = coroutine

At this point, it should be obvious that AsyncRead and AsyncRecv are the same kind of event. We could easily refactor them to be one event with an exchangeable I/O component. In effect, the event loop, coroutines and events cleanly separate a scheduler, arbitrary intermediate code and the actual I/O.

## 4.3. The ugly side of non-blocking I/O

In principle, what you should do at this point is replicate the logic of read as a recv for AsyncRecv. However, this is much uglier now – you have to handle early returns when functions block inside the kernel but yield control to you. For example, opening a connection is much longer than opening a file:

# file
file = open(path, 'rb')

# non-blocking socket
connection = socket.socket()
connection.setblocking(False)
# open without blocking - retry on failure
try:
    connection.connect((url, port))
except BlockingIOError:
    pass
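To sketch how such an in-progress connect completes (assuming a local listening socket as the peer): select reports a non-blocking connect as writeable once it has finished, and SO_ERROR tells whether it succeeded.

```python
import select
import socket

# a listening socket on an ephemeral localhost port stands in for the peer
server = socket.socket()
server.bind(('127.0.0.1', 0))
server.listen(1)

connection = socket.socket()
connection.setblocking(False)
try:
    connection.connect(server.getsockname())
except BlockingIOError:
    pass   # the connect is now in progress in the background

# completion is signalled by the socket becoming writeable ...
_, writeable, _ = select.select([], [connection], [], 5)
# ... and SO_ERROR reports whether it actually succeeded
error = connection.getsockopt(socket.SOL_SOCKET, socket.SO_ERROR)

connection.close()
server.close()
```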

Long story short, what remains is a few dozen lines of Exception handling. The events and event loop already work at this point.

id background round 1
done localhost:25000 at 1530783569 got 32768 B
id background round 2
id background round 3
id background round 4
done /dev/urandom at 1530783569 got 4096 B
id background round 5

Example code at github

## 回答 2


Your coro desugaring is conceptually correct, but slightly incomplete.

await doesn’t suspend unconditionally, but only if it encounters a blocking call. How does it know that a call is blocking? This is decided by the code being awaited. For example, an awaitable implementation of socket read could be desugared to:

def read(sock, n):
    # sock must be in non-blocking mode
    try:
        return sock.recv(n)
    except EWOULDBLOCK:
        return SUSPEND

In real asyncio the equivalent code modifies the state of a Future instead of returning magic values, but the concept is the same. When appropriately adapted to a generator-like object, the above code can be awaited.
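A minimal sketch of such an adaptation (the SUSPEND marker and the ReadWrapper name are made up here): an object whose __await__ loops "try to read, otherwise yield" behaves exactly like the desugared code when driven with send.

```python
import socket
import time

SUSPEND = object()   # marker handed to whoever drives the coroutine

class ReadWrapper:
    """Awaitable wrapper around a poll-style socket read (a sketch)"""
    def __init__(self, sock, n):
        self.sock = sock
        self.n = n

    def __await__(self):
        while True:
            try:
                return self.sock.recv(self.n)   # data ready: return it
            except BlockingIOError:
                yield SUSPEND                   # would block: suspend

a, b = socket.socketpair()
b.setblocking(False)

async def reader():
    return await ReadWrapper(b, 1024)

coroutine = reader()
first = coroutine.send(None)    # nothing to read yet: the coroutine suspends

a.sendall(b'hi')
time.sleep(0.05)
try:
    coroutine.send(None)        # resumed: data is available now
    result = None
except StopIteration as stop:
    result = stop.value

a.close()
b.close()
```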

On the caller side, when your coroutine contains:

data = await read(sock, n)

It desugars into something close to:

data = read(sock, n)
if data is SUSPEND:
    return SUSPEND
self.pos += 1
self.parts[self.pos](...)

People familiar with generators tend to describe the above in terms of yield from which does the suspension automatically.

The suspension chain continues all the way up to the event loop, which notices that the coroutine is suspended, removes it from the runnable set, and goes on to execute coroutines that are runnable, if any. If no coroutines are runnable, the loop waits in select() until a file descriptor a coroutine is interested in becomes ready for IO. (The event loop maintains a file-descriptor-to-coroutine mapping.)

In the above example, once select() tells the event loop that sock is readable, it will re-add coro to the runnable set, so it will be continued from the point of suspension.

In other words:

1. Everything happens in the same thread by default.

2. The event loop is responsible for scheduling the coroutines and waking them up when whatever they were waiting for (typically an IO call that would normally block, or a timeout) becomes ready.

For insight on coroutine-driving event loops, I recommend this talk by Dave Beazley, where he demonstrates coding an event loop from scratch in front of live audience.

## 回答 3


It all boils down to the two main challenges that asyncio is addressing:

• How to perform multiple I/O in a single thread?
• How to implement cooperative multitasking?

The answer to the first point has been around for a long while and is called a select loop. In python, it is implemented in the selectors module.

The second question is related to the concept of coroutine, i.e. functions that can stop their execution and be restored later on. In python, coroutines are implemented using generators and the yield from statement. That’s what is hiding behind the async/await syntax.

The closest equivalent to a goroutine in asyncio is actually not a coroutine but a task (see the difference in the documentation). In python, a coroutine (or a generator) knows nothing about the concepts of event loop or I/O. It simply is a function that can stop its execution using yield while keeping its current state, so it can be restored later on. The yield from syntax allows for chaining them in a transparent way.
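The chaining can be seen with plain generators (a minimal sketch unrelated to sockets): the inner generator suspends, the outer one forwards that suspension via yield from, and resumption flows back down transparently.

```python
def inner():
    # suspend by yielding; the value sent back in becomes ``result``
    result = yield 'suspended'
    return result * 2

def outer():
    # ``yield from`` forwards inner's suspension and resumption transparently
    value = yield from inner()
    return value + 1

generator = outer()
first = next(generator)       # inner's yield surfaces through outer

try:
    generator.send(10)
    final = None
except StopIteration as stop:
    final = stop.value        # outer's return value
```

The driver only ever talks to outer, yet it receives inner's yielded value and inner receives the sent value – exactly the transparency await builds on.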

Now, within an asyncio task, the coroutine at the very bottom of the chain always ends up yielding a future. This future then bubbles up to the event loop, and gets integrated into the inner machinery. When the future is set to done by some other inner callback, the event loop can restore the task by sending the future back into the coroutine chain.

How does I/O actually happen in this scenario? In a separate thread? Is the whole interpreter suspended and I/O happens outside the interpreter?

No, nothing happens in a thread. I/O is always managed by the event loop, mostly through file descriptors. However the registration of those file descriptors is usually hidden by high-level coroutines, making the dirty work for you.

What exactly is meant by I/O? If my python procedure called C open() procedure, and it in turn sent interrupt to kernel, relinquishing control to it, how does Python interpreter know about this and is able to continue running some other code, while kernel code does the actual I/O and until it wakes up the Python procedure which sent the interrupt originally? How can Python interpreter in principle, be aware of this happening?

An I/O is any blocking call. In asyncio, all the I/O operations should go through the event loop, because as you said, the event loop has no way to be aware that a blocking call is being performed in some synchronous code. That means you’re not supposed to use a synchronous open within the context of a coroutine. Instead, use a dedicated library such as aiofiles which provides an asynchronous version of open.

# virtualenvwrapper和Python 3

## 问题：virtualenvwrapper和Python 3


I installed python 3.3.1 on ubuntu lucid and successfully created a virtualenv as below

virtualenv envpy331 --python=/usr/local/bin/python3.3

this created a folder envpy331 on my home dir.

I also have virtualenvwrapper installed. But in its docs only the 2.4-2.7 versions of Python are supported. Has anyone tried to organize virtualenvs for Python 3? If so, can you tell me how?

## 回答 0


The latest version of virtualenvwrapper is tested under Python 3.2. Chances are good it will work with Python 3.3 too.

## 回答 1


If you already have python3 installed as well virtualenvwrapper the only thing you would need to do to use python3 with the virtual environment is creating an environment using:

which python3 #Output: /usr/bin/python3
mkvirtualenv --python=/usr/bin/python3 nameOfEnvironment

Or, (at least on OSX using brew):

mkvirtualenv --python=`which python3` nameOfEnvironment

Start using the environment and you’ll see that as soon as you type python you’ll start using python3

## 回答 2

You can make virtualenvwrapper use a custom Python binary instead of the one virtualenvwrapper is run with. To do that you need to use the VIRTUALENV_PYTHON variable, which is utilized by virtualenv:

$ export VIRTUALENV_PYTHON=/usr/bin/python3
$ mkvirtualenv -a myproject myenv
Running virtualenv with interpreter /usr/bin/python3
New python executable in myenv/bin/python3
Also creating executable in myenv/bin/python
(myenv)$ python
Python 3.2.3 (default, Oct 19 2012, 19:53:16)
[GCC 4.7.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.

## 回答 3


virtualenvwrapper now lets you specify the python executable without the path.

So (on OSX at least) mkvirtualenv --python=python3 nameOfEnvironment will suffice.

## 回答 4

On Ubuntu, using mkvirtualenv -p python3 env_name creates the virtualenv with python3.

Inside the env, use python --version to verify.

## 回答 5

alias mkvirtualenv3='mkvirtualenv --python=`which python3`'

Then use mkvirtualenv3 instead of mkvirtualenv when you want to create a python 3 environment.

## 回答 6


I find that running

export VIRTUALENVWRAPPER_PYTHON=/usr/bin/python3

and

export VIRTUALENVWRAPPER_VIRTUALENV=/usr/bin/virtualenv-3.4

in the command line on Ubuntu forces mkvirtualenv to use python3 and virtualenv-3.4. One still has to do

mkvirtualenv --python=/usr/bin/python3 nameOfEnvironment

to create the environment. This is assuming that you have python3 in /usr/bin/python3 and virtualenv-3.4 in /usr/bin/virtualenv-3.4.

## 回答 7

This post on the bitbucket issue tracker of virtualenvwrapper may be of interest. It is mentioned there that most of virtualenvwrapper’s functions work with the venv virtual environments in Python 3.3.

## 回答 8

I added export VIRTUALENV_PYTHON=/usr/bin/python3 to my ~/.bashrc like this:

export WORKON_HOME=$HOME/.virtualenvs
export VIRTUALENV_PYTHON=/usr/bin/python3
source /usr/local/bin/virtualenvwrapper.sh

then run source .bashrc

and you can specify the python version for each new env: mkvirtualenv --python=python2 env_name

# Django模型“未声明显式的app_label”

## 问题：Django模型“未声明显式的app_label”


I’m at wit’s end. After a dozen hours of troubleshooting, probably more, I thought I was finally in business, but then I got:

Model class django.contrib.contenttypes.models.ContentType doesn't declare an explicit app_label

There is SO LITTLE info on this on the web, and no solution out there has resolved my issue. Any advice would be tremendously appreciated.

I’m using Python 3.4 and Django 1.10.

From my settings.py:

INSTALLED_APPS = [
    'DeleteNote.apps.DeletenoteConfig',
    'LibrarySync.apps.LibrarysyncConfig',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
]

And my apps.py files look like this:

from django.apps import AppConfig

class DeletenoteConfig(AppConfig):
    name = 'DeleteNote'

and

from django.apps import AppConfig

class LibrarysyncConfig(AppConfig):
    name = 'LibrarySync'

## 回答 0


Are you missing putting your application name into the settings file? The myAppNameConfig is the default class generated in apps.py by the ./manage.py startapp myAppName command, where myAppName is the name of your app.

settings.py

INSTALLED_APPS = [
    'myAppName.apps.myAppNameConfig',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
]

This way, the settings file finds out what you want to call your application. You can change how it looks later in the apps.py file by adding the following code in

myAppName/apps.py

from django.apps import AppConfig

class myAppNameConfig(AppConfig):
    name = 'myAppName'
    verbose_name = 'A Much Better Name'

## 回答 1


I got the same error and didn't know how to figure out the problem. It took me many hours to notice that I had an __init__.py in the same directory as Django's manage.py.

Before:

|-- myproject
    |-- __init__.py
    |-- manage.py
    |-- myproject
        |-- ...
    |-- app1
        |-- models.py
    |-- app2
        |-- models.py

After:

|-- myproject
    |-- manage.py
    |-- myproject
        |-- ...
    |-- app1
        |-- models.py
    |-- app2
        |-- models.py

It is quite confusing that you get this “doesn’t declare an explicit app_label” error for it. But deleting this __init__.py file solved my problem.

## 回答 2

I had exactly the same error when running tests with PyCharm. I’ve fixed it by explicitly setting DJANGO_SETTINGS_MODULE environment variable. If you’re using PyCharm, just hit Edit Configurations button and choose Environment Variables.

Set the variable to your_project_name.settings and that should fix the thing.

It seems like this error occurs because PyCharm runs tests with its own manage.py.

## 回答 3


I got this one when I used ./manage.py shell then I accidentally imported from the root project level directory

# don't do this
from project.someapp.someModule import something_using_a_model
# do this
from someapp.someModule import something_using_a_model

something_using_a_model()

## 回答 4


As a noob using Python 3, I found it might be an import error instead of a Django error.

wrong:

from someModule import someClass

right:

from .someModule import someClass

This happened a few days ago but I really can’t reproduce it… I think only people new to Django may encounter this. Here’s what I remember:

try to register a model in admin.py:

from user import User

try to run server, error looks like this

some lines...
tell you there is an import error
some lines...
Model class django.contrib.contenttypes.models.ContentType doesn't declare an explicit app_label

Change user to .user and the problem is solved.

## 回答 5


I had the same problem just now. I’ve fixed mine by adding a namespace on the app name. Hope someone find this helpful.

apps.py

from django.apps import AppConfig

class SalesClientConfig(AppConfig):
    name = 'portal.sales_client'
    verbose_name = 'Sales Client'

## 回答 6


I got this error on importing models in tests, i.e. given this Django project structure:

|-- myproject
    |-- manage.py
    |-- myproject
    |-- myapp
        |-- models.py  # defines model: MyModel
        |-- tests
            |-- test_models.py

in file test_models.py I imported MyModel in this way:

from models import MyModel

The problem was fixed if it is imported in this way:

from myapp.models import MyModel

Hope this helps!

PS: Maybe this is a bit late, but I not found in others answers how to solve this problem in my code and I want to share my solution.

## 回答 7


After repeatedly running into this issue and coming back to this question, I thought I’d share what my problem was.

Everything that @Xeberdee said is correct, so follow that and see if it solves the issue. If not, this was my problem:

In my apps.py this is what I had:

class AlgoExplainedConfig(AppConfig):
    name = 'algo_explained'
    verbose_name = "Explain_Algo"
    ....

And all I did was I added the project name in front of my app name like this:

class AlgoExplainedConfig(AppConfig):
    name = 'algorithms_explained.algo_explained'
    verbose_name = "Explain_Algo"

That solved my problem, and I was able to run the makemigrations and migrate commands after that. Good luck!

## 回答 8


I had this error today trying to run Django tests because I was using the shorthand from .models import * syntax in one of my files. The issue was that I had a file structure like so:

apps/
    myapp/
        models/
            __init__.py
            foo.py
            bar.py

and in models/__init__.py I was importing my models using the shorthand syntax:

from .foo import *
from .bar import *

In my application I was importing models like so:

from myapp.models import Foo, Bar

This caused the Django “doesn't declare an explicit app_label” error when running ./manage.py test.

To fix the problem, I had to explicitly import from the full path in models/__init__.py:

from myapp.models.foo import *
from myapp.models.bar import *

That took care of the error.

## 回答 9


In my case, this was happening because I used relative module paths in the project-level urls.py, INSTALLED_APPS and apps.py instead of rooting them in the project root, i.e. absolute module paths throughout rather than relative module paths plus hacks.

No matter how much I messed with the paths in INSTALLED_APPS and apps.py in my app, I couldn’t get both runserver and pytest to work until all three of those were rooted in the project root.

Folder structure:

|-- manage.py
|-- config
    |-- settings.py
    |-- urls.py
|-- biz_portal
    |-- apps
        |-- portal
            |-- models.py
            |-- urls.py
            |-- views.py
            |-- apps.py

With the following, I could run manage.py runserver and gunicorn with wsgi and use portal app views without trouble, but pytest would error with ModuleNotFoundError: No module named 'apps' despite DJANGO_SETTINGS_MODULE being configured correctly.

config/settings.py:

INSTALLED_APPS = [
    ...
    "apps.portal.apps.PortalConfig",
]

biz_portal/apps/portal/apps.py:

class PortalConfig(AppConfig):
    name = 'apps.portal'

config/urls.py:

urlpatterns = [
    path('', include('apps.portal.urls')),
    ...
]

Changing the app reference in config/settings.py to biz_portal.apps.portal.apps.PortalConfig and PortalConfig.name to biz_portal.apps.portal allowed pytest to run (I don’t have tests for portal views yet) but runserver would error with

RuntimeError: Model class apps.portal.models.Business doesn’t declare an explicit app_label and isn’t in an application in INSTALLED_APPS

Finally I grepped for apps.portal to see what’s still using a relative path, and found that config/urls.py should also use biz_portal.apps.portal.urls.

## 回答 10


I ran into this error when I tried generating migrations for a single app which had existing malformed migrations due to a git merge. e.g.

manage.py makemigrations myapp

When I deleted its migrations and then ran:

manage.py makemigrations

the error did not occur and the migrations generated successfully.

## 回答 11


I had a similar issue, but I was able to solve mine by explicitly specifying the app_label using a Meta class in my model class:

class Meta:
    app_label = 'name_of_my_app'

## 回答 12


I got this error while trying to upgrade my Django Rest Framework app to DRF 3.6.3 and Django 1.11.1.

For anyone else in this situation, I found my solution in a GitHub issue, which was to unset the UNAUTHENTICATED_USER setting in the DRF settings:

# webapp/settings.py
...
REST_FRAMEWORK = {
    ...
    'UNAUTHENTICATED_USER': None
    ...
}

## 回答 13


I just ran into this issue and figured out what was going wrong. Since no previous answer described the issue as it happened to me, I though I would post it for others:

• the issue came from using python manage.py startapp myApp from my project root folder, then moving myApp to a child folder with mv myApp myFolderWithApps/.
• I wrote myApp.models and ran python manage.py makemigrations. All went well.
• then I did the same with another app that was importing models from myApp. Kaboom! I ran into this error while performing makemigrations. That was because I had to use myFolderWithApps.myApp to reference my app, but I had forgotten to update myApp/apps.py. So I corrected myApp/apps.py, settings/INSTALLED_APPS and the import path in my second app.
• but then the error kept happening: the reason was that I had migrations trying to import the models from myApp with the wrong path. I tried to correct the migration files, but I got to the point where it was easier to reset the DB and delete the migrations to start from scratch.

So, to make a long story short: the issue initially came from the wrong app name in myApp's apps.py, in settings, and in the import path of my second app. But correcting the paths in these three places was not enough, because the migrations had been created with imports referencing the wrong app name. Therefore the same error kept happening while migrating (except this time from the migrations).

So… check your migrations, and good luck!
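As a sketch of what the corrected apps.py would look like (all names here are the hypothetical ones from this answer; the matching INSTALLED_APPS entry must use the same dotted path):

```python
# myFolderWithApps/myApp/apps.py -- hypothetical layout from this answer
from django.apps import AppConfig

class MyAppConfig(AppConfig):
    # Once the app lives in a subfolder, `name` must be the full
    # dotted path, matching the entry in INSTALLED_APPS.
    name = 'myFolderWithApps.myApp'
```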

## 回答 14

I got a similar error while building an API with Django REST framework.

RuntimeError: Model class apps.core.models.University doesn’t declare an explicit app_label and isn’t in an application in INSTALLED_APPS.

luke_aus’s answer helped me by correcting my urls.py

from

from project.apps.views import SurgeryView

to

from apps.views import SurgeryView

## 回答 15

In my case, I got this error when porting code from Django 1.11.11 to Django 2.2. I was defining a custom FileSystemStorage-derived class. In Django 1.11.11 I had the following line in models.py:

from django.core.files.storage import Storage, DefaultStorage

and later in the file I had the class definition:

class MyFileStorage(FileSystemStorage):

However, in Django 2.2 I need to explicitly reference FileSystemStorage class when importing:

from django.core.files.storage import Storage, DefaultStorage, FileSystemStorage

and voilà! The error disappears.

Note that everyone is reporting only the last part of the error message printed by the Django server. However, if you scroll up, you will find the real reason in the middle of that error mumbo-jumbo.

## 回答 16

In my case I was able to find a fix, and looking at everyone else’s code it may be the same issue: I simply had to add 'django.contrib.sites' to the list of installed apps in the settings.py file.

Hope this helps someone. This is my first contribution to the coding community.
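For reference, the change described in this answer is a one-line addition to settings.py (the surrounding entries are just the usual defaults, shown for context):

```python
# settings.py
INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sites',  # the entry that was missing in this case
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    # ... your own apps ...
]
```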

## 回答 17

TL;DR: Adding a blank __init__.py fixed the issue for me.

I got this error in PyCharm and realised that my settings file was not being imported at all. There was no obvious error telling me this, but when I put some nonsense code into the settings.py, it didn’t cause an error.

I had settings.py inside a local_settings folder. However, I’d forgotten to include an __init__.py in the same folder to allow it to be imported. Once I’d added this, the error went away.
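A minimal sketch of the fix (the folder name local_settings is taken from this answer; run it from the directory that contains the folder):

```shell
# An empty __init__.py marks the folder as an importable Python package
mkdir -p local_settings
touch local_settings/__init__.py
```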

## 回答 18

If you have got all the config right, it might just be an import mess. Keep an eye on how you are importing the offending model.

The following won’t work: from .models import Business. Use the full import path instead: from myapp.models import Business

## 回答 19

If all else fails, and if you are seeing this error while trying to import in a PyCharm “Python console” (or “Django console”):

Try restarting the console.

This is pretty embarrassing, but it took me a while before I realized I had forgotten to do that.

Here’s what happened:

I added a fresh app, then added a minimal model, then tried to import the model in the Python/Django console (PyCharm Pro 2019.2). This raised the doesn't declare an explicit app_label error, because I had not added the new app to INSTALLED_APPS. So I added the app to INSTALLED_APPS and tried the import again, but still got the same error.

Came here, read all the other answers, but nothing seemed to fit.

Finally it hit me that I had not yet restarted the Python console after adding the new app to INSTALLED_APPS.

Note: failing to restart the PyCharm Python console after adding a new object to a module is also a great way to get a very confusing ImportError: Cannot import name ...

## 回答 20

O…M…G I was getting this error too, and I spent almost 2 days on it before I finally managed to solve it. Honestly… the error had nothing to do with what the problem was. In my case it was a simple matter of syntax. I was trying to run a Python module standalone that used some Django models in a Django context, but the module itself wasn’t a Django model. I was declaring the class wrong: instead of

class Scrapper:
    name = ""
    ...

I was doing

class Scrapper(Website):
    name = ""
    ...

which is obviously wrong. The message is so misleading that I couldn’t help but think it was some issue with configuration, or that I was just using Django the wrong way, since I’m very new to it.

I’ll share this here so that any newbie like me going through the same silliness can hopefully solve their issue.

## 回答 21

I received this error after I moved SECRET_KEY to be pulled from an environment variable and forgot to set it when running the application. If you have something like this in your settings.py:

SECRET_KEY = os.getenv('SECRET_KEY')

then make sure you are actually setting the environment variable.
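A small illustration of why this fails silently, plus a fail-fast alternative (the placeholder value and the insecure variable name are mine, not part of the original answer):

```python
import os

# Simulate the forgotten environment variable
os.environ.pop("SECRET_KEY", None)

# os.getenv() quietly returns None when the variable is unset, so
# SECRET_KEY becomes None and Django only fails later, confusingly
insecure = os.getenv("SECRET_KEY")
print(insecure)  # None

# Fail-fast alternative: indexing raises KeyError immediately if unset
os.environ["SECRET_KEY"] = "dev-only-placeholder"
SECRET_KEY = os.environ["SECRET_KEY"]
print(SECRET_KEY)  # dev-only-placeholder
```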

## 回答 22

Most probably you have circular imports.

In my case I used a serializer class as a parameter in my model, and the serializer class was using this model: serializer_class = AccountSerializer

from ..api.serializers import AccountSerializer

class Account(AbstractBaseUser):
    serializer_class = AccountSerializer
    ...

And in the “serializers” file:

from ..models import Account

class AccountSerializer(serializers.ModelSerializer):
    class Meta:
        model = Account
        fields = (
            'id', 'email', 'date_created', 'date_modified',
            ...
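One way to break such a cycle (a self-contained sketch with made-up module names, not the poster’s actual code) is to defer one of the two imports to function level, so it only runs when called rather than at import time:

```python
import os
import sys
import tempfile
import textwrap

# Write two tiny modules to a temp dir to simulate the app layout
src = tempfile.mkdtemp()
sys.path.insert(0, src)

with open(os.path.join(src, "models_demo.py"), "w") as f:
    f.write(textwrap.dedent("""\
        def get_serializer():
            # Deferred import: runs on call, not at module import time,
            # so the models/serializers cycle never triggers.
            from serializers_demo import AccountSerializer
            return AccountSerializer

        class Account:
            name = "account"
        """))

with open(os.path.join(src, "serializers_demo.py"), "w") as f:
    f.write(textwrap.dedent("""\
        from models_demo import Account  # safe: no cycle at import time

        class AccountSerializer:
            model = Account
        """))

import models_demo

serializer_cls = models_demo.get_serializer()
print(serializer_cls.model.name)  # account
```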

## 回答 23

I got this error today and ended up here after googling. None of the existing answers seemed relevant to my situation. In my case, the culprit was importing a model from the __init__.py file at the top level of an app; I had to move those imports into the functions that use the model.

Django seems to have some weird code that can fail like this in so many different scenarios!

## 回答 24

I also got this error today. The message referenced one specific app among my apps in INSTALLED_APPS, but in fact it had nothing to do with that app. I had used a new virtual environment and forgotten to install some libraries that I used in this project. After I installed the additional libraries, it worked.

## 回答 25

For PyCharm users: I hit this error because my project structure was not “clean”.

Was:

project_root_directory
└── src
├── chat
│   ├── migrations
│   └── templates
├── django_channels
└── templates

Now:

project_root_directory
├── chat
│   ├── migrations
│   └── templates
│       └── chat
├── django_channels
└── templates

There are a lot of good solutions here, but I think that, first of all, you should clean up your project structure or tune PyCharm’s Django settings before fiddling with DJANGO_SETTINGS_MODULE variables and so on.

Hope it’ll help someone. Cheers.

## 回答 26

The issue is that:

1. You have made modifications to your models file but have not yet applied them to the DB, and you are trying to run python manage.py runserver.

2. Run python manage.py makemigrations

3. python manage.py migrate

4. Now run python manage.py runserver and all should be fine.