标签归档:deployment

如何为多个环境自定义requirements.txt?

问题:如何为多个环境自定义requirements.txt?

我有两个分支:开发(Development)和生产(Production)。每个分支都有各自的依赖项,其中一些并不相同。开发分支指向一些本身也处于开发中的依赖项,生产分支亦然。我需要部署到Heroku,而它要求每个分支的依赖项都写在一个名为“requirements.txt”的文件中。

最好的组织方式是什么?

我想到的是:

  • 维护单独的需求文件,每个分支中一个(必须在频繁合并中生存!)
  • 告诉Heroku我要使用哪个需求文件(环境变量?)
  • 编写部署脚本(创建临时分支,修改需求文件,提交,部署,删除临时分支)

I have two branches, Development and Production. Each has dependencies, some of which are different. Development points to dependencies that are themselves in development. Likewise for Production. I need to deploy to Heroku which expects each branch’s dependencies in a single file called ‘requirements.txt’.

What is the best way to organize?

What I’ve thought of:

  • Maintain separate requirements files, one in each branch (must survive frequent merges!)
  • Tell Heroku which requirements file I want to use (environment variable?)
  • Write deploy scripts (create temp branch, modify requirements file, commit, deploy, delete temp branch)

回答 0

您可以级联需求文件,并使用“ -r”标志告诉pip将一个文件的内容包含在另一个文件中。您可以将需求分解成模块化的文件夹层次结构,如下所示:

`-- django_project_root
|-- requirements
|   |-- common.txt
|   |-- dev.txt
|   `-- prod.txt
`-- requirements.txt

文件的内容如下所示:

common.txt:

# Contains requirements common to all environments
req1==1.0
req2==1.0
req3==1.0
...

dev.txt:

# Specifies only dev-specific requirements
# But imports the common ones too
-r common.txt
dev_req==1.0
...

prod.txt:

# Same for prod...
-r common.txt
prod_req==1.0
...

在Heroku之外,您现在可以设置如下环境:

pip install -r requirements/dev.txt

要么

pip install -r requirements/prod.txt

由于Heroku在项目根目录中专门查找“ requirements.txt”,因此应仅镜像prod,如下所示:

requirements.txt:

# Mirrors prod
-r requirements/prod.txt

You can cascade your requirements files and use the “-r” flag to tell pip to include the contents of one file inside another. You can break out your requirements into a modular folder hierarchy like this:

`-- django_project_root
|-- requirements
|   |-- common.txt
|   |-- dev.txt
|   `-- prod.txt
`-- requirements.txt

The files’ contents would look like this:

common.txt:

# Contains requirements common to all environments
req1==1.0
req2==1.0
req3==1.0
...

dev.txt:

# Specifies only dev-specific requirements
# But imports the common ones too
-r common.txt
dev_req==1.0
...

prod.txt:

# Same for prod...
-r common.txt
prod_req==1.0
...

Outside of Heroku, you can now setup environments like this:

pip install -r requirements/dev.txt

or

pip install -r requirements/prod.txt

Since Heroku looks specifically for “requirements.txt” at the project root, it should just mirror prod, like this:

requirements.txt:

# Mirrors prod
-r requirements/prod.txt

回答 1

一个在最初发布这个问题和答案时还不存在、但如今可行的选择,是使用pipenv而不是pip来管理依赖项。

使用pipenv后,不再需要像使用pip那样手动维护两个单独的需求文件,而是通过命令行操作,由pipenv自己来管理开发和生产软件包。

要安装用于生产和开发的软件包:

pipenv install <package>

要仅为开发环境安装软件包:

pipenv install <package> --dev

通过这些命令,pipenv在两个文件(Pipfile和Pipfile.lock)中存储和管理环境配置。Heroku当前的Python buildpack本机支持pipenv,如果存在Pipfile.lock而不是requirements.txt,它将从Pipfile.lock进行配置。

有关该工具的完整文档,请参见pipenv链接。

A viable option today which didn’t exist when the original question and answer was posted is to use pipenv instead of pip to manage dependencies.

With pipenv, manually managing two separate requirement files like with pip is no longer necessary, and instead pipenv manages the development and production packages itself via interactions on the command line.

To install a package for use in both production and development:

pipenv install <package>

To install a package for the development environment only:

pipenv install <package> --dev

Via those commands, pipenv stores and manages the environment configuration in two files (Pipfile and Pipfile.lock). Heroku’s current Python buildpack natively supports pipenv and will configure itself from Pipfile.lock if it exists instead of requirements.txt.

See the pipenv link for full documentation of the tool.
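For illustration, the Pipfile that those commands maintain might look roughly like this (the package names and Python version here are placeholders, not from the original answer):

[[source]]
url = "https://pypi.org/simple"
verify_ssl = true
name = "pypi"

[packages]
flask = "*"

[dev-packages]
pytest = "*"

[requires]
python_version = "3.7"

Running pipenv lock (or any pipenv install) then regenerates Pipfile.lock with exact pinned versions, which is the file Heroku reads.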


回答 2

如果您的要求是能够在同一台计算机上的环境之间进行切换,则可能有必要为需要切换到的每个环境创建不同的virtualenv文件夹。

python3 -m venv venv_dev
source venv_dev/bin/activate
pip install -r pip/common.txt
pip install -r pip/dev.txt
deactivate
python3 -m venv venv_prod
source venv_prod/bin/activate
pip install -r pip/common.txt
deactivate
source venv_dev/bin/activate
# now we are in dev environment so your code editor and build systems will work.

# let's install a new dev package:
# pip install awesome
# pip freeze > pip/temp.txt
# find that package, put it into pip/dev.txt
# rm pip/temp.txt

# pretty cumbersome, but it works. 

If your requirement is to be able to switch between environments on the same machine, it may be necessary to create different virtualenv folders for each environment you need to switch to.

python3 -m venv venv_dev
source venv_dev/bin/activate
pip install -r pip/common.txt
pip install -r pip/dev.txt
deactivate
python3 -m venv venv_prod
source venv_prod/bin/activate
pip install -r pip/common.txt
deactivate
source venv_dev/bin/activate
# now we are in dev environment so your code editor and build systems will work.

# let's install a new dev package:
# pip install awesome
# pip freeze > pip/temp.txt
# find that package, put it into pip/dev.txt
# rm pip/temp.txt

# pretty cumbersome, but it works. 
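If you rebuild these environments often, a small helper script can automate the steps above; this is only a sketch assuming the same pip/ requirements layout:

#!/usr/bin/env bash
# rebuild_envs.sh -- recreate both virtualenvs from the requirements files (sketch)
set -e
rm -rf venv_dev venv_prod
python3 -m venv venv_dev
venv_dev/bin/pip install -r pip/common.txt -r pip/dev.txt
python3 -m venv venv_prod
venv_prod/bin/pip install -r pip/common.txt
echo "Done. Activate one with: source venv_dev/bin/activate (or venv_prod)"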

在Docker中部署最小化Flask应用-服务器连接问题

问题:在Docker中部署最小化Flask应用-服务器连接问题

我有一个应用程序,它唯一的依赖是Flask;它在docker之外运行正常,并绑定到默认端口5000。完整源代码如下:

from flask import Flask

app = Flask(__name__)
app.debug = True

@app.route('/')
def main():
    return 'hi'

if __name__ == '__main__':
    app.run()

问题是,当我在docker中部署此服务器时,服务器正在运行,但无法从容器外部访问。

以下是我的Dockerfile。镜像是安装了Flask的Ubuntu。tar包中只包含上面列出的index.py;

# Dockerfile
FROM dreen/flask
MAINTAINER dreen
WORKDIR /srv

# Get source
RUN mkdir -p /srv
COPY perfektimprezy.tar.gz /srv/perfektimprezy.tar.gz
RUN tar x -f perfektimprezy.tar.gz
RUN rm perfektimprezy.tar.gz

# Run server
EXPOSE 5000
CMD ["python", "index.py"]

这是我正在部署的步骤

$> sudo docker build -t perfektimprezy .

据我所知,上面的构建运行正常,镜像的/srv目录中包含了tar包的内容。现在,让我们在容器中启动服务器:

$> sudo docker run -i -p 5000:5000 -d perfektimprezy
1c50b67d45b1a4feade72276394811c8399b1b95692e0914ee72b103ff54c769

它真的在运行吗?

$> sudo docker ps
CONTAINER ID        IMAGE                   COMMAND             CREATED             STATUS              PORTS                    NAMES
1c50b67d45b1        perfektimprezy:latest   "python index.py"   5 seconds ago       Up 5 seconds        0.0.0.0:5000->5000/tcp   loving_wozniak

$> sudo docker logs 1c50b67d45b1
 * Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
 * Restarting with stat

是的,好像flask服务器正在运行。这是奇怪的地方。让我们向服务器发出请求:

 $> curl 127.0.0.1:5000 -v
 * Rebuilt URL to: 127.0.0.1:5000/
 * Hostname was NOT found in DNS cache
 *   Trying 127.0.0.1...
 * Connected to 127.0.0.1 (127.0.0.1) port 5000 (#0)
 > GET / HTTP/1.1
 > User-Agent: curl/7.35.0
 > Host: 127.0.0.1:5000
 > Accept: */*
 >
 * Empty reply from server
 * Connection #0 to host 127.0.0.1 left intact
 curl: (52) Empty reply from server

空回复…但是该流程正在运行吗?

$> sudo docker top 1c50b67d45b1
UID                 PID                 PPID                C                   STIME               TTY                 TIME                CMD
root                2084                812                 0                   10:26               ?                   00:00:00            python index.py
root                2117                2084                0                   10:26               ?                   00:00:00            /usr/bin/python index.py

现在,让我们进入服务器并检查…

$> sudo docker exec -it 1c50b67d45b1 bash
root@1c50b67d45b1:/srv# netstat -an
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 127.0.0.1:5000          0.0.0.0:*               LISTEN
tcp        0      0 127.0.0.1:47677         127.0.0.1:5000          TIME_WAIT
Active UNIX domain sockets (servers and established)
Proto RefCnt Flags       Type       State         I-Node   Path
root@1c50b67d45b1:/srv# curl -I 127.0.0.1:5000
HTTP/1.0 200 OK
Content-Type: text/html; charset=utf-8
Content-Length: 5447
Server: Werkzeug/0.10.4 Python/2.7.6
Date: Tue, 19 May 2015 12:18:14 GMT

在容器内部一切正常……但从外面就不行 :( 我做错了什么?

I have an app whose only dependency is flask, which runs fine outside docker and binds to the default port 5000. Here is the full source:

from flask import Flask

app = Flask(__name__)
app.debug = True

@app.route('/')
def main():
    return 'hi'

if __name__ == '__main__':
    app.run()

The problem is that when I deploy this in docker, the server is running but is unreachable from outside the container.

Below is my Dockerfile. The image is ubuntu with flask installed. The tar just contains the index.py listed above;

# Dockerfile
FROM dreen/flask
MAINTAINER dreen
WORKDIR /srv

# Get source
RUN mkdir -p /srv
COPY perfektimprezy.tar.gz /srv/perfektimprezy.tar.gz
RUN tar x -f perfektimprezy.tar.gz
RUN rm perfektimprezy.tar.gz

# Run server
EXPOSE 5000
CMD ["python", "index.py"]

Here are the steps I am doing to deploy

$> sudo docker build -t perfektimprezy .

As far as I know the above runs fine, the image has the contents of the tar in /srv. Now, let’s start the server in a container:

$> sudo docker run -i -p 5000:5000 -d perfektimprezy
1c50b67d45b1a4feade72276394811c8399b1b95692e0914ee72b103ff54c769

Is it actually running?

$> sudo docker ps
CONTAINER ID        IMAGE                   COMMAND             CREATED             STATUS              PORTS                    NAMES
1c50b67d45b1        perfektimprezy:latest   "python index.py"   5 seconds ago       Up 5 seconds        0.0.0.0:5000->5000/tcp   loving_wozniak

$> sudo docker logs 1c50b67d45b1
 * Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
 * Restarting with stat

Yep, seems like the flask server is running. Here is where it gets weird. Let's make a request to the server:

 $> curl 127.0.0.1:5000 -v
 * Rebuilt URL to: 127.0.0.1:5000/
 * Hostname was NOT found in DNS cache
 *   Trying 127.0.0.1...
 * Connected to 127.0.0.1 (127.0.0.1) port 5000 (#0)
 > GET / HTTP/1.1
 > User-Agent: curl/7.35.0
 > Host: 127.0.0.1:5000
 > Accept: */*
 >
 * Empty reply from server
 * Connection #0 to host 127.0.0.1 left intact
 curl: (52) Empty reply from server

Empty reply… But is the process running?

$> sudo docker top 1c50b67d45b1
UID                 PID                 PPID                C                   STIME               TTY                 TIME                CMD
root                2084                812                 0                   10:26               ?                   00:00:00            python index.py
root                2117                2084                0                   10:26               ?                   00:00:00            /usr/bin/python index.py

Now let’s ssh into the server and check…

$> sudo docker exec -it 1c50b67d45b1 bash
root@1c50b67d45b1:/srv# netstat -an
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 127.0.0.1:5000          0.0.0.0:*               LISTEN
tcp        0      0 127.0.0.1:47677         127.0.0.1:5000          TIME_WAIT
Active UNIX domain sockets (servers and established)
Proto RefCnt Flags       Type       State         I-Node   Path
root@1c50b67d45b1:/srv# curl -I 127.0.0.1:5000
HTTP/1.0 200 OK
Content-Type: text/html; charset=utf-8
Content-Length: 5447
Server: Werkzeug/0.10.4 Python/2.7.6
Date: Tue, 19 May 2015 12:18:14 GMT

It’s fine… but not from the outside :( What am I doing wrong?


回答 0

问题是您只绑定到了localhost接口;如果希望从容器外部访问,应该绑定到0.0.0.0。如果您将:

if __name__ == '__main__':
    app.run()

改为

if __name__ == '__main__':
    app.run(host='0.0.0.0')

它应该工作。

The problem is you are only binding to the localhost interface, you should be binding to 0.0.0.0 if you want the container to be accessible from outside. If you change:

if __name__ == '__main__':
    app.run()

to

if __name__ == '__main__':
    app.run(host='0.0.0.0')

It should work.


回答 1

当使用flask命令而不是app.run时,您可以传递--host选项来更改绑定的主机。Dockerfile中相应的行将是:

CMD ["flask", "run", "--host", "0.0.0.0"]

要么

CMD flask run --host 0.0.0.0

When using the flask command instead of app.run, you can pass the --host option to change the host. The line in Docker would be:

CMD ["flask", "run", "--host", "0.0.0.0"]

or

CMD flask run --host 0.0.0.0
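Note that the flask command also needs to know which module holds the app; in a Dockerfile that usually means setting FLASK_APP as well (index.py here follows the question, and this is just a sketch):

# Dockerfile fragment (sketch)
ENV FLASK_APP=index.py
CMD ["flask", "run", "--host", "0.0.0.0"]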

回答 2

您的Docker容器具有多个网络接口。例如,我的容器具有以下内容:

$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
32: eth0@if33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever

如果运行docker network inspect bridge,您可以在上面的输出中看到容器通过第二个接口连接到该网桥。该默认网桥还连接到主机上的Docker进程。

因此,您将必须运行以下命令:

CMD flask run --host 172.17.0.2

从主机访问在Docker容器中运行的Flask应用。用172.17.0.2您的容器的特定IP地址替换。

Your Docker container has more than one network interface. For example, my container has the following:

$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
32: eth0@if33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever

if you run docker network inspect bridge, you can see that your container is connected to that bridge with the second interface in the above output. This default bridge is also connected to the Docker process on your host.

Therefore you would have to run the command:

CMD flask run --host 172.17.0.2

To access your Flask app running in a Docker container from your host machine. Replace 172.17.0.2 with whatever the particular IP address is of your container.
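If you do want to look up a container's bridge IP (rather than binding to 0.0.0.0 and publishing the port, which is usually simpler), docker inspect can print it; the container name below is the one from the question and the command is a generic sketch:

$> sudo docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' loving_wozniak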


回答 3

以其他答案为基础:

假设您有两台计算机。每台计算机都有一个网络接口(例如WiFi),这是它的公共IP。每台计算机都有一个回送/本地主机接口,位于127.0.0.1。这意味着“仅这台计算机”。

如果在计算机A上监听127.0.0.1,那么当您在计算机B上运行时,自然无法通过127.0.0.1连接到它。毕竟,您要求监听的是计算机A的本地私有地址。

Docker的设置与此类似。从技术上讲,它是同一台计算机,但是Linux内核允许每个容器使用其自己的隔离网络堆栈运行。因此,容器中的127.0.0.1与主机以外的其他计算机上的127.0.0.1相同-您无法连接到它。

较长的版本,带有图表:https://pythonspeed.com/articles/docker-connection-refused/

To build on other answers:

Imagine you have two computers. Each computer has a network interface (WiFi, say), which is its public IP. Each computer has a loopback/localhost interface, at 127.0.0.1. This means “just this computer.”

If you listened on 127.0.0.1 on computer A, you would not expect to be able to connect to that via 127.0.0.1 when running on computer B. After all, you asked to listen on computer A's local, private address.

Docker is similar setup; technically it’s the same computer, but the Linux kernel is allowing each container to run with its own isolated network stack. So 127.0.0.1 in a container is the same as 127.0.0.1 on a different computer than your host—you can’t connect to it.

Longer version, with diagrams: https://pythonspeed.com/articles/docker-connection-refused/


回答 4

首先,您需要在python脚本中更改以下代码:

app.run()

改为

app.run(host="0.0.0.0")

其次,在您的Dockerfile中,最后一行应类似于

CMD ["flask", "run", "-h", "0.0.0.0", "-p", "5000"]

而在主机上,如果0.0.0.0:5000不起作用,那么您应该尝试localhost:5000

注意:CMD命令必须写正确,因为CMD为容器的运行提供默认命令。

First of all in your python script you need to change code from

app.run()

to

app.run(host="0.0.0.0")

Second, In your docker file, last line should be like

CMD ["flask", "run", "-h", "0.0.0.0", "-p", "5000"]

And on host machine if 0.0.0.0:5000 doesn’t work then you should try with localhost:5000

Note – the CMD command has to be correct, because CMD provides the default command for running the container.


相当于python的Maven

问题:相当于python的Maven

我是一名Java开发人员 / Python初学者,我很想念Maven的功能,特别是依赖管理和构建自动化(我的意思是,虽然Python不需要编译构建,但要如何创建用于部署的软件包?)

Python有没有等效的工具来实现这些功能?
注意:我使用python 2.x

谢谢。

I’m a java developer/python beginner, and I’m missing my maven features, particularly dependency management and build automation (I mean you don’t build, but how to create a package for deployment?)

Is there a python equivalent to achieve these features?
Note: I use python 2.x

Thanks.


回答 0

Python使用distutils和setuptools进行依赖和打包。

以下是介绍基本知识的教程:http://docs.activestate.com/activepython/3.2/diveintopython3/html/packaging.html

简而言之,您将拥有setup.py文件,该文件包含依赖项和脚本编译/安装信息,并且您可以使用它来构建egg,dist tarball,binary tarball等。

Python uses distutils and setuptools for dependency and packaging.

Here's a tutorial which explains the basics: http://docs.activestate.com/activepython/3.2/diveintopython3/html/packaging.html

In short, you will have setup.py file, which has dependency and script compilation/installation information, and you can build eggs, dist tarballs, binary tarballs, etc with it.
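A minimal setup.py along those lines might look like this (the project name and dependency are placeholders):

from setuptools import setup, find_packages

setup(
    name='myproject',              # placeholder project name
    version='0.1.0',
    packages=find_packages(),
    install_requires=[
        'requests>=2.0',           # example runtime dependency
    ],
)

With that in place, python setup.py sdist builds a source tarball and python setup.py install installs the package together with its dependencies.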


回答 1

没有直接匹配。但是,您可以获得的最接近的:

  • zc.buildout:它可以搭建封闭的环境、下载/处理依赖项、初始化脚本等。它同样基于插件(或者按它的说法叫“配方”)。几年前它还处于beta阶段时我就用过它,也许此后已经有了很大发展。和Maven一样,它有一定的学习曲线,但它也是其中最强大的。

其他产品是Maven / zc.buildout的子集:

您可能知道Ant和Shell脚本,因此您还可以检查以下Python工具:

  • Fabric 或 Paver:带有更多特性的命令行任务运行器。它们把传统的命令行执行包装在Python中,让您以更强大的方式管理各种任务(任务依赖、解析输出、在远程服务器上运行命令等)。基本上,这些事情用Shell脚本也都能做到,但用Python写出来要清晰得多。

There is no direct match. However, the closest you can get:

  • zc.buildout: It can setup closed environments, download/handle dependencies, initialize scripts, etc. It also builds on plugins (or “recipes”, as they call them). I used it a few years ago when it was in beta stages, probably it has evolved since then. There is learning curve, as with Maven, but it’s also the most powerful.

Other offerings are subsets of Maven/zc.buildout:

You probably know Ant and shell scripting, so you could check also these Python tools:

  • Fabric or Paver: command-line task runners with added flavors. They wrap your traditional command-line execution in python, and allow to manage various tasks in a more powerful way (task dependencies, interpreting output, running commands in remote server, etc.). Basically nothing you couldn’t do with shell scripting, but in python, it’s much less cryptic.

回答 2

我想指出PyBuilder,它受maven的启发很大,但是使用python而不是XML进行配置,因此它实际上是可读的,恕我直言。

有一个用于依赖性管理的插件(在后台使用pip并区分构建和运行时依赖性),并且与maven一样,您可以使用单个命令在整个构建生命周期中运行。

I’d like to point out PyBuilder which is heavily inspired by maven but uses python instead of XML for configuration, so it’s actually readable, IMHO.

There is a plugin for dependency management (uses pip under the hood and differentiates between build and runtime dependencies) and, not unlike maven, you can run through the full build lifecycle with a single command.


回答 3

对于部署,除了distutils/setuptools外,还可以看看pip包(它底层使用setuptools)。它可以回滚失败的安装,也可以卸载软件包(这是easy_install/setuptools所缺少的)。另外,您可以通过一个需求文本文件来指定依赖关系。

For deployment, in addition to distutils/setuptools, also take a look at the pip package (uses setuptools underneath). It can roll back failed installations and also uninstall packages (something missing from easy_install/setuptools). In addition, you can specify dependencies through a requirements text file.
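As a quick illustration of that workflow (the file contents and package names are examples, not from the original answer):

# requirements.txt
Flask==1.0.2
gunicorn==19.9.0

$ pip install -r requirements.txt   # install the pinned dependencies
$ pip uninstall gunicorn            # uninstalling is supported, unlike with easy_install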


回答 4

最好使用virtualenv创建独立的项目环境,并使用pip / easy_install来管理依赖项。

It's good to use virtualenv to create a standalone project environment and use pip/easy_install to manage dependencies.
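A typical session with that approach might look like this (paths and names are illustrative):

$ virtualenv venv                   # create an isolated environment
$ source venv/bin/activate          # activate it (on Windows: venv\Scripts\activate)
$ pip install -r requirements.txt   # install the project's dependencies into it
$ pip freeze > requirements.txt     # record exact versions for deployment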


如何使用Python创建可直接执行的跨平台GUI应用程序?

问题:如何使用Python创建可直接执行的跨平台GUI应用程序?

Python可在多种平台上工作,并且可用于桌面和Web应用程序,因此,我得出结论,有一种方法可以将其编译为Mac,Windows和Linux的可执行文件。

问题是我不知道从哪里开始或如何用它编写GUI,请问有人可以向我指出正确的方向吗?

Python works on multiple platforms and can be used for desktop and web applications, thus I conclude that there is some way to compile it into an executable for Mac, Windows and Linux.

The problem being I have no idea where to start or how to write a GUI with it, can anybody shed some light on this and point me in the right direction please?


回答 0

首先,您将需要一些具有Python绑定的GUI库,然后(如果需要)一些程序,它将Python脚本转换为独立的可执行文件。

具有Python绑定的跨平台GUI库(Windows,Linux,Mac)

当然,有很多,但是我在野外看到的最受欢迎的是:

完整列表位于http://wiki.python.org/moin/GuiProgramming

单个可执行文件(所有平台)

  • PyInstaller-最活跃的(也可以与PyQt一起使用)
  • fbs-如果您在上方选择了Qt

单个可执行文件(Windows)

  • py2exe-曾经是最受欢迎的

单个可执行文件(Linux)

  • Freeze-与py2exe工作方式相同,但以Linux平台为目标

单个可执行文件(Mac)

  • py2app-再次像py2exe一样工作,但以Mac OS为目标

First you will need some GUI library with Python bindings and then (if you want) some program that will convert your python scripts into standalone executables.

Cross-platform GUI libraries with Python bindings (Windows, Linux, Mac)

Of course, there are many, but the most popular that I’ve seen in wild are:

Complete list is at http://wiki.python.org/moin/GuiProgramming

Single executable (all platforms)

  • PyInstaller – the most active(Could also be used with PyQt)
  • fbs – if you chose Qt above

Single executable (Windows)

  • py2exe – used to be the most popular

Single executable (Linux)

  • Freeze – works the same way like py2exe but targets Linux platform

Single executable (Mac)

  • py2app – again, works like py2exe but targets Mac OS
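As a rough illustration of the single-executable step, PyInstaller (from the list above) is typically invoked like this; the script name is a placeholder:

$ pip install pyinstaller
$ pyinstaller --onefile --windowed myapp.py
# the bundled executable ends up in the dist/ folder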

回答 1

另一个系统(尚未在接受的答案中提及)是PyInstaller,该系统在py2exe无法使用时可用于我的PyQt项目。我发现它更易于使用。

http://www.pyinstaller.org/

PyInstaller基于Gordon McMillan的Python Installer,后者现已不再提供。

Another system (not mentioned in the accepted answer yet) is PyInstaller, which worked for a PyQt project of mine when py2exe would not. I found it easier to use.

http://www.pyinstaller.org/

Pyinstaller is based on Gordon McMillan’s Python Installer. Which is no longer available.


回答 2

py2exe的一个替代工具是bbfreeze,它可以为Windows和Linux生成可执行文件。它比py2exe更新,并且能很好地处理egg包。我发现对于各种各样的应用程序,它不需要任何配置就能出奇地好用。

An alternative tool to py2exe is bbfreeze which generates executables for windows and linux. It’s newer than py2exe and handles eggs quite well. I’ve found it magically works better without configuration for a wide variety of applications.


回答 3

还有PyGTK,它基本上是Gnome Toolkit的Python包装器。由于我之前几乎没有GUI编程经验,我发现它比Tkinter更容易上手。它工作得很好,并且有一些不错的教程。不幸的是,目前还没有适用于Windows的Python 2.6安装程序,而且可能短期内也不会有。

There’s also PyGTK, which is basically a Python wrapper for the Gnome Toolkit. I’ve found it easier to wrap my mind around than Tkinter, coming from pretty much no knowledge of GUI programming previously. It works pretty well and has some good tutorials. Unfortunately there isn’t an installer for Python 2.6 for Windows yet, and may not be for a while.


回答 4

由于默认情况下现在几乎在所有非Windows操作系统上都安装了python,因此您真正需要确保的唯一事情就是已安装了您使用的所有非标准库。

话虽如此,有可能构建包括python解释器和您使用的任何库的可执行文件。但是,这可能会创建一个大型可执行文件。

MacOS X甚至在Xcode IDE中提供了对创建完整独立GUI应用程序的支持。这些可以由运行OS X的任何用户运行。

Since python is installed on nearly every non-Windows OS by default now, the only thing you really need to make sure of is that all of the non-standard libraries you use are installed.

Having said that, it is possible to build executables that include the python interpreter, and any libraries you use. This is likely to create a large executable, however.

MacOS X even includes support in the Xcode IDE for creating full standalone GUI apps. These can be run by any user running OS X.


回答 5

对于GUI本身:

PyQt差不多就是事实上的标准。

开发快速用户界面的另一种方法是编写一个Web应用程序,使其在本地运行并在浏览器中显示该应用程序。

另外,如果您采用lubos hasko建议的Tkinter方案,您可能想试试portablepy,让您的应用在没有安装Python的Windows环境下也能运行。

For the GUI itself:

PyQT is pretty much the reference.

Another way to develop a rapid user interface is to write a web app, have it run locally and display the app in the browser.

Plus, if you go for the Tkinter option suggested by lubos hasko you may want to try portablepy to have your app run on Windows environment without Python.


回答 6

我不确定这是否是最好的方法,但是当我在Windows上部署Ruby GUI应用程序(不是Python,但就.exe而言具有相同的“问题”)时,我只写了一个C#中的短启动程序,它调用我的主脚本。它编译为可执行文件,然后有一个应用程序可执行文件。

I’m not sure that this is the best way to do it, but when I’m deploying Ruby GUI apps (not Python, but has the same “problem” as far as .exe’s are concerned) on Windows, I just write a short launcher in C# that calls on my main script. It compiles to an executable, and I then have an application executable.


回答 7

# I'd use tkinter for python 3

import tkinter

tk = tkinter.Tk()
tk.geometry("400x300+500+300")
l = tkinter.Label(tk, text="")
l.pack()
e = tkinter.Entry(tk)
e.pack()

def click():
    # show a message in the label when the button is pressed
    l['text'] = 'You clicked the button'

b = tkinter.Button(tk, text="Click me", command=click)
b.pack()

tk.mainloop()

# After this I would use py2exe
# search for the use of this module on stackoverflow
# otherwise I could edit this to let you know how to do it

py2exe

然后,您应该使用py2exe之类的工具,把运行应用所需的全部文件放进一个文件夹,这样即使用户的电脑上没有安装Python也能运行(我说的是Windows……至于苹果系统,我认为不需要做成可执行文件,因为系统自带Python,无需另外安装)。

创建以下文件

1)创建一个setup.py

使用此代码:

from distutils.core import setup
import py2exe

setup(console=['l4h.py'])

将其保存在文件夹中

2)将您的程序与setup.py放在同一个文件夹中,也就是把您想要打包分发的程序放进这个文件夹,例如:l4h.py

ps:更改文件名(从l4h改成您想要的任何名称,这只是一个示例)

3)在该文件夹中打开cmd(按住Shift并右键单击该文件夹,选择“在此处打开命令窗口”)
4)在cmd中输入:python setup.py py2exe
5)dist文件夹中就是您需要的所有文件
6)您可以将其压缩后分发

PyInstaller

从cmd安装

**

pip install pyinstaller

**

从文件所在的文件夹中的cmd运行它

**

pyinstaller file.py

**

# I'd use tkinter for python 3

import tkinter

tk = tkinter.Tk()
tk.geometry("400x300+500+300")
l = tkinter.Label(tk, text="")
l.pack()
e = tkinter.Entry(tk)
e.pack()

def click():
    # show a message in the label when the button is pressed
    l['text'] = 'You clicked the button'

b = tkinter.Button(tk, text="Click me", command=click)
b.pack()

tk.mainloop()

# After this I would use py2exe
# search for the use of this module on stackoverflow
# otherwise I could edit this to let you know how to do it

py2exe

Then you should use py2exe, for example, to bring together in one folder all the files needed to run the app, even if the user does not have python on his pc (I am talking of windows… for the apple os there is no need of an executable file, I think, as it comes with python in it without any need of installing it).

Create this file

1) Create a setup.py

with this code:

from distutils.core import setup
import py2exe

setup(console=['l4h.py'])

save it in a folder

2) Put your program in the same folder as setup.py, i.e. put in this folder the program you want to make distributable, e.g.: l4h.py

ps: change the name of the file (from l4h to anything you want, that is an example)

3) Run cmd from that folder (on the folder, right click + shift and choose start cmd here)
4) write in cmd:>python setup.py py2exe
5) in the dist folder there are all the files you need
6) you can zip it and distribute it

Pyinstaller

Install it from cmd

**

pip install pyinstaller

**

Run it from the cmd from the folder where the file is

**

pyinstaller file.py

**


回答 8

PySimpleGUI包装了tkinter并在Python 3和2.7上运行。它还可以在Qt,WxPython和Web浏览器中运行,并且所有平台都使用相同的源代码。

您可以创建自定义GUI,以利用在tkinter中找到的所有相同窗口小部件(滑块,复选框,单选按钮等)。该代码往往非常紧凑和可读。

#!/usr/bin/env python
import sys
if sys.version_info[0] >= 3:
    import PySimpleGUI as sg
else:
    import PySimpleGUI27 as sg

layout = [[ sg.Text('My Window') ],
          [ sg.Button('OK')]]

window = sg.Window('My window').Layout(layout)
button, value = window.Read()

如PySimpleGUI文档中所述,要生成.EXE文件,请运行:

pyinstaller -wF MyGUIProgram.py

PySimpleGUI wraps tkinter and works on Python 3 and 2.7. It also runs on Qt, WxPython and in a web browser, using the same source code for all platforms.

You can make custom GUIs that utilize all of the same widgets that you find in tkinter (sliders, checkboxes, radio buttons, …). The code tends to be very compact and readable.

#!/usr/bin/env python
import sys
if sys.version_info[0] >= 3:
    import PySimpleGUI as sg
else:
    import PySimpleGUI27 as sg

layout = [[ sg.Text('My Window') ],
          [ sg.Button('OK')]]

window = sg.Window('My window').Layout(layout)
button, value = window.Read()

As explained in the PySimpleGUI Documentation, to build the .EXE file you run:

pyinstaller -wF MyGUIProgram.py


回答 9

您不需要为Mac/Windows/Linux编译Python。它是一种解释型语言,所以您只需要在所选系统上安装Python解释器即可(三个平台都有对应版本)。

至于可以跨平台工作的GUI库,Python的Tk / Tcl小部件库可以很好地工作,我相信跨平台就足够了。

Tkinter是Tk / Tcl的python接口

从python项目网页:

Tkinter不是唯一的Python GuiProgramming工具包。但是,它是最常用的一种,并且几乎是唯一可以在Unix,Mac和Windows之间移植的一种

You don’t need to compile python for Mac/Windows/Linux. It is an interpreted language, so you simply need to have the Python interpreter installed on the system of your choice (it is available for all three platforms).

As for a GUI library that works cross platform, Python’s Tk/Tcl widget library works very well, and I believe is sufficiently cross platform.

Tkinter is the python interface to Tk/Tcl

From the python project webpage:

Tkinter is not the only GuiProgramming toolkit for Python. It is however the most commonly used one, and almost the only one that is portable between Unix, Mac and Windows


回答 10

您可以使用appJar进行基本的GUI开发。

from appJar import gui

num=1

def myfcn(btnName):   
    global num
    num +=1
    win.setLabel("mylabel", num)

win = gui('Test')

win.addButtons(["Set"],  [myfcn])
win.addLabel("mylabel", "Press the Button")

win.go()

请参阅appJar网站上的文档

在命令行中运行 pip install appjar 即可安装。

You can use appJar for basic GUI development.

from appJar import gui

num=1

def myfcn(btnName):   
    global num
    num +=1
    win.setLabel("mylabel", num)

win = gui('Test')

win.addButtons(["Set"],  [myfcn])
win.addLabel("mylabel", "Press the Button")

win.go()

See documentation at appJar site.

Installation is made with pip install appjar from command line.


回答 11

!!! KIVY !!!

看到没有人提到Kivy,我感到非常惊讶!!!

我曾经使用Tkinter进行过一个项目,尽管他们确实提倡它有很多改进,但仍然给我Windows 98的感觉,所以我改用Kivy

如果有帮助的话,我一直在跟着一个教程系列学习……

只是为了让您了解Kivy的界面是什么样子,请看下面(我正在做的项目):

而且我到现在才做了大约一个星期!您问Kivy有什么好处?看看这个。

我之所以选择它,是因为它的外观以及它也可以在移动设备中使用。

!!! KIVY !!!

I was amazed seeing that no one mentioned Kivy!!!

I have once done a project using Tkinter, although they do advocate that it has improved a lot, it still gives me a feel of windows 98, so I switched to Kivy.

I have been following a tutorial series if it helps…

Just to give an idea of how kivy looks, see this (The project I am working on):

And I have been working on it for barely a week now ! The benefits for Kivy you ask? Check this

The reason why I chose this is, its look and that it can be used in mobile as well.
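For a sense of what Kivy code looks like, a minimal hello-world app is only a few lines; this is a generic sketch, not the project shown above:

from kivy.app import App
from kivy.uix.button import Button

class HelloApp(App):
    def build(self):
        # the widget returned here becomes the root of the window
        return Button(text='Hello, Kivy!')

if __name__ == '__main__':
    HelloApp().run()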


如何在Django中管理本地和生产设置?

问题:如何在Django中管理本地和生产设置?

建议使用什么方式处理本地开发和生产服务器的设置?它们中的某些(例如常量等)都可以更改/访问,但是其中一些(例如静态文件的路径)需要保持不同,因此,每次部署新代码时都不应覆盖它们。

目前,我把所有常量都放在settings.py中。但是每次我在本地修改某个常量,都必须把它复制到生产服务器,再编辑该文件做生产环境特定的修改…… :(

编辑:这个问题似乎没有标准答案,我已经接受了最受欢迎的方法。

What is the recommended way of handling settings for local development and the production server? Some of them (like constants, etc) can be changed/accessed in both, but some of them (like paths to static files) need to remain different, and hence should not be overwritten every time the new code is deployed.

Currently, I am adding all constants to settings.py. But every time I change some constant locally, I have to copy it to the production server and edit the file for production specific changes… :(

Edit: looks like there is no standard answer to this question, I’ve accepted the most popular method.


回答 0

settings.py

try:
    from local_settings import *
except ImportError as e:
    pass

您可以在local_settings.py中覆盖需要的设置;该文件应当排除在版本控制之外。不过既然您提到了手工复制,我猜您并没有在用版本控制 ;)

In settings.py:

try:
    from local_settings import *
except ImportError as e:
    pass

You can override whatever is needed in local_settings.py; it should stay out of your version control then. But since you mention copying I'm guessing you use none ;)
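A local_settings.py along those lines simply redefines whatever differs on your machine, for example (the values are illustrative):

# local_settings.py -- kept out of version control
DEBUG = True

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': 'dev.sqlite3',
    }
}

STATIC_ROOT = '/home/me/projects/mysite/static'  # example local path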


回答 1

《Two Scoops of Django: Best Practices for Django 1.5》一书建议对您的设置文件使用版本控制,并将这些文件存储在一个单独的目录中:

project/
    app1/
    app2/
    project/
        __init__.py
        settings/
            __init__.py
            base.py
            local.py
            production.py
    manage.py

base.py文件包含常用的设置(如MEDIA_ROOT或ADMIN),而local.pyproduction.py有网站特定的设置:

在基本文件中settings/base.py

INSTALLED_APPS = (
    # common apps...
)

在本地开发设置文件中settings/local.py

from project.settings.base import *

DEBUG = True
INSTALLED_APPS += (
    'debug_toolbar', # and other apps for local development
)

在生产设置文件settings/production.py中:

from project.settings.base import *

DEBUG = False
INSTALLED_APPS += (
    # other apps for production site
)

然后,在运行django时,添加以下--settings选项:

# Running django for local development
$ ./manage.py runserver 0:8000 --settings=project.settings.local

# Running django shell on the production site
$ ./manage.py shell --settings=project.settings.production

该书的作者还在Github上提供了一个示例项目布局模板

Two Scoops of Django: Best Practices for Django 1.5 suggests using version control for your settings files and storing the files in a separate directory:

project/
    app1/
    app2/
    project/
        __init__.py
        settings/
            __init__.py
            base.py
            local.py
            production.py
    manage.py

The base.py file contains common settings (such as MEDIA_ROOT or ADMIN), while local.py and production.py have site-specific settings:

In the base file settings/base.py:

INSTALLED_APPS = (
    # common apps...
)

In the local development settings file settings/local.py:

from project.settings.base import *

DEBUG = True
INSTALLED_APPS += (
    'debug_toolbar', # and other apps for local development
)

In the file production settings file settings/production.py:

from project.settings.base import *

DEBUG = False
INSTALLED_APPS += (
    # other apps for production site
)

Then when you run django, you add the --settings option:

# Running django for local development
$ ./manage.py runserver 0:8000 --settings=project.settings.local

# Running django shell on the production site
$ ./manage.py shell --settings=project.settings.production

The authors of the book have also put up a sample project layout template on Github.
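Instead of passing --settings on every command, you can also point the DJANGO_SETTINGS_MODULE environment variable at the module you want; manage.py and wsgi.py then only need to set a default, roughly like this:

import os

# manage.py / wsgi.py: default used when the environment does not override it
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'project.settings.local')

On the production host you would export DJANGO_SETTINGS_MODULE=project.settings.production once, and every manage.py command picks it up.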


回答 2

不使用单个settings.py,而是使用以下布局:

.
└── settings/
    ├── __init__.py  <= not versioned
    ├── common.py
    ├── dev.py
    └── prod.py

common.py 是大多数配置的所在地。

prod.py 从common导入所有内容,并覆盖需要覆盖的所有内容:

from __future__ import absolute_import # optional, but I like it
from .common import *

# Production overrides
DEBUG = False
#...

同样,dev.py从common.py导入所有内容,并覆盖它需要覆盖的部分。

最后,__init__.py是您决定加载哪些设置的地方,也是存储机密的地方(因此,不应对该文件进行版本控制):

from __future__ import absolute_import
from .prod import *  # or .dev if you want dev

##### DJANGO SECRETS
SECRET_KEY = '(3gd6shenud@&57...'
DATABASES['default']['PASSWORD'] = 'f9kGH...'

##### OTHER SECRETS
AWS_SECRET_ACCESS_KEY = "h50fH..."

我喜欢这个解决方案的地方是:

  1. 除秘密外,一切都在您的版本控制系统中
  2. 大多数配置都集中在一个位置:common.py
  3. 特定于生产的内容放进prod.py,特定于开发的内容放进dev.py。这很简单。
  4. 您可以在prod.py或dev.py中覆盖common.py里的内容,也可以在__init__.py中覆盖任何内容。
  5. 这是简单直白的python,没有重新导入的黑科技。

Instead of settings.py, use this layout:

.
└── settings/
    ├── __init__.py  <= not versioned
    ├── common.py
    ├── dev.py
    └── prod.py

common.py is where most of your configuration lives.

prod.py imports everything from common, and overrides whatever it needs to override:

from __future__ import absolute_import # optional, but I like it
from .common import *

# Production overrides
DEBUG = False
#...

Similarly, dev.py imports everything from common.py and overrides whatever it needs to override.

Finally, __init__.py is where you decide which settings to load, and it’s also where you store secrets (therefore this file should not be versioned):

from __future__ import absolute_import
from .prod import *  # or .dev if you want dev

##### DJANGO SECRETS
SECRET_KEY = '(3gd6shenud@&57...'
DATABASES['default']['PASSWORD'] = 'f9kGH...'

##### OTHER SECRETS
AWS_SECRET_ACCESS_KEY = "h50fH..."

What I like about this solution is:

  1. Everything is in your versioning system, except secrets
  2. Most configuration is in one place: common.py.
  3. Prod-specific things go in prod.py, dev-specific things go in dev.py. It’s simple.
  4. You can override stuff from common.py in prod.py or dev.py, and you can override anything in __init__.py.
  5. It’s straightforward python. No re-import hacks.

回答 3

我使用Harper Shelby发布的“ if DEBUG”样式的设置的稍微修改的版本。显然,取决于环境(win / linux / etc),可能需要对代码进行一些调整。

我以前用的是“if DEBUG”,但发现偶尔需要在DEBUG设为False的情况下进行测试。我真正想区分的是环境到底是生产还是开发,这样我就可以自由选择DEBUG级别。

PRODUCTION_SERVERS = ['WEBSERVER1','WEBSERVER2',]
if os.environ['COMPUTERNAME'] in PRODUCTION_SERVERS:
    PRODUCTION = True
else:
    PRODUCTION = False

DEBUG = not PRODUCTION
TEMPLATE_DEBUG = DEBUG

# ...

if PRODUCTION:
    DATABASE_HOST = '192.168.1.1'
else:
    DATABASE_HOST = 'localhost'

我仍然认为这种设置方式还在不断改进中。我还没见过哪一种Django设置管理方法既能覆盖所有情况,同时配置起来又不麻烦(我不喜欢那种要维护5个设置文件的做法)。

I use a slightly modified version of the “if DEBUG” style of settings that Harper Shelby posted. Obviously depending on the environment (win/linux/etc.) the code might need to be tweaked a bit.

I was in the past using the “if DEBUG” style but I found that occasionally I needed to do testing with DEBUG set to False. What I really wanted was to distinguish whether the environment was production or development, which gave me the freedom to choose the DEBUG level.

PRODUCTION_SERVERS = ['WEBSERVER1','WEBSERVER2',]
if os.environ['COMPUTERNAME'] in PRODUCTION_SERVERS:
    PRODUCTION = True
else:
    PRODUCTION = False

DEBUG = not PRODUCTION
TEMPLATE_DEBUG = DEBUG

# ...

if PRODUCTION:
    DATABASE_HOST = '192.168.1.1'
else:
    DATABASE_HOST = 'localhost'

I’d still consider this way of settings a work in progress. I haven’t seen any one way to handling Django settings that covered all the bases and at the same time wasn’t a total hassle to setup (I’m not down with the 5x settings files methods).


回答 4

我使用settings_local.py和settings_production.py。尝试过几种方案之后我发现,在复杂的解决方案上很容易浪费时间,而简单地使用两个设置文件既轻松又快捷。

当对您的Django项目使用mod_python / mod_wsgi时,需要将其指向您的设置文件。如果将其指向本地服务器上的app / settings_local.py和生产服务器上的app / settings_production.py,那么生活会变得很轻松。只需编辑适当的设置文件并重新启动服务器(Django开发服务器将自动重新启动)。

I use a settings_local.py and a settings_production.py. After trying several options I’ve found that it’s easy to waste time with complex solutions when simply having two settings files feels easy and fast.

When you use mod_python/mod_wsgi for your Django project you need to point it to your settings file. If you point it to app/settings_local.py on your local server and app/settings_production.py on your production server then life becomes easy. Just edit the appropriate settings file and restart the server (Django development server will restart automatically).
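With mod_wsgi, “pointing it at a settings file” usually means setting DJANGO_SETTINGS_MODULE in the WSGI script, so each server loads its own module; a sketch following this answer's naming:

# wsgi.py on the production server (sketch)
import os
from django.core.wsgi import get_wsgi_application

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'app.settings_production')
application = get_wsgi_application()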


回答 5

TL;DR:诀窍是在任何settings/<purpose>.py中导入settings/base.py之前先修改os.environ,这会大大简化事情。


仅考虑所有这些相互交织的文件,就让我头疼。合并,导入(有时是有条件的),覆盖,修补已设置的内容,以防DEBUG稍后设置更改。什么样的恶梦!

这些年来,我经历了所有不同的解决方案。它们都有些起作用,但是管理起来非常痛苦。WTF!我们真的需要所有麻烦吗?我们从一个settings.py文件开始。现在,我们需要一个文档来将所有这些以正确的顺序正确地组合在一起!

我希望我最终能通过以下解决方法达到(我的)最佳解决方案。

让我们回顾一下目标(一些普通的,一些我的)

  1. 保守秘密-不要将其存储在仓库中!

  2. 通过环境设置(12因子样式)设置/读取密钥和机密。

  3. 具有合理的后备默认设置。理想情况下,对于本地开发,除默认值外,您不需要任何其他内容。

  4. ……但请尽量让默认值对生产环境是安全的。宁可在本地漏掉一个设置覆盖,也好过必须记得把默认值调整成对生产安全的值。

  5. 能够以DEBUG可能影响其他设置的方式打开/关闭(例如,是否使用javascript压缩)。

  6. 在不同用途的设置(例如本地/测试/预发布/生产)之间切换时,应该只依据DJANGO_SETTINGS_MODULE,仅此而已。

  7. …但是可以通过环境设置(例如)进行进一步的参数设置DATABASE_URL

  8. …还允许他们使用不同的用途设置,并在本地并排运行它们,例如 在本地开发人员机器上进行生产设置,以访问生产数据库或对压缩样式表进行烟雾测试。

  9. 如果某个环境变量没有被显式设置(至少要提供一个空值),就应当直接报错,在生产环境中尤其如此,例如EMAIL_HOST_PASSWORD。

  10. 沿用django-admin startproject在manage.py中设置的默认DJANGO_SETTINGS_MODULE。

  11. 将条件判断保持到最少;如果条件针对的是某种用途的环境类型(例如为生产环境设置日志文件及其轮转),就在对应用途的设置文件中覆盖这些设置。

不要做的事

  1. 不要让django从文件中读取DJANGO_SETTINGS_MODULE设置。
    啊!想想这有多“元”。如果您确实需要一个文件(例如docker env),请在启动django进程之前把它读入环境变量。

  2. 不要在您的项目/应用代码中覆盖DJANGO_SETTINGS_MODULE,例如。基于主机名或进程名。
    如果您懒于设置环境变量(例如setup.py test),请在运行项目代码之前在工具中进行设置。

  3. 避免对django读取设置的方式使用魔法和补丁;可以对设置做预处理,但之后不要再干预。

  4. 不要搞基于复杂逻辑的花样。配置应该是固定、具体化的,而不是即时计算出来的;提供备用默认值就是这里所需的全部逻辑。
    您真的想去调试为什么本地的设置是正确的,而在远程生产服务器上(上百台机器中的某一台)算出来的却不一样吗?哦,还要为设置写单元测试?认真的吗?

解决方案

我的策略是:使用优秀的django-environ搭配ini风格的文件,为本地开发提供os.environ的默认值;再加上几个极简短的settings/<purpose>.py文件,它们在依据INI文件设置好os.environ之后再import settings/base.py。这实际上为我们提供了一种“设置注入”。

这里的窍门是:在导入settings/base.py之前修改os.environ。

要查看完整的示例,请访问这个仓库:https://github.com/wooyek/django-settings-strategy

.
│   manage.py
├───data
└───website
    ├───settings
    │   │   __init__.py   <-- imports local for compatibility
    │   │   base.py       <-- almost all the settings, reads from process environment
    │   │   local.py      <-- a few modifications for local development
    │   │   production.py <-- ideally is empty and everything is in base
    │   │   testing.py    <-- mimics production with reasonable exceptions
    │   │   .env          <-- for local use, not kept in repo
    │   __init__.py
    │   urls.py
    │   wsgi.py

settings/.env

本地开发的默认值。这是一个保密文件,主要用来设置必需的环境变量。如果某些变量在本地开发中不需要,就把它们设为空值。我们把默认值放在这里而不是settings/base.py中,这样在其他机器上,如果环境中缺少这些变量,就会直接报错。

settings/local.py

这里做的事情是:先从settings/.env加载环境变量,然后从settings/base.py导入通用设置。之后我们可以覆盖少量设置,以方便本地开发。

import logging
import environ

logging.debug("Settings loading: %s" % __file__)

# This will read missing environment variables from a file
# We wan to do this before loading a base settings as they may depend on environment
environ.Env.read_env(DEBUG='True')

from .base import *

ALLOWED_HOSTS += [
    '127.0.0.1',
    'localhost',
    '.example.com',
    'vagrant',
    ]

# https://docs.djangoproject.com/en/1.6/topics/email/#console-backend
EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'
# EMAIL_BACKEND = 'django.core.mail.backends.dummy.EmailBackend'

LOGGING['handlers']['mail_admins']['email_backend'] = 'django.core.mail.backends.dummy.EmailBackend'

# Sync task testing
# http://docs.celeryproject.org/en/2.5/configuration.html?highlight=celery_always_eager#celery-always-eager

CELERY_ALWAYS_EAGER = True
CELERY_EAGER_PROPAGATES_EXCEPTIONS = True

settings/production.py

对于生产环境,我们不应该假定存在环境文件,不过如果正在测试什么,有一个环境文件会更方便。无论如何,我们在这里内联提供了少量默认值,这样settings/base.py就能做出相应的响应。

environ.Env.read_env(Path(__file__) / "production.env", DEBUG='False', ASSETS_DEBUG='False')
from .base import *

这里主要关注的是DEBUG和ASSETS_DEBUG的覆盖:只有当它们在环境和文件中都缺失时,才会被写入Python的os.environ。

这些就是我们的生产默认值,不需要把它们放进环境或文件中,但在需要时可以覆盖。很干净!

settings/base.py

这些基本上就是普通的django设置,带有少量条件判断,以及大量从环境中读取的值。几乎所有内容都在这里,使所有用途的环境保持一致、尽可能相似。

主要区别如下(我希望这些是自我解释):

import environ

# https://github.com/joke2k/django-environ
env = environ.Env()

# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
BASE_DIR = os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

# Where BASE_DIR is a django source root, ROOT_DIR is a whole project root
# It may differ BASE_DIR for eg. when your django project code is in `src` folder
# This may help to separate python modules and *django apps* from other stuff
# like documentation, fixtures, docker settings
ROOT_DIR = BASE_DIR

# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/1.11/howto/deployment/checklist/

# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = env('SECRET_KEY')

# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = env('DEBUG', default=False)

INTERNAL_IPS = [
    '127.0.0.1',
]

ALLOWED_HOSTS = []

if 'ALLOWED_HOSTS' in os.environ:
    hosts = os.environ['ALLOWED_HOSTS'].split(" ")
    BASE_URL = "https://" + hosts[0]
    for host in hosts:
        host = host.strip()
        if host:
            ALLOWED_HOSTS.append(host)

SECURE_SSL_REDIRECT = env.bool('SECURE_SSL_REDIRECT', default=False)

# Database
# https://docs.djangoproject.com/en/1.11/ref/settings/#databases

if "DATABASE_URL" in os.environ:  # pragma: no cover
    # Enable database config through environment
    DATABASES = {
        # Raises ImproperlyConfigured exception if DATABASE_URL not in os.environ
        'default': env.db(),
    }

    # Make sure we use have all settings we need
    # DATABASES['default']['ENGINE'] = 'django.contrib.gis.db.backends.postgis'
    DATABASES['default']['TEST'] = {'NAME': os.environ.get("DATABASE_TEST_NAME", None)}
    DATABASES['default']['OPTIONS'] = {
        'options': '-c search_path=gis,public,pg_catalog',
        'sslmode': 'require',
    }
else:
    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.sqlite3',
            # 'ENGINE': 'django.contrib.gis.db.backends.spatialite',
            'NAME': os.path.join(ROOT_DIR, 'data', 'db.dev.sqlite3'),
            'TEST': {
                'NAME': os.path.join(ROOT_DIR, 'data', 'db.test.sqlite3'),
            }
        }
    }

STATIC_ROOT = os.path.join(ROOT_DIR, 'static')

# django-assets
# http://django-assets.readthedocs.org/en/latest/settings.html

ASSETS_LOAD_PATH = STATIC_ROOT
ASSETS_ROOT = os.path.join(ROOT_DIR, 'assets', "compressed")
ASSETS_DEBUG = env('ASSETS_DEBUG', default=DEBUG)  # Disable when testing compressed file in DEBUG mode
if ASSETS_DEBUG:
    ASSETS_URL = STATIC_URL
    ASSETS_MANIFEST = "json:{}".format(os.path.join(ASSETS_ROOT, "manifest.json"))
else:
    ASSETS_URL = STATIC_URL + "assets/compressed/"
    ASSETS_MANIFEST = "json:{}".format(os.path.join(STATIC_ROOT, 'assets', "compressed", "manifest.json"))
ASSETS_AUTO_BUILD = ASSETS_DEBUG
ASSETS_MODULES = ('website.assets',)

最后一段展示了这种方式的威力:ASSETS_DEBUG有一个合理的默认值,可以在settings/production.py中覆盖它,而这个覆盖还可以再被环境变量覆盖!好极了!

实际上,我们得到了一个混合的优先级层次:

  1. settings/<purpose>.py-根据用途设置默认值,不存储机密
  2. settings/base.py-主要由环境控制
  3. 进程环境变量-12要素,宝贝!
  4. settings/.env-本地默认值,便于快速启动

TL;DR: The trick is to modify os.environ before you import settings/base.py in any settings/<purpose>.py, this will greatly simplify things.


Just thinking about all these intertwining files gives me a headache. Combining, importing (sometimes conditionally), overriding, patching of what was already set in case DEBUG setting changed later on. What a nightmare!

Through the years I went through all different solutions. They all somewhat work, but are so painful to manage. WTF! Do we really need all that hassle? We started with just one settings.py file. Now we need a documentation just to correctly combine all these together in a correct order!

I hope I finally hit the (my) sweet spot with the solution below.

Let’s recap the goals (some common, some mine)

  1. Keep secrets a secret — don’t store them in a repo!

  2. Set/read keys and secrets through environment settings, 12 factor style.

  3. Have sensible fallback defaults. Ideally for local development you don’t need anything more beside defaults.

  4. …but try to keep defaults production safe. It’s better to miss a setting override locally, than having to remember to adjust default settings safe for production.

  5. Have the ability to switch DEBUG on/off in a way that can have an effect on other settings (eg. using javascript compressed or not).

  6. Switching between purpose settings, like local/testing/staging/production, should be based only on DJANGO_SETTINGS_MODULE, nothing more.

  7. …but allow further parameterization through environment settings like DATABASE_URL.

  8. …also allow them to use different purpose settings and run them locally side by side, eg. production setup on local developer machine, to access production database or smoke test compressed style sheets.

  9. Fail if an environment variable is not explicitly set (requiring an empty value at minimum), especially in production, eg. EMAIL_HOST_PASSWORD.

  10. Respond to default DJANGO_SETTINGS_MODULE set in manage.py during django-admin startproject

  11. Keep conditionals to a minimum, if the condition is the purposed environment type (eg. for production set log file and it’s rotation), override settings in associated purposed settings file.

Do not’s

  1. Do not let django read DJANGO_SETTINGS_MODULE setting form a file.
    Ugh! Think of how meta this is. If you need to have a file (like docker env) read that into the environment before starting up a django process.

  2. Do not override DJANGO_SETTINGS_MODULE in your project/app code, eg. based on hostname or process name.
    If you are lazy to set environment variable (like for setup.py test) do it in tooling just before you run your project code.

  3. Avoid magic and patching of how django reads it’s settings, preprocess the settings but do not interfere afterwards.

  4. No complicated logic based nonsense. Configuration should be fixed and materialized, not computed on the fly. Providing fallback defaults is just enough logic here.
    Do you really want to debug why locally you have the correct set of settings, but in production, on a remote server, on one of a hundred machines, something computed differently? Oh! Unit tests? For settings? Seriously?

Solution

My strategy consists of the excellent django-environ used with ini-style files, providing os.environ defaults for local development, and some minimal and short settings/<purpose>.py files that import settings/base.py AFTER os.environ has been set from an INI file. This effectively gives us a kind of settings injection.

The trick here is to modify os.environ before you import settings/base.py.

To see the full example go do the repo: https://github.com/wooyek/django-settings-strategy

.
│   manage.py
├───data
└───website
    ├───settings
    │   │   __init__.py   <-- imports local for compatibility
    │   │   base.py       <-- almost all the settings, reads from process environment
    │   │   local.py      <-- a few modifications for local development
    │   │   production.py <-- ideally is empty and everything is in base
    │   │   testing.py    <-- mimics production with reasonable exceptions
    │   │   .env          <-- for local use, not kept in repo
    │   __init__.py
    │   urls.py
    │   wsgi.py

settings/.env

Defaults for local development. A secret file, mostly to set required environment variables. Set them to empty values if they are not required in local development. We provide defaults here and not in settings/base.py so it fails on any other machine if they're missing from the environment.

settings/local.py

What happens in here, is loading environment from settings/.env, then importing common settings from settings/base.py. After that we can override a few to ease local development.

import logging
import environ

logging.debug("Settings loading: %s" % __file__)

# This will read missing environment variables from a file
# We wan to do this before loading a base settings as they may depend on environment
environ.Env.read_env(DEBUG='True')

from .base import *

ALLOWED_HOSTS += [
    '127.0.0.1',
    'localhost',
    '.example.com',
    'vagrant',
    ]

# https://docs.djangoproject.com/en/1.6/topics/email/#console-backend
EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'
# EMAIL_BACKEND = 'django.core.mail.backends.dummy.EmailBackend'

LOGGING['handlers']['mail_admins']['email_backend'] = 'django.core.mail.backends.dummy.EmailBackend'

# Sync task testing
# http://docs.celeryproject.org/en/2.5/configuration.html?highlight=celery_always_eager#celery-always-eager

CELERY_ALWAYS_EAGER = True
CELERY_EAGER_PROPAGATES_EXCEPTIONS = True

settings/production.py

For production we should not expect an environment file, but it's easier to have one if we're testing something. But anyway, let's provide a few defaults inline, so settings/base.py can respond accordingly.

environ.Env.read_env(Path(__file__) / "production.env", DEBUG='False', ASSETS_DEBUG='False')
from .base import *

The main point of interest here are DEBUG and ASSETS_DEBUG overrides, they will be applied to the python os.environ ONLY if they are MISSING from the environment and the file.

These will be our production defaults, no need to put them in the environment or file, but they can be overridden if needed. Neat!

settings/base.py

These are your mostly vanilla django settings, with a few conditionals and lots of reading them from the environment. Almost everything is in here, keeping all the purposed environments consistent and as similar as possible.

The main differences are below (I hope these are self explanatory):

import environ

# https://github.com/joke2k/django-environ
env = environ.Env()

# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
BASE_DIR = os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

# Where BASE_DIR is a django source root, ROOT_DIR is a whole project root
# It may differ BASE_DIR for eg. when your django project code is in `src` folder
# This may help to separate python modules and *django apps* from other stuff
# like documentation, fixtures, docker settings
ROOT_DIR = BASE_DIR

# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/1.11/howto/deployment/checklist/

# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = env('SECRET_KEY')

# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = env('DEBUG', default=False)

INTERNAL_IPS = [
    '127.0.0.1',
]

ALLOWED_HOSTS = []

if 'ALLOWED_HOSTS' in os.environ:
    hosts = os.environ['ALLOWED_HOSTS'].split(" ")
    BASE_URL = "https://" + hosts[0]
    for host in hosts:
        host = host.strip()
        if host:
            ALLOWED_HOSTS.append(host)

SECURE_SSL_REDIRECT = env.bool('SECURE_SSL_REDIRECT', default=False)

# Database
# https://docs.djangoproject.com/en/1.11/ref/settings/#databases

if "DATABASE_URL" in os.environ:  # pragma: no cover
    # Enable database config through environment
    DATABASES = {
        # Raises ImproperlyConfigured exception if DATABASE_URL not in os.environ
        'default': env.db(),
    }

    # Make sure we use have all settings we need
    # DATABASES['default']['ENGINE'] = 'django.contrib.gis.db.backends.postgis'
    DATABASES['default']['TEST'] = {'NAME': os.environ.get("DATABASE_TEST_NAME", None)}
    DATABASES['default']['OPTIONS'] = {
        'options': '-c search_path=gis,public,pg_catalog',
        'sslmode': 'require',
    }
else:
    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.sqlite3',
            # 'ENGINE': 'django.contrib.gis.db.backends.spatialite',
            'NAME': os.path.join(ROOT_DIR, 'data', 'db.dev.sqlite3'),
            'TEST': {
                'NAME': os.path.join(ROOT_DIR, 'data', 'db.test.sqlite3'),
            }
        }
    }

STATIC_ROOT = os.path.join(ROOT_DIR, 'static')

# django-assets
# http://django-assets.readthedocs.org/en/latest/settings.html

ASSETS_LOAD_PATH = STATIC_ROOT
ASSETS_ROOT = os.path.join(ROOT_DIR, 'assets', "compressed")
ASSETS_DEBUG = env('ASSETS_DEBUG', default=DEBUG)  # Disable when testing compressed file in DEBUG mode
if ASSETS_DEBUG:
    ASSETS_URL = STATIC_URL
    ASSETS_MANIFEST = "json:{}".format(os.path.join(ASSETS_ROOT, "manifest.json"))
else:
    ASSETS_URL = STATIC_URL + "assets/compressed/"
    ASSETS_MANIFEST = "json:{}".format(os.path.join(STATIC_ROOT, 'assets', "compressed", "manifest.json"))
ASSETS_AUTO_BUILD = ASSETS_DEBUG
ASSETS_MODULES = ('website.assets',)

The last bit shows the power here. ASSETS_DEBUG has a sensible default, which can be overridden in settings/production.py, and even that can be overridden by an environment setting! Yay!

In effect we have a mixed hierarchy of importance:

  1. settings/<purpose>.py – sets defaults based on purpose, does not store secrets
  2. settings/base.py – is mostly controlled by environment
  3. process environment settings – 12 factor baby!
  4. settings/.env – local defaults for easy startup

回答 6

我在django-split-settings的帮助下管理我的配置。

它是默认设置的替代品。它很简单,但可配置。并且不需要重构您的现有设置。

这是一个小示例(文件example/settings/__init__.py):

from split_settings.tools import optional, include
import os

if os.environ['DJANGO_SETTINGS_MODULE'] == 'example.settings':
    include(
        'components/default.py',
        'components/database.py',
        # This file may be missing:
        optional('local_settings.py'),

        scope=globals()
    )

而已。

更新资料

我写了一篇博客文章,介绍如何使用django-split-settings来管理django的设置。欢迎一读!

I manage my configurations with the help of django-split-settings.

It is a drop-in replacement for the default settings. It is simple, yet configurable. And refactoring of your existing settings is not required.

Here’s a small example (file example/settings/__init__.py):

from split_settings.tools import optional, include
import os

if os.environ['DJANGO_SETTINGS_MODULE'] == 'example.settings':
    include(
        'components/default.py',
        'components/database.py',
        # This file may be missing:
        optional('local_settings.py'),

        scope=globals()
    )

That’s it.

Update

I wrote a blog post about managing django's settings with django-split-settings. Have a look!


回答 7

这些解决方案大多数的问题在于:您的本地设置要么在通用设置之前被应用,要么在通用设置之后被应用。

因此,下面这类事情无法同时做到:

  • 由环境特定的设置定义memcached池的地址,然后在主设置文件中用这个值来配置缓存后端
  • 由环境特定的设置在默认的应用/中间件列表上添加或删除条目

一种解决方案是使用ConfigParser类配合“ini”风格的配置文件来实现。它支持多个文件、惰性字符串插值、默认值和许多其他好用的特性。加载了一批文件之后,还可以继续加载更多文件,后加载的值会覆盖先前的值(如果有)。

您加载一个或多个配置文件,具体取决于机器地址,环境变量甚至以前加载的配置文件中的值。然后,您只需使用解析后的值来填充设置。

我成功使用的一种策略是:

  • 先加载默认的defaults.ini文件
  • 检查机器名称,并加载所有与反向FQDN匹配的文件,从最短匹配到最长匹配(所以我会先加载net.ini,然后是net.domain.ini,再是net.domain.webserver01.ini,后面的文件可能覆盖前面的值)。这也照顾到了开发人员的机器,因此每个人都可以为本地开发设置自己偏好的数据库驱动等
  • 检查是否声明了“集群名称”,如果有,则加载cluster.cluster_name.ini,它可以定义数据库和缓存IP之类的内容

作为可以实现此目标的示例,您可以为每个环境定义一个“子域”值,然后在默认设置(如hostname: %(subdomain).whatever.net)中使用它来定义django需要工作的所有必需的主机名和cookie。

这已经是我能做到的最DRY的程度了,大多数(现有)文件只有3或4个设置项。除此之外,我还必须管理客户配置,因此还有另外一组配置文件(包含数据库名、用户和密码、分配的子域等),每个客户一个或多个。

这种方式可以按需做得很简单或很复杂:您只需把希望按环境配置的键放进配置文件;当需要新的配置项时,把原来的值放进默认配置,再在必要的地方覆盖它即可。

该系统已被证明是可靠的,并且可以与版本控制很好地配合。它被长期用于管理两个相互独立的应用程序集群(每台机器上有15个或更多django站点的独立实例),服务50多个客户,这些集群的规模和成员会随sysadmin的心情而变化……

The problem with most of these solutions is that you either have your local settings applied before the common ones, or after them.

So it’s impossible to override things like

  • the env-specific settings define the addresses for the memcached pool, and in the main settings file this value is used to configure the cache backend
  • the env-specific settings add or remove apps/middleware relative to the default set

at the same time.

One solution can be implemented using “ini”-style config files with the ConfigParser class. It supports multiple files, lazy string interpolation, default values and a lot of other goodies. Once a number of files have been loaded, more files can be loaded and their values will override the previous ones, if any.

You load one or more config files, depending on the machine address, environment variables and even values in previously loaded config files. Then you just use the parsed values to populate the settings.

One strategy I have successfully used has been:

  • Load a default defaults.ini file
  • Check the machine name, and load all files which matched the reversed FQDN, from the shortest match to the longest match (so, I loaded net.ini, then net.domain.ini, then net.domain.webserver01.ini, each one possibly overriding values of the previous). This also accounts for developers’ machines, so each one could set up their preferred database driver, etc. for local development
  • Check if there is a “cluster name” declared, and in that case load cluster.cluster_name.ini, which can define things like database and cache IPs

As an example of something you can achieve with this, you can define a “subdomain” value per-env, which is then used in the default settings (as hostname: %(subdomain)s.whatever.net) to define all the necessary hostnames and cookie things django needs to work.

This is as DRY as I could get, most (existing) files had just 3 or 4 settings. On top of this I had to manage customer configuration, so an additional set of configuration files (with things like database names, users and passwords, assigned subdomain etc) existed, one or more per customer.

One can scale this as low or as high as necessary, you just put in the config file the keys you want to configure per-environment, and once there’s need for a new config, put the previous value in the default config, and override it where necessary.

This system has proven reliable and works well with version control. It has been used for a long time managing two separate clusters of applications (15 or more separate instances of the django site per machine), with more than 50 customers, where the clusters were changing size and members depending on the mood of the sysadmin…
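
To make the strategy concrete, here is a rough sketch of what the loading step could look like; the file names, section name and keys are illustrative assumptions, not the answer's actual code:

import socket
from configparser import ConfigParser

config = ConfigParser()

# Illustrative sketch: always start from the defaults.
files = ['defaults.ini']

# Build candidate names from the reversed FQDN: 'webserver01.domain.net'
# becomes net.ini, net.domain.ini, net.domain.webserver01.ini.
parts = socket.getfqdn().split('.')[::-1]
for i in range(len(parts)):
    files.append('.'.join(parts[:i + 1]) + '.ini')

config.read(files)  # missing files are silently skipped

# Populate Django settings from the parsed values (section/keys assumed).
DEBUG = config.getboolean('django', 'debug', fallback=False)
SUBDOMAIN = config.get('django', 'subdomain', fallback='www')
ALLOWED_HOSTS = ['%s.whatever.net' % SUBDOMAIN]

From there, every Django setting is just a lookup into the parsed configuration, so the settings module itself stays the same across environments.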


回答 8

我也在使用Laravel,而且很喜欢它在这方面的实现。我试着模仿它,并将其与T. Stone提出的解决方案(见上)结合起来:

import re
import socket

PRODUCTION_SERVERS = ['*.webfaction.com','*.whatever.com',]

def check_env():
    # Treat this host as production if its hostname matches any pattern.
    for item in PRODUCTION_SERVERS:
        match = re.match(r"(^." + item + "$)", socket.gethostname())
        if match:
            return True
    return False

if check_env():
    PRODUCTION = True
else:
    PRODUCTION = False

DEBUG = not PRODUCTION

也许这样的事情会帮助您。

I am also working with Laravel and I like the implementation there. I tried to mimic it and combine it with the solution proposed by T. Stone (see above):

import re
import socket

PRODUCTION_SERVERS = ['*.webfaction.com','*.whatever.com',]

def check_env():
    # Treat this host as production if its hostname matches any pattern.
    for item in PRODUCTION_SERVERS:
        match = re.match(r"(^." + item + "$)", socket.gethostname())
        if match:
            return True
    return False

if check_env():
    PRODUCTION = True
else:
    PRODUCTION = False

DEBUG = not PRODUCTION

Maybe something like this would help you.


回答 9

请记住,settings.py是一个真正会被执行的代码文件。假设您没有在生产环境中开启DEBUG(这是最佳做法),则可以执行以下操作:

if DEBUG:
    STATIC_PATH = '/path/to/dev/files'
else:
    STATIC_PATH = '/path/to/production/files'

这很基础,但从理论上讲,您可以仅根据DEBUG的值(或您想使用的任何其他变量或代码检查)把逻辑做得任意复杂。

Remember that settings.py is a live code file. Assuming that you don’t have DEBUG set on production (which is a best practice), you can do something like:

if DEBUG:
    STATIC_PATH = '/path/to/dev/files'
else:
    STATIC_PATH = '/path/to/production/files'

Pretty basic, but you could, in theory, go up to any level of complexity based on just the value of DEBUG – or any other variable or code check you wanted to use.


回答 10

对于我的大多数项目,我使用以下模式:

  1. 创建settings_base.py,在其中存储所有环境通用的设置
  2. 每当需要使用具有特定要求的新环境时,我都会创建一个新的设置文件(例如settings_local.py),该文件将继承settings_base.py的内容并覆盖/添加适当的设置变量(from settings_base import *

(要使用自定义设置文件运行manage.py,只需使用--settings命令选项:manage.py <command> --settings=settings_you_wish_to_use.py

For most of my projects I use following pattern:

  1. Create settings_base.py where I store settings that are common for all environments
  2. Whenever I need to use new environment with specific requirements I create new settings file (eg. settings_local.py) which inherits contents of settings_base.py and overrides/adds proper settings variables (from settings_base import *)

(To run manage.py with a custom settings file you simply use the --settings command option: manage.py <command> --settings=settings_you_wish_to_use.py)
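
A minimal sketch of that pattern might look like this; the file names and the overridden values are illustrative assumptions:

# settings_base.py -- illustrative settings common to all environments
DEBUG = False
ALLOWED_HOSTS = []
INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    # ...
]

# settings_local.py -- inherits everything, then overrides for development
from settings_base import *

DEBUG = True
ALLOWED_HOSTS = ['localhost', '127.0.0.1']

The development server could then be started with something like python manage.py runserver --settings=settings_local (a module path, so no .py extension).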


回答 11

我对这个问题的解决方案在某种程度上也是这里已经提到的一些解决方案的混合:

  • 我保留一个名为local_settings.py的文件,它在dev中的内容是USING_LOCAL = True,在prod中是USING_LOCAL = False
  • 在settings.py中,我导入该文件来获取USING_LOCAL设置

然后,我将所有与环境相关的设置都基于该设置:

DEBUG = USING_LOCAL
if USING_LOCAL:
    # dev database settings
else:
    # prod database settings

与维护两个各自独立的settings.py文件相比,我更喜欢这种方式,因为把设置集中在一个文件里比分散在多个文件里更容易保持条理。这样,当我更新某个设置时,就不会忘记在两个环境中都改到。

当然,每种方法都有其缺点,这种方法也不例外。这里的问题是,每当我把更改推送到生产环境时,都不能覆盖local_settings.py文件,也就是说我不能盲目地复制所有文件,但这是我可以接受的。

My solution to that problem is also somewhat of a mix of some solutions already stated here:

  • I keep a file called local_settings.py that has the content USING_LOCAL = True in dev and USING_LOCAL = False in prod
  • In settings.py I do an import on that file to get the USING_LOCAL setting

I then base all my environment-dependent settings on that one:

DEBUG = USING_LOCAL
if USING_LOCAL:
    # dev database settings
else:
    # prod database settings

I prefer this to having two separate settings.py files that I need to maintain, as I can keep my settings structured in a single file more easily than having them spread across several files. Like this, when I update a setting I don’t forget to do it for both environments.

Of course every method has its disadvantages, and this one is no exception. The problem here is that I can’t overwrite the local_settings.py file whenever I push my changes into production, meaning I can’t just copy all files blindly, but that’s something I can live with.
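
The import at the top of settings.py can be as simple as the following sketch (the ImportError fallback is an assumption for machines where local_settings.py is absent):

# settings.py
try:
    from local_settings import USING_LOCAL
except ImportError:
    # No local_settings.py on this machine: assume production (assumption).
    USING_LOCAL = False

DEBUG = USING_LOCAL

if USING_LOCAL:
    # dev database settings
    pass
else:
    # prod database settings
    pass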


回答 12

我使用了上面提到的jpartogi的一种变体,发现它略短一些:

import platform
from django.core.management import execute_manager 

computername = platform.node()

try:
  settings = __import__(computername + '_settings')
except ImportError: 
  import sys
  sys.stderr.write("Error: Can't find the file '%r_settings.py' in the directory containing %r. It appears you've customized things.\nYou'll have to run django-admin.py, passing it your settings module.\n(If the file local_settings.py does indeed exist, it's causing an ImportError somehow.)\n" % (computername, __file__))
  sys.exit(1)

if __name__ == "__main__":
  execute_manager(settings)

基本上在每台计算机(开发或生产)上,我都有适当的hostname_settings.py文件,该文件会动态加载。

I use a variation of what jpartogi mentioned above, that I find a little shorter:

import platform
from django.core.management import execute_manager 

computername = platform.node()

try:
  settings = __import__(computername + '_settings')
except ImportError: 
  import sys
  sys.stderr.write("Error: Can't find the file '%r_settings.py' in the directory containing %r. It appears you've customized things.\nYou'll have to run django-admin.py, passing it your settings module.\n(If the file local_settings.py does indeed exist, it's causing an ImportError somehow.)\n" % (computername, __file__))
  sys.exit(1)

if __name__ == "__main__":
  execute_manager(settings)

Basically on each computer (development or production) I have the appropriate hostname_settings.py file that gets dynamically loaded.


回答 13

也有Django Classy Settings。我个人是它的忠实拥护者。它是由Django IRC上最活跃的人之一构建的。您将使用环境变量进行设置。

http://django-classy-settings.readthedocs.io/en/latest/

There is also Django Classy Settings. I personally am a big fan of it. It’s built by one of the most active people on the Django IRC. You would use environment vars to set things.

http://django-classy-settings.readthedocs.io/en/latest/


回答 14

1 - 在您的应用程序内创建一个新文件夹,并将其命名为settings。

2 - 现在在其中创建一个新的__init__.py文件,并在里面写入

from .base import *

try:
    from .local import *
except:
    pass

try:
    from .production import *
except:
    pass

3 - 在settings文件夹中创建三个新文件:local.py、production.py和base.py。

4 - 将原先settings.py文件的所有内容复制到base.py中,并把旧文件重命名为其他名称,比如old_settings.py。

5 - 在base.py中,修改BASE_DIR路径,使其指向新的settings目录层级

旧路径-> BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))

新路径-> BASE_DIR = os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

这样,项目目录既有清晰的结构,又便于在生产和本地开发之间管理。

1 – Create a new folder inside your app and name it settings.

2 – Now create a new __init__.py file in it and inside it write

from .base import *

try:
    from .local import *
except:
    pass

try:
    from .production import *
except:
    pass

3 – Create three new files in the settings folder: local.py, production.py and base.py.

4 – Copy all the content of your previous settings.py file into base.py, and rename the old file to something different, let’s say old_settings.py.

5 – In base.py, change your BASE_DIR path to point to your new settings location

Old path-> BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))

New path -> BASE_DIR = os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

This way, the project directory stays structured and manageable between production and local development.
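
For illustration, local.py and production.py would then hold only the environment-specific overrides, along the lines of this sketch (the exact values are assumptions):

# settings/local.py -- illustrative development overrides
DEBUG = True
ALLOWED_HOSTS = ['localhost', '127.0.0.1']

# settings/production.py -- illustrative production overrides
DEBUG = False
ALLOWED_HOSTS = ['www.example.com']
SESSION_COOKIE_SECURE = True
CSRF_COOKIE_SECURE = True

With the __init__.py shown above, both files are imported if present and production.py is applied last, so in practice you would typically deploy only the file that matches the environment (or keep local.py out of version control).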


回答 15

为了在不同的环境上使用不同的settings配置,请为每个环境创建不同的设置文件。然后,在部署脚本中,使用--settings=<my-settings.py>参数启动服务器,这样就可以在不同的环境上使用不同的设置。

使用这种方法的好处

  1. 您的设置将根据每个环境进行模块化

  2. 您可以在environment_configuration.py中导入包含基本配置的master_settings.py,并覆盖该环境中需要更改的值。

  3. 如果您的团队很庞大,每个开发人员都可以有自己的local_settings.py,而不用担心改动服务器配置。如果使用git,可以把这些本地设置文件加入.gitignore;如果使用Mercurial做版本控制,则加入.hgignore(其他工具同理)。这样,本地设置甚至不会成为实际代码库的一部分,代码库也因此保持干净。

In order to use a different settings configuration for each environment, create a separate settings file for each. Then, in your deployment script, start the server using the --settings=<my-settings.py> parameter, so that different settings are used in each environment.

Benefits of using this approach:

  1. Your settings will be modular based on each environment

  2. You may import master_settings.py, containing the base configuration, into environment_configuration.py and override the values that you want to change in that environment.

  3. If you have a huge team, each developer may have their own local_settings.py which they can keep alongside the code without any risk of modifying the server configuration. You can add these local settings to .gitignore if you use git, or .hgignore if you use Mercurial for version control (or any other). That way local settings won’t even be part of the actual code base, keeping it clean.


回答 16

我的设置如下拆分

settings/
     |
     |- base.py
     |- dev.py
     |- prod.py  

我们有3个环境

  • 开发者
  • 分期
  • 生产

显然,预发布(staging)和生产环境应该尽可能相似,所以两者都使用prod.py。

但有一种情况是,我必须判断当前运行的服务器是否为生产服务器。@T. Stone的答案帮助我写出了如下检查。

from socket import gethostname, gethostbyname  
PROD_HOSTS = ["webserver1", "webserver2"]

DEBUG = False
ALLOWED_HOSTS = [gethostname(), gethostbyname(gethostname()),]


if any(host in PROD_HOSTS for host in ALLOWED_HOSTS):
    SESSION_COOKIE_SECURE = True
    CSRF_COOKIE_SECURE = True  

I had my settings split as follows

settings/
     |
     |- base.py
     |- dev.py
     |- prod.py  

We have 3 environments

  • dev
  • staging
  • production

Now obviously staging and production should have environments that are as similar as possible, so we kept prod.py for both.

But there was a case where I had to identify whether the running server was a production server. @T. Stone’s answer helped me write the check as follows.

from socket import gethostname, gethostbyname  
PROD_HOSTS = ["webserver1", "webserver2"]

DEBUG = False
ALLOWED_HOSTS = [gethostname(), gethostbyname(gethostname()),]


if any(host in PROD_HOSTS for host in ALLOWED_HOSTS):
    SESSION_COOKIE_SECURE = True
    CSRF_COOKIE_SECURE = True  

回答 17

我在manage.py中对其进行区分,并创建了两个单独的设置文件:local_settings.py和prod_settings.py。

在manage.py中,我检查服务器是本地服务器还是生产服务器。如果是本地服务器,则将加载local_settings.py,如果是生产服务器,则将加载prod_settings.py。基本上是这样的:

#!/usr/bin/env python
import sys
import socket
from django.core.management import execute_manager 

ipaddress = socket.gethostbyname( socket.gethostname() )
if ipaddress == '127.0.0.1':
    try:
        import local_settings # Assumed to be in the same directory.
        settings = local_settings
    except ImportError:
        import sys
        sys.stderr.write("Error: Can't find the file 'local_settings.py' in the directory containing %r. It appears you've customized things.\nYou'll have to run django-admin.py, passing it your settings module.\n(If the file local_settings.py does indeed exist, it's causing an ImportError somehow.)\n" % __file__)
        sys.exit(1)
else:
    try:
        import prod_settings # Assumed to be in the same directory.
        settings = prod_settings    
    except ImportError:
        import sys
        sys.stderr.write("Error: Can't find the file 'prod_settings.py' in the directory containing %r. It appears you've customized things.\nYou'll have to run django-admin.py, passing it your settings module.\n(If the file prod_settings.py does indeed exist, it's causing an ImportError somehow.)\n" % __file__)
        sys.exit(1)

if __name__ == "__main__":
    execute_manager(settings)

我发现将设置文件分为两个单独的文件比在设置文件中进行大量的ifs更为容易。

I differentiate it in manage.py and created two separate settings file: local_settings.py and prod_settings.py.

In manage.py I check whether the server is a local server or a production server. If it is a local server it will load local_settings.py, and if it is a production server it will load prod_settings.py. Basically this is how it looks:

#!/usr/bin/env python
import sys
import socket
from django.core.management import execute_manager 

ipaddress = socket.gethostbyname( socket.gethostname() )
if ipaddress == '127.0.0.1':
    try:
        import local_settings # Assumed to be in the same directory.
        settings = local_settings
    except ImportError:
        import sys
        sys.stderr.write("Error: Can't find the file 'local_settings.py' in the directory containing %r. It appears you've customized things.\nYou'll have to run django-admin.py, passing it your settings module.\n(If the file local_settings.py does indeed exist, it's causing an ImportError somehow.)\n" % __file__)
        sys.exit(1)
else:
    try:
        import prod_settings # Assumed to be in the same directory.
        settings = prod_settings    
    except ImportError:
        import sys
        sys.stderr.write("Error: Can't find the file 'prod_settings.py' in the directory containing %r. It appears you've customized things.\nYou'll have to run django-admin.py, passing it your settings module.\n(If the file prod_settings.py does indeed exist, it's causing an ImportError somehow.)\n" % __file__)
        sys.exit(1)

if __name__ == "__main__":
    execute_manager(settings)

I found it easier to separate the settings into two separate files instead of doing lots of ifs inside one settings file.


回答 18

作为维护不同文件的一种替代方案(如果您愿意的话):如果使用git或任何其他VCS将代码从本地推送到服务器,则可以把设置文件添加到.gitignore。

这样您就可以在两个地方都拥有不同的内容,而不会出现任何问题。因此,在服务器上,您可以配置独立版本的settings.py,对本地所做的任何更改都不会反映在服务器上,反之亦然。

另外,这样还能避免把settings.py文件提交到GitHub;把它提交上去是个大错误,我见过很多新手这么做。

As an alternative to maintaining different files, if you will: if you are using git or any other VCS to push code from local to the server, you can add the settings file to .gitignore.

This will allow you to have different content in both places without any problem. So on the server you can configure an independent version of settings.py, and any changes made locally won’t be reflected on the server, and vice versa.

In addition, it keeps the settings.py file off GitHub as well; committing it is a big mistake that I have seen many newbies make.


回答 19

制作settings.py的多个版本是12 Factor App方法论中的反模式。请改用python-decouple或django-environ。

Making multiple versions of settings.py is an anti-pattern under the 12 Factor App methodology. Use python-decouple or django-environ instead.
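
For a flavour of what that looks like, a settings.py using python-decouple might read its configuration roughly as follows; the variable names and defaults are illustrative assumptions:

from decouple import config, Csv

# Illustrative sketch: values come from environment variables or a local
# .env file, so the same settings.py works unchanged in every environment.
SECRET_KEY = config('SECRET_KEY')
DEBUG = config('DEBUG', default=False, cast=bool)
ALLOWED_HOSTS = config('ALLOWED_HOSTS', default='localhost', cast=Csv())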


回答 20

我认为最好的解决方案是@T. Stone建议的那个,但我不明白为什么不直接利用Django的DEBUG标志。我为我的网站编写了以下代码:

if DEBUG:
    from .local_settings import *

简单的解决方案总是比复杂的好。

I think the best solution is the one suggested by @T. Stone, but I don’t see why we shouldn’t just use the DEBUG flag in Django. I wrote the code below for my website:

if DEBUG:
    from .local_settings import *

Simple solutions are always better than complex ones.


回答 21

我发现这里的回复非常有帮助。(是否已更明确地解决了此问题?上次答复是一年前。)在考虑了列出的所有方法之后,我想出了一个我未在此处列出的解决方案。

我的标准是:

  • 一切都应该纳入源代码管理。我不喜欢零碎的东西散落在外面。
  • 理想情况下,把设置保存在一个文件里。不在眼前的东西我会忘记 :)
  • 部署时无需手动编辑。应该能用一条fabric命令完成测试/推送/部署。
  • 避免将开发设置泄漏到生产中。
  • 保持尽可能接近“标准”(*咳嗽*)Django布局。

我本以为根据主机来切换是合理的,但后来意识到这里真正的问题是不同环境需要不同的设置,于是恍然大悟。我把下面这段代码放在settings.py文件的末尾:

try:
    os.environ['DJANGO_DEVELOPMENT_SERVER'] # throws error if unset
    DEBUG = True
    TEMPLATE_DEBUG = True
    # This is naive but possible. Could also redeclare full app set to control ordering. 
    # Note that it requires a list rather than the generated tuple.
    INSTALLED_APPS.extend([
        'debug_toolbar',
        'django_nose',
    ])
    # Production database settings, alternate static/media paths, etc...
except KeyError: 
    print 'DJANGO_DEVELOPMENT_SERVER environment var not set; using production settings'

这样,应用默认使用生产设置,也就是说您是在显式地把开发环境"列入白名单"。忘记在本地设置这个环境变量,比反过来忘记在生产环境上做某项设置、结果让某些开发配置被使用,要安全得多。

在本地开发时,在shell中、.bash_profile里或其他任何地方执行:

$ export DJANGO_DEVELOPMENT_SERVER=yep

(或者,如果您在Windows上开发,则通过"控制面板"(或者它如今叫什么的那个东西)来设置……Windows总是把设置环境变量这件事弄得很隐蔽。)

使用这种方法,开发设置全部集中在一个(标准)位置,只在需要之处覆盖生产设置。对开发设置的任何改动都可以完全放心地提交到源代码管理,而不会影响生产环境。

I found the responses here very helpful. (Has this been more definitively solved? The last response was a year ago.) After considering all the approaches listed, I came up with a solution that I didn’t see listed here.

My criteria were:

  • Everything should be in source control. I don’t like fiddly bits lying around.
  • Ideally, keep settings in one file. I forget things if I’m not looking right at them :)
  • No manual edits to deploy. Should be able to test/push/deploy with a single fabric command.
  • Avoid leaking development settings into production.
  • Keep as close as possible to the “standard” (*cough*) Django layout.

I thought switching on the host machine made some sense, but then figured the real issue here is different settings for different environments, and had an aha moment. I put this code at the end of my settings.py file:

try:
    os.environ['DJANGO_DEVELOPMENT_SERVER'] # throws error if unset
    DEBUG = True
    TEMPLATE_DEBUG = True
    # This is naive but possible. Could also redeclare full app set to control ordering. 
    # Note that it requires a list rather than the generated tuple.
    INSTALLED_APPS.extend([
        'debug_toolbar',
        'django_nose',
    ])
    # Production database settings, alternate static/media paths, etc...
except KeyError: 
    print 'DJANGO_DEVELOPMENT_SERVER environment var not set; using production settings'

This way, the app defaults to production settings, which means you are explicitly “whitelisting” your development environment. It is much safer to forget to set the environment variable locally than if it were the other way around and you forgot to set something in production and let some dev settings be used.

When developing locally, either from the shell or in a .bash_profile or wherever:

$ export DJANGO_DEVELOPMENT_SERVER=yep

(Or if you’re developing on Windows, set it via the Control Panel or whatever it’s called these days… Windows always made it so obscure to set environment variables.)

With this approach, the dev settings are all in one (standard) place, and simply override the production ones where needed. Any mucking around with development settings should be completely safe to commit to source control with no impact on production.


Caprover 可扩展的PaaS(自动Docker+nginx)

CapRover

适用于NodeJS、Python、PHP、Ruby、Go应用程序的最简单的应用程序/数据库部署平台和Web服务器软件包

不需要Docker或nginx知识!



这是什么?

CapRover是一款极其易于使用的应用程序/数据库部署和Web服务器管理器,适用于NodeJS、Python、PHP、ASP.NET、Ruby、MySQL、MongoDB、Postgres、WordPress等等应用程序!

它的速度非常快,而且非常健壮,因为它在其简单易用的界面背后使用了Docker、nginx、LetsEncrypt和NetData

✔用于自动化和脚本编写的CLI

✔便于访问和方便的Web GUI

✔没有供应商锁定!删除CapRover,您的应用程序仍会继续工作!

✔底层使用Docker Swarm实现容器化和集群化

✔底层使用Nginx(模板完全可定制)实现负载均衡

✔底层使用Let's Encrypt提供免费的SSL(HTTPS)

我是认真的!谁应该关心CapRover?

  • 不喜欢花费数小时和数天时间设置服务器、构建工具、向服务器发送代码、构建服务器、获取SSL证书、安装证书、反复更新nginx的[web]开发人员
  • 正在使用Heroku、Microsoft Azure等昂贵服务,并希望把成本降到原来四分之一的开发人员(Heroku的1 GB实例每月收费25美元,而同样的服务器在Vultr上只要5美元!)
  • 更喜欢写showResults(getUserList())这样的代码、而不是敲$ apt-get install libstdc++6 > /dev/null这类命令的开发人员
  • 希望通过从下拉菜单中选择并点击Install就能在服务器上安装MySQL、MongoDB等的开发人员
  • 设置一台CapRover服务器需要多少服务器/Docker/Linux知识?答:会复制粘贴就够了!!要复制粘贴什么,请参阅"入门"指南 ;-)

了解更多信息!

有关更多详细信息和文档,请访问https://CapRover.com/

贡献者

这个项目的存在要归功于所有做出贡献的人。[Contribute]

支持者

感谢我们所有的支持者!🙏

Ray 一个开放源码框架,为构建分布式应用程序提供简单、通用的API

Ray为构建分布式应用程序提供简单、通用的API。Ray与RLlib(一个可伸缩的强化学习库)和Tune(一个可伸缩的超参数调优库)打包在一起。

Ray附带以下库,用于加速机器学习工作负载:

  • Tune:可伸缩的超参数调整
  • RLlib:可扩展强化学习
  • RaySGD:分布式训练包装器
  • Ray Serve:可扩展、可编程的服务

Ray还有很多community integrations,包括Dask、MARS、Modin、Horovod、Hugging Face、Scikit-learn等。请查看full list of Ray distributed libraries here。

使用pip install ray安装Ray。有关nightly wheels的信息,请参阅Installation page。

快速入门

并行执行Python函数

import ray
ray.init()

@ray.remote
def f(x):
    return x * x

futures = [f.remote(i) for i in range(4)]
print(ray.get(futures))

要使用Ray的Actor模型,请执行以下操作:

import ray
ray.init()

@ray.remote
class Counter(object):
    def __init__(self):
        self.n = 0

    def increment(self):
        self.n += 1

    def read(self):
        return self.n

counters = [Counter.remote() for i in range(4)]
[c.increment.remote() for c in counters]
futures = [c.read.remote() for c in counters]
print(ray.get(futures))

Ray程序可以在一台计算机上运行,也可以无缝扩展到大型群集。要在云中执行上述Ray脚本,只需下载this configuration file,然后运行:

ray submit [CLUSTER.YAML] example.py --start

阅读有关以下内容的更多信息launching clusters

Tune快速入门

Tune是一个用于任何规模的超参数调优的库

要运行此示例,您需要安装以下软件:

$ pip install "ray[tune]"

此示例运行并行网格搜索来优化一个示例目标函数

from ray import tune


def objective(step, alpha, beta):
    return (0.1 + alpha * step / 100)**(-1) + beta * 0.1


def training_function(config):
    # Hyperparameters
    alpha, beta = config["alpha"], config["beta"]
    for step in range(10):
        # Iterative training function - can be any arbitrary training procedure.
        intermediate_score = objective(step, alpha, beta)
        # Feed the score back to Tune.
        tune.report(mean_loss=intermediate_score)


analysis = tune.run(
    training_function,
    config={
        "alpha": tune.grid_search([0.001, 0.01, 0.1]),
        "beta": tune.choice([1, 2, 3])
    })

print("Best config: ", analysis.get_best_config(metric="mean_loss", mode="min"))

# Get a dataframe for analyzing trial results.
df = analysis.results_df

如果安装了TensorBoard,则自动可视化所有试验结果:

tensorboard --logdir ~/ray_results

RLlib快速入门

RLlib是构建在Ray之上的用于强化学习的开源库,它为各种应用程序提供了高可伸缩性和统一的API

pip install tensorflow  # or tensorflow-gpu
pip install "ray[rllib]"
import gym
from gym.spaces import Discrete, Box
from ray import tune

class SimpleCorridor(gym.Env):
    def __init__(self, config):
        self.end_pos = config["corridor_length"]
        self.cur_pos = 0
        self.action_space = Discrete(2)
        self.observation_space = Box(0.0, self.end_pos, shape=(1, ))

    def reset(self):
        self.cur_pos = 0
        return [self.cur_pos]

    def step(self, action):
        if action == 0 and self.cur_pos > 0:
            self.cur_pos -= 1
        elif action == 1:
            self.cur_pos += 1
        done = self.cur_pos >= self.end_pos
        return [self.cur_pos], 1 if done else 0, done, {}

tune.run(
    "PPO",
    config={
        "env": SimpleCorridor,
        "num_workers": 4,
        "env_config": {"corridor_length": 5}})

Ray Serve快速入门

Ray Serve是一个构建在Ray之上的可伸缩的模型服务库。它是:

  • 框架不可知:使用同一套工具包提供各种模型服务,从使用PyTorch或TensorFlow & Keras等框架构建的深度学习模型,到Scikit-Learn模型或任意业务逻辑
  • Python优先:在纯Python中配置声明性服务的模型,不需要YAML或JSON配置
  • 以性能为导向:启用批处理、流水线和GPU加速以提高模型的吞吐量
  • 原生组合:允许您把多个模型组合起来共同驱动单次预测,从而创建"模型管道"
  • 水平可扩展:随着您添加更多的机器,Serve可以线性扩展。使您的ML支持的服务能够处理不断增长的流量

要运行此示例,您需要安装以下软件:

$ pip install scikit-learn
$ pip install "ray[serve]"

此示例运行并部署一个scikit-learn梯度提升分类器服务

from ray import serve
import pickle
import requests
from sklearn.datasets import load_iris
from sklearn.ensemble import GradientBoostingClassifier

# Train model
iris_dataset = load_iris()
model = GradientBoostingClassifier()
model.fit(iris_dataset["data"], iris_dataset["target"])

# Define Ray Serve model,
class BoostingModel:
    def __init__(self):
        self.model = model
        self.label_list = iris_dataset["target_names"].tolist()

    def __call__(self, flask_request):
        payload = flask_request.json["vector"]
        print("Worker: received flask request with data", payload)

        prediction = self.model.predict([payload])[0]
        human_name = self.label_list[prediction]
        return {"result": human_name}


# Deploy model
client = serve.start()
client.create_backend("iris:v1", BoostingModel)
client.create_endpoint("iris_classifier", backend="iris:v1", route="/iris")

# Query it!
sample_request_input = {"vector": [1.2, 1.0, 1.1, 0.9]}
response = requests.get("http://localhost:8000/iris", json=sample_request_input)
print(response.text)
# Result:
# {
#  "result": "versicolor"
# }

更多信息

较旧的文档:

参与其中