How to set target hosts in a Fabric file

Question: How to set target hosts in a Fabric file

I want to use Fabric to deploy my web app code to development, staging and production servers. My fabfile:

from fabric.api import env

def deploy_2_dev():
  deploy('dev')

def deploy_2_staging():
  deploy('staging')

def deploy_2_prod():
  deploy('prod')

def deploy(server):
  print 'env.hosts:', env.hosts
  env.hosts = [server]
  print 'env.hosts:', env.hosts

Sample output:

host:folder user$ fab deploy_2_dev
env.hosts: []
env.hosts: ['dev']
No hosts found. Please specify (single) host string for connection:

When I create a set_hosts() task as shown in the Fabric docs, env.hosts is set properly. However, this is not a viable option, and neither is a decorator. Passing hosts on the command line would ultimately result in some kind of shell script that calls the fabfile; I would prefer to have one single tool do the job properly.

It says in the Fabric docs that ‘env.hosts is simply a Python list object’. From my observations, this is simply not true.

Can anyone explain what is going on here? How can I set the host to deploy to?


Answer 0

I do this by declaring an actual function for each environment. For example:

from fabric.api import env

def test():
    env.user = 'testuser'
    env.hosts = ['test.server.com']

def prod():
    env.user = 'produser'
    env.hosts = ['prod.server.com']

def deploy():
    ...

Using the above functions, I would type the following to deploy to my test environment:

fab test deploy

…and the following to deploy to production:

fab prod deploy

The nice thing about doing it this way is that the test and prod functions can be used before any fab function, not just deploy. It is incredibly useful.
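
The deploy() body is elided above; purely as an illustration, a minimal hypothetical version that works with these environment tasks might be (the checkout path and service name are made up):

from fabric.api import run, sudo

def deploy():
    # Runs once per host in env.hosts, connecting as env.user.
    run('git -C /srv/myapp pull')     # hypothetical checkout location
    sudo('service myapp restart')     # hypothetical service name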


Answer 1

Use roledefs

from fabric.api import env, run

env.roledefs = {
    'test': ['localhost'],
    'dev': ['user@dev.example.com'],
    'staging': ['user@staging.example.com'],
    'production': ['user@production.example.com']
} 

def deploy():
    run('echo test')

Choose role with -R:

$ fab -R test deploy
[localhost] Executing task 'deploy'
...
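
The same roledefs can also be bound to a task with the @roles decorator instead of passing -R on the command line; a minimal sketch:

from fabric.api import env, roles, run

env.roledefs = {'test': ['localhost']}

@roles('test')
def deploy():
    run('echo test')

With the decorator in place, a plain fab deploy runs against the hosts of the 'test' role.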

Answer 2

Here’s a simpler version of serverhorror’s answer:

from fabric.api import run, settings

def mystuff():
    with settings(host_string='192.0.2.78'):
        run("hostname -f")
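
Invoking it is just fab mystuff; because settings() is a context manager, the host_string override only applies to the commands inside the with block, and env is restored afterwards.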

Answer 3

Was stuck on this myself, but finally figured it out. You simply can't set the env.hosts configuration from within a task. Each task is executed N times, once for each host specified, so the setting is fundamentally outside of task scope.

Looking at your code above, you could simply do this:

from fabric.api import hosts

@hosts('dev')
def deploy_dev():
    deploy()

@hosts('staging')
def deploy_staging():
    deploy()

def deploy():
    # do stuff...
    pass

Which seems like it would do what you’re intending.

Or you can write some custom code in the global scope that parses the arguments manually, and sets env.hosts before your task function is defined. For a few reasons, that’s actually how I’ve set mine up.
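
The answer doesn't show that global-scope code, so here is a rough sketch of the idea, with one substitution: the target environment is chosen through an OS environment variable rather than by parsing fab's own arguments (a bare environment name on the fab command line would otherwise be interpreted as a task name). The DEPLOY_ENV variable and the host names are made up:

import os
from fabric.api import env, run

# Hypothetical mapping of environment names to host lists.
HOSTS = {
    'dev': ['dev.example.com'],
    'staging': ['staging.example.com'],
    'prod': ['prod.example.com'],
}

# Module-level code runs when fab imports the fabfile, i.e. before any task
# executes, so env.hosts is already populated when the host loop starts.
env.hosts = HOSTS.get(os.environ.get('DEPLOY_ENV', 'dev'), [])

def deploy():
    run('uname -s')

Invocation would then look like DEPLOY_ENV=staging fab deploy.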


Answer 4

Since fab 1.5 this is a documented way to dynamically set hosts.

http://docs.fabfile.org/en/1.7/usage/execution.html#dynamic-hosts

Quote from the doc below.

Using execute with dynamically-set host lists

A common intermediate-to-advanced use case for Fabric is to parameterize lookup of one’s target host list at runtime (when use of Roles does not suffice). execute can make this extremely simple, like so:

from fabric.api import run, execute, task

# For example, code talking to an HTTP API, or a database, or ...
from mylib import external_datastore

# This is the actual algorithm involved. It does not care about host
# lists at all.
def do_work():
    run("something interesting on a host")

# This is the user-facing task invoked on the command line.
@task
def deploy(lookup_param):
    # This is the magic you don't get with @hosts or @roles.
    # Even lazy-loading roles require you to declare available roles
    # beforehand. Here, the sky is the limit.
    host_list = external_datastore.query(lookup_param)
    # Put this dynamically generated host list together with the work to be
    # done.
    execute(do_work, hosts=host_list)
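
With a task written this way, the lookup key is passed as a task argument on the command line; assuming external_datastore tags hosts with labels such as 'app', the invocation would look something like:

$ fab deploy:app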

Answer 5

Contrary to some other answers, it is possible to modify the env environment variables within a task. However, this env will only be used for subsequent tasks executed using the fabric.tasks.execute function.

from fabric.api import task, roles, run, env
from fabric.tasks import execute

# Not a task, plain old Python to dynamically retrieve list of hosts
def get_stressors():
    hosts = []
    # logic ...
    return hosts

@task
def stress_test():
    # 1) Dynamically generate hosts/roles
    stressors = get_stressors()
    env.roledefs['stressors'] = map(lambda x: x.public_ip, stressors)

    # 2) Wrap sub-tasks you want to execute on new env in execute(...)
    execute(stress)

    # 3) Note that sub-tasks not nested in execute(...) will use original env
    clean_up()

@roles('stressors')
def stress():
    # this function will see any changes to env, as it was wrapped in execute(..)
    run('echo "Running stress test..."')
    # ...

@task
def clean_up():
    # this task will NOT see any dynamic changes to env
    pass

Without wrapping sub-tasks in execute(...), your module-level env settings or whatever is passed from the fab CLI will be used.


Answer 6

You need to set host_string; an example would be:

from fabric.api import run
from fabric.context_managers import settings as _settings

def _get_hardware_node(virtualized):
    return "localhost"

def mystuff(virtualized):
    real_host = _get_hardware_node(virtualized)
    with _settings(
        host_string=real_host):
        run("echo I run on the host %s :: `hostname -f`" % (real_host, ))

Answer 7

To explain why it's even an issue: the fab command uses the Fabric library to run tasks over a host list. If you try to change the host list inside a task, you are essentially attempting to change a list while iterating over it. And in the case where you have no hosts defined, you loop over an empty list, so the code that would set the list to loop over is never executed.

The use of env.host_string is a workaround for this behavior only in that it specifies directly to the functions which host to connect to. It causes some issues of its own, in that you end up rebuilding the execution loop yourself if you want a number of hosts to execute on.

The simplest way people make it possible to set hosts at run time is to keep the env-populating step as a distinct task that sets up all the host strings, users, and so on; then they run the deploy task. It looks like this:

fab production deploy

or

fab staging deploy

Here staging and production are like the tasks you have written, except that they do not call the next task themselves. The reason it has to work like this is that the first task has to finish and break out of its loop over hosts (with no hosts set it is effectively a loop of one at that point), and only then does fab run the loop over the hosts, now defined by the preceding task, anew.
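
In fabfile terms that separation is the same pattern as in Answer 0; a minimal sketch (the host names are placeholders):

from fabric.api import env, run

def staging():
    env.hosts = ['staging.example.com']    # placeholder host

def production():
    env.hosts = ['prod.example.com']       # placeholder host

def deploy():
    run('uname -s')

Running fab staging deploy executes staging exactly once (it has no hosts of its own), and only then does fab start the per-host loop for deploy, using the hosts that staging just set.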


Answer 8

You need to modify env.hosts at the module level, not within a task function. I made the same mistake.

from fabric.api import *

def _get_hosts():
    hosts = []
    # ... populate 'hosts' list ...
    return hosts

env.hosts = _get_hosts()

def your_task():
    # ... your task ...
    pass

Answer 9

It’s very simple. Just initialize the env.host_string variable and all of the following commands will be executed on this host.

from fabric.api import env, run

env.host_string = 'user@example.com'

def foo():
    run("hostname -f")

Answer 10

I'm totally new to Fabric, but to get it to run the same commands on multiple hosts (e.g. to deploy to multiple servers in one command) you can run:

fab -H staging-server,production-server deploy 

where staging-server and production-server are two servers you want to run the deploy action against. Here's a simple fabfile.py that will display the OS name. Note that fabfile.py should be in the same directory as the one in which you run the fab command.

from fabric.api import *

def deploy():
    run('uname -s')

This works with Fabric 1.8.1 at least.


Answer 11

So, in order to set the hosts and have the commands run across all of them, you have to start with:

from fabric.api import env, sudo

def PROD():
    env.hosts = ['10.0.0.1', '10.0.0.2']

def deploy(version='0.0'):
    sudo('deploy %s' % version)

Once those are defined, then run the command on the command line:

fab PROD deploy:1.5

This will run the deploy task across all of the servers listed in the PROD function, because PROD sets env.hosts before the task runs.


Answer 12

You can assign to env.host_string before executing a subtask. Assign to this global variable in a loop if you want to iterate over multiple hosts, as sketched below.

Unfortunately for you and me, Fabric is not designed for this use case. Check out the main function at http://github.com/bitprophet/fabric/blob/master/fabric/main.py to see how it works.
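
A minimal sketch of that loop, using made-up host names and a hypothetical provision() subtask:

from fabric.api import env, run

HOSTS = ['web1.example.com', 'web2.example.com']   # made-up hosts

def provision():
    # Runs against whichever host env.host_string currently points at.
    run('uname -s')

def provision_all():
    for host in HOSTS:
        env.host_string = host   # point Fabric at the next host
        provision()

Invoked as fab provision_all, the loop connects to each host in turn without fab's own host list being involved.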


Answer 13

Here’s another “summersault” pattern that enables the fab my_env_1 my_command usage:

With this pattern, we only have to define the environments once, using a dictionary. _env_factory creates functions based on the key names of ENVS. I put ENVS in its own directory and file, secrets.config.py, to separate configuration from the Fabric code.

The drawback is that, as written, adding the @task decorator will break it.

Note: we use def func(k=k): instead of def func(): in the factory because of Python's late binding of closures. We look up the running module via sys.modules and patch the generated functions onto it.

secrets.config.py

ENVS = {
    'my_env_1': {
        'HOSTS': [
            'host_1',
            'host_2',
        ],
        'MY_OTHER_SETTING': 'value_1',
    },
    'my_env_2': {
        'HOSTS': ['host_3'],
        'MY_OTHER_SETTING': 'value_2'
    }
}

fabfile.py

import sys
from fabric.api import env
from secrets import config


def _set_env(env_name):
    # can easily customize for various use cases
    selected_config = config.ENVS[env_name]
    for k, v in selected_config.items():
        setattr(env, k, v)


def _env_factory(env_dict):
    for k in env_dict:
        def func(k=k):
            _set_env(k)
        setattr(sys.modules[__name__], k, func)


_env_factory(config.ENVS)

def my_command():
    # do work
    pass

Answer 14

Using roles is currently considered to be the "proper" and "correct" way of doing this, and is how you "should" do it.

That said, if you are like most people, what you "would like" or "desire" is the ability to perform a "twisted syster" and switch target systems on the fly.

So, for entertainment purposes only (!), the following example illustrates what many might consider to be a risky, and yet somehow thoroughly satisfying, manoeuvre that goes something like this:

from fabric.api import env, run

env.remote_hosts       = env.hosts = ['10.0.1.6']
env.remote_user        = env.user = 'bob'
env.remote_password    = env.password = 'password1'
env.remote_host_string = env.host_string

env.local_hosts        = ['127.0.0.1']
env.local_user         = 'mark'
env.local_password     = 'password2'

def perform_sumersault():
    env_local_host_string = env.host_string = env.local_user + '@' + env.local_hosts[0]
    env.password = env.local_password
    run("hostname -f")
    env.host_string = env.remote_host_string
    env.password = env.remote_password
    run("hostname -f")

Then running:

fab perform_sumersault