https://github.com/ray-project/ray/raw/master/doc/source/images/ray_header_logo.png

Ray provides a simple, universal API for building distributed applications. Ray is packaged with RLlib, a scalable reinforcement learning library, and Tune, a scalable hyperparameter tuning library.

Ray comes with the following libraries for accelerating machine learning workloads:

  • Tune: Scalable hyperparameter tuning
  • RLlib: Scalable reinforcement learning
  • RaySGD: Distributed training wrappers
  • Ray Serve: Scalable and programmable serving

There are also many community integrations with Ray, including Dask, MARS, Modin, Horovod, Hugging Face, scikit-learn, and others. Check out the full list of Ray distributed libraries here.
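For a flavor of how these integrations sit on top of Ray, here is a minimal sketch using Modin, assuming it has been installed with Ray support (for example via pip install "modin[ray]"); Modin exposes a pandas-compatible API whose operations run on Ray workers:

import ray
import modin.pandas as pd  # assumes Modin is installed with the Ray engine

ray.init()

# Same API as pandas, but the computation is distributed over Ray.
df = pd.DataFrame({"x": range(1000), "y": range(1000)})
print(df.describe())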

Install Ray with: pip install ray. For nightly wheels, see the Installation page.

Quick Start

Execute Python functions in parallel.

import ray
ray.init()

@ray.remote
def f(x):
    return x * x

futures = [f.remote(i) for i in range(4)]
print(ray.get(futures))
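Remote calls return object refs (futures) that can themselves be passed to other remote functions, so tasks compose into a graph without blocking on intermediate results. A minimal sketch continuing the example above (the add task is illustrative, not part of the original snippet):

# Object refs returned by f.remote() can be passed to other tasks;
# Ray resolves them to their values before the downstream task runs.
@ray.remote
def add(a, b):
    return a + b

x_ref = f.remote(2)  # future for 2 * 2
y_ref = f.remote(3)  # future for 3 * 3
print(ray.get(add.remote(x_ref, y_ref)))  # 13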

To use Ray's actor model:

import ray
ray.init()

@ray.remote
class Counter(object):
    def __init__(self):
        self.n = 0

    def increment(self):
        self.n += 1

    def read(self):
        return self.n

counters = [Counter.remote() for i in range(4)]
[c.increment.remote() for c in counters]
futures = [c.read.remote() for c in counters]
print(ray.get(futures))
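Each counter above is incremented exactly once, so the script prints [1, 1, 1, 1]. Method calls on the same actor run one at a time in submission order, so its state accumulates across calls; a minimal sketch to illustrate:

# Calls to the same actor execute serially on its process, so the
# counter's state persists between calls.
counter = Counter.remote()
[counter.increment.remote() for _ in range(5)]
print(ray.get(counter.read.remote()))  # 5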

Ray programs can run on a single machine, and can also seamlessly scale to large clusters. To execute the above Ray script in the cloud, just download this configuration file, and run:

ray submit [CLUSTER.YAML] example.py --start

Read more about launching clusters.

Tune Quick Start

https://github.com/ray-project/ray/raw/master/doc/source/images/tune-wide.png

Tune is a library for hyperparameter tuning at any scale.

To run this example, you will need to install the following:

$ pip install "ray[tune]"

This example runs a parallel grid search to optimize an example objective function.

from ray import tune


def objective(step, alpha, beta):
    return (0.1 + alpha * step / 100)**(-1) + beta * 0.1


def training_function(config):
    # Hyperparameters
    alpha, beta = config["alpha"], config["beta"]
    for step in range(10):
        # Iterative training function - can be any arbitrary training procedure.
        intermediate_score = objective(step, alpha, beta)
        # Feed the score back to Tune.
        tune.report(mean_loss=intermediate_score)


analysis = tune.run(
    training_function,
    config={
        "alpha": tune.grid_search([0.001, 0.01, 0.1]),
        "beta": tune.choice([1, 2, 3])
    })

print("Best config: ", analysis.get_best_config(metric="mean_loss", mode="min"))

# Get a dataframe for analyzing trial results.
df = analysis.results_df
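Tune can also drive smarter search than a plain grid. Below is a minimal sketch reusing training_function, assuming the ASHAScheduler from ray.tune.schedulers is available in the installed version; it samples configurations randomly and stops poorly performing trials early:

from ray.tune.schedulers import ASHAScheduler

# A sketch: sample 10 configurations and let ASHA terminate weak trials early.
analysis = tune.run(
    training_function,
    num_samples=10,
    scheduler=ASHAScheduler(metric="mean_loss", mode="min"),
    config={
        "alpha": tune.uniform(0.001, 0.1),
        "beta": tune.choice([1, 2, 3]),
    })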

If TensorBoard is installed, all trial results are automatically visualized:

tensorboard --logdir ~/ray_results

RLlib Quick Start

https://github.com/ray-project/ray/raw/master/doc/source/images/rllib-wide.jpg

RLlib is an open-source library for reinforcement learning built on top of Ray that offers both high scalability and a unified API for a variety of applications.

pip install tensorflow  # or tensorflow-gpu
pip install "ray[rllib]"

import gym
from gym.spaces import Discrete, Box
from ray import tune


class SimpleCorridor(gym.Env):
    def __init__(self, config):
        self.end_pos = config["corridor_length"]
        self.cur_pos = 0
        self.action_space = Discrete(2)
        self.observation_space = Box(0.0, self.end_pos, shape=(1, ))

    def reset(self):
        self.cur_pos = 0
        return [self.cur_pos]

    def step(self, action):
        if action == 0 and self.cur_pos > 0:
            self.cur_pos -= 1
        elif action == 1:
            self.cur_pos += 1
        done = self.cur_pos >= self.end_pos
        return [self.cur_pos], 1 if done else 0, done, {}

tune.run(
    "PPO",
    config={
        "env": SimpleCorridor,
        "num_workers": 4,
        "env_config": {"corridor_length": 5}})

Ray Serve Quick Start

Ray Serve is a scalable model-serving library built on Ray. It is:

  • Framework Agnostic: Use the same toolkit to serve everything from deep learning models built with frameworks like PyTorch or TensorFlow & Keras to scikit-learn models or arbitrary business logic.
  • Python First: Configure your model serving declaratively in pure Python, without needing YAML or JSON configs.
  • Performance Oriented: Turn on batching, pipelining, and GPU acceleration to increase the throughput of your model.
  • Composition Native: Allows you to create "model pipelines" by composing multiple models together to drive a single prediction.
  • Horizontally Scalable: Serve scales linearly as you add more machines, enabling your ML-powered service to handle growing traffic.

To run this example, you will need to install the following:

$ pip install scikit-learn
$ pip install "ray[serve]"

This example serves a scikit-learn gradient boosting classifier.

from ray import serve
import pickle
import requests
from sklearn.datasets import load_iris
from sklearn.ensemble import GradientBoostingClassifier

# Train model.
iris_dataset = load_iris()
model = GradientBoostingClassifier()
model.fit(iris_dataset["data"], iris_dataset["target"])

# Define Ray Serve model.
class BoostingModel:
    def __init__(self):
        self.model = model
        self.label_list = iris_dataset["target_names"].tolist()

    def __call__(self, flask_request):
        payload = flask_request.json["vector"]
        print("Worker: received flask request with data", payload)

        prediction = self.model.predict([payload])[0]
        human_name = self.label_list[prediction]
        return {"result": human_name}


# Deploy model.
client = serve.start()
client.create_backend("iris:v1", BoostingModel)
client.create_endpoint("iris_classifier", backend="iris:v1", route="/iris")

# Query it!
sample_request_input = {"vector": [1.2, 1.0, 1.1, 0.9]}
response = requests.get("http://localhost:8000/iris", json=sample_request_input)
print(response.text)
# Result:
# {
#  "result": "versicolor"
# }

More Information

Older documents:

Getting Involved
