Question: TensorFlow, why was Python chosen?

I recently started studying deep learning and other ML techniques, and I went looking for frameworks that simplify building and training a network; that is how I found TensorFlow. Having little experience in the field, it seems to me that speed is a big factor in building a large ML system, even more so when working with deep learning. So why did Google choose Python to build TensorFlow? Wouldn't it be better to build it in a language that can be compiled rather than interpreted?

What are the advantages of using Python over a language like C++ for machine learning?


Answer 0

The most important thing to realize about TensorFlow is that, for the most part, the core is not written in Python: It’s written in a combination of highly-optimized C++ and CUDA (Nvidia’s language for programming GPUs). Much of that happens, in turn, by using Eigen (a high-performance C++ and CUDA numerical library) and NVidia’s cuDNN (a very optimized DNN library for NVidia GPUs, for functions such as convolutions).

The model for TensorFlow is that the programmer uses “some language” (most likely Python!) to express the model. This model, written in TensorFlow constructs such as:

h1 = tf.nn.relu(tf.matmul(l1, W1) + b1)
h2 = ...

is not actually executed when the Python is run. Instead, what’s actually created is a dataflow graph that says to take particular inputs, apply particular operations, supply the results as the inputs to other operations, and so on. This model is executed by fast C++ code, and for the most part, the data going between operations is never copied back to the Python code.
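
To make that concrete, here is a minimal sketch (assuming the TensorFlow 1.x graph-mode API used in the snippets above; the shapes and names are only illustrative). Running it performs no numerical work at all; it merely records nodes in the graph:

import tensorflow as tf

l1 = tf.placeholder(tf.float32, shape=[None, 784])   # symbolic input node
W1 = tf.Variable(tf.random_normal([784, 128]))       # weights: also just graph nodes
b1 = tf.Variable(tf.zeros([128]))

h1 = tf.nn.relu(tf.matmul(l1, W1) + b1)

# h1 is a symbolic Tensor describing a node in the dataflow graph,
# not an array of numbers:
print(h1)   # e.g. Tensor("Relu:0", shape=(?, 128), dtype=float32)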

Then the programmer “drives” the execution of this model by pulling on nodes — for training, usually in Python, and for serving, sometimes in Python and sometimes in raw C++:

sess.run(eval_results)

This one Python (or C++) function call uses either an in-process call into C++ or, for the distributed version, an RPC to the C++ TensorFlow server to tell it to execute, and then copies back the results.
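
A self-contained sketch of that round trip (hedged: TF 1.x Session API assumed, toy data); only the requested result crosses back into Python, as a plain NumPy array:

import numpy as np
import tensorflow as tf

a = tf.constant(np.arange(6, dtype=np.float32).reshape(2, 3))
b = tf.constant(np.ones((3, 2), dtype=np.float32))
product = tf.matmul(a, b)              # still only a graph node

with tf.Session() as sess:
    result = sess.run(product)         # graph executed by the C++ runtime
print(type(result), result.shape)      # <class 'numpy.ndarray'> (2, 2)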

So, with that said, let’s re-phrase the question: Why did TensorFlow choose Python as the first well-supported language for expressing and controlling the training of models?

The answer to that is simple: Python is probably the most comfortable language for a large range of data scientists and machine learning experts, and it is also easy to integrate with and use to drive a C++ backend, while being general-purpose, widely used both inside and outside of Google, and open source. Given that, with the basic model of TensorFlow, the performance of Python isn't that important, it was a natural fit. It's also a huge plus that NumPy makes it easy to do pre-processing in Python (also with high performance) before feeding it into TensorFlow for the truly CPU-heavy things.
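
A hedged sketch of that division of labor (made-up shapes, TF 1.x API assumed): NumPy handles the pre-processing, and the result is handed to the graph through a feed:

import numpy as np
import tensorflow as tf

# Pre-processing stays in NumPy (vectorized native code under the hood)...
raw = np.random.rand(1000, 784).astype(np.float32)
features = (raw - raw.mean(axis=0)) / (raw.std(axis=0) + 1e-8)

# ...and the pre-processed batch is then fed into the graph for the heavy work.
x = tf.placeholder(tf.float32, shape=[None, 784])
W = tf.Variable(tf.zeros([784, 10]))
logits = tf.matmul(x, W)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    out = sess.run(logits, feed_dict={x: features})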

There’s also a bunch of complexity in expressing the model that isn’t used when executing it — shape inference (e.g., if you do matmul(A, B), what is the shape of the resulting data?) and automatic gradient computation. It turns out to have been nice to be able to express those in Python, though I think in the long term they’ll probably move to the C++ backend to make adding other languages easier.
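
Both pieces can be seen in a small sketch (TF 1.x API assumed; the tensors are arbitrary):

import tensorflow as tf

A = tf.placeholder(tf.float32, shape=[None, 3])
B = tf.Variable(tf.ones([3, 5]))
C = tf.matmul(A, B)

# Shape inference runs at graph-construction time, on the Python side:
print(C.get_shape())          # (?, 5)

# Automatic differentiation also walks the Python-built graph,
# adding gradient ops to it:
loss = tf.reduce_sum(C)
grads = tf.gradients(loss, [B])
print(grads[0].get_shape())   # (3, 5)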

(The hope, of course, is to support other languages in the future for creating and expressing models. It’s already quite straightforward to run inference using several other languages — C++ works now, someone from Facebook contributed Go bindings that we’re reviewing now, etc.)


Answer 1

TF is not written in Python. It is written in C++ (and uses high-performance numerical libraries and CUDA code), and you can check this by looking at their GitHub. So the core is not written in Python, but TF provides an interface to many other languages (Python, C++, Java, Go).

If you come from the data analysis world, you can think of it like NumPy (not written in Python, but providing an interface to Python), or, if you are a web developer, think of it as a database (PostgreSQL or MySQL, which can be invoked from Java, Python, or PHP).


The Python frontend (the language in which people write models in TF) is the most popular for many reasons. In my opinion, the main reason is historical: the majority of ML users already use it (another popular choice is R), so if you do not provide an interface to Python, your library is most probably doomed to obscurity.


But being written in Python does not mean that your model is executed in Python. On the contrary, if you write your model in the right way, Python is never executed during the evaluation of the TF graph (except for tf.py_func(), which exists for debugging and should be avoided in real models precisely because it is executed on Python's side).
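
To illustrate that exception (a hedged sketch assuming the TF 1.x tf.py_func signature; the wrapped function is made up), tf.py_func pulls execution back onto the Python side, which is exactly why it is handy for debugging and costly in a real model:

import numpy as np
import tensorflow as tf

def debug_numpy_op(x):
    # Runs in the Python interpreter, not in the C++ runtime.
    print("shape seen on the Python side:", x.shape)
    return (x * 2.0).astype(np.float32)

inp = tf.placeholder(tf.float32, shape=[None, 4])
doubled = tf.py_func(debug_numpy_op, [inp], tf.float32)

with tf.Session() as sess:
    out = sess.run(doubled, feed_dict={inp: np.ones((2, 4), np.float32)})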

This is different from, for example, NumPy. If you do np.linalg.eig(np.matmul(A, np.transpose(A))) (which is eig(AA')), the operation will compute the transpose in some fast language (C++ or Fortran), return it to Python, take it back from Python together with A, compute the multiplication in some fast language and return it to Python, then compute the eigenvalues and return them to Python. So even though expensive operations like matmul and eig are calculated efficiently, you still lose time moving the results back and forth between Python and native code. TF does not do that: once you have defined the graph, your tensors flow not in Python but in C++/CUDA/something else.
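
A side-by-side sketch of that difference (hedged: arbitrary sizes, TF 1.x graph API assumed; tf.self_adjoint_eig is used here because AA' is symmetric):

import numpy as np
import tensorflow as tf

A_np = np.random.rand(100, 100).astype(np.float32)

# NumPy: every step returns control (and data) to Python before the next one.
evals_np, evecs_np = np.linalg.eig(np.matmul(A_np, np.transpose(A_np)))

# TF graph mode: the whole chain is a single graph, executed end to end by the
# C++/CUDA runtime; intermediate tensors never come back to Python.
A = tf.constant(A_np)
aat = tf.matmul(A, A, transpose_b=True)    # A A'
e, v = tf.self_adjoint_eig(aat)            # eigendecomposition of a symmetric matrix

with tf.Session() as sess:
    evals_tf, evecs_tf = sess.run([e, v])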


Answer 2

Python allows you to create extension modules using C and C++, interfacing with native code while still getting the advantages that Python gives you.

TensorFlow uses Python, yes, but it also contains large amounts of C++.

This allows a simpler interface for experimentation, with less mental overhead, in Python, while adding performance by programming the most important parts in C++.
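
As a rough illustration of that extension-module idea (a hedged sketch of the general pattern only, using ctypes and the system C math library; TensorFlow itself generates its Python bindings from its C++ sources rather than using ctypes):

import ctypes
import ctypes.util

# Load the system C math library and call into it from Python.
libm = ctypes.CDLL(ctypes.util.find_library("m"))
libm.cos.restype = ctypes.c_double
libm.cos.argtypes = [ctypes.c_double]

print(libm.cos(0.0))   # 1.0, computed in compiled C code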


Answer 3

The latest ratio, which you can check on TensorFlow's GitHub repository, shows that C++ makes up roughly 50% of the TensorFlow code and Python roughly 40%.

Both C++ and Python are official languages at Google, so it is no wonder this is so. If I had to give a quick breakdown of where C++ and Python are used…

C++ handles the computational algebra, and Python is used for everything else, including testing. Knowing how ubiquitous testing is today, it is no wonder that Python code contributes so much to TF.

