标签归档:bash

Python vs Bash:在性能方面,各自在哪类任务上胜过对方?

问题:Python vs Bash:在性能方面,各自在哪类任务上胜过对方?

显然,Python 更加用户友好。在 Google 上快速搜索会看到许多结果,认为由于 Python 是字节编译的,所以通常更快。我甚至发现有说法声称,在基于字典的操作上可以看到超过 2000% 的性能提升。

您在这方面有何经验?各自在哪类任务上是明显的赢家?

Obviously Python is more user friendly, a quick search on google shows many results that say that, as Python is byte-compiled is usually faster. I even found this that claims that you can see an improvement of over 2000% on dictionary-based operations.

What is your experience on this matter? In which kind of task each one is a clear winner?


回答 0

典型的大型机流程…

Input Disk/Tape/User (runtime) --> Job Control Language (JCL) --> Output Disk/Tape/Screen/Printer
                                   |                          ^
                                   v                          |
                                   `--> COBOL Program --------' 

典型的Linux流程…

Input Disk/SSD/User (runtime) --> sh/bash/ksh/zsh/... ----------> Output Disk/SSD/Screen/Printer
                                   |                          ^
                                   v                          |
                                   `--> Python script --------'
                                   |                          ^
                                   v                          |
                                   `--> awk script -----------'
                                   |                          ^
                                   v                          |
                                   `--> sed script -----------'
                                   |                          ^
                                   v                          |
                                   `--> C/C++ program --------'
                                   |                          ^
                                   v                          |
                                   `--- Java program ---------'
                                   |                          ^
                                   v                          |
                                   :                          :

Shell 是 Linux 的粘合剂

像 sh/ksh/bash/… 这样的 Linux shell 提供了输入/输出/流程控制方面的设定能力,就像旧式大型机的作业控制语言(JCL)一样,但强大得多!它们本身就是图灵完备的语言,同时又经过优化,能够高效地与操作系统支持的任何语言编写的其他进程之间传递数据和控制。

大多数 Linux 应用程序,无论其主体是用哪种语言编写的,都依赖于 shell 脚本,而 Bash 已成为其中最常用的 shell。单击桌面上的图标通常会运行一个简短的 Bash 脚本。该脚本直接或间接地知道所有所需文件的位置,设置变量和命令行参数,最后调用程序。这是 shell 最简单的用法。

然而,如果没有成千上万用于启动系统、响应事件、控制执行优先级以及编译、配置和运行程序的 shell 脚本,我们所熟知的 Linux 就几乎不成其为 Linux。其中许多脚本都相当庞大而复杂。

Shell提供了一种基础结构,使我们可以使用在运行时而不是编译时链接在一起的预构建组件。这些组件本身就是独立的程序,可以单独使用或以其他组合使用,而无需重新编译。调用它们的语法与Bash内置命令的语法没有区别,实际上,有许多内置命令,在系统上也有独立的可执行文件,这些命令通常具有其他选项。

Python 与 Bash 在性能上并没有语言层面的整体差异。这完全取决于各自的代码如何编写,以及调用了哪些外部工具。

像 awk、sed、grep、bc、dc、tr 等知名工具,都会把用这两种语言自行实现同样操作的做法远远甩在后面。因此,对任何不带图形用户界面的任务来说,Bash 是首选,因为用 Bash 调用这类工具并取回数据,比用 Python 更容易也更高效。

性能

总体吞吐量和/或响应能力是否优于等效的 Python 实现,取决于 Bash 脚本调用了哪些程序,以及这些程序对所分配子任务的适配程度。更复杂的是,Python 和大多数语言一样,也可以调用其他可执行文件,只是做起来比较繁琐,因此不常这样用。

用户界面

Python 明显胜出的一个领域是用户界面。这使它成为构建本地或客户端-服务器应用程序的极佳语言,因为它原生支持 GTK 图形,而且比 Bash 直观得多。

Bash 只能理解文本。要实现 GUI 必须调用其他工具,并从这些工具取回数据。Python 脚本是一种选择;更快但不够灵活的选择是 YAD、Zenity 和 GTKDialog 之类的二进制程序。

虽然像 Bash 这样的 shell 可以与 Yad、GtkDialog(GTK+ 函数的类 XML 嵌入式接口)、dialog、xmessage 等 GUI 工具很好地配合,但 Python 的能力强得多,因此更适合复杂的 GUI 窗口。

摘要

用 shell 脚本构建,就像用现成组件组装一台台式计算机。

用 Python、C++ 或大多数其他语言构建,更像是把芯片(库)和其他电子元件焊接在一起来制造计算机,就像智能手机那样。

最好的结果通常来自多种语言的组合,让每种语言各尽所长。有开发者称之为“多语言编程(polyglot programming)”。

Typical mainframe flow…

Input Disk/Tape/User (runtime) --> Job Control Language (JCL) --> Output Disk/Tape/Screen/Printer
                                   |                          ^
                                   v                          |
                                   `--> COBOL Program --------' 

Typical Linux flow…

Input Disk/SSD/User (runtime) --> sh/bash/ksh/zsh/... ----------> Output Disk/SSD/Screen/Printer
                                   |                          ^
                                   v                          |
                                   `--> Python script --------'
                                   |                          ^
                                   v                          |
                                   `--> awk script -----------'
                                   |                          ^
                                   v                          |
                                   `--> sed script -----------'
                                   |                          ^
                                   v                          |
                                   `--> C/C++ program --------'
                                   |                          ^
                                   v                          |
                                   `--- Java program ---------'
                                   |                          ^
                                   v                          |
                                   :                          :

Shells are the glue of Linux

Linux shells like sh/ksh/bash/… provide input/output/flow-control designation facilities much like the old mainframe Job Control Language… but on steroids! They are Turing complete languages in their own right while being optimized to efficiently pass data and control to and from other executing processes written in any language the O/S supports.

Most Linux applications, regardless what language the bulk of the program is written in, depend on shell scripts and Bash has become the most common. Clicking an icon on the desktop usually runs a short Bash script. That script, either directly or indirectly, knows where all the files needed are and sets variables and command line parameters, finally calling the program. That’s a shell’s simplest use.

Linux as we know it however would hardly be Linux without the thousands of shell scripts that startup the system, respond to events, control execution priorities and compile, configure and run programs. Many of these are quite large and complex.

Shells provide an infrastructure that lets us use pre-built components that are linked together at run time rather than compile time. Those components are free-standing programs in their own right that can be used alone or in other combinations without recompiling. The syntax for calling them is indistinguishable from that of a Bash builtin command, and there are in fact numerous builtin commands for which there is also a stand-alone executable on the system, often having additional options.

There is no language-wide difference between Python and Bash in performance. It entirely depends on how each is coded and which external tools are called.

Any of the well known tools like awk, sed, grep, bc, dc, tr, etc. will leave doing those operations in either language in the dust. Bash then is preferred for anything without a graphical user interface since it is easier and more efficient to call and pass data back from a tool like those with Bash than Python.

Performance

It depends on which programs the Bash shell script calls and their suitability for the subtask they are given whether the overall throughput and/or responsiveness will be better or worse than the equivalent Python. To complicate matters Python, like most languages, can also call other executables, though it is more cumbersome and thus not as often used.
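To illustrate that extra ceremony, here is what a single grep call, a shell one-liner, looks like from Python. This is a minimal sketch; it assumes grep is on the PATH and Python 3.7+ (for capture_output):

```python
import subprocess

# In bash this is one line:  printf 'a\nb\na\n' | grep -c a
# From Python the equivalent call needs noticeably more ceremony:
result = subprocess.run(
    ["grep", "-c", "a"],      # count lines containing "a"
    input="a\nb\na\n",        # fed to grep's stdin
    capture_output=True,
    text=True,
)
print(result.stdout.strip())  # -> 2
```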

User Interface

One area where Python is the clear winner is user interface. That makes it an excellent language for building local or client-server applications as it natively supports GTK graphics and is far more intuitive than Bash.

Bash only understands text. Other tools must be called for a GUI and data passed back from them. A Python script is one option. Faster but less flexible options are the binaries like YAD, Zenity, and GTKDialog.

While shells like Bash work well with GUIs like Yad, GtkDialog (embedded XML-like interface to GTK+ functions), dialog, and xmessage, Python is much more capable and so better for complex GUI windows.

Summary

Building with shell scripts is like assembling a computer with off-the-shelf components the way desktop PCs are.

Building with Python, C++ or most any other language is more like building a computer by soldering the chips (libraries) and other electronic parts together the way smartphones are.

The best results are usually obtained by using a combination of languages where each can do what they do best. One developer calls this “polyglot programming”.


回答 1

通常,只有在python不可用的环境中,bash才能比python更好。:)

认真地说,我每天都要和这两种语言打交道,如果可以选择,我会毫不犹豫地选 Python 而不是 bash。无奈的是,我不得不在某些“小型”平台上使用 bash,因为有人(恕我直言,是错误地)认为 python“太大”而装不下。

虽然对于某些特定任务,bash 确实可能比 python 快,但它在开发速度和可维护性上永远比不上 python(至少在代码超过 10 行左右之后)。相对于 python、ruby 或 lua 等语言,Bash 唯一的优势就是它无处不在。

Generally, bash works better than python only in those environments where python is not available. :)

Seriously, I have to deal with both languages daily, and will take python instantly over bash if given the choice. Alas, I am forced to use bash on certain “small” platforms because someone has (mistakenly, IMHO) decided that python is “too large” to fit.

While it is true that bash might be faster than python for some select tasks, it can never be as quick to develop with, or as easy to maintain (at least after you get past 10 lines of code or so). Bash’s sole strong point wrt python or ruby or lua, etc., is its ubiquity.


回答 2

在bash和Python都是明智的选择的情况下,开发人员的效率对我而言更为重要。

有些任务很适合 bash,另一些则适合 Python。对我来说,先用 bash 脚本起步,随着几周内的演进再改写成 Python,这并不罕见。

Python 的一大优势在于处理文件名的各种边界情况,同时它还有 glob、shutil、subprocess 等模块来满足常见的脚本需求。

Developer efficiency matters much more to me in scenarios where both bash and Python are sensible choices.

Some tasks lend themselves well to bash, and others to Python. It also isn’t unusual for me to start something as a bash script and change it to Python as it evolves over several weeks.

A big advantage Python has is in corner cases around filename handling, while it has glob, shutil, subprocess, and others for common scripting needs.


回答 3

编写脚本时,性能(在大多数情况下)并不重要。
如果您真的关心性能,“Python vs Bash”本身就是个伪问题。

Python:
+ 易于编写
+ 易于维护
+ 代码重用更容易(试试在 sh 中找一种通用且防错的方式来包含公共代码文件,我谅你也找不到)
+ 您还可以用它做 OOP!
+ 参数解析更容易。好吧,其实也谈不上容易,在我看来仍然太啰嗦,但 python 内置了 argparse 功能。
- 丑陋的 ‘subprocess’。试着用它链接命令,看看您的代码会丑成什么样,尤其是当您还在乎退出码的时候。

Bash:
+ 如前所述,的确无处不在。
+ 简单的命令链接。这是以简单方式把不同命令粘合在一起的手段。而且 Bash(不是 sh)还有一些改进,例如 pipefail,使得链接非常简短且富有表现力。
+ 不需要安装第三方程序,可以立即执行。
- 天哪,到处都是陷阱。IFS、CDPATH……成千上万个。

如果要写超过 100 行(LOC)的脚本:选 Python
如果脚本中需要路径操作:选 Python(3)
如果需要的东西类似 alias 但稍微复杂一点:选 Bash/sh

无论如何,两种都应该试试,以了解它们各自的能力。

也许还可以从打包和 IDE 支持的角度扩展这个答案,但我对这些方面并不熟悉。

和往常一样,您只能在两个都不完美的选项里挑一个(即所谓的“turd sandwich 和 giant douche”)。记住,就在几年前,Perl 还是新的希望,如今它又在哪里呢?

When you’re writing scripts, performance does not matter (in most cases).
If you care about performance, ‘Python vs Bash’ is a false question.

Python:
+ easier to write
+ easier to maintain
+ easier code reuse (try to find universal error-proof way to include files with common code in sh, I dare you)
+ you can do OOP with it too!
+ easier argument parsing. Well, not exactly easier: it is still too wordy for my taste, but Python has the argparse facility built in.
– ugly ugly ‘subprocess’. try to chain commands and not to cry a river how ugly your code will become. especially if you care about exit codes.
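As a minimal sketch of the built-in argparse facility (the argument names here are invented purely for illustration):

```python
import argparse

# argparse ships with Python; in sh you would hand-roll a
# while/case loop over "$@" instead.
parser = argparse.ArgumentParser(description="demo")
parser.add_argument("name")                        # positional argument
parser.add_argument("-n", "--times", type=int, default=1)
# parse_args() normally reads sys.argv; a list is passed here for the demo
args = parser.parse_args(["world", "--times", "3"])
print(args.name, args.times)  # -> world 3
```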

Bash:
+ ubiquity, as was said earlier, indeed.
+ simple command chaining. That’s how you glue together different commands in a simple way. Also, Bash (not sh) has some improvements, like pipefail, so chaining is really short and expressive.
+ do not require 3rd-party programs to be installed. can be executed right away.
– god, it’s full of gotchas. IFS, CDPATH.. thousands of them.
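For contrast, here is one way the Bash one-liner `sort | uniq` comes out in Python with subprocess pipes. This is a sketch that assumes the usual sort and uniq binaries are available:

```python
import subprocess

# In bash:  printf 'b\na\nb\n' | sort | uniq    (one line, pipefail available)
p1 = subprocess.Popen(["sort"], stdin=subprocess.PIPE,
                      stdout=subprocess.PIPE, text=True)
p2 = subprocess.Popen(["uniq"], stdin=p1.stdout,
                      stdout=subprocess.PIPE, text=True)
p1.stdout.close()            # let p2 own the read end of the pipe
p1.stdin.write("b\na\nb\n")
p1.stdin.close()             # sort sees EOF and can finish
out, _ = p2.communicate()
p1.wait()
print(out)                   # -> "a\nb\n"
# ...and the exit codes (p1.returncode, p2.returncode) are still yours to check.
```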

If one is writing a script bigger than 100 LOC: choose Python
If one needs path manipulation in the script: choose Python(3)
If one needs something alias-like but slightly more complicated: choose Bash/sh

Anyway, one should try both sides to get the idea what are they capable of.

Maybe this answer could be extended with packaging and IDE support points, but I’m not familiar with those sides.

As always you have to choose from turd sandwich and giant douche. And remember, just a few years ago Perl was new hope. Where it is now.


回答 4

在进程启动时间方面,bash 的性能优于 python。

以下是我的运行Linux Mint的核心i7笔记本电脑的一些测量结果:

Starting process                       Startup time

empty /bin/sh script                   1.7 ms
empty /bin/bash script                 2.8 ms
empty python script                    11.1 ms
python script with a few libs*         110 ms

* Python 加载的库为:os、os.path、json、time、requests、threading、subprocess

这显示出巨大的差异。但如果 bash 要做任何实际的事情,由于它通常必须调用外部进程,执行时间会迅速恶化。

如果您关心性能,请仅将bash用于:

  • 非常简单且经常调用的脚本
  • 主要调用其他进程的脚本
  • 当您希望手动运维操作与脚本化之间的摩擦最小时:快速验证几条命令,然后放进 file.sh 即可

Performance-wise bash outperforms python in the process startup time.

Here are some measurements from my core i7 laptop running Linux Mint:

Starting process                       Startup time

empty /bin/sh script                   1.7 ms
empty /bin/bash script                 2.8 ms
empty python script                    11.1 ms
python script with a few libs*         110 ms

*Python loaded libs are: os, os.path, json, time, requests, threading, subprocess

This shows a huge difference; however, bash execution time degrades quickly if it has to do anything sensible, since it usually must call external processes.
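Measurements like the table above can be reproduced with a short timing sketch. The absolute numbers depend entirely on the machine; only the relative ordering is the point (assumes sh is on the PATH):

```python
import subprocess
import sys
import time

def startup_ms(cmd):
    """Time one spawn of `cmd` doing nothing, in milliseconds."""
    t0 = time.perf_counter()
    subprocess.run(cmd, check=True)
    return (time.perf_counter() - t0) * 1000

sh_ms = startup_ms(["sh", "-c", ""])            # empty shell script
py_ms = startup_ms([sys.executable, "-c", ""])  # empty python script
print(f"sh: {sh_ms:.1f} ms   python: {py_ms:.1f} ms")
```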

If you care about performance use bash only for:

  • really simple and frequently called scripts
  • scripts that mainly call other processes
  • when you need minimal friction between manual administrative actions and scripting – fast check a few commands and place them in the file.sh

回答 5

Bash 主要是一种批处理/shell 脚本语言,对各种数据类型的支持要弱得多,控制结构上也有各种怪癖,更不用说兼容性问题了。

哪个更快?都不是,因为这里比较的根本不是同类。如果您要排序一个 ASCII 文本文件,并且使用的是 zcat、sort、uniq 和 sed 之类的工具,那么在性能上您会把 Python 远远甩在后面。

但是,如果您需要一个支持浮点运算和各种控制流的真正编程环境,那么 Python 无疑完胜。如果分别用 Bash 和 Python 写一个递归算法,Python 版本会以一个数量级甚至更大的优势胜出。

Bash is primarily a batch / shell scripting language with far less support for various data types and all sorts of quirks around control structures — not to mention compatibility issues.

Which is faster? Neither, because you are not comparing apples to apples here. If you had to sort an ascii text file and you were using tools like zcat, sort, uniq, and sed then you will smoke Python performance wise.

However, if you need a proper programming environment that supports floating point and various control flow, then Python wins hands down. If you wrote say a recursive algorithm in Bash and Python, the Python version will win in an order of magnitude or more.


回答 6

如果您想以最小的代价快速拼凑一个实用工具,bash 很合适。作为应用程序的包装器,bash 的价值无可估量。

任何可能让您反复回来添加改进的东西,(虽然并非总是)更适合用 Python 之类的语言来写,因为超过 1000 行的 Bash 代码维护起来非常痛苦。Bash 代码一长,调试起来也很烦人。

根据我的经验,这类问题的一部分难点在于,shell 脚本通常都是定制化的任务。我遇到过的 shell 脚本任务中,很少有现成的免费解决方案可用。

If you are looking to cobble together a quick utility with minimal effort, bash is good. For a wrapper round an application, bash is invaluable.

Anything that may have you coming back over and over to add improvements is probably (though not always) better suited to a language like Python, as Bash code comprising over 1,000 lines gets very painful to maintain. Bash code is also irritating to debug when it gets long.

Part of the problem with these kind of questions is, from my experience, that shell scripts are usually all custom tasks. There have been very few shell scripting tasks that I have come across where there is already a solution freely available.


回答 7

我认为在以下两种场景中,Bash 的性能至少不落下风:

  • 命令行实用程序的脚本
  • 执行时间很短的脚本:在这种场景下,启动 Python 解释器花的时间比操作本身还多

话虽如此,我通常并不真正关心脚本语言本身的性能。如果性能真的成了问题,那就不该写脚本,而该写程序了(也许用 Python)。

There are two scenarios where Bash performance is at least equal, I believe:

  • Scripting of command line utilities
  • Scripts which take only a short time to execute; where starting the Python interpreter takes more time than the operation itself

That said, I usually don’t really concern myself with performance of the scripting language itself. If performance is a real issue you don’t script but program (possibly in Python).


回答 8

我之所以发布此最新答案,主要是因为Google喜欢这个问题。

我认为问题和上下文实际上应该围绕工作流程,而不是工具。总体理念始终是“用合适的工具做合适的事”。但在此之前还有一条,许多人一旦迷失在工具里就会忘掉:“把事情做完”。

当我遇到一个尚未完全定义清楚的问题时,我几乎总是从 Bash 开始。我曾用既易读又可维护的大型 Bash 脚本解决过一些相当棘手的问题。

但问题何时开始超出 Bash 应承担的范围呢?我有几条自检标准来提醒自己:

  1. 我是否在盼着 Bash 有二维(或更高维)数组?如果是,那就该意识到 Bash 并不是一门出色的数据处理语言。
  2. 我花在为其他实用程序准备数据上的功夫,是否比实际运行这些程序还多?如果是,同样该意识到 Bash 不是出色的数据处理语言。
  3. 我的脚本是否单纯因为太大而难以管理了?如果是,就该意识到:尽管 Bash 可以导入脚本库,但它缺少其他语言那样的软件包系统。与大多数语言相比,它确实是一门“一切自己动手”的语言。话说回来,它内置的功能也非常多(有人说太多了……)

这个清单还可以继续列下去。底线是:当您为了维持脚本正常运行所花的功夫超过了添加新功能时,就该离开 Bash 了。

假设您已决定将工作移至Python。如果您的Bash脚本干净,则初始转换非常简单。甚至还有几个转换器/翻译器将为您做第一遍。

接下来的问题是:转向 Python,您会失去什么?

  1. 对外部实用程序的所有调用,都必须用 subprocess 模块(或等价物)中的某种方式包装起来。做法有多种;而在 3.7 之前,要把它用对得花些功夫(3.7 改进了 subprocess.run(),使其可以独自处理所有常见情况)。

  2. 令人惊讶的是,Python 没有用于轮询键盘(stdin)的标准的、跨平台的非阻塞工具(带超时)。Bash 的 read 命令是简单用户交互的绝佳工具。我最常见的用法是显示一个旋转指示符直到用户按键,同时(在每一步动画时)运行一个轮询函数,以确保一切仍在正常运行。这个问题比乍看起来要棘手,所以我经常干脆调用一次 Bash:代价不小,但恰好满足我的需要。

  3. 如果您是在嵌入式或受内存限制的系统上进行开发,Python的内存占用量可能是Bash的很多倍(取决于手头的任务)。另外,内存中几乎总是有一个Bash实例,而Python可能并非如此。

  4. 对于只运行一次并快速退出的脚本,Python的启动时间可能比Bash的启动时间长得多。但是,如果脚本中包含大量计算,Python会迅速前进。

  5. Python 拥有地球上最全面的软件包体系。当 Bash 脚本变得稍微复杂时,Python 很可能已有一个包,能把整块 Bash 代码变成一次调用。不过,找到合适的包是成为 Pythonista 过程中最大也最令人生畏的部分。所幸,Google 和 StackExchange 是您的朋友。

I’m posting this late answer primarily because Google likes this question.

I believe the issue and context really should be about the workflow, not the tools. The overall philosophy is always “Use the right tool for the job.” But before this comes one that many often forget when they get lost in the tools: “Get the job done.”

When I have a problem that isn’t completely defined, I almost always start with Bash. I have solved some gnarly problems in large Bash scripts that are both readable and maintainable.

But when does the problem start to exceed what Bash should be asked to do? I have some checks I use to give me warnings:

  1. Am I wishing Bash had 2D (or higher) arrays? If yes, it’s time to realize that Bash is not a great data processing language.
  2. Am I doing more work preparing data for other utilities than I am actually running those utilities? If yes, time again to realize Bash is not a great data processing language.
  3. Is my script simply getting too large to manage? If yes, it is important to realize that while Bash can import script libraries, it lacks a package system like other languages. It’s really a “roll your own” language compared to most others. Then again, it has an enormous amount of functionality built-in (some say too much…)

The list goes on. Bottom-line: when you are working harder to keep your scripts running than you are adding features, it’s time to leave Bash.

Let’s assume you’ve decided to move your work to Python. If your Bash scripts are clean, the initial conversion is quite straightforward. There are even several converters / translators that will do the first pass for you.

The next question is: What do you give up moving to Python?

  1. All calls to external utilities must be wrapped in something from the subprocess module (or equivalent). There are multiple ways to do this, and until 3.7 it took some effort to get it right (3.7 improved subprocess.run() to handle all common cases on its own).

  2. Surprisingly, Python has no standard platform-independent non-blocking utility (with timeout) for polling the keyboard (stdin). The Bash read command is an awesome tool for simple user interaction. My most common use is to show a spinner until the user presses a key, while also running a polling function (with each spinner step) to make sure things are still running well. This is a harder problem than it would appear at first, so I often simply make a call to Bash: Expensive, but it does precisely what I need.

  3. If you are developing on an embedded or memory-constrained system, Python’s memory footprint can be many times larger than Bash’s (depending on the task at hand). Plus, there is almost always an instance of Bash already in memory, which may not be the case for Python.

  4. For scripts that run once and exit quickly, Python’s startup time can be much longer than Bash’s. But if the script contains significant calculations, Python quickly pulls ahead.

  5. Python has the most comprehensive package system on the planet. When Bash gets even slightly complex, Python probably has a package that makes whole chunks of Bash become a single call. However, finding the right package(s) to use is the biggest and most daunting part of becoming a Pythonista. Fortunately, Google and StackExchange are your friends.
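The keyboard-polling gap described in point 2 can be approximated on Unix with the standard select module. This is a hedged sketch under stated assumptions: the function name spin_until_input is invented, and this approach does not work for console input on Windows:

```python
import select
import sys

def spin_until_input(stream=None, poll=lambda: None,
                     timeout=0.2, max_steps=10):
    """Show a spinner until a line arrives on `stream` (default: stdin),
    calling `poll()` between frames as a health check.
    Unix-only: select() on console input does not work on Windows."""
    stream = stream if stream is not None else sys.stdin
    frames = "|/-\\"
    for step in range(max_steps):
        ready, _, _ = select.select([stream], [], [], timeout)
        if ready:
            return stream.readline()   # user pressed a key + Enter
        poll()                         # caller's periodic check
        sys.stdout.write(frames[step % len(frames)] + "\r")
        sys.stdout.flush()
    return None                        # gave up after max_steps frames
```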


回答 9

我不知道这是否准确,但我发现 python/ruby 在包含大量数学计算的脚本中效果要好得多。否则您就得使用 dc 或其他“任意精度计算器”,那实在是非常痛苦。用 python,您可以更好地控制浮点数与整数,并且执行大量计算也容易得多。

特别是,我永远不会用 bash 脚本来处理二进制信息或字节。相反,我会用 python(也许)、C++,甚至 Node.JS 之类的东西。

I don’t know if this is accurate, but I have found that python/ruby works much better for scripts that have a lot of mathematical computations. Otherwise you have to use dc or some other “arbitrary precision calculator”. It just becomes a very big pain. With python you have much more control over floats vs ints and it is much easier to perform a lot of computations.

In particular, I would never work with a bash script to handle binary information or bytes. Instead I would use something like python (maybe) or C++ or even Node.JS.


回答 10

在性能方面,两者能做到的事情大体相当,所以问题就变成了:哪一个更节省开发时间?

Bash依赖于调用其他命令,并通过管道传递它们来创建新命令。这样做的好处是,无论他们使用什么编程语言,都可以使用从其他人那里借来的代码快速创建新程序。

这还有一个副作用:由于命令之间的接口只是纯文本,它能很好地抵御子命令自身的变化。

另外,Bash在如何编写方面非常宽容。这意味着它可以在更广泛的上下文中很好地工作,但是它也依赖于程序员以一种干净安全的方式进行编码的意图。否则,Bash不会阻止您制造混乱。

Python的样式更加结构化,因此凌乱的程序员不会那么凌乱。它也可以在Linux以外的操作系统上运行,如果需要这种可移植性,使其立即变得更合适。

但它在调用其他命令方面就没那么简单了。因此,如果您的操作系统是 Unix,您很可能会发现用 Bash 开发是最快的方式。

何时使用Bash:

  • 它是一个非图形程序,或者是图形程序的引擎。
  • 仅适用于Unix。

何时使用Python:

  • 这是一个图形程序。
  • 它需要能在 Windows 上运行。

Performance-wise both can do roughly the same things, so the question becomes: which saves more development time?

Bash relies on calling other commands, and piping them for creating new ones. This has the advantage that you can quickly create new programs just with the code borrowed from other people, no matter what programming language they used.

This also has the side effect of resisting change in sub-commands pretty well, as the interface between them is just plain text.

Additionally Bash is very permissive in how you can write in it. This means it will work well for a wider variety of contexts, but it also relies on the programmer having the intention of coding in a clean, safe manner. Otherwise Bash won’t stop you from building a mess.

Python is more structured on style, so a messy programmer won’t be as messy. It will also work on operating systems outside Linux, making it instantly more appropriate if you need that kind of portability.

But it isn’t as simple for calling other commands. So if your operating system is Unix most likely you will find that developing on Bash is the fastest way to develop.

When to use Bash:

  • It’s a non graphical program, or the engine of a graphical one.
  • It’s only for Unix.

When to use Python:

  • It’s a graphical program.
  • It shall work on Windows.

如何在 Bash 脚本中 source virtualenv 的 activate 脚本

问题:如何在 Bash 脚本中 source virtualenv 的 activate 脚本

如何创建Bash脚本来激活Python virtualenv?

我有一个类似的目录结构:

.env
    bin
        activate
        ...other virtualenv files...
src
    shell.sh
    ...my code...

我可以通过以下方式激活我的virtualenv:

user@localhost:src$ . ../.env/bin/activate
(.env)user@localhost:src$

但是,在 Bash 脚本中做同样的事情却没有任何效果:

user@localhost:src$ cat shell.sh
#!/bin/bash
. ../.env/bin/activate
user@localhost:src$ ./shell.sh
user@localhost:src$ 

我究竟做错了什么?

How do you create a Bash script to activate a Python virtualenv?

I have a directory structure like:

.env
    bin
        activate
        ...other virtualenv files...
src
    shell.sh
    ...my code...

I can activate my virtualenv by:

user@localhost:src$ . ../.env/bin/activate
(.env)user@localhost:src$

However, doing the same from a Bash script does nothing:

user@localhost:src$ cat shell.sh
#!/bin/bash
. ../.env/bin/activate
user@localhost:src$ ./shell.sh
user@localhost:src$ 

What am I doing wrong?


回答 0

当您 source 时,是把激活脚本加载进了当前活动的 shell。

而在脚本中这样做时,激活脚本被加载进的是运行该脚本的那个 shell;脚本结束后该 shell 随即退出,您便回到了原来那个未激活的 shell。

最好的选择是在函数中执行此操作

activate () {
  . ../.env/bin/activate
}

或别名

alias activate=". ../.env/bin/activate"

希望这可以帮助。

When you source, you’re loading the activate script into your active shell.

When you do it in a script, you load it into that shell which exits when your script finishes and you’re back to your original, unactivated shell.

Your best option would be to do it in a function

activate () {
  . ../.env/bin/activate
}

or an alias

alias activate=". ../.env/bin/activate"

Hope this helps.


回答 1

您应该使用source调用bash脚本。

这是一个例子:

#!/bin/bash
# Let's call this script venv.sh
source "<absolute_path_recommended_here>/.env/bin/activate"

在您的 shell 中这样调用它:

> source venv.sh

或按照@outmind建议:(请注意,这不适用于zsh)

> . venv.sh

这样,virtualenv 的指示就会出现在您的提示符上。

You should call the bash script using source.

Here is an example:

#!/bin/bash
# Let's call this script venv.sh
source "<absolute_path_recommended_here>/.env/bin/activate"

On your shell just call it like that:

> source venv.sh

Or as @outmind suggested: (Note that this does not work with zsh)

> . venv.sh

There you go, the shell indication will be placed on your prompt.


回答 2

尽管它没有在shell提示符中添加“(.env)”前缀,但我发现此脚本可以按预期工作。

#!/bin/bash
script_dir=`dirname $0`
cd $script_dir
/bin/bash -c ". ../.env/bin/activate; exec /bin/bash -i"

例如

user@localhost:~/src$ which pip
/usr/local/bin/pip
user@localhost:~/src$ which python
/usr/bin/python
user@localhost:~/src$ ./shell
user@localhost:~/src$ which pip
~/.env/bin/pip
user@localhost:~/src$ which python
~/.env/bin/python
user@localhost:~/src$ exit
exit

Although it doesn’t add the “(.env)” prefix to the shell prompt, I found this script works as expected.

#!/bin/bash
script_dir=`dirname $0`
cd $script_dir
/bin/bash -c ". ../.env/bin/activate; exec /bin/bash -i"

e.g.

user@localhost:~/src$ which pip
/usr/local/bin/pip
user@localhost:~/src$ which python
/usr/bin/python
user@localhost:~/src$ ./shell
user@localhost:~/src$ which pip
~/.env/bin/pip
user@localhost:~/src$ which python
~/.env/bin/python
user@localhost:~/src$ exit
exit

回答 3

Sourcing 是在您当前的 shell 中运行 shell 命令。当像上面那样在脚本内部 source 时,影响的是该脚本自身的环境;脚本退出后,这些环境更改就被撤销了,因为它们实际上已经超出了作用域。

如果您打算在virtualenv中运行shell命令,则可以在获取激活脚本后在脚本中执行此操作。如果您打算与virtualenv内部的shell进行交互,则可以在脚本内部产生一个继承环境的子shell。

Sourcing runs shell commands in your current shell. When you source inside of a script like you are doing above, you are affecting the environment for that script, but when the script exits, the environment changes are undone, as they’ve effectively gone out of scope.

If your intent is to run shell commands in the virtualenv, you can do that in your script after sourcing the activate script. If your intent is to interact with a shell inside the virtualenv, then you can spawn a sub-shell inside your script which would inherit the environment.
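That scoping is easy to demonstrate from any language. In this sketch (DEMO_VAR is just an illustrative name), a variable exported inside a child shell vanishes when the child exits, which is exactly why running ./shell.sh cannot activate the virtualenv for the caller:

```python
import os
import subprocess

# A child shell exports a variable; the change dies with the child,
# just like sourcing activate inside ./shell.sh does.
out = subprocess.run(
    ["sh", "-c", "DEMO_VAR=from_child; export DEMO_VAR; echo $DEMO_VAR"],
    capture_output=True, text=True,
).stdout.strip()
print(out)                          # -> from_child (visible in the child)
print(os.environ.get("DEMO_VAR"))   # -> None (the parent is unaffected)
```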


回答 4

这是我经常使用的脚本。运行为$ source script_name

#!/bin/bash -x
PWD=`pwd`
/usr/local/bin/virtualenv --python=python3 venv
echo $PWD
activate () {
    . $PWD/venv/bin/activate
}

activate

Here is the script that I use often. Run it as $ source script_name

#!/bin/bash -x
PWD=`pwd`
/usr/local/bin/virtualenv --python=python3 venv
echo $PWD
activate () {
    . $PWD/venv/bin/activate
}

activate

回答 5

source 这个 bash 脚本是为了做什么?

  1. 如果您打算在多个 virtualenv 之间切换,或想快速进入某个 virtualenv,可以试试 virtualenvwrapper。它提供了许多实用工具,例如 workon venv、mkvirtualenv venv 等等。

  2. 如果您只是要在某个 virtualenv 中运行一个 python 脚本,直接用 /path/to/venv/bin/python script.py 运行即可。

What does sourcing the bash script for?

  1. If you intend to switch between multiple virtualenvs or enter one virtualenv quickly, have you tried virtualenvwrapper? It provides a lot of utils like workon venv, mkvirtualenv venv and so on.

  2. If you just run a python script in certain virtualenv, use /path/to/venv/bin/python script.py to run it.


回答 6

您还可以使用子 shell 来更好地隔离这种用法,下面是一个实际示例:

#!/bin/bash

commandA --args

# Run commandB in a subshell and collect its output in $VAR
# NOTE
#  - PATH is only modified as an example
#  - output beyond a single value may not be captured without quoting
#  - it is important to discard (or separate) virtualenv activation stdout
#    if the stdout of commandB is to be captured
#
VAR=$(
    PATH="/opt/bin/foo:$PATH"
    . /path/to/activate > /dev/null  # activate virtualenv
    commandB  # tool from /opt/bin/ which requires virtualenv
)

# Use the output from commandB later
commandC "$VAR"

此样式在以下情况下特别有用:

  • /opt/bin 下存在不同版本的 commandA 或 commandC
  • commandB 存在于系统 PATH 中,或者非常常见
  • 这些命令在 virtualenv 下会失败
  • 需要多种不同的 virtualenv

You can also do this using a subshell to better contain your usage – here’s a practical example:

#!/bin/bash

commandA --args

# Run commandB in a subshell and collect its output in $VAR
# NOTE
#  - PATH is only modified as an example
#  - output beyond a single value may not be captured without quoting
#  - it is important to discard (or separate) virtualenv activation stdout
#    if the stdout of commandB is to be captured
#
VAR=$(
    PATH="/opt/bin/foo:$PATH"
    . /path/to/activate > /dev/null  # activate virtualenv
    commandB  # tool from /opt/bin/ which requires virtualenv
)

# Use the output from commandB later
commandC "$VAR"

This style is especially helpful when

  • a different version of commandA or commandC exists under /opt/bin
  • commandB exists in the system PATH or is very common
  • these commands fail under the virtualenv
  • one needs a variety of different virtualenvs

回答 7

您应该在一行中使用多个命令。例如:

os.system(". Projects/virenv/bin/activate && python Projects/virenv/django-project/manage.py runserver")

当您单独用一行激活虚拟环境时,我认为后续的其他命令行会“忘记”这个激活状态;把多个命令放在同一行里就可以避免这一点。对我有效 :)

You should use multiple commands in one line. for example:

os.system(". Projects/virenv/bin/activate && python Projects/virenv/django-project/manage.py runserver")

when you activate your virtual environment in one line, I think it forgets for other command lines and you can prevent this by using multiple commands in one line. It worked for me :)


回答 8

在学习venv时,我创建了一个脚本来提醒我如何激活它。

#!/bin/sh
# init_venv.sh
if [ -d "./bin" ];then
  echo "[info] Ctrl+d to deactivate"
  bash -c ". bin/activate; exec /usr/bin/env bash --rcfile <(echo 'PS1=\"(venv)\${PS1}\"') -i"
fi

这样做的好处是可以更改提示。

When I was learning venv I created a script to remind me how to activate it.

#!/bin/sh
# init_venv.sh
if [ -d "./bin" ];then
  echo "[info] Ctrl+d to deactivate"
  bash -c ". bin/activate; exec /usr/bin/env bash --rcfile <(echo 'PS1=\"(venv)\${PS1}\"') -i"
fi

This has the advantage that it changes the prompt.


如何在 *nix 下的 ipython 中使用 vi 键?

问题:如何在 *nix 下的 ipython 中使用 vi 键?

目前在 Bash 中,我使用 set -o vi 在 bash 提示符中启用 vi 模式。

我如何在ipython中进行此操作?

注意:如果答案适用于所有* nix,我将从标题中删除OS X :)

Currently in Bash I use set -o vi to enable vi mode in my bash prompt.

How do I get this going in ipython?

Note: If an answer applies to all *nix, I’ll remove the OS X from the title :)


回答 0

如果最近有人逛到这里:IPython 5.0 已从 readline 切换到 prompt_toolkit,因此这个问题的最新答案是传递一个选项:

$ ipython --TerminalInteractiveShell.editing_mode=vi

……或者在配置文件中进行全局设置(~/.ipython/profile_default/ipython_config.py;如果没有该文件,可用 ipython profile create 创建):

c.TerminalInteractiveShell.editing_mode = 'vi'

In case someone’s wandering in here recently, IPython 5.0 switched from readline to prompt_toolkit, so an updated answer to this question is to pass an option:

$ ipython --TerminalInteractiveShell.editing_mode=vi

… or to set it globally in the profile configuration (~/.ipython/profile_default/ipython_config.py; create it with ipython profile create if you don’t have it) with:

c.TerminalInteractiveShell.editing_mode = 'vi'

回答 1

看来这个解决方案对许多其他兼容 readline 的应用程序同样适用:

在 ~/.inputrc 文件中设置以下内容:

set editing-mode vi
set keymap vi
set convert-meta on

资料来源:http://www.jukie.net/bart/blog/20040326082602

Looks like a solution works for many other readline compatible apps:

Set the following in your ~/.inputrc file:

set editing-mode vi
set keymap vi
set convert-meta on

Source: http://www.jukie.net/bart/blog/20040326082602


回答 2

您也可以在 vi 模式和 Emacs 模式之间交互式切换。根据 readline 文档,您应该可以用 M-C-j 组合键在两者之间切换,但它似乎只允许我切换到 vi 模式。在我的 Mac 上(ESC 被用作“Meta”键),即:ESC+CTRL+j。要切回 Emacs 模式,本应可以用 C-e,但对我不起作用;我不得不改用 M-C-e,在 Mac 上即:ESC+CTRL+e。

仅供参考,我的 ~/.inputrc 设置如下:

set meta-flag on
set input-meta on
set convert-meta off
set output-meta on

You can also interactively switch between Vi-mode and Emacs mode. According to the readline docs, to switch between them you are supposed to be able to use the M-C-j key combination, but that only seems to allow me to switch to vi-mode – on my Mac (where ESC is used as the ‘Meta’ key) it is: ESC+CTRL+j. To switch back to Emacs mode one can use C-e but that didn’t appear to work for me – I had to instead do M-C-e – on my Mac it is: ESC+CTRL+e.

FYI my ~/.inputrc is set up as follows:

set meta-flag on
set input-meta on
set convert-meta off
set output-meta on

回答 3

ipython 使用 readline 库,可以通过 ~/.inputrc 文件进行配置。你可以添加

set editing-mode vi

到该文件,使所有基于 readline 的应用程序都使用 vi 风格的键绑定,而不是 Emacs 风格。

ipython uses the readline library and this is configurable using the ~/.inputrc file. You can add

set editing-mode vi

to that file to make all readline based applications use vi style keybindings instead of Emacs.


回答 4

我需要能够在 IPython 5 中交互地切换模式,我发现可以通过动态地重新创建提示管理器来做到这一点:

a = get_ipython().configurables[0]; a.editing_mode='vi'; a.init_prompt_toolkit_cli()

I needed to be able to switch modes interactively in IPython 5 and I found you can do so by recreating the prompt manager on the fly:

a = get_ipython().configurables[0]; a.editing_mode='vi'; a.init_prompt_toolkit_cli()

回答 5

您可以在 .ipython 的启动配置文件中设置 vi。如果还没有,可在 ~/.ipython/profile_default/startup/ 下添加一个名为 start.py 之类的文件来创建。这是一个例子:

# Initializing script for ipython in ~/.ipython/profile_default/startup/
from IPython import get_ipython
ipython = get_ipython()

# If in ipython, set vi and load autoreload extension
if 'ipython' in globals():
    ipython.editing_mode = 'vi'
    ipython.magic('load_ext autoreload')
    ipython.magic('autoreload 2')
from Myapp.models import * 

最后一行适用于将 ipython 与 Django 一起使用、并希望默认导入所有模型的情况。

You may set vi in your .ipython start-up config file. Create one if you don’t have it by adding a file to ~/.ipython/profile_default/startup/ called something like start.py. Here’s an example:

# Initializing script for ipython in ~/.ipython/profile_default/startup/
from IPython import get_ipython
ipython = get_ipython()

# If in ipython, set vi and load autoreload extension
if 'ipython' in globals():
    ipython.editing_mode = 'vi'
    ipython.magic('load_ext autoreload')
    ipython.magic('autoreload 2')
from Myapp.models import * 

That last line is if you use ipython with Django, and want to import all your models by default.


Bash等同于Python的pass语句

问题:Bash等同于Python的pass语句

是否有与Python pass语句等效的Bash ?

Is there a Bash equivalent to the Python’s pass statement?


回答 0

您可以为此使用 :。

You can use : for this.


回答 1

true 是一个成功地什么都不做的命令。

(false 在某种程度上正相反:它同样什么都不做,但声称发生了故障。)

true is a command that successfully does nothing.

(false would, in a way, be the opposite: it doesn’t do anything, but claims that a failure occurred.)
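Since this document's later sections drive shells from Python, here is a minimal sketch of the no-op seen from Python (it assumes a POSIX `sh` on the PATH; the commands are illustrative only):

```python
import subprocess

# sh/bash have no `pass` keyword; the no-op builtin `:` fills the same role.
# An empty `then` branch would be a syntax error, so `:` serves as a placeholder.
result = subprocess.run(["sh", "-c", "if true; then :; fi"])
print(result.returncode)  # 0 -- `:` ran and succeeded

# `true` is likewise a successful no-op; `false` is the failing counterpart.
print(subprocess.run(["sh", "-c", "false"]).returncode)  # 1
```

An empty `then` branch is exactly the situation where `:` earns its keep.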


从python中运行bash脚本

问题:从python中运行bash脚本

我的以下代码有问题:

callBash.py:

import subprocess
print "start"
subprocess.call("sleep.sh")
print "end"

sleep.sh:

sleep 10

我希望 "end" 在 10 秒后打印。(我知道这是一个愚蠢的示例,我完全可以直接在 python 中 sleep,但这个简单的 sleep.sh 文件只是用来测试)

I have a problem with the following code:

callBash.py:

import subprocess
print "start"
subprocess.call("sleep.sh")
print "end"

sleep.sh:

sleep 10

I want the “end” to be printed after 10s. (I know that this is a dumb example, I could simply sleep within python, but this simple sleep.sh file was just as a test)


回答 0

使 sleep.sh 可执行,并将 shell=True 添加到参数列表中(如先前答案所建议)即可正常工作。取决于搜索路径,您可能还需要加上 ./ 或其他合适的路径(即,将 "sleep.sh" 改为 "./sleep.sh")。

如果 bash 脚本的第一行是某个 shell 的路径(例如 #!/bin/bash),则在 Linux 之类的 Posix 系统下不需要 shell=True 参数。

Making sleep.sh executable and adding shell=True to the parameter list (as suggested in previous answers) works ok. Depending on the search path, you may also need to add ./ or some other appropriate path. (Ie, change "sleep.sh" to "./sleep.sh".)

The shell=True parameter is not needed (under a Posix system like Linux) if the first line of the bash script is a path to a shell; for example, #!/bin/bash.


回答 1

如果 sleep.sh 带有 shebang #!/bin/sh,具有适当的文件权限(运行 chmod u+rx sleep.sh 以确保),并且位于 $PATH 中,那么您的代码应该可以按原样工作:

import subprocess

rc = subprocess.call("sleep.sh")

如果脚本不在PATH中,则指定它的完整路径,例如,如果它在当前工作目录中:

from subprocess import call

rc = call("./sleep.sh")

如果脚本没有shebang,则需要指定shell=True

rc = call("./sleep.sh", shell=True)

如果脚本没有可执行权限,并且您无法更改它(例如通过运行 os.chmod('sleep.sh', 0o755)),则可以将脚本作为文本文件读取,然后把字符串传给 subprocess 模块:

with open('sleep.sh', 'rb') as file:
    script = file.read()
rc = call(script, shell=True)

If sleep.sh has the shebang #!/bin/sh and it has appropriate file permissions — run chmod u+rx sleep.sh to make sure and it is in $PATH then your code should work as is:

import subprocess

rc = subprocess.call("sleep.sh")

If the script is not in the PATH then specify the full path to it e.g., if it is in the current working directory:

from subprocess import call

rc = call("./sleep.sh")

If the script has no shebang then you need to specify shell=True:

rc = call("./sleep.sh", shell=True)

If the script has no executable permissions and you can’t change it e.g., by running os.chmod('sleep.sh', 0o755) then you could read the script as a text file and pass the string to subprocess module instead:

with open('sleep.sh', 'rb') as file:
    script = file.read()
rc = call(script, shell=True)
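The points above can be condensed into one self-contained sketch. The script name `hello.sh` and its contents are made up for this demo; the pattern (shebang, execute bit, explicit path) is what matters:

```python
import os
import stat
import subprocess
import tempfile

# hello.sh is a throwaway script written only for this demo.
with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "hello.sh")
    with open(path, "w") as f:
        f.write("#!/bin/sh\necho hello\n")
    os.chmod(path, os.stat(path).st_mode | stat.S_IXUSR)  # like chmod u+x

    # Shebang + execute bit + explicit path: no shell=True needed.
    out = subprocess.check_output([path])
    print(out.decode().strip())  # hello

    # Without the execute bit you could instead hand the file to sh yourself:
    rc = subprocess.call(["sh", path], stdout=subprocess.DEVNULL)
    print(rc)  # 0
```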

回答 2

实际上,您只需要添加shell=True参数即可:

subprocess.call("sleep.sh", shell=True)

但要注意-

警告:如果与不受信任的输入结合使用,使用 shell=True 调用系统 shell 可能会带来安全隐患。有关详细信息,请参见 "常用参数(Frequently Used Arguments)" 下的警告。

来源

Actually, you just have to add the shell=True argument:

subprocess.call("sleep.sh", shell=True)

But beware –

Warning Invoking the system shell with shell=True can be a security hazard if combined with untrusted input. See the warning under Frequently Used Arguments for details.

source


回答 3

如果有人希望通过参数调用脚本

import subprocess

val = subprocess.check_call("./script.sh '%s'" % arg, shell=True)

请记住在传递之前使用str(arg)将args转换为字符串。

这可以用来传递任意多个参数:

subprocess.check_call("./script.ksh %s %s %s" % (arg1, str(arg2), arg3), shell=True)

If someone looking for calling a script with arguments

import subprocess

val = subprocess.check_call("./script.sh '%s'" % arg, shell=True)

Remember to convert the args to string before passing, using str(arg).

This can be used to pass as many arguments as desired:

subprocess.check_call("./script.ksh %s %s %s" % (arg1, str(arg2), arg3), shell=True)
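A hedged aside: interpolating arguments into a shell string as above works, but any quote or `;` inside an argument gets interpreted by the shell. Passing the arguments as a list avoids that entirely. The `script.sh` below is a stand-in written just for this demo; it prints its first argument:

```python
import os
import subprocess
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    script = os.path.join(tmp, "script.sh")
    with open(script, "w") as f:
        f.write('#!/bin/sh\nprintf "%s" "$1"\n')
    os.chmod(script, 0o755)

    arg = 'hello world; "quoted"'           # input that would trip up shell quoting
    out = subprocess.check_output([script, str(arg)])  # no shell parses the args
    print(out.decode())                     # the argument arrives unmangled
```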

回答 4

确保sleep.sh具有执行权限,并使用来运行它shell=True

#!/usr/bin/python

import subprocess
print "start"
subprocess.call("./sleep.sh", shell=True)
print "end"

Make sure that sleep.sh has execution permissions, and run it with shell=True:

#!/usr/bin/python

import subprocess
print "start"
subprocess.call("./sleep.sh", shell=True)
print "end"

回答 5

如果chmod不起作用,那么您也可以尝试

import os
os.system('sh script.sh')
#you can also use bash instead of sh

我测试过的,谢谢

If chmod not working then you also try

import os
os.system('sh script.sh')
#you can also use bash instead of sh

test by me thanks


回答 6

补充一个答案,因为我在询问如何从 python 运行 bash 脚本后被引导到了这里。如果您的脚本接受参数,您会收到错误 OSError: [Errno 2] file not found。假设您的脚本接受一个睡眠时间参数:subprocess.call("sleep.sh 10") 将无法工作,您必须把参数作为字符串数组传递:subprocess.call(["sleep.sh", "10"])。

Adding an answer because I was directed here after asking how to run a bash script from python. You receive the error OSError: [Errno 2] file not found if your script takes in parameters. Let’s say for instance your script took in a sleep time parameter: subprocess.call("sleep.sh 10") will not work; you must pass the arguments as an array of strings: subprocess.call(["sleep.sh", "10"]).


bash:mkvirtualenv:找不到命令

问题:bash:mkvirtualenv:找不到命令

按照Doug Hellman的virtualenvwrapper帖子中的说明进行操作,我仍然无法启动测试环境。

[mpenning@tsunami ~]$ mkvirtualenv test
-bash: mkvirtualenv: command not found
[mpenning@tsunami ~]$

请注意,我使用的 WORKON_HOME 不在我的 $HOME 下。我按照 virtualenvwrapper 安装文档所示查找 /usr/local/bin/virtualenvwrapper.sh,但它并不存在。

如果这很重要,我正在运行CentOS 6和python 2.6.6。


# File: ~/.bash_profile
# ...

export WORKON_HOME="/opt/virtual_env/"
source "/opt/virtual_env/bin/virtualenvwrapper_bashrc"

After following the instructions on Doug Hellman’s virtualenvwrapper post, I still could not fire up a test environment.

[mpenning@tsunami ~]$ mkvirtualenv test
-bash: mkvirtualenv: command not found
[mpenning@tsunami ~]$

It should be noted that I’m using WORKON_HOME that is not in my $HOME. I tried looking for /usr/local/bin/virtualenvwrapper.sh as shown in the virtualenvwrapper installation docs, but it does not exist.

I’m running CentOS 6 and python 2.6.6, if this matters.


# File: ~/.bash_profile
# ...

export WORKON_HOME="/opt/virtual_env/"
source "/opt/virtual_env/bin/virtualenvwrapper_bashrc"

回答 0

解决方案1

由于某种原因,virtualenvwrapper.sh 被安装到了 /usr/bin/virtualenvwrapper.sh,而不是 /usr/local/bin 下。

.bash_profile作品中的以下内容…

source "/usr/bin/virtualenvwrapper.sh"
export WORKON_HOME="/opt/virtual_env/"

无需 source virtualenvwrapper_bashrc,我的安装也工作得很好。

解决方案2

或者,如下所述,您可以利用 virtualenvwrapper.sh 可能已经在 shell 的 PATH 中这一点,直接执行 source `which virtualenvwrapper.sh`。

Solution 1:

For some reason, virtualenvwrapper.sh was installed at /usr/bin/virtualenvwrapper.sh, instead of under /usr/local/bin.

The following in my .bash_profile works…

source "/usr/bin/virtualenvwrapper.sh"
export WORKON_HOME="/opt/virtual_env/"

My install seems to work fine without sourcing virtualenvwrapper_bashrc

Solution 2:

Alternatively as mentioned below, you could leverage the chance that virtualenvwrapper.sh is already in your shell’s PATH and just issue a source `which virtualenvwrapper.sh`


回答 1

尝试:

source `which virtualenvwrapper.sh`

反引号是命令替换:它们把程序打印出的内容放进表达式中。在这里,which 在 $PATH 中查找 virtualenvwrapper.sh 并输出其路径,然后 shell 通过 source 读取该脚本。

如果您希望每次重启 shell 时都生效,最好先取得 which 命令的输出,然后把 source 行写进 shell 的配置文件,像这样:

echo "source /path/to/virtualenvwrapper.sh" >> ~/.profile

^ 这可能因您的 shell 而略有不同。另外,请注意不要使用单个 >,因为那会截断 ~/.profile :-o

Try:

source `which virtualenvwrapper.sh`

The backticks are command substitution – they take whatever the program prints out and put it in the expression. In this case “which” checks the $PATH to find virtualenvwrapper.sh and outputs the path to it. The script is then read by the shell via ‘source’.

If you want this to happen every time you restart your shell, it’s probably better to grab the output from the “which” command first, and then put the “source” line in your shell, something like this:

echo "source /path/to/virtualenvwrapper.sh" >> ~/.profile

^ This may differ slightly based on your shell. Also, be careful not to use a single > as this will truncate your ~/.profile :-o
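If you ever need the equivalent of `which` from inside Python rather than the shell, the standard library has one. A small sketch (`sh` is assumed to exist on the PATH; the missing-command name is made up):

```python
import shutil

# shutil.which is Python's analogue of the shell's `which`: it searches the
# PATH for an executable and returns its full path, or None if not found.
print(shutil.which("sh"))                    # e.g. /bin/sh (path varies by system)
print(shutil.which("no-such-command-xyz"))   # None
```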


回答 2

我在 OS X 10.9.1 和 python 2.7.5 上遇到了相同的问题。WORKON_HOME 对我来说没有问题,但在运行 pip install virtualenvwrapper 之后,我确实需要手动把 source "/usr/local/bin/virtualenvwrapper.sh" 添加到 ~/.bash_profile(unix 中为 ~/.bashrc)。

I had the same issue on OS X 10.9.1 with python 2.7.5. No issues with WORKON_HOME for me, but I did have to manually add source "/usr/local/bin/virtualenvwrapper.sh" to ~/.bash_profile (or ~/.bashrc in unix) after I ran pip install virtualenvwrapper


回答 3

执行此命令的先决条件-

  1. pip(Pip Installs Packages 的递归缩写)是用于安装和管理 Python 软件包的包管理系统。在 Python 软件包索引(PyPI)中可以找到许多软件包。

    sudo apt-get install python-pip

  2. 安装 virtualenv。用于创建虚拟环境,使多个项目的软件包和依赖彼此隔离。

    sudo pip install virtualenv

  3. 安装 virtualenvwrapper(virtual env wrapper)

    sudo pip install virtualenvwrapper

安装必备组件后,您需要使虚拟环境包装器生效以创建虚拟环境。以下是步骤-

  1. 在路径变量中设置虚拟环境目录 export WORKON_HOME=(directory you need to save envs)

  2. source /usr/local/bin/virtualenvwrapper.sh -p $WORKON_HOME

如 @Mike 所提到的,source `which virtualenvwrapper.sh` 或 which virtualenvwrapper.sh 可用于定位 virtualenvwrapper.sh 文件。

最好把上面两行放进 ~/.bashrc,以免每次打开新 shell 时都要执行这些命令。之后用 mkvirtualenv 创建环境就只需要这些了。

注意事项-

  • 在Ubuntu下,您可能需要以root用户身份安装virtualenv和virtualenvwrapper。只需在上面的命令前加上sudo前缀即可。
  • 根据安装 virtualenv 的方式不同,virtualenvwrapper.sh 的路径可能会有所不同。通过运行 $ find /usr -name virtualenvwrapper.sh 查找合适的路径,并相应调整 .bash_profile 或 .bashrc 脚本中的那一行。

Prerequisites to execute this command –

  1. pip (recursive acronym of Pip Installs Packages) is a package management system used to install and manage software packages written in Python. Many packages can be found in the Python Package Index (PyPI).

    sudo apt-get install python-pip

  2. Install Virtual Environment. Used to create virtual environment, to install packages and dependencies of multiple projects isolated from each other.

    sudo pip install virtualenv

  3. Install virtual environment wrapper About virtual env wrapper

    sudo pip install virtualenvwrapper

After Installing prerequisites you need to bring virtual environment wrapper into action to create virtual environment. Following are the steps –

  1. set virtual environment directory in path variable- export WORKON_HOME=(directory you need to save envs)

  2. source /usr/local/bin/virtualenvwrapper.sh -p $WORKON_HOME

As mentioned by @Mike, source `which virtualenvwrapper.sh` or which virtualenvwrapper.sh can used to locate virtualenvwrapper.sh file.

It’s best to put above two lines in ~/.bashrc to avoid executing the above commands every time you open new shell. That’s all you need to create environment using mkvirtualenv

Points to keep in mind –

  • Under Ubuntu, you may need install virtualenv and virtualenvwrapper as root. Simply prefix the command above with sudo.
  • Depending on the process used to install virtualenv, the path to virtualenvwrapper.sh may vary. Find the appropriate path by running $ find /usr -name virtualenvwrapper.sh. Adjust the line in your .bash_profile or .bashrc script accordingly.

回答 4

使用此过程在ubuntu中创建虚拟环境

第1步

安装 pip

   sudo apt-get install python-pip

第2步

安装virtualenv

   sudo pip install virtualenv

第3步

创建一个目录来存储您的 virtualenvs(我使用 ~/.virtualenvs)

   mkdir ~/.virtualenvs

或使用此命令在env中安装特定版本的python

virtualenv -p /usr/bin/python3.6 venv

第4步

   sudo pip install virtualenvwrapper

第5步

   sudo nano ~/.bashrc

步骤6

在bashrc文件的末尾添加这两行代码

  export WORKON_HOME=~/.virtualenvs
  source /usr/local/bin/virtualenvwrapper.sh

步骤7

打开新终端(推荐)

步骤8

创建一个新的virtualenv

  mkvirtualenv myawesomeproject

步骤9

要在virtualenvs之间加载或切换,请使用workon命令:

  workon myawesomeproject

步骤10

要退出新的virtualenv,请使用

 deactivate

并确保分清使用的是 pip 还是 pip3

或按照以下步骤使用python3安装虚拟环境

安装环境

python3 -m venv my-project-env

并使用以下命令激活您的虚拟环境:

source my-project-env/bin/activate

Use this procedure to create virtual env in ubuntu

step 1

Install pip

   sudo apt-get install python-pip

step 2

Install virtualenv

   sudo pip install virtualenv

step 3

Create a dir to store your virtualenvs (I use ~/.virtualenvs)

   mkdir ~/.virtualenvs

or use this command to install specific version of python in env

virtualenv -p /usr/bin/python3.6 venv

step 4

   sudo pip install virtualenvwrapper

step 5

   sudo nano ~/.bashrc

step 6

Add this two line code at the end of the bashrc file

  export WORKON_HOME=~/.virtualenvs
  source /usr/local/bin/virtualenvwrapper.sh

step 7

Open new terminal (recommended)

step 8

Create a new virtualenv

  mkvirtualenv myawesomeproject

step 9

To load or switch between virtualenvs, use the workon command:

  workon myawesomeproject

step 10

To exit your new virtualenv, use

 deactivate

and make sure using pip vs pip3

OR follow the steps below to install virtual environment using python3

Install env

python3 -m venv my-project-env

and activate your virtual environment using the following command:

source my-project-env/bin/activate
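The final `python3 -m venv` step can also be driven from Python itself via the stdlib `venv` module. A minimal sketch (the directory name is hypothetical, and the `bin/` layout assumes a POSIX system):

```python
import os
import tempfile
import venv

# Equivalent of `python3 -m venv my-project-env`, done programmatically.
# with_pip=False skips pip bootstrapping to keep the demo fast.
with tempfile.TemporaryDirectory() as tmp:
    env_dir = os.path.join(tmp, "my-project-env")
    venv.create(env_dir, with_pip=False)

    # The activate script that `source my-project-env/bin/activate` expects:
    print(os.path.exists(os.path.join(env_dir, "bin", "activate")))  # True on POSIX
```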

回答 5

由于我自己刚费了一番周折,所以我试着写下两小时前我希望看到的答案。这适用于不只想要复制粘贴解决方案的人。

第一:您是否想知道,为什么复制粘贴路径对某些人有效而对其他人无效?主要原因是 Python 版本不同(2.x 或 3.x),解决方案也就不同。实际上,virtualenv 和 virtualenvwrapper 有分别适用于 python 2 和 python 3 的版本。如果您使用 python 2,请这样安装:

sudo pip install virtualenv
sudo pip install virtualenvwrapper

如果您打算使用python 3,请安装相关的python 3版本

sudo pip3 install virtualenv
sudo pip3 install virtualenvwrapper

您已经为自己的 python 版本成功安装了软件包,一切就绪了吗?好吧,试一下:在终端中输入 workon。终端将找不到该命令(workon 是 virtualenvwrapper 的命令)。它当然找不到:workon 是可执行命令,只有在您加载/source 了 virtualenvwrapper.sh 文件之后才可用。但官方安装指南已经涵盖了这一点,对吧?文档里说,只需打开 .bash_profile 并插入以下内容:

export WORKON_HOME=$HOME/.virtualenvs
export PROJECT_HOME=$HOME/Devel
source /usr/local/bin/virtualenvwrapper.sh

尤其是 source /usr/local/bin/virtualenvwrapper.sh 这条命令看起来很有帮助,因为它似乎会加载/source 所需的 virtualenvwrapper.sh 文件,其中包含您要使用的所有命令,例如 workon 和 mkvirtualenv。然而并没有。按照官方安装指南操作时,您很可能会遇到最初帖子中的错误:mkvirtualenv: command not found。命令仍然找不到,您仍然很沮丧。那么问题出在哪里?问题在于 virtualenvwrapper.sh 并不在您现在寻找它的位置。简短提醒……您现在在这里找它:

source /usr/local/bin/virtualenvwrapper.sh

但是,找到所需文件的方法非常简单。只需输入

which virtualenvwrapper

到您的终端。这将在您的PATH中搜索该文件,因为该文件很可能位于系统PATH所包含的某个文件夹中。

如果您的系统比较特别,所需文件可能藏在 PATH 文件夹之外。这种情况下,您可以用 shell 命令 find / -name virtualenvwrapper.sh 找到 virtualenvwrapper.sh 的路径。

您的结果可能类似:/Library/Frameworks/Python.framework/Versions/3.7/bin/virtualenvwrapper.sh。恭喜,您已经找到了缺失的文件!现在,您只需要更改 .bash_profile 中的一条命令。只需将:

source "/usr/local/bin/virtualenvwrapper.sh"

至:

source "/Library/Frameworks/Python.framework/Versions/3.7/bin/virtualenvwrapper.sh"

恭喜,virtualenvwrapper 现在可以在您的系统上工作了。不过您还可以再改进一下。如果您是通过 which virtualenvwrapper.sh 命令找到该文件的,就说明它位于 PATH 中的某个文件夹里。因此只写文件名时,shell 会在 PATH 文件夹中查找该文件,您就不必写出完整路径。只需输入:

source "virtualenvwrapper.sh"

就是这样。您不再沮丧,问题已经解决。但愿如此。

Since I just went though a drag, I’ll try to write the answer I’d have wished for two hours ago. This is for people who don’t just want the copy&paste solution

First: Do you wonder why copying and pasting paths works for some people while it doesn’t work for others? The main reason solutions differ is the Python version, 2.x or 3.x. There are actually distinct versions of virtualenv and virtualenvwrapper that work with either python 2 or 3. If you are on python 2, install like so:

sudo pip install virtualenv
sudo pip install virtualenvwrapper

If you are planning to use python 3 install the related python 3 versions

sudo pip3 install virtualenv
sudo pip3 install virtualenvwrapper

You’ve successfully installed the packages for your python version and are all set, right? Well, try it. Type workon into your terminal. Your terminal will not be able to find the command (workon is a command of virtualenvwrapper). Of course it won’t. Workon is an executable that will only be available to you once you load/source the file virtualenvwrapper.sh. But the official installation guide has you covered on this one, right? Just open your .bash_profile and insert the following, it says in the documentation:

export WORKON_HOME=$HOME/.virtualenvs
export PROJECT_HOME=$HOME/Devel
source /usr/local/bin/virtualenvwrapper.sh

Especially the command source /usr/local/bin/virtualenvwrapper.sh seems helpful since the command seems to load/source the desired file virtualenvwrapper.sh that contains all the commands you want to work with like workon and mkvirtualenv. But yeah, no. When following the official installation guide, you are very likely to receive the error from the initial post: mkvirtualenv: command not found. Still no command is being found and you are still frustrated. So whats the problem here? The problem is that virtualenvwrapper.sh is not where you are looking for it right now. Short reminder … you are looking here:

source /usr/local/bin/virtualenvwrapper.sh

But there is a pretty straight forward way to finding the desired file. Just type

which virtualenvwrapper

to your terminal. This will search your PATH for the file, since it is very likely to be in some folder that is included in the PATH of your system.

If your system is very exotic, the desired file will hide outside of a PATH folder. In that case you can find the path to virtalenvwrapper.sh with the shell command find / -name virtualenvwrapper.sh

Your result may look something like this: /Library/Frameworks/Python.framework/Versions/3.7/bin/virtualenvwrapper.sh. Congratulations, you have found your missing file! Now all you have to do is change one command in your .bash_profile. Just change:

source "/usr/local/bin/virtualenvwrapper.sh"

to:

source "/Library/Frameworks/Python.framework/Versions/3.7/bin/virtualenvwrapper.sh"

Congratulations. Virtualenvwrapper now works on your system. But you can do one more thing to enhance your solution. If you’ve found the file virtualenvwrapper.sh with the command which virtualenvwrapper.sh, you know that it is inside a folder on the PATH. So if you just write the filename, the shell will assume the file is inside a PATH folder and you don’t have to write out the full path. Just type:

source "virtualenvwrapper.sh"

That’s it. You are no longer frustrated. You have solved your problem. Hopefully.


回答 6

要在 Ubuntu 18.04.3 上成功安装 virtualenvwrapper,您需要执行以下操作:

  1. 安装 virtualenv

    sudo apt install virtualenv
  2. 安装 virtualenvwrapper

    sudo pip install virtualenv
    sudo pip install virtualenvwrapper
  3. 将以下内容添加到.bashrc文件末尾

    export WORKON_HOME=~/virtualenvs
    export VIRTUALENVWRAPPER_PYTHON=/usr/bin/python
    source ~/.local/bin/virtualenvwrapper.sh
  4. 执行.bashrc文件

    source ~/.bashrc
  5. 创建您的virtualenv

    mkvirtualenv your_virtualenv

In order to successfully install the virtualenvwrapper on Ubuntu 18.04.3 you need to do the following:

  1. Install virtualenv

    sudo apt install virtualenv
    
  2. Install virtualenvwrapper

    sudo pip install virtualenv
    sudo pip install virtualenvwrapper
    
  3. Add the following to the end of the .bashrc file

    export WORKON_HOME=~/virtualenvs
    export VIRTUALENVWRAPPER_PYTHON=/usr/bin/python
    source ~/.local/bin/virtualenvwrapper.sh
    
  4. Execute the .bashrc file

    source ~/.bashrc
    
  5. Create your virtualenv

    mkvirtualenv your_virtualenv
    

回答 7

在Windows 7和Git Bash上,这可以帮助我:

  1. 创建一个 ~/.bashrc 文件(在用户主文件夹下)
  2. 添加行 export WORKON_HOME=$HOME/.virtualenvs(如果该文件夹不存在,必须先创建)
  3. 添加行 source "C:\Program Files (x86)\Python36-32\Scripts\virtualenvwrapper.sh"(按您的 virtualenvwrapper.sh 实际路径修改)

现在重新启动 git bash,mkvirtualenv 命令就可以正常工作了。

On Windows 7 and Git Bash this helps me:

  1. Create a ~/.bashrc file (under your user home folder)
  2. Add line export WORKON_HOME=$HOME/.virtualenvs (you must create this folder if it doesn’t exist)
  3. Add line source “C:\Program Files (x86)\Python36-32\Scripts\virtualenvwrapper.sh” (change path for your virtualenvwrapper.sh)

Restart your git bash and mkvirtualenv command now will work nicely.


回答 8

在 Windows 10 上使用 Git Bash 和 Windows 版 Python36 时,我在稍微不同的位置找到了 virtualenvwrapper.sh,运行以下命令解决了该问题:

source virtualenvwrapper.sh 
/c/users/[myUserName]/AppData/Local/Programs/Python36/Scripts

Using Git Bash on Windows 10 and Python36 for Windows I found the virtualenvwrapper.sh in a slightly different place and running this resolved the issue

source virtualenvwrapper.sh 
/c/users/[myUserName]/AppData/Local/Programs/Python36/Scripts

回答 9

通过在 ~/.bash_profile(或 unix 中的 ~/.bashrc)文件中添加以下两行,解决了我在 Ubuntu 14.04、python 2.7.6 下的问题。

source "/usr/local/bin/virtualenvwrapper.sh"

export WORKON_HOME="/opt/virtual_env/"

然后在终端中执行这两行。

Solved my issue in Ubuntu 14.04 OS with python 2.7.6, by adding below two lines into ~/.bash_profile (or ~/.bashrc in unix) files.

source "/usr/local/bin/virtualenvwrapper.sh"

export WORKON_HOME="/opt/virtual_env/"

And then executing both these lines onto the terminal.


回答 10

在 Windows 10 上,要创建虚拟环境,我将 "pip mkvirtualenv myproject" 替换为 "mkvirtualenv myproject",效果很好。

On Windows 10, to create the virtual environment, I replace “pip mkvirtualenv myproject” by “mkvirtualenv myproject” and that works well.


如何在Python中实现常见的bash习惯用法?[关闭]

问题:如何在Python中实现常见的bash习惯用法?[关闭]

目前,我通过一堆记得不太牢靠的 AWK、sed、Bash 和一点点 Perl 对文本文件进行操作。

我见过提到python在这种情况下有好处的几个地方。如何使用Python替换Shell脚本,AWK,sed和朋友?

I currently do my textfile manipulation through a bunch of badly remembered AWK, sed, Bash and a tiny bit of Perl.

I’ve seen mentioned a few places that python is good for this kind of thing. How can I use Python to replace shell scripting, AWK, sed and friends?


回答 0

任何外壳程序都有几套功能。

  • 基本的 Linux/Unix 命令。所有这些都可以通过 subprocess 库获得。但对于执行所有外部命令,它并不总是最佳首选。还可以看看 shutil 中的一些命令,它们对应独立的 Linux 命令,但您多半可以直接在 Python 脚本中实现。os 库中还有另一大批 Linux 命令,用 Python 做会更简单。

    而且(额外好处!)更快。shell 中每个单独的 Linux 命令(除少数例外)都会派生一个子进程。使用 Python 的 shutil 和 os 模块则无需派生子进程。

  • 外壳环境功能。这包括设置命令环境的内容(当前目录和环境变量以及诸如此类)。您可以直接从Python轻松地对此进行管理。

  • Shell 编程功能。这包括所有的进程状态码检查、各种逻辑命令(if、while、for 等)、test 命令及其所有亲属,以及函数定义。这一切在 Python 中都容易得多,也是摆脱 bash、改用 Python 的巨大胜利之一。

  • 交互功能。这包括命令历史记录之类的东西。编写 shell 脚本不需要这些;它们仅用于人机交互,而不用于脚本编写。

  • Shell 文件管理功能。这包括重定向和管道,是比较棘手的部分。其中大部分可以通过 subprocess 完成,但有些在 shell 中很容易的事情在 Python 中就不那么愉快了。具体来说就是像 (a | b; c) | something >result 这样的东西:它并行运行两个进程(a 的输出作为 b 的输入),随后是第三个进程;该序列的输出与 something 并行运行,最终输出收集到名为 result 的文件中。这在任何其他语言里表达起来都很复杂。

特定程序(awk,sed,grep等)通常可以重写为Python模块。不要太过分。替换您需要的内容并发展您的“ grep”模块。不要以编写替换“ grep”的Python模块开始。

最好的事情是您可以分步执行此操作。

  1. 用 Python 替换 AWK 和 PERL,其他一切保持不变。
  2. 看一下用Python替换GREP。这可能会稍微复杂一些,但是您的GREP版本可以根据您的处理需求进行定制。
  3. 看一下用基于 os.walk 的 Python 循环来代替 FIND。这是一个很大的胜利,因为您不会派生那么多进程。
  4. 看一下用Python脚本替换常见的shell逻辑(循环,决策等)。

Any shell has several sets of features.

  • The Essential Linux/Unix commands. All of these are available through the subprocess library. This isn’t always the best first choice for doing all external commands. Look also at shutil for some commands that are separate Linux commands, but you could probably implement directly in your Python scripts. Another huge batch of Linux commands are in the os library; you can do these more simply in Python.

    And — bonus! — more quickly. Each separate Linux command in the shell (with a few exceptions) forks a subprocess. By using Python shutil and os modules, you don’t fork a subprocess.

  • The shell environment features. This includes stuff that sets a command’s environment (current directory and environment variables and what-not). You can easily manage this from Python directly.

  • The shell programming features. This is all the process status code checking, the various logic commands (if, while, for, etc.), the test command and all of its relatives. The function definition stuff. This is all much, much easier in Python. This is one of the huge victories in getting rid of bash and doing it in Python.

  • Interaction features. This includes command history and what-not. You don’t need this for writing shell scripts. This is only for human interaction, and not for script-writing.

  • The shell file management features. This includes redirection and pipelines. This is trickier. Much of this can be done with subprocess. But some things that are easy in the shell are unpleasant in Python. Specifically stuff like (a | b; c ) | something >result. This runs two processes in parallel (with output of a as input to b), followed by a third process. The output from that sequence is run in parallel with something and the output is collected into a file named result. That’s just complex to express in any other language.

Specific programs (awk, sed, grep, etc.) can often be rewritten as Python modules. Don’t go overboard. Replace what you need and evolve your “grep” module. Don’t start out writing a Python module that replaces “grep”.

The best thing is that you can do this in steps.

  1. Replace AWK and PERL with Python. Leave everything else alone.
  2. Look at replacing GREP with Python. This can be a bit more complex, but your version of GREP can be tailored to your processing needs.
  3. Look at replacing FIND with Python loops that use os.walk. This is a big win because you don’t spawn as many processes.
  4. Look at replacing common shell logic (loops, decisions, etc.) with Python scripts.
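Step 3’s `os.walk`-based replacement for FIND can be sketched as follows. `find_by_suffix` is a made-up helper name, roughly mimicking `find root -name "*.txt"`, and the demo tree is throwaway:

```python
import os
import tempfile

def find_by_suffix(root, suffix):
    """Rough Python stand-in for `find root -name "*suffix"`."""
    hits = []
    for dirpath, dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(suffix):
                hits.append(os.path.join(dirpath, name))
    return hits

# Demo on a throwaway tree: one process, no `find` subprocess spawned.
with tempfile.TemporaryDirectory() as root:
    os.makedirs(os.path.join(root, "sub"))
    for rel in ("a.txt", "b.log", os.path.join("sub", "c.txt")):
        open(os.path.join(root, rel), "w").close()
    print(sorted(os.path.relpath(p, root) for p in find_by_suffix(root, ".txt")))
```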

回答 1

当然是 :)

看一下这些库,这些库可以帮助您不再编写Shell脚本(Plumbum的座右铭)。

另外,如果您想用基于 Python 的东西替换 awk、sed 和 grep,那么我推荐 pyp:

"Pyed Piper" 即 pyp,是类似于 awk 或 sed 的 linux 命令行文本操作工具,但它使用标准的 python 字符串和列表方法以及自定义函数,能够在高强度生产环境中快速生成结果。

Yes, of course :)

Take a look at these libraries which help you Never write shell scripts again (Plumbum’s motto).

Also, if you want to replace awk, sed and grep with something Python based then I recommend pyp

“The Pyed Piper”, or pyp, is a linux command line text manipulation tool similar to awk or sed, but which uses standard python string and list methods as well as custom functions evolved to generate fast results in an intense production environment.


回答 2

我刚刚发现了如何结合 bash 和 ipython 的最佳部分。到目前为止,这对我来说比使用 subprocess 等更舒服。您可以轻松复制现有 bash 脚本的大部分内容,并以 python 的方式加上错误处理 :) 这是我的结果:

#!/usr/bin/env ipython3

# *** How to have the most comfort scripting experience of your life ***
# ######################################################################
#
# … by using ipython for scripting combined with subcommands from bash!
#
# 1. echo "#!/usr/bin/env ipython3" > scriptname.ipy    # creates new ipy-file
#
# 2. chmod +x scriptname.ipy                            # make in executable
#
# 3. starting with line 2, write normal python or do some of
#    the ! magic of ipython, so that you can use unix commands
#    within python and even assign their output to a variable via
#    var = !cmd1 | cmd2 | cmd3                          # enjoy ;)
#
# 4. run via ./scriptname.ipy - if it fails with recognizing % and !
#    but parses raw python fine, please check again for the .ipy suffix

# ugly example, please go and find more in the wild
files = !ls *.* | grep "y"
for file in files:
  !echo $file | grep "p"
# sorry for this nonsense example ;)

请参阅 IPython 关于系统 shell 命令的文档,以及将其用作系统 shell 的文档。

I just discovered how to combine the best parts of bash and ipython. Up to now this seems more comfortable to me than using subprocess and so on. You can easily copy big parts of existing bash scripts and e.g. add error handling in the python way :) And here is my result:

#!/usr/bin/env ipython3

# *** How to have the most comfort scripting experience of your life ***
# ######################################################################
#
# … by using ipython for scripting combined with subcommands from bash!
#
# 1. echo "#!/usr/bin/env ipython3" > scriptname.ipy    # creates new ipy-file
#
# 2. chmod +x scriptname.ipy                            # make in executable
#
# 3. starting with line 2, write normal python or do some of
#    the ! magic of ipython, so that you can use unix commands
#    within python and even assign their output to a variable via
#    var = !cmd1 | cmd2 | cmd3                          # enjoy ;)
#
# 4. run via ./scriptname.ipy - if it fails with recognizing % and !
#    but parses raw python fine, please check again for the .ipy suffix

# ugly example, please go and find more in the wild
files = !ls *.* | grep "y"
for file in files:
  !echo $file | grep "p"
# sorry for this nonsense example ;)

See IPython docs on system shell commands and using it as a system shell.


回答 3

从 2015 年和 Python 3.4 发布开始,现在可以通过以下网址获得相当完整的交互式用户 shell:http://xon.sh/ 或 https://github.com/scopatz/xonsh

演示视频中没有展示管道的使用,但在默认 shell 模式下是支持的。

Xonsh(’conch’)会非常努力地模仿bash,因此您已经获得了肌肉记忆的东西,例如

env | uniq | sort -r | grep PATH

要么

my-web-server 2>&1 | my-log-sorter

仍然可以正常工作。

该教程篇幅很长,似乎涵盖了人们通常希望在ash或bash提示符下看到的大量功能:

  • 编译,评估和执行!
  • 命令历史记录和制表符完成
  • 使用 ? 和 ?? 获取帮助和超级帮助
  • 别名和自定义提示
  • 执行*.xsh也可以导入的命令和/或脚本
  • 环境变量,包括使用 ${}
  • 输入/输出重定向和组合
  • 后台作业和作业控制
  • 嵌套子流程,管道和协同流程
  • 存在命令时为子进程模式,否则为Python模式
  • 使用 $() 捕获子进程输出,使用 $[] 运行不捕获输出的子进程,使用 @() 进行 Python 求值
  • 使用 * 进行文件名通配,或使用反引号进行正则表达式文件名通配

As of 2015 and Python 3.4’s release, there’s now a reasonably complete user-interactive shell available at: http://xon.sh/ or https://github.com/scopatz/xonsh

The demonstration video does not show pipes being used, but they ARE supported when in the default shell mode.

Xonsh (‘conch’) tries very hard to emulate bash, so things you’ve already gained muscle memory for, like

env | uniq | sort -r | grep PATH

or

my-web-server 2>&1 | my-log-sorter

will still work fine.

The tutorial is quite lengthy and seems to cover a significant amount of the functionality someone would generally expect at an ash or bash prompt:

  • Compiles, Evaluates, & Executes!
  • Command History and Tab Completion
  • Help & Superhelp with ? & ??
  • Aliases & Customized Prompts
  • Executes Commands and/or *.xsh Scripts which can also be imported
  • Environment Variables including Lookup with ${}
  • Input/Output Redirection and Combining
  • Background Jobs & Job Control
  • Nesting Subprocesses, Pipes, and Coprocesses
  • Subprocess-mode when a command exists, Python-mode otherwise
  • Captured Subprocess with $(), Uncaptured Subprocess with $[], Python Evaluation with @()
  • Filename Globbing with * or Regular Expression Filename Globbing with Backticks

回答 4

  • 如果要使用Python作为外壳,为什么不看看IPython?交互式学习语言也很好。
  • 如果您进行大量的文本操作,并且将Vim用作文本编辑器,则还可以直接在python中为Vim编写插件。只需在Vim中输入“:help python”,然后按照说明进行操作或查看此演示文稿即可。编写可直接在编辑器中使用的函数是如此简单和强大!
  • If you want to use Python as a shell, why not have a look at IPython ? It is also good to learn interactively the language.
  • If you do a lot of text manipulation, and if you use Vim as a text editor, you can also directly write plugins for Vim in python. just type “:help python” in Vim and follow the instructions or have a look at this presentation. It is so easy and powerfull to write functions that you will use directly in your editor!

回答 5

最初有sh,sed和awk(以及find,grep和…)。这很好。但是awk可能是一个奇怪的小野兽,如果您不经常使用它,将很难记住。然后,伟大的骆驼创造了Perl。Perl是系统管理员的梦想。就像在类固醇上编写外壳脚本一样。文本处理(包括正则表达式)只是该语言的一部分。然后它变得丑陋了。人们试图用Perl进行大型应用程序。现在,请不要误会我的意思,Perl可以是一个应用程序,但是如果您不太谨慎的话,它可能(可以!)看起来像一团糟。然后就是所有这些平面数据业务。这足以使程序员发疯。

输入Python,Ruby等。这些确实是非常好的通用语言。它们支持文本处理,并且做得很好(尽管在语言的基本核心中可能并不紧密地缠在一起)。但是它们也可以很好地扩展,并且到最后仍然具有漂亮的代码。他们还开发了相当庞大的社区,其中有大量的图书馆可以满足大多数需求。

现在,对Perl的许多负面影响只是一个见解,当然有些人可以编写非常简洁的Perl,但是由于许多人抱怨创建混淆代码太容易了,因此您知道其中有些道理。真正的问题就变成了,您是否打算将这种语言用于比简单的bash脚本替换更多的事情。如果没有,请学习更多Perl。另一方面,如果您想要一种语言,并且随着您想做更多的事情而发展,那么我建议使用Python或Ruby。

无论哪种方式,祝您好运!

In the beginning there was sh, sed, and awk (and find, and grep, and…). It was good. But awk can be an odd little beast and hard to remember if you don’t use it often. Then the great camel created Perl. Perl was a system administrator’s dream. It was like shell scripting on steroids. Text processing, including regular expressions were just part of the language. Then it got ugly… People tried to make big applications with Perl. Now, don’t get me wrong, Perl can be an application, but it can (can!) look like a mess if you’re not really careful. Then there is all this flat data business. It’s enough to drive a programmer nuts.

Enter Python, Ruby, et al. These are really very good general purpose languages. They support text processing, and do it well (though perhaps not as tightly entwined in the basic core of the language). But they also scale up very well, and still have nice looking code at the end of the day. They also have developed pretty hefty communities with plenty of libraries for most anything.

Now, much of the negativeness towards Perl is a matter of opinion, and certainly some people can write very clean Perl, but with this many people complaining about it being too easy to create obfuscated code, you know some grain of truth is there. The question really becomes then, are you ever going to use this language for more than simple bash script replacements. If not, learn some more Perl.. it is absolutely fantastic for that. If, on the other hand, you want a language that will grow with you as you want to do more, may I suggest Python or Ruby.

Either way, good luck!


回答 6

我建议使用很棒的在线书籍《Dive Into Python》。这就是我最初学习语言的方式。

除了教给您语言的基本结构和大量有用的数据结构外,它还有一章很好的文件处理章节,随后的一章是正则表达式等等。

I suggest the awesome online book Dive Into Python. It’s how I learned the language originally.

Beyond teaching you the basic structure of the language, and a whole lot of useful data structures, it has a good chapter on file handling and subsequent chapters on regular expressions and more.


回答 7

添加到先前的答案:检查pexpect模块以处理交互式命令(adduser,passwd等)

Adding to previous answers: check the pexpect module for dealing with interactive commands (adduser, passwd etc.)


回答 8

我喜欢 Python 的原因之一是,它比 POSIX 工具标准化得好得多。用 POSIX 工具时,我必须反复检查每一处是否与其他操作系统兼容:在 Linux 系统上编写的程序,在 BSD 系统或 OS X 上可能表现不同。使用 Python,我只需检查目标系统是否有足够现代的 Python 版本。

更好的是,使用标准Python编写的程序甚至可以在Windows上运行!

One reason I love Python is that it is much better standardized than the POSIX tools. I have to double and triple check that each bit is compatible with other operating systems. A program written on a Linux system might not work the same on a BSD system or OS X. With Python, I just have to check that the target system has a sufficiently modern version of Python.

Even better, a program written in standard Python will even run on Windows!


回答 9

我将根据经验给出我的看法:

对于外壳:

  • Shell 很容易产生“只写”代码:写完之后,等您再回来看时,就再也弄不明白自己当初做了什么。这种代码很容易写出来。
  • shell可以使用管道在一行中进行大量文本处理,拆分等操作。
  • 当集成不同编程语言中的程序调用时,它是最好的粘合语言。

对于python:

  • 如果要包括Windows的可移植性,请使用python。
  • 当您必须要操作的不仅仅是文本(例如数字集合)时,python可能会更好。为此,我建议使用python。

我通常会为大多数事情选择bash,但是当我必须跨Windows边界进行操作时,我只会使用python。

I will give here my opinion based on experience:

For shell:

  • shell can very easily spawn write-only code: write it, and when you come back to it, you will never figure out what you did again. It’s very easy to end up there.
  • shell can do A LOT of text processing, splitting, etc in one line with pipes.
  • it is the best glue language when it comes to integrate the call of programs in different programming languages.

For python:

  • if you want portability to windows included, use python.
  • python can be better when you must manipulate just more than text, such as collections of numbers. For this, I recommend python.

I usually choose bash for most of the things, but when I have something that must cross windows boundaries, I just use python.


回答 10

pythonpy是一种工具,可使用python语法轻松访问awk和sed的许多功能:

$ echo me2 | py -x 're.sub("me", "you", x)'
you2

pythonpy is a tool that provides easy access to many of the features from awk and sed, but using python syntax:

$ echo me2 | py -x 're.sub("me", "you", x)'
you2
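If pythonpy is not installed, a plain `python3 -c` one-liner can do the same transformation using only the standard interpreter (a sketch; `python3` is assumed to be on PATH):

```shell
echo me2 | python3 -c 'import re, sys; sys.stdout.write(re.sub("me", "you", sys.stdin.read()))'
```

This prints the same `you2` as the `py` example above, at the cost of some extra boilerplate.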

回答 11

我建立了半长的shell脚本(300-500行)和Python代码,它们具有相似的功能。当执行许多外部命令时,我发现该外壳更易于使用。当有大量的文本操作时,Perl也是一个不错的选择。

I have built semi-long shell scripts (300-500 lines) and Python code which does similar functionality. When many external commands are being executed, I find the shell is easier to use. Perl is also a good option when there is lots of text manipulation.


回答 12

在研究此主题时,我发现了这个概念验证代码(通过http://jlebar.com/2010/2/1/Replacing_Bash.html上的注释),可以使您“使用Python编写类似Shell的管道简洁的语法,并在有意义的地方利用现有的系统工具”:

for line in sh("cat /tmp/junk2") | cut(d=',',f=1) | 'sort' | uniq:
    sys.stdout.write(line)

While researching this topic, I found this proof-of-concept code (via a comment at http://jlebar.com/2010/2/1/Replacing_Bash.html) that lets you “write shell-like pipelines in Python using a terse syntax, and leveraging existing system tools where they make sense”:

for line in sh("cat /tmp/junk2") | cut(d=',',f=1) | 'sort' | uniq:
    sys.stdout.write(line)

回答 13

最好的选择是专门针对您的问题的工具。如果正在处理文本文件,则Sed,Awk和Perl是最有竞争力的竞争者。Python是一种通用的动态语言。与任何通用语言一样,文件处理也受支持,但这并不是其核心目的。如果我特别需要动态语言,我会考虑使用Python或Ruby。

简而言之,请非常好地学习Sed和Awk,以及带有* nix风格的所有其他好东西(所有Bash内置,grep,tr等)。如果您对文本文件处理感兴趣,那么您已经在使用正确的东西。

Your best bet is a tool that is specifically geared towards your problem. If it’s processing text files, then Sed, Awk and Perl are the top contenders. Python is a general-purpose dynamic language. As with any general-purpose language, there’s support for file manipulation, but that isn’t its core purpose. I would consider Python or Ruby if I had a requirement for a dynamic language in particular.

In short, learn Sed and Awk really well, plus all the other goodies that come with your flavour of *nix (All the Bash built-ins, grep, tr and so forth). If it’s text file processing you’re interested in, you’re already using the right stuff.


回答 14

您可以在ShellPy库中使用python代替bash 。

这是一个从Github下载Python用户的化身的示例:

import json
import os
import tempfile

# get the api answer with curl
answer = `curl https://api.github.com/users/python
# syntactic sugar for checking returncode of executed process for zero
if answer:
    answer_json = json.loads(answer.stdout)
    avatar_url = answer_json['avatar_url']

    destination = os.path.join(tempfile.gettempdir(), 'python.png')

    # execute curl once again, this time to get the image
    result = `curl {avatar_url} > {destination}
    if result:
        # if there were no problems show the file
        p`ls -l {destination}
    else:
        print('Failed to download avatar')

    print('Avatar downloaded')
else:
    print('Failed to access github api')

如您所见,重音符(`)符号内的所有表达式都在shell中执行。并且在Python代码中,您可以捕获此执行的结果并对其执行操作。例如:

log = `git log --pretty=oneline --grep='Create'

该行将首先git log --pretty=oneline --grep='Create'在shell中执行,然后将结果分配给log变量。结果具有以下属性:

标准输出从运行进程的标准输出的全部文本

stderr来自执行过程的stderr的全文

返回码的执行的返回码

这是库的一般概述,可在此处找到带有示例的更详细描述。

You can use python instead of bash with the ShellPy library.

Here is an example that downloads avatar of Python user from Github:

import json
import os
import tempfile

# get the api answer with curl
answer = `curl https://api.github.com/users/python
# syntactic sugar for checking returncode of executed process for zero
if answer:
    answer_json = json.loads(answer.stdout)
    avatar_url = answer_json['avatar_url']

    destination = os.path.join(tempfile.gettempdir(), 'python.png')

    # execute curl once again, this time to get the image
    result = `curl {avatar_url} > {destination}
    if result:
        # if there were no problems show the file
        p`ls -l {destination}
    else:
        print('Failed to download avatar')

    print('Avatar downloaded')
else:
    print('Failed to access github api')

As you can see, all expressions inside of grave accent ( ` ) symbol are executed in shell. And in Python code, you can capture results of this execution and perform actions on it. For example:

log = `git log --pretty=oneline --grep='Create'

This line will first execute git log --pretty=oneline --grep='Create' in shell and then assign the result to the log variable. The result has the following properties:

stdout the whole text from stdout of the executed process

stderr the whole text from stderr of the executed process

returncode returncode of the execution

This is general overview of the library, more detailed description with examples can be found here.


回答 15

如果您的文本文件操作通常是一次性的,可能是在shell提示符下完成的,那么您将无法从python得到更好的结果。

另一方面,如果通常您需要一遍又一遍地执行相同(或类似)的任务,并且必须编写脚本来执行此操作,那么python很棒-而且您可以轻松地创建自己的库(您可以也使用shell脚本,但这比较麻烦)。

一个非常简单的例子来让人感觉。

# 注意:popen2 模块在 Python 3 中已被移除,新代码请使用 subprocess
import popen2
stdout_text, stdin_text = popen2.popen2("your-shell-command-here")
for line in stdout_text:
  if line.startswith("#"):
    pass
  else:
    jobID = int(line.split(",")[0].split()[1].lstrip("<").rstrip(">"))
    # do something with jobID

还要检查sys和getopt模块,它们是您首先需要的。

If your textfile manipulation usually is one-time, possibly done on the shell-prompt, you will not get anything better from python.

On the other hand, if you usually have to do the same (or similar) task over and over, and you have to write your scripts for doing that, then python is great – and you can easily create your own libraries (you can do that with shell scripts too, but it’s more cumbersome).

A very simple example to get a feeling.

# Note: the popen2 module was removed in Python 3; use subprocess in new code
import popen2
stdout_text, stdin_text = popen2.popen2("your-shell-command-here")
for line in stdout_text:
  if line.startswith("#"):
    pass
  else:
    jobID = int(line.split(",")[0].split()[1].lstrip("<").rstrip(">"))
    # do something with jobID

Check also sys and getopt module, they are the first you will need.
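As a minimal illustration of what sys buys you for script arguments (argparse is the modern successor to getopt), here is a hedged sketch; the script name and file argument are made up for the example:

```python
import sys

def main(argv):
    # Hypothetical argument handling; real scripts may prefer argparse.
    if len(argv) < 2:
        print("usage: myscript FILE", file=sys.stderr)
        return 2  # conventional exit status for usage errors
    print("processing", argv[1])
    return 0

if __name__ == "__main__":
    # Simulated invocation for demonstration; normally you'd pass sys.argv.
    print("exit code:", main(["myscript", "data.txt"]))
```

In a real script the last line would be `sys.exit(main(sys.argv))`.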


回答 16

我已经在PyPI上发布了一个软件包:ez
使用pip install ez安装它。

它打包了 shell 中的常用命令,而且很好的一点是,我的库使用与 shell 基本相同的语法。例如,cp(source, destination) 可以同时处理文件和文件夹!(它是 shutil.copy 和 shutil.copytree 的包装,并自行决定何时使用哪一个)。更妙的是,它还能像 R 一样支持向量化!

另一个示例:没有os.walk,使用fls(path,regex)递归查找文件并使用正则表达式进行过滤,并返回具有或不具有全路径的文件列表

最后一个例子:您可以将它们组合起来以编写非常简单的脚本:
files = fls('.','py$'); cp(files, myDir)

一定要检查一下!我花了数百个小时来编写/改进它!

I have published a package on PyPI: ez.
Use pip install ez to install it.

It has packed common commands in shell and nicely my lib uses basically the same syntax as shell. e.g., cp(source, destination) can handle both file and folder! (wrapper of shutil.copy shutil.copytree and it decides when to use which one). Even more nicely, it can support vectorization like R!

Another example: no os.walk, use fls(path, regex) to recursively find files and filter with regular expression and it returns a list of files with or without fullpath

Final example: you can combine them to write very simply scripts:
files = fls('.','py$'); cp(files, myDir)

Definitely check it out! It has cost me hundreds of hours to write/improve it!


在Python中运行Bash命令

问题:在Python中运行Bash命令

在我的本地计算机上,我运行一个包含此行的python脚本

bashCommand = "cwm --rdf test.rdf --ntriples > test.nt"
os.system(bashCommand)

这很好。

然后,我在服务器上运行相同的代码,并收到以下错误消息

'import site' failed; use -v for traceback
Traceback (most recent call last):
File "/usr/bin/cwm", line 48, in <module>
from swap import  diag
ImportError: No module named swap

因此,我插入了一行 print bashCommand,在用 os.system() 运行命令之前,先把它打印到终端。

当然,我再次收到错误(由 os.system(bashCommand) 引起),但在错误出现之前,它会先在终端中打印出命令。然后我把这段输出复制粘贴到终端里,按下回车,它就能正常工作……

有人知道发生了什么吗?

On my local machine, I run a python script which contains this line

bashCommand = "cwm --rdf test.rdf --ntriples > test.nt"
os.system(bashCommand)

This works fine.

Then I run the same code on a server and I get the following error message

'import site' failed; use -v for traceback
Traceback (most recent call last):
File "/usr/bin/cwm", line 48, in <module>
from swap import  diag
ImportError: No module named swap

So what I did then is insert a print bashCommand, which prints the command in the terminal before it is run with os.system().

Of course, I get again the error (caused by os.system(bashCommand)) but before that error it prints the command in the terminal. Then I just copied that output and did a copy paste into the terminal and hit enter and it works…

Does anyone have a clue what’s going on?


回答 0

不要使用 os.system。它已被弃用,推荐改用 subprocess。引用文档:“此模块旨在取代几个较旧的模块和函数:os.system、os.spawn”。

就像您的情况一样:

bashCommand = "cwm --rdf test.rdf --ntriples > test.nt"
import subprocess
# 注意:shell=False 时 ">" 不是重定向,只会作为普通参数传给 cwm;
# 要重定向,请使用 shell=True,或在 Python 中打开文件并传给 stdout=
process = subprocess.Popen(bashCommand.split(), stdout=subprocess.PIPE)
output, error = process.communicate()

Don’t use os.system. It has been deprecated in favor of subprocess. From the docs: “This module intends to replace several older modules and functions: os.system, os.spawn“.

Like in your case:

bashCommand = "cwm --rdf test.rdf --ntriples > test.nt"
import subprocess
# Caveat: with shell=False, ">" is not a redirection, just a literal
# argument to cwm; use shell=True, or open the file and pass stdout=
process = subprocess.Popen(bashCommand.split(), stdout=subprocess.PIPE)
output, error = process.communicate()
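One caveat worth spelling out: with shell=False, the ">" from the original string is not a redirection, only a literal argument. A hedged sketch of performing the redirection from Python instead (echo stands in for cwm, which may not be installed):

```python
import os
import subprocess
import tempfile

# The ">" in the original command is shell syntax; with shell=False we open
# the target file in Python and hand it to the child as its stdout instead.
outpath = os.path.join(tempfile.gettempdir(), "test.nt")
with open(outpath, "w") as outfile:
    subprocess.run(["echo", "triples go here"], stdout=outfile, check=True)

with open(outpath) as f:
    print(f.read().strip())
```

The same pattern works for any command; only the argument list changes.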

回答 1

为了稍微扩展此处的早期答案,通常会忽略许多细节。

  • 优先使用 subprocess.run();其次是 subprocess.check_call() 及其同类;再次是 subprocess.call();然后是 subprocess.Popen();最后才考虑 os.system() 和 os.popen()
  • 理解并尽量使用 text=True(又名 universal_newlines=True)。
  • 了解 shell=True 与 shell=False 的含义,以及它如何改变引用规则和 shell 便利功能的可用性。
  • 了解 sh 和 Bash 之间的差异
  • 了解子进程如何与其父进程分离,以及为何通常无法更改父进程。
  • 避免将 Python 解释器作为 Python 的子进程运行。

这些主题将在下面更详细地介绍。

优先使用 subprocess.run() 或 subprocess.check_call()

subprocess.Popen()函数是低级主力,但正确使用起来很棘手,最终您会复制/粘贴多行代码…这些代码已经方便地存在于标准库中,作为一组用于各种用途的高级包装函数,下面将更详细地介绍。

这是文档中的一段:

推荐的调用子进程的方式是:凡是 run() 能处理的用例都使用 run() 函数。对于更高级的用例,可以直接使用底层的 Popen 接口。

不幸的是,这些包装函数的可用性在Python版本之间有所不同。

  • subprocess.run()在Python 3.5中正式引入。它旨在替换以下所有内容。
  • subprocess.check_output()是在Python 2.7 / 3.1中引入的。它基本上相当于subprocess.run(..., check=True, stdout=subprocess.PIPE).stdout
  • subprocess.check_call()是在Python 2.5中引入的。它基本上相当于subprocess.run(..., check=True)
  • subprocess.call()是在Python 2.4中的原始subprocess模块(PEP-324)中引入的。它基本上相当于subprocess.run(...).returncode

高级API与 subprocess.Popen()

与它所取代的旧式遗留函数相比,经过重构和扩展的 subprocess.run() 逻辑更合理、用途更广。它返回一个 CompletedProcess 对象,该对象具有各种方法,可让您从已完成的子进程中检索退出状态、标准输出以及其他一些结果和状态指示。

如果您只需要运行一个程序并把控制权交还给 Python,subprocess.run() 就是首选。对于更复杂的场景(后台进程,也许还要与 Python 父程序进行交互式 I/O),您仍需要使用 subprocess.Popen() 并自己处理所有管道。这需要对所有活动部件有相当深入的理解,不应掉以轻心。较简单的 Popen 对象表示(可能仍在运行的)进程,在子进程的剩余生命周期内需要由您的代码来管理。

也许应该强调:subprocess.Popen() 仅仅是创建一个进程。如果就此打住,您就有了一个与 Python 并行运行的子进程,也就是一个“后台”进程。如果它不需要输入输出,也不需要与您协调,它可以与您的 Python 程序并行做有用的工作。

避免os.system()os.popen()

从很早(Python 2.5)开始,os 模块文档中就包含了优先使用 subprocess 而不是 os.system() 的建议:

subprocess模块提供了更强大的功能来生成新流程并检索其结果。使用该模块优于使用此功能。

system() 的问题在于它明显依赖于系统,并且没有提供与子进程交互的方法。它只是运行,标准输出和标准错误都在 Python 触及不到的范围之外。Python 收到的唯一信息是命令的退出状态(零表示成功,非零值的含义在某种程度上也依赖于系统)。

PEP-324(上面已经提到过)包含了更详细的理由,说明为什么 os.system 有问题,以及 subprocess 如何尝试解决这些问题。

os.popen() 过去受到的劝阻更强烈:

从2.6版开始不推荐使用:此功能已过时。使用subprocess模块。

但是,自从Python 3发行以来,它已经重新实现为仅使用subprocess,并重定向到subprocess.Popen()文档以获取详细信息。

了解并通常使用 check=True

您还会注意到,subprocess.call() 与 os.system() 有许多相同的限制。在常规使用中,通常应该检查进程是否成功完成,而 subprocess.check_call() 和 subprocess.check_output() 会替您做这件事(后者还会返回已完成子进程的标准输出)。同样,除非特别需要允许子进程返回错误状态,否则在使用 subprocess.run() 时通常应加上 check=True。

实际上,使用 check=True 或 subprocess.check_* 时,如果子进程返回非零退出状态,Python 将抛出 CalledProcessError 异常。

使用 subprocess.run() 的一个常见错误是遗漏 check=True,然后在子进程失败、下游代码跟着出错时感到意外。

另一方面,check_call() 和 check_output() 的一个常见问题是:盲目使用这些函数的用户会在异常抛出时感到惊讶,例如当 grep 没有找到匹配项时。(无论如何,您可能应该像下文所述,用原生 Python 代码替换 grep。)

综合考虑,您需要了解 shell 命令如何返回退出代码、在什么条件下会返回非零(错误)退出代码,并有意识地决定究竟应如何处理。

理解并尽量使用 text=True(又名 universal_newlines=True)

从Python 3开始,Python内部的字符串是Unicode字符串。但是,不能保证子进程会生成Unicode输出或字符串。

(如果这些差异并非一目了然,推荐(甚至可以说是必读)Ned Batchelder 的 Pragmatic Unicode。如果您愿意,链接里还有 36 分钟的视频演讲,不过自己阅读页面花的时间可能会少得多。)

深入地讲,Python必须获取bytes缓冲区并以某种方式解释它。如果它包含二进制数据的斑点,则不应将其解码为Unicode字符串,因为这是容易出错和引起错误的行为-正是这种讨厌的行为,使许多Python 2脚本充满了麻烦,之后才有办法正确区分编码文本和二进制数据。

使用 text=True,您是在告诉 Python:您实际上希望取回系统默认编码下的文本数据,并且 Python 应尽其所能将其解码为 Python(Unicode)字符串(在任何较新的系统上通常是 UTF-8,Windows 或许除外?)

如果您不这样要求,Python 只会在 stdout 和 stderr 中给您 bytes 字符串。也许稍后您确实知道它们终究是文本字符串,并且知道其编码,那时您可以再解码它们。

normal = subprocess.run([external, arg],
    stdout=subprocess.PIPE, stderr=subprocess.PIPE,
    check=True,
    text=True)
print(normal.stdout)

convoluted = subprocess.run([external, arg],
    stdout=subprocess.PIPE, stderr=subprocess.PIPE,
    check=True)
# You have to know (or guess) the encoding
print(convoluted.stdout.decode('utf-8'))

Python 3.7 为该关键字参数引入了更简短、更具描述性且更易理解的别名 text,此前它被有些误导性地称为 universal_newlines。

了解shell=Trueshell=False

使用 shell=True 时,您把单个字符串传给 shell,由 shell 接手处理。

使用 shell=False 时,您把参数列表直接传给操作系统,绕过 shell。

不经过 shell 时,您省掉一个进程,并摆脱相当大一部分隐藏的复杂性,这些复杂性可能(也可能不)潜藏错误甚至安全问题。

另一方面,当您没有外壳程序时,就没有重定向,通配符扩展,作业控制和大量其他外壳程序功能。

一个常见的错误是使用 shell=True 却仍向 Python 传递一个标记列表,反之亦然。这在某些情况下碰巧可行,但实际上定义不清,并可能以有趣的方式出错。

# XXX AVOID THIS BUG
buggy = subprocess.run('dig +short stackoverflow.com')

# XXX AVOID THIS BUG TOO
broken = subprocess.run(['dig', '+short', 'stackoverflow.com'],
    shell=True)

# XXX DEFINITELY AVOID THIS
pathological = subprocess.run(['dig +short stackoverflow.com'],
    shell=True)

correct = subprocess.run(['dig', '+short', 'stackoverflow.com'],
    # Probably don't forget these, too
    check=True, text=True)

# XXX Probably better avoid shell=True
# but this is nominally correct
fixed_but_fugly = subprocess.run('dig +short stackoverflow.com',
    shell=True,
    # Probably don't forget these, too
    check=True, text=True)

常见的反驳“但对我有用”不是一个有用的反驳,除非您确切地了解它在什么情况下会停止工作。

重构实例

通常,shell的功能可以用本地Python代码替换。简单的Awk或sed脚本可能应该简单地翻译成Python。

为了部分说明这一点,这是一个典型但有些愚蠢的示例,其中涉及许多外壳功能。

cmd = '''while read -r x;
   do ping -c 3 "$x" | grep 'round-trip min/avg/max'
   done <hosts.txt'''

# Trivial but horrible
results = subprocess.run(
    cmd, shell=True, universal_newlines=True, check=True)
print(results.stdout)

# Reimplement with shell=False
with open('hosts.txt') as hosts:
    for host in hosts:
        host = host.rstrip('\n')  # drop newline
        ping = subprocess.run(
             ['ping', '-c', '3', host],
             text=True,
             stdout=subprocess.PIPE,
             check=True)
        for line in ping.stdout.split('\n'):
             if 'round-trip min/avg/max' in line:
                 print('{}: {}'.format(host, line))

这里要注意一些事情:

  • 使用 shell=False 时,您不需要 shell 对字符串所要求的引号。照旧加上引号很可能是一个错误。
  • 在子进程中运行尽可能少的代码通常是有意义的。这使您可以从 Python 代码内部更好地控制执行。
  • 话虽如此,复杂的 shell 管道很繁琐,有时在 Python 中重新实现也颇具挑战。

重构后的代码也说明了 shell 究竟用多么简洁的语法为您做了多少事,无论好坏。Python 讲究显式优于隐式,但 Python 代码相当冗长,看起来可以说比实际更复杂。另一方面,它提供了许多可以在中途拿到控制权的节点,一个简单的例子就是我们可以轻松地把主机名和 shell 命令的输出一起打印出来。(在 shell 中做到这一点也绝非难事,但代价是又一次绕行,也许还要再多一个进程。)

常见的 Shell 结构

为了完整起见,这里简要介绍了其中一些外壳程序功能,并提供了一些注释,说明如何用本地Python设施替换它们。

  • 通配符扩展(globbing)可以用 glob.glob() 替代,或者经常用简单的 Python 字符串比较替代,例如 for file in os.listdir('.'): if not file.endswith('.png'): continue。Bash 还有各种其他扩展功能,例如 .{png,jpg} 花括号扩展和 {1..100},以及波浪号扩展(~ 扩展为您的主目录,更一般地,~account 扩展为另一个用户的主目录)。
  • Shell 变量(例如 $SHELL 或 $my_exported_var)有时可以简单地用 Python 变量替换。导出的 shell 变量可以通过例如 os.environ['SHELL'] 访问(export 的含义是让变量对子进程可用;在子进程中不可用的变量,显然也不会提供给作为 shell 子进程运行的 Python,反之亦然)。subprocess 各方法的 env= 关键字参数允许您用一个字典定义子进程的环境,这是让 Python 变量对子进程可见的一种方式。使用 shell=False 时,您需要知道如何去掉引号;例如,cd "$HOME" 相当于 os.chdir(os.environ['HOME']),目录名周围不带引号。(cd 本身往往既无用也无必要;很多初学者会漏掉变量两侧的双引号,并一直侥幸过关,直到某一天……)
  • 重定向允许您把文件作为标准输入读取,并把标准输出写入文件。grep 'foo' <inputfile >outputfile 打开 outputfile 用于写入、打开 inputfile 用于读取,把后者的内容作为标准输入传给 grep,grep 的标准输出则进入 outputfile。一般来说,用原生 Python 代码替换它并不难。
  • 管道是重定向的一种形式。echo foo | nl 运行两个子进程,其中 echo 的标准输出是 nl 的标准输入(在操作系统层面,在类 Unix 系统中,这是一个文件句柄)。如果您无法用原生 Python 代码替换管道的一端或两端,也许还是可以考虑用 shell,尤其是当管道包含两三个以上进程时(不过可以看看 Python 标准库中的 pipes 模块,或若干更现代、功能更全的第三方竞争者)。
  • 作业控制使您可以中断作业、在后台运行它们、把它们调回前台等等。当然,Python 也提供了停止和继续进程的基本 Unix 信号。但作业是 shell 中一个更高层次的抽象,涉及进程组等概念,如果您想在 Python 中做类似的事情,就必须理解这些。
  • 除非您理解一切本质上都是字符串,否则 shell 中的引用规则可能令人困惑。因此 ls -l / 等效于 'ls' '-l' '/',但字面量两侧的引号完全是可选的。包含 shell 元字符的未加引号的字符串会经历参数扩展、空白分词和通配符扩展;双引号可防止空白分词和通配符扩展,但允许参数扩展(变量替换、命令替换和反斜杠处理)。这在理论上很简单,但可能令人困惑,尤其是存在多层解释时(例如,远程 shell 命令)。
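上面管道一条中提到的 echo foo | nl,可以像下面这样不经 shell 连接起来(示意代码,假定 nl 在 PATH 中):

```python
import subprocess

# 不用 shell 实现 "echo foo | nl":把第一个进程的 stdout
# 接到第二个进程的 stdin 上。
p1 = subprocess.Popen(["echo", "foo"], stdout=subprocess.PIPE)
p2 = subprocess.Popen(["nl"], stdin=p1.stdout, stdout=subprocess.PIPE, text=True)
p1.stdout.close()  # 若 p2 提前退出,让 p1 能收到 SIGPIPE
out, _ = p2.communicate()
print(out.strip())
```

管道更长时,同样的接法可以逐段串联下去。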

了解sh和Bash 之间的差异

除非另有明确要求,subprocess 都用 /bin/sh 运行 shell 命令(当然,Windows 上除外,那里使用 COMSPEC 变量的值)。这意味着各种仅 Bash 支持的功能(如数组、[[ 等)不可用。

如果您需要使用仅 Bash 支持的语法,可以把 Bash 的路径作为 executable='/bin/bash' 传入(当然,如果您的 Bash 安装在别处,则需要调整路径)。

subprocess.run('''
    # This for loop syntax is Bash only
    for((i=1;i<=$#;i++)); do
        # Arrays are Bash-only
        array[i]+=123
    done''',
    shell=True, check=True,
    executable='/bin/bash')

子进程与其父进程是分离的,并且通常无法更改父进程

一个常见的错误是做类似的事情

subprocess.run('foo=bar', shell=True)
subprocess.run('echo "$foo"', shell=True)  # Doesn't work

除了不够优雅之外,这还暴露了对 “subprocess” 名称中 “sub” 部分的根本性误解。

子进程与 Python 完全独立地运行,结束时 Python 并不知道它做了什么(除了可以从其退出状态和输出推断出的模糊迹象)。子进程通常不能改变父进程的环境:它不能设置变量、更改工作目录,也不能在父进程不配合的情况下与之通信。

在这种情况下,立即解决的办法是在一个子进程中运行两个命令。

subprocess.run('foo=bar; echo "$foo"', shell=True)

尽管显然,这种特定用例根本不需要外壳。请记住,您可以通过以下方式操纵当前进程的环境(因此也可以操纵其子进程)

os.environ['foo'] = 'bar'

或通过以下方式将环境设置传递给子进程

subprocess.run('echo "$foo"', shell=True, env={'foo': 'bar'})

(更不用说显而易见的重构 subprocess.run(['echo', 'bar']) 了;不过 echo 本来就不太算是一个该放到子进程里运行的好例子。)

不要从Python运行Python

这条建议本身略有保留。当然,在某些情况下,将 Python 解释器作为 Python 脚本的子进程运行是说得通的,甚至绝对必要。但通常,正确的做法只是把另一个 Python 模块 import 到您的调用脚本中,然后直接调用其函数。

如果其他Python脚本在您的控制下,并且不是模块,请考虑将其转换为一个。(此答案已经太久了,因此在这里我将不做详细介绍。)

如果需要并行处理,可以用 multiprocessing 模块在子进程中运行 Python 函数。还有 threading 可以在单个进程中运行多个任务(它更轻量,可控性更好,但限制也更多:进程内的线程紧密耦合,并受单个 GIL 约束)。

To somewhat expand on the earlier answers here, there are a number of details which are commonly overlooked.

  • Prefer subprocess.run() over subprocess.check_call() and friends over subprocess.call() over subprocess.Popen() over os.system() over os.popen()
  • Understand and probably use text=True, aka universal_newlines=True.
  • Understand the meaning of shell=True or shell=False and how it changes quoting and the availability of shell conveniences.
  • Understand differences between sh and Bash
  • Understand how a subprocess is separate from its parent, and generally cannot change the parent.
  • Avoid running the Python interpreter as a subprocess of Python.

These topics are covered in some more detail below.

Prefer subprocess.run() or subprocess.check_call()

The subprocess.Popen() function is a low-level workhorse but it is tricky to use correctly and you end up copy/pasting multiple lines of code … which conveniently already exist in the standard library as a set of higher-level wrapper functions for various purposes, which are presented in more detail in the following.

Here’s a paragraph from the documentation:

The recommended approach to invoking subprocesses is to use the run() function for all use cases it can handle. For more advanced use cases, the underlying Popen interface can be used directly.

Unfortunately, the availability of these wrapper functions differs between Python versions.

  • subprocess.run() was officially introduced in Python 3.5. It is meant to replace all of the following.
  • subprocess.check_output() was introduced in Python 2.7 / 3.1. It is basically equivalent to subprocess.run(..., check=True, stdout=subprocess.PIPE).stdout
  • subprocess.check_call() was introduced in Python 2.5. It is basically equivalent to subprocess.run(..., check=True)
  • subprocess.call() was introduced in Python 2.4 in the original subprocess module (PEP-324). It is basically equivalent to subprocess.run(...).returncode
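These equivalences are easy to verify directly; a small sketch using the ubiquitous POSIX echo and true commands:

```python
import subprocess

# check_output vs run(..., check=True, stdout=PIPE).stdout
out_a = subprocess.check_output(["echo", "hi"])
out_b = subprocess.run(["echo", "hi"], check=True, stdout=subprocess.PIPE).stdout
assert out_a == out_b == b"hi\n"

# call vs run(...).returncode
rc_a = subprocess.call(["true"])
rc_b = subprocess.run(["true"]).returncode
assert rc_a == rc_b == 0
print("equivalences hold")
```
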

High-level API vs subprocess.Popen()

The refactored and extended subprocess.run() is more logical and more versatile than the older legacy functions it replaces. It returns a CompletedProcess object which has various methods which allow you to retrieve the exit status, the standard output, and a few other results and status indicators from the finished subprocess.

subprocess.run() is the way to go if you simply need a program to run and return control to Python. For more involved scenarios (background processes, perhaps with interactive I/O with the Python parent program) you still need to use subprocess.Popen() and take care of all the plumbing yourself. This requires a fairly intricate understanding of all the moving parts and should not be undertaken lightly. The simpler Popen object represents the (possibly still-running) process which needs to be managed from your code for the remainder of the lifetime of the subprocess.

It should perhaps be emphasized that just subprocess.Popen() merely creates a process. If you leave it at that, you have a subprocess running concurrently alongside with Python, so a “background” process. If it doesn’t need to do input or output or otherwise coordinate with you, it can do useful work in parallel with your Python program.
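A minimal sketch of such a background process (sleep stands in for any longer-running job):

```python
import subprocess

# Start a child that runs concurrently with Python (a "background" process).
proc = subprocess.Popen(["sleep", "1"])
print("still running:", proc.poll() is None)  # poll() is None while running

# ... Python could do useful work here in parallel ...

proc.wait()  # explicitly collect the child when done
print("exit status:", proc.returncode)
```

Forgetting the `wait()` leaves the bookkeeping to the interpreter, which is exactly the kind of hidden state `run()` spares you from.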

Avoid os.system() and os.popen()

Since time eternal (well, since Python 2.5) the os module documentation has contained the recommendation to prefer subprocess over os.system():

The subprocess module provides more powerful facilities for spawning new processes and retrieving their results; using that module is preferable to using this function.

The problems with system() are that it’s obviously system-dependent and doesn’t offer ways to interact with the subprocess. It simply runs, with standard output and standard error outside of Python’s reach. The only information Python receives back is the exit status of the command (zero means success, though the meaning of non-zero values is also somewhat system-dependent).

PEP-324 (which was already mentioned above) contains a more detailed rationale for why os.system is problematic and how subprocess attempts to solve those issues.

os.popen() used to be even more strongly discouraged:

Deprecated since version 2.6: This function is obsolete. Use the subprocess module.

However, since sometime in Python 3, it has been reimplemented to simply use subprocess, and redirects to the subprocess.Popen() documentation for details.

Understand and usually use check=True

You’ll also notice that subprocess.call() has many of the same limitations as os.system(). In regular use, you should generally check whether the process finished successfully, which subprocess.check_call() and subprocess.check_output() do (where the latter also returns the standard output of the finished subprocess). Similarly, you should usually use check=True with subprocess.run() unless you specifically need to allow the subprocess to return an error status.

In practice, with check=True or subprocess.check_*, Python will throw a CalledProcessError exception if the subprocess returns a nonzero exit status.

A common error with subprocess.run() is to omit check=True and be surprised when downstream code fails if the subprocess failed.

On the other hand, a common problem with check_call() and check_output() was that users who blindly used these functions were surprised when the exception was raised e.g. when grep did not find a match. (You should probably replace grep with native Python code anyway, as outlined below.)

All things counted, you need to understand how shell commands return an exit code, and under what conditions they will return a non-zero (error) exit code, and make a conscious decision how exactly it should be handled.
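To make the failure mode concrete, here is a minimal sketch of handling the exception; false is a standard command that always exits with status 1:

```python
import subprocess

try:
    subprocess.run(["false"], check=True)
except subprocess.CalledProcessError as exc:
    # exc.returncode carries the child's nonzero exit status
    print("child failed with exit status", exc.returncode)
```

Without `check=True`, the same call would return normally and the failure would only be visible in `.returncode`.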

Understand and probably use text=True aka universal_newlines=True

Since Python 3, strings internal to Python are Unicode strings. But there is no guarantee that a subprocess generates Unicode output, or strings at all.

(If the differences are not immediately obvious, Ned Batchelder’s Pragmatic Unicode is recommended, if not outright obligatory, reading. There is a 36-minute video presentation behind the link if you prefer, though reading the page yourself will probably take significantly less time.)

Deep down, Python has to fetch a bytes buffer and interpret it somehow. If it contains a blob of binary data, it shouldn’t be decoded into a Unicode string, because that’s error-prone and bug-inducing behavior – precisely the sort of pesky behavior which riddled many Python 2 scripts, before there was a way to properly distinguish between encoded text and binary data.

With text=True, you tell Python that you, in fact, expect back textual data in the system’s default encoding, and that it should be decoded into a Python (Unicode) string to the best of Python’s ability (usually UTF-8 on any moderately up to date system, except perhaps Windows?)

If that’s not what you request back, Python will just give you bytes strings in the stdout and stderr strings. Maybe at some later point you do know that they were text strings after all, and you know their encoding. Then, you can decode them.

normal = subprocess.run([external, arg],
    stdout=subprocess.PIPE, stderr=subprocess.PIPE,
    check=True,
    text=True)
print(normal.stdout)

convoluted = subprocess.run([external, arg],
    stdout=subprocess.PIPE, stderr=subprocess.PIPE,
    check=True)
# You have to know (or guess) the encoding
print(convoluted.stdout.decode('utf-8'))

Python 3.7 introduced the shorter and more descriptive and understandable alias text for the keyword argument which was previously somewhat misleadingly called universal_newlines.

Understand shell=True vs shell=False

With shell=True you pass a single string to your shell, and the shell takes it from there.

With shell=False you pass a list of arguments to the OS, bypassing the shell.

When you don’t have a shell, you save a process and get rid of a fairly substantial amount of hidden complexity, which may or may not harbor bugs or even security problems.

On the other hand, when you don’t have a shell, you don’t have redirection, wildcard expansion, job control, and a large number of other shell features.

A common mistake is to use shell=True and then still pass Python a list of tokens, or vice versa. This happens to work in some cases, but is really ill-defined and could break in interesting ways.

# XXX AVOID THIS BUG
buggy = subprocess.run('dig +short stackoverflow.com')

# XXX AVOID THIS BUG TOO
broken = subprocess.run(['dig', '+short', 'stackoverflow.com'],
    shell=True)

# XXX DEFINITELY AVOID THIS
pathological = subprocess.run(['dig +short stackoverflow.com'],
    shell=True)

correct = subprocess.run(['dig', '+short', 'stackoverflow.com'],
    # Probably don't forget these, too
    check=True, text=True)

# XXX Probably better avoid shell=True
# but this is nominally correct
fixed_but_fugly = subprocess.run('dig +short stackoverflow.com',
    shell=True,
    # Probably don't forget these, too
    check=True, text=True)

The common retort “but it works for me” is not a useful rebuttal unless you understand exactly under what circumstances it could stop working.

Refactoring Example

Very often, the features of the shell can be replaced with native Python code. Simple Awk or sed scripts should probably simply be translated to Python instead.

To partially illustrate this, here is a typical but slightly silly example which involves many shell features.

cmd = '''while read -r x;
   do ping -c 3 "$x" | grep 'round-trip min/avg/max'
   done <hosts.txt'''

# Trivial but horrible
results = subprocess.run(
    cmd, shell=True, universal_newlines=True, check=True,
    stdout=subprocess.PIPE)
print(results.stdout)

# Reimplement with shell=False
with open('hosts.txt') as hosts:
    for host in hosts:
        host = host.rstrip('\n')  # drop newline
        ping = subprocess.run(
             ['ping', '-c', '3', host],
             text=True,
             stdout=subprocess.PIPE,
             check=True)
        for line in ping.stdout.split('\n'):
             if 'round-trip min/avg/max' in line:
                 print('{}: {}'.format(host, line))

Some things to note here:

  • With shell=False you don’t need the quoting that the shell requires around strings. Putting quotes anyway is probably an error.
  • It often makes sense to run as little code as possible in a subprocess. This gives you more control over execution from within your Python code.
  • Having said that, complex shell pipelines are tedious and sometimes challenging to reimplement in Python.

The refactored code also illustrates just how much the shell really does for you with a very terse syntax, for better or for worse. Python says explicit is better than implicit, but the Python code is rather verbose and arguably looks more complex than it really is. On the other hand, it offers a number of points where you can grab control in the middle of something else, as trivially exemplified by the enhancement that we can easily include the host name along with the shell command output. (This is by no means challenging to do in the shell, either, but at the expense of yet another diversion and perhaps another process.)

Common Shell Constructs

For completeness, here are brief explanations of some of these shell features, and some notes on how they can perhaps be replaced with native Python facilities.

  • Globbing aka wildcard expansion can be replaced with glob.glob() or very often with simple Python string comparisons like for file in os.listdir('.'): if not file.endswith('.png'): continue. Bash has various other expansion facilities like .{png,jpg} brace expansion and {1..100} as well as tilde expansion (~ expands to your home directory, and more generally ~account to the home directory of another user)
  • Shell variables like $SHELL or $my_exported_var can sometimes simply be replaced with Python variables. Exported shell variables are available as e.g. os.environ['SHELL'] (the meaning of export is to make the variable available to subprocesses — a variable which is not available to subprocesses will obviously not be available to Python running as a subprocess of the shell, or vice versa. The env= keyword argument to subprocess methods allows you to define the environment of the subprocess as a dictionary, so that’s one way to make a Python variable visible to a subprocess). With shell=False you will need to understand how to remove any quotes; for example, cd "$HOME" is equivalent to os.chdir(os.environ['HOME']) without quotes around the directory name. (Very often cd is not useful or necessary anyway, and many beginners omit the double quotes around the variable and get away with it until one day …)
  • Redirection allows you to read from a file as your standard input, and write your standard output to a file. grep 'foo' <inputfile >outputfile opens outputfile for writing and inputfile for reading, and passes its contents as standard input to grep, whose standard output then lands in outputfile. This is not generally hard to replace with native Python code.
  • Pipelines are a form of redirection. echo foo | nl runs two subprocesses, where the standard output of echo is the standard input of nl (on the OS level, in Unix-like systems, this is a single file handle). If you cannot replace one or both ends of the pipeline with native Python code, perhaps think about using a shell after all, especially if the pipeline has more than two or three processes (though look at the pipes module in the Python standard library or a number of more modern and versatile third-party competitors).
  • Job control lets you interrupt jobs, run them in the background, return them to the foreground, etc. The basic Unix signals to stop and continue a process are of course available from Python, too. But jobs are a higher-level abstraction in the shell which involve process groups etc which you have to understand if you want to do something like this from Python.
  • Quoting in the shell is potentially confusing until you understand that everything is basically a string. So ls -l / is equivalent to 'ls' '-l' '/' but the quoting around literals is completely optional. Unquoted strings which contain shell metacharacters undergo parameter expansion, whitespace tokenization and wildcard expansion; double quotes prevent whitespace tokenization and wildcard expansion but allow parameter expansions (variable substitution, command substitution, and backslash processing). This is simple in theory but can get bewildering, especially when there are several layers of interpretation (a remote shell command, for example).
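To make the pipeline bullet concrete, here is a sketch of the shell pipeline `echo foo | nl` with shell=False. It assumes a Unix-like system with an echo executable, and substitutes a Python one-liner for nl to keep the example portable:

```python
import subprocess
import sys

# Producer: the left-hand side of the pipeline
producer = subprocess.Popen(
    ['echo', 'foo'], stdout=subprocess.PIPE)
# Consumer: a stand-in for `nl` which numbers its input lines
consumer = subprocess.Popen(
    [sys.executable, '-c',
     'import sys\n'
     'for i, line in enumerate(sys.stdin, 1):\n'
     '    print(i, line, end="")'],
    stdin=producer.stdout, stdout=subprocess.PIPE, text=True)
producer.stdout.close()  # let the producer see SIGPIPE if the consumer exits
out, _ = consumer.communicate()
print(out)
```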

Understand differences between sh and Bash

subprocess runs your shell commands with /bin/sh unless you specifically request otherwise (except of course on Windows, where it uses the value of the COMSPEC variable). This means that various Bash-only features like arrays, [[ etc are not available.

If you need to use Bash-only syntax, you can pass in the path to the shell as executable='/bin/bash' (where of course if your Bash is installed somewhere else, you need to adjust the path).

subprocess.run('''
    # This for loop syntax is Bash only
    for((i=1;i<=$#;i++)); do
        # Arrays are Bash-only
        array[i]+=123
    done''',
    shell=True, check=True,
    executable='/bin/bash')

A subprocess is separate from its parent, and cannot change it

A somewhat common mistake is doing something like

subprocess.run('cd /tmp', shell=True)
subprocess.run('pwd', shell=True)  # Oops, doesn't print /tmp

The same thing will happen if the first subprocess tries to set an environment variable, which of course will have disappeared when you run another subprocess, etc.

A child process runs completely separate from Python, and when it finishes, Python has no idea what it did (apart from the vague indicators that it can infer from the exit status and output from the child process). A child generally cannot change the parent’s environment; it cannot set a variable, change the working directory, or, in so many words, communicate with its parent without cooperation from the parent.

The immediate fix in this particular case is to run both commands in a single subprocess:

subprocess.run('cd /tmp; pwd', shell=True)

though obviously this particular use case isn’t very useful; instead, use the cwd keyword argument, or simply os.chdir() before running the subprocess. Similarly, for setting a variable, you can manipulate the environment of the current process (and thus also its children) via

os.environ['foo'] = 'bar'

or pass an environment setting to a child process with

subprocess.run('echo "$foo"', shell=True, env={'foo': 'bar'})

(not to mention the obvious refactoring subprocess.run(['echo', 'bar']); but echo is a poor example of something to run in a subprocess in the first place, of course).
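A sketch of both alternatives (the temporary directory and the variable name foo are arbitrary examples): cwd= runs the child in another directory without changing your own process, and env= replaces the child's entire environment, so merging with os.environ is usually what you want:

```python
import os
import subprocess
import sys
import tempfile

target = tempfile.gettempdir()

# cwd= changes the directory for the child only
result = subprocess.run(
    [sys.executable, '-c', 'import os; print(os.getcwd())'],
    cwd=target, stdout=subprocess.PIPE, text=True, check=True)
print(result.stdout.strip())

# env= replaces the whole environment; merge with os.environ so the
# child still has PATH etc.
child_env = dict(os.environ, foo='bar')
result2 = subprocess.run(
    [sys.executable, '-c', 'import os; print(os.environ["foo"])'],
    env=child_env, stdout=subprocess.PIPE, text=True, check=True)
print(result2.stdout.strip())
```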

Don’t run Python from Python

This is slightly dubious advice; there are certainly situations where it does make sense or is even an absolute requirement to run the Python interpreter as a subprocess from a Python script. But very frequently, the correct approach is simply to import the other Python module into your calling script and call its functions directly.

If the other Python script is under your control, and it isn’t a module, consider turning it into one. (This answer is too long already so I will not delve into details here.)

If you need parallelism, you can run Python functions in subprocesses with the multiprocessing module. There is also threading which runs multiple tasks in a single process (which is more lightweight and gives you more control, but also more constrained in that threads within a process are tightly coupled, and bound to a single GIL.)


Answer 2

Call it with subprocess

import subprocess
subprocess.Popen("cwm --rdf test.rdf --ntriples > test.nt", shell=True)

The error you are getting seems to be because there is no swap module on the server, you should install swap on the server then run the script again


Answer 3

You can use the bash program with the -c parameter to execute the commands:

bashCommand = "cwm --rdf test.rdf --ntriples > test.nt"
output = subprocess.check_output(['bash','-c', bashCommand])

Answer 4

You can use subprocess, but I always felt that it was not a ‘Pythonic’ way of doing it. So I created Sultan (shameless plug) that makes it easy to run command line functions.

https://github.com/aeroxis/sultan


Answer 5

According to the error you are missing a package named swap on the server. This /usr/bin/cwm requires it. If you’re on Ubuntu/Debian, install python-swap using aptitude.


Answer 6

Also you can use ‘os.popen’. Example:

import os

command = os.popen('ls -al')
print(command.read())
print(command.close())

Output:

total 16
drwxr-xr-x 2 root root 4096 ago 13 21:53 .
drwxr-xr-x 4 root root 4096 ago 13 01:50 ..
-rw-r--r-- 1 root root 1278 ago 13 21:12 bot.py
-rw-r--r-- 1 root root   77 ago 13 21:53 test.py

None

Answer 7

To run the command without a shell, pass the command as a list and implement the redirection in Python using subprocess:

#!/usr/bin/env python
import subprocess

with open('test.nt', 'wb', 0) as file:
    subprocess.check_call("cwm --rdf test.rdf --ntriples".split(),
                          stdout=file)

Note: no > test.nt at the end. stdout=file implements the redirection.


To run the command using the shell in Python, pass the command as a string and enable shell=True:

#!/usr/bin/env python
import subprocess

subprocess.check_call("cwm --rdf test.rdf --ntriples > test.nt",
                      shell=True)

Here the shell is responsible for the output redirection (> test.nt is in the command).


To run a bash command that uses bashisms, specify the bash executable explicitly e.g., to emulate bash process substitution:

#!/usr/bin/env python
import subprocess

subprocess.check_call('program <(command) <(another-command)',
                      shell=True, executable='/bin/bash')

Answer 8

The pythonic way of doing this is using subprocess.Popen

subprocess.Popen takes a list where the first element is the command to be run followed by any command line arguments.

As an example:

import subprocess

args = ['echo', 'Hello!']
subprocess.Popen(args)  # same as running `echo Hello!` on the command line

args2 = ['echo', '-v', '"Hello Again"']
subprocess.Popen(args2)  # same as running `echo -v "Hello Again"` on the command line

mal - Make a Lisp

Description

1. Mal is a Clojure-inspired Lisp interpreter

2. Mal is a learning tool

Each implementation of mal is separated into 11 incremental, self-contained (and testable) steps that demonstrate the core concepts of Lisp. The last step is capable of self-hosting (running the mal implementation of mal). See the make-a-lisp process guide.

The make-a-lisp steps are:

Each make-a-lisp step has an associated architecture diagram. The elements that are new for that step are highlighted in red. Here is the diagram for step A.

If you are interested in creating a mal implementation (or just interested in using mal for something), you are welcome to join our Discord or join #mal on libera.chat. In addition to the make-a-lisp process guide there is also a mal/make-a-lisp FAQ where I try to answer some common questions.

3. Mal is implemented in 86 languages (91 different implementations and 113 runtime modes)

Language Creator
Ada Chris Moore
Ada #2 Nicolas Boulenguez
GNU Awk Miutsuru Kariya
Bash 4 Joel Martin
BASIC (C64 and QBasic) Joel Martin
BBC BASIC V Ben Harris
C Joel Martin
C #2 Duncan Watts
C++ Stephen Thirlwall
C# Joel Martin
ChucK Vasilij Schneidermann
Clojure (Clojure and ClojureScript) Joel Martin
CoffeeScript Joel Martin
Common Lisp Iqbal Ansari
Crystal Linda_pp
D Dov Murik
Dart Harry Terkelsen
Elixir Martin Ek
Elm Jos van Bakel
Emacs Lisp Vasilij Schneidermann
Erlang Nathan Fiedler
ES6 (ECMAScript 2015) Joel Martin
F# Peter Stephens
Factor Jordan Lewis
Fantom Dov Murik
Fennel sogaiu
Forth Chris Houser
GNU Guile Mu Lei
GNU Smalltalk Vasilij Schneidermann
Go Joel Martin
Groovy Joel Martin
Haskell Joel Martin
Haxe (Neko, Python, C++ and JS) Joel Martin
Hy Joel Martin
Io Dov Murik
Janet sogaiu
Java Joel Martin
Java (Truffle/GraalVM) Matt McGill
JavaScript (Demo) Joel Martin
jq Ali MohammadPur
Julia Joel Martin
Kotlin Javier Fernandez-Ivern
LiveScript Jos van Bakel
Logo Dov Murik
Lua Joel Martin
GNU Make Joel Martin
mal itself Joel Martin
MATLAB (GNU Octave & MATLAB) Joel Martin
miniMAL (Repo, Demo) Joel Martin
NASM Ben Dudson
Nim Dennis Felsing
Object Pascal Joel Martin
Objective C Joel Martin
OCaml Chris Houser
Perl Joel Martin
Perl 6 Hinrik Örn Sigurðsson
PHP Joel Martin
Picolisp Vasilij Schneidermann
Pike Dov Murik
PL/pgSQL (PostgreSQL) Joel Martin
PL/SQL (Oracle) Joel Martin
PostScript Joel Martin
PowerShell Joel Martin
Prolog Nicolas Boulenguez
Python (2.x and 3.x) Joel Martin
Python #2 (3.x) Gavin Lewis
RPython Joel Martin
R Joel Martin
Racket Joel Martin
Rexx Dov Murik
Ruby Joel Martin
Rust Joel Martin
Scala Joel Martin
Scheme (R7RS) Vasilij Schneidermann
Skew Dov Murik
Standard ML Fabian Bergström
Swift 2 Keith Rollin
Swift 3 Joel Martin
Swift 4 陆遥
Swift 5 Oleg Montak
Tcl Dov Murik
TypeScript Masahiro Wakame
Vala Simon Tatham
VHDL Dov Murik
Vimscript Dov Murik
Visual Basic.NET Joel Martin
WebAssembly (wasm) Joel Martin
Wren Dov Murik
XSLT Ali MohammadPur
Yorick Dov Murik
Zig Josh Tobin

Presentations

Mal was first presented in a lightning talk at Clojure West 2014 (unfortunately there is no video). See examples/clojurewest2014.mal for the presentation that was given at the conference (yes, the presentation is a mal program).

At Midwest.io 2015, Joel Martin gave a talk on mal titled "Achievement Unlocked: A Better Path to Language Learning". Video, Slides.

More recently Joel gave a talk titled "Make Your Own Lisp Interpreter in 10 Incremental Steps" at LambdaConf 2016: Part 1, Part 2, Part 3, Part 4, Slides.

Building/running implementations

The easiest way to run any given implementation is to use docker. Every implementation has a prebuilt docker image with the language dependencies installed. You can launch a REPL using a convenient target in the top level Makefile (where IMPL is the implementation directory name and stepX is the step to run):

make DOCKERIZE=1 "repl^IMPL^stepX"
    # OR stepA is the default step:
make DOCKERIZE=1 "repl^IMPL"

External Implementations

The following implementations are maintained as separate projects:

HolyC

Rust

  • by Tim Morgan
  • by vi - uses the Pest grammar and does not use the typical mal infrastructure (cargo-ized steps and built-in converted tests)

Q

  • by Ali Mohammad Pur - The Q implementation works fine, but it requires a proprietary manual download and can't be Dockerized (or integrated into the mal CI pipeline), so for now it remains a separate project

Other mal Projects

  • malc - Mal (Make A Lisp) compiler. Compiles a Mal program to LLVM assembly language, then to a binary.
  • malcc - an incremental compiler implementation of the mal language. It uses the Tiny C Compiler as its compiler backend and has full support for the mal language, including macros, tail-call elimination, and even run-time eval. See the post "I Built a Lisp Compiler" about the process.
  • frock - Clojure-flavored PHP. Uses mal/php to run programs.
  • flk - a LISP that runs wherever Bash is.
  • glisp - a self-bootstrapping graphic design tool based on Lisp. Live Demo.

Implementation Details

Ada

The Ada implementation was developed with GNAT 4.9 on Debian. It also compiles unchanged on Windows if you have windows versions of git, gnat and make (optional). There are no external dependencies (readline not implemented).

cd impls/ada
make
./stepX_YYY

Ada #2

The second Ada implementation was developed with GNAT 8 and links with the GNU readline library.

cd impls/ada
make
./stepX_YYY

GNU awk

The GNU awk implementation of mal has been tested with GNU awk 4.1.1.

cd impls/gawk
gawk -O -f stepX_YYY.awk

BASH 4

cd impls/bash
bash stepX_YYY.sh

BASIC (C64 and QBasic)

The BASIC implementation uses a preprocessor that can generate BASIC code that is compatible with both C64 BASIC (CBM v2) and QBasic. The C64 mode has been tested with cbmbasic (a patched version is currently required to fix issues with line input) and the QBasic mode has been tested with qb64.

Generate C64 code and run it using cbmbasic:

cd impls/basic
make stepX_YYY.bas
STEP=stepX_YYY ./run

Generate QBasic code and load it into qb64:

cd impls/basic
make MODE=qbasic stepX_YYY.bas
./qb64 stepX_YYY.bas

Thanks to Steven Syrek for the original inspiration for this implementation.

BBC Basic V

The BBC Basic V implementation can run in the Brandy interpreter:

cd impls/bbc-basic
brandy -quit stepX_YYY.bbc

or in ARM BBC Basic V under RISC OS 3 or later:

*Dir bbc-basic.riscos
*Run setup
*Run stepX_YYY

C

The C implementation of mal requires the following libraries (lib and header packages): glib, libffi6, libgc, and either the libedit or GNU readline library.

cd impls/c
make
./stepX_YYY

C #2

The second C implementation of mal requires the following libraries (lib and header packages): libedit, libgc, libdl, and libffi.

cd impls/c.2
make
./stepX_YYY

C++

Building the C++ implementation of mal requires g++-4.9 or clang++-3.5 and a readline compatible library. See cpp/README.md for more details:

cd impls/cpp
make
    # OR
make CXX=clang++-3.5
./stepX_YYY

C#

The C# implementation of mal has been tested on Linux using the Mono C# compiler (mcs) and the Mono runtime (version 2.10.8.1). Both are required to build and run the C# implementation.

cd impls/cs
make
mono ./stepX_YYY.exe

ChucK

The ChucK implementation has been tested with ChucK 1.3.5.2.

cd impls/chuck
./run

Clojure

For the most part the Clojure implementation requires Clojure 1.5; however, to pass all tests, Clojure 1.8.0-RC4 is required.

cd impls/clojure
lein with-profile +stepX trampoline run

CoffeeScript

sudo npm install -g coffee-script
cd impls/coffee
coffee ./stepX_YYY

Common Lisp

The implementation has been tested with SBCL, CCL, CMUCL, GNU CLISP, ECL, and Allegro CL on Ubuntu 16.04 and Ubuntu 12.04; see the README for more details. Provided you have the dependencies mentioned installed, do the following to run the implementation:

cd impls/common-lisp
make
./run

Crystal

The Crystal implementation of mal has been tested with Crystal 0.26.1.

cd impls/crystal
crystal run ./stepX_YYY.cr
    # OR
make   # needed to run tests
./stepX_YYY

D

The D implementation of mal was tested with GDC 4.8. It requires the GNU readline library.

cd impls/d
make
./stepX_YYY

Dart

The Dart implementation has been tested with Dart 1.20.

cd impls/dart
dart ./stepX_YYY

Emacs Lisp

The Emacs Lisp implementation of mal has been tested with Emacs 24.3 and 24.5. While there is very basic readline editing (<backspace> and C-d work, C-c cancels the process), it is recommended to use rlwrap.

cd impls/elisp
emacs -Q --batch --load stepX_YYY.el
# with full readline support
rlwrap emacs -Q --batch --load stepX_YYY.el

Elixir

The Elixir implementation of mal has been tested with Elixir 1.0.5.

cd impls/elixir
mix stepX_YYY
# Or with readline/line editing functionality:
iex -S mix stepX_YYY

Elm

The Elm implementation of mal has been tested with Elm 0.18.0.

cd impls/elm
make stepX_YYY.js
STEP=stepX_YYY ./run

Erlang

The Erlang implementation of mal requires Erlang/OTP R17 and rebar to build.

cd impls/erlang
make
    # OR
MAL_STEP=stepX_YYY rebar compile escriptize # build individual step
./stepX_YYY

ES6 (ECMAScript 2015)

The ES6 / ECMAScript 2015 implementation uses the babel compiler to generate ES5 compatible JavaScript. The generated code has been tested with Node 0.12.4.

cd impls/es6
make
node build/stepX_YYY.js

F#

The F# implementation of mal has been tested on Linux using the Mono F# compiler (fsharpc) and the Mono runtime (version 3.12.1). The Mono C# compiler (mcs) is also necessary to compile the readline dependency. All are required to build and run the F# implementation.

cd impls/fsharp
make
mono ./stepX_YYY.exe

Factor

The Factor implementation of mal has been tested with Factor 0.97 (factorcode.org).

cd impls/factor
FACTOR_ROOTS=. factor -run=stepX_YYY

Fantom

The Fantom implementation of mal has been tested with Fantom 1.0.70.

cd impls/fantom
make lib/fan/stepX_YYY.pod
STEP=stepX_YYY ./run

Fennel

The Fennel implementation of mal has been tested with Fennel version 0.9.1 on Lua 5.4.

cd impls/fennel
fennel ./stepX_YYY.fnl

Forth

cd impls/forth
gforth stepX_YYY.fs

GNU Guile 2.1+

cd impls/guile
guile -L ./ stepX_YYY.scm

GNU Smalltalk

The Smalltalk implementation of mal has been tested with GNU Smalltalk 3.2.91.

cd impls/gnu-smalltalk
./run

Go

The Go implementation of mal requires that Go is installed on the path. The implementation has been tested with Go 1.3.1.

cd impls/go
make
./stepX_YYY

Groovy

The Groovy implementation of mal requires Groovy to run and has been tested with Groovy 1.8.6.

cd impls/groovy
make
groovy ./stepX_YYY.groovy

Haskell

The Haskell implementation requires the GHC compiler version 7.10.1 or later as well as the Haskell parsec and readline (or editline) packages.

cd impls/haskell
make
./stepX_YYY

Haxe (Neko, Python, C++ and JavaScript)

The Haxe implementation of mal requires Haxe version 3.2 to compile. Four different Haxe targets are supported: Neko, Python, C++, and JavaScript.

cd impls/haxe
# Neko
make all-neko
neko ./stepX_YYY.n
# Python
make all-python
python3 ./stepX_YYY.py
# C++
make all-cpp
./cpp/stepX_YYY
# JavaScript
make all-js
node ./stepX_YYY.js

Hy

The Hy implementation of mal has been tested with Hy 0.13.0.

cd impls/hy
./stepX_YYY.hy

Io

The Io implementation of mal has been tested with Io version 20110905.

cd impls/io
io ./stepX_YYY.io

Janet

The Janet implementation of mal has been tested with Janet version 1.12.2.

cd impls/janet
janet ./stepX_YYY.janet

Java 1.7

The Java implementation of mal requires maven2 to build.

cd impls/java
mvn compile
mvn -quiet exec:java -Dexec.mainClass=mal.stepX_YYY
    # OR
mvn -quiet exec:java -Dexec.mainClass=mal.stepX_YYY -Dexec.args="CMDLINE_ARGS"

Java, using Truffle for GraalVM

This Java implementation runs on OpenJDK, but thanks to the Truffle framework it can run up to 30x faster on GraalVM. It has been tested with OpenJDK 11, GraalVM CE 20.1.0, and GraalVM CE 21.1.0.

cd impls/java-truffle
./gradlew build
STEP=stepX_YYY ./run

JavaScript/Node

cd impls/js
npm install
node stepX_YYY.js

Julia

The Julia implementation of mal requires Julia 0.4.

cd impls/julia
julia stepX_YYY.jl

jq

Tested against jq version 1.6, with a lot of cheating in the IO department.

cd impls/jq
STEP=stepA_YYY ./run
    # with Debug
DEBUG=true STEP=stepA_YYY ./run

Kotlin

The Kotlin implementation of mal has been tested with Kotlin 1.0.

cd impls/kotlin
make
java -jar stepX_YYY.jar

LiveScript

The LiveScript implementation of mal has been tested with LiveScript 1.5.

cd impls/livescript
make
node_modules/.bin/lsc stepX_YYY.ls

Logo

The Logo implementation of mal has been tested with UCBLogo 6.0.

cd impls/logo
logo stepX_YYY.lg

Lua

The Lua implementation of mal has been tested with Lua 5.3.5. The implementation requires luarocks to be installed.

cd impls/lua
make  # to build and link linenoise.so and rex_pcre.so
./stepX_YYY.lua

mal itself

Running the mal implementation of mal involves running stepA of one of the other implementations and passing the mal step to run as a command line argument.

cd impls/IMPL
IMPL_STEPA_CMD ../mal/stepX_YYY.mal

GNU Make 3.81

cd impls/make
make -f stepX_YYY.mk

NASM

The NASM implementation of mal is written for x86-64 Linux, and has been tested with Linux 3.16.0-4-amd64 and NASM version 2.11.05.

cd impls/nasm
make
./stepX_YYY

Nim 1.0.4

The Nim implementation of mal has been tested with Nim 1.0.4.

cd impls/nim
make
  # OR
nimble build
./stepX_YYY

Object Pascal

The Object Pascal implementation of mal has been built and tested on Linux with the Free Pascal compiler versions 2.6.2 and 2.6.4.

cd impls/objpascal
make
./stepX_YYY

Objective C

The Objective C implementation of mal has been built and tested on Linux using clang/LLVM 3.6. It has also been built and tested on OS X using Xcode 7.

cd impls/objc
make
./stepX_YYY

OCaml 4.01.0

cd impls/ocaml
make
./stepX_YYY

MATLAB (GNU Octave and MATLAB)

The MATLAB implementation has been tested with GNU Octave 4.2.1. It has also been tested with MATLAB version R2014a on Linux. Note that MATLAB is a commercial product.

cd impls/matlab
./stepX_YYY
octave -q --no-gui --no-history --eval "stepX_YYY();quit;"
matlab -nodisplay -nosplash -nodesktop -nojvm -r "stepX_YYY();quit;"
    # OR with command line arguments
octave -q --no-gui --no-history --eval "stepX_YYY('arg1','arg2');quit;"
matlab -nodisplay -nosplash -nodesktop -nojvm -r "stepX_YYY('arg1','arg2');quit;"

miniMAL

miniMAL is a small Lisp interpreter implemented in less than 1024 bytes of JavaScript. To run the miniMAL implementation of mal, you need to download/install the miniMAL interpreter (which requires Node.js).

cd impls/miniMAL
# Download miniMAL and dependencies
npm install
export PATH=`pwd`/node_modules/minimal-lisp/:$PATH
# Now run mal implementation in miniMAL
miniMAL ./stepX_YYY

Perl 5

The Perl 5 implementation should work with Perl 5.19.3 and later.

For readline line editing support, install Term::ReadLine::Perl or Term::ReadLine::GNU from CPAN.

cd impls/perl
perl stepX_YYY.pl

Perl 6

The Perl 6 implementation was tested on Rakudo Perl 6 2016.04.

cd impls/perl6
perl6 stepX_YYY.pl

PHP 5.3

The PHP implementation of mal requires the php command line interface to run.

cd impls/php
php stepX_YYY.php

Picolisp

The Picolisp implementation requires libreadline and Picolisp 3.1.11 or later.

cd impls/picolisp
./run

Pike

The Pike implementation was tested on Pike 8.0.

cd impls/pike
pike stepX_YYY.pike

PL/pgSQL (PostgreSQL SQL Procedural Language)

The PL/pgSQL implementation of mal requires a running PostgreSQL server (the "kanaka/mal-test-plpgsql" docker image automatically starts a PostgreSQL server). The implementation connects to the PostgreSQL server and creates a database named "mal" to store tables and stored procedures. The wrapper script uses the psql command to connect to the server and defaults to the user "postgres", but this can be overridden with the PSQL_USER environment variable. A password can be specified using the PGPASSWORD environment variable. The implementation has been tested with PostgreSQL 9.4.

cd impls/plpgsql
./wrap.sh stepX_YYY.sql
    # OR
PSQL_USER=myuser PGPASSWORD=mypass ./wrap.sh stepX_YYY.sql

PL/SQL (Oracle SQL Procedural Language)

The PL/SQL implementation of mal requires a running Oracle DB server (the "kanaka/mal-test-plsql" docker image automatically starts an Oracle Express server). The implementation connects to the Oracle server to create types, tables, and stored procedures. The default SQL*Plus logon value (username/password@connect_identifier) is "system/oracle", but this can be overridden with the ORACLE_LOGON environment variable. The implementation has been tested with Oracle Express Edition 11g Release 2. Note that any SQL*Plus connection warnings (user password expiration, etc.) will interfere with the ability of the wrapper script to communicate with the DB.

cd impls/plsql
./wrap.sh stepX_YYY.sql
    # OR
ORACLE_LOGON=myuser/mypass@ORCL ./wrap.sh stepX_YYY.sql

PostScript Level 2/3

The PostScript implementation of mal requires Ghostscript to run. It has been tested with Ghostscript 9.10.

cd impls/ps
gs -q -dNODISPLAY -I./ stepX_YYY.ps

PowerShell

The PowerShell implementation of mal requires the PowerShell scripting language. It has been tested with PowerShell 6.0.0 Alpha 9 on Linux.

cd impls/powershell
powershell ./stepX_YYY.ps1

Prolog

The Prolog implementation uses some constructs specific to SWI-Prolog, includes readline support, and has been tested on Debian GNU/Linux with version 8.2.1.

cd impls/prolog
swipl stepX_YYY

Python (2.x and 3.x)

cd impls/python
python stepX_YYY.py

Python #2 (3.x)

The second Python implementation makes heavy use of type annotations and uses the Arpeggio parser library.

# Recommended: do these steps in a Python virtual environment.
pip3 install Arpeggio==1.9.0
python3 stepX_YYY.py

RPython

You must have rpython on your path (included with pypy).

cd impls/rpython
make        # this takes a very long time
./stepX_YYY

R

The R implementation of mal requires R (r-base-core) to run.

cd impls/r
make libs  # to download and build rdyncall
Rscript stepX_YYY.r

Racket (5.3)

The Racket implementation of mal requires the Racket compiler/interpreter to run.

cd impls/racket
./stepX_YYY.rkt

Rexx

The Rexx implementation of mal has been tested with Regina Rexx 3.6.

cd impls/rexx
make
rexx -a ./stepX_YYY.rexxpp

Ruby (1.9+)

cd impls/ruby
ruby stepX_YYY.rb

Rust (1.38+)

The Rust implementation of mal requires the Rust compiler and build tool (cargo) to build.

cd impls/rust
cargo run --release --bin stepX_YYY

Scala

Install scala and sbt (http://www.scala-sbt.org/0.13/tutorial/Installing-sbt-on-Linux.html):

cd impls/scala
sbt 'run-main stepX_YYY'
    # OR
sbt compile
scala -classpath target/scala*/classes stepX_YYY

Scheme (R7RS)

The Scheme implementation of mal has been tested with Chibi-Scheme 0.7.3, Kawa 2.4, Gauche 0.9.5, CHICKEN 4.11.0, Sagittarius 0.8.3, Cyclone 0.6.3 (Git version), and Foment 0.4 (Git version). You should be able to get it running on other conforming R7RS implementations after figuring out how libraries are loaded and adjusting the Makefile and run script accordingly.

cd impls/scheme
make symlinks
# chibi
scheme_MODE=chibi ./run
# kawa
make kawa
scheme_MODE=kawa ./run
# gauche
scheme_MODE=gauche ./run
# chicken
make chicken
scheme_MODE=chicken ./run
# sagittarius
scheme_MODE=sagittarius ./run
# cyclone
make cyclone
scheme_MODE=cyclone ./run
# foment
scheme_MODE=foment ./run

Skew

The Skew implementation of mal has been tested with Skew 0.7.42.

cd impls/skew
make
node stepX_YYY.js

Standard ML (Poly/ML, MLton, Moscow ML)

The Standard ML implementation of mal requires an SML97 implementation. The Makefile supports Poly/ML, MLton, and Moscow ML, and has been tested with Poly/ML 5.8.1, MLton 20210117, and Moscow ML version 2.10.

cd impls/sml
# Poly/ML
make sml_MODE=polyml
./stepX_YYY
# MLton
make sml_MODE=mlton
./stepX_YYY
# Moscow ML
make sml_MODE=mosml
./stepX_YYY

Swift

The Swift implementation of mal requires the Swift 2.0 compiler (Xcode 7.0) to build. Older versions will not work due to changes in the language and standard library.

cd impls/swift
make
./stepX_YYY

Swift 3

The Swift 3 implementation of mal requires the Swift 3.0 compiler. It has been tested with Swift 3 Preview 3.

cd impls/swift3
make
./stepX_YYY

Swift 4

The Swift 4 implementation of mal requires the Swift 4.0 compiler. It has been tested with Swift 4.2.3.

cd impls/swift4
make
./stepX_YYY

Swift 5

The Swift 5 implementation of mal requires the Swift 5.0 compiler. It has been tested with Swift 5.1.1.

cd impls/swift5
swift run stepX_YYY

Tcl 8.6

The Tcl implementation of mal requires Tcl 8.6 to run. For readline line editing support, install tclreadline.

cd impls/tcl
tclsh ./stepX_YYY.tcl

TypeScript

The TypeScript implementation of mal requires the TypeScript 2.2 compiler. It has been tested with Node.js v6.

cd impls/ts
make
node ./stepX_YYY.js

Vala

The Vala implementation of mal has been tested with the Vala 0.40.8 compiler. You will need to install valac and libreadline-dev or equivalent.

cd impls/vala
make
./stepX_YYY

VHDL

The VHDL implementation of mal has been tested with GHDL 0.29.

cd impls/vhdl
make
./run_vhdl.sh ./stepX_YYY

Vimscript

The Vimscript implementation of mal requires Vim 8.0 to run.

cd impls/vimscript
./run_vimscript.sh ./stepX_YYY.vim

Visual Basic.NET

The VB.NET implementation of mal has been tested on Linux using the Mono VB compiler (vbnc) and the Mono runtime (version 2.10.8.1). Both are required to build and run the VB.NET implementation.

cd impls/vb
make
mono ./stepX_YYY.exe

WebAssembly (wasm)

The WebAssembly implementation is written in Wam (WebAssembly Macro language) and runs under several different non-web embeddings (runtimes): node, wasmtime, wasmer, lucet, wax, wace, warpy.

cd impls/wasm
# node
make wasm_MODE=node
./run.js ./stepX_YYY.wasm
# wasmtime
make wasm_MODE=wasmtime
wasmtime --dir=./ --dir=../ --dir=/ ./stepX_YYY.wasm
# wasmer
make wasm_MODE=wasmer
wasmer run --dir=./ --dir=../ --dir=/ ./stepX_YYY.wasm
# lucet
make wasm_MODE=lucet
lucet-wasi --dir=./:./ --dir=../:../ --dir=/:/ ./stepX_YYY.so
# wax
make wasm_MODE=wax
wax ./stepX_YYY.wasm
# wace
make wasm_MODE=wace_libc
wace ./stepX_YYY.wasm
# warpy
make wasm_MODE=warpy
warpy --argv --memory-pages 256 ./stepX_YYY.wasm

XSLT

The XSLT implementation of mal is written with XSLT 3 and tested on Saxon 9.9.1.6 Home Edition.

cd impls/xslt
STEP=stepX_YY ./run

Wren

The Wren implementation of mal was tested on Wren 0.2.0.

cd impls/wren
wren ./stepX_YYY.wren

Yorick

The Yorick implementation of mal was tested on Yorick 2.2.04.

cd impls/yorick
yorick -batch ./stepX_YYY.i

Zig

The Zig implementation of mal was tested on Zig 0.5.

cd impls/zig
zig build stepX_YYY

Running tests

The top level Makefile has a number of useful targets to assist with implementation development and testing. The help target provides a list of targets and options:

make help

Functional tests

There are nearly 800 generic functional tests (for all implementations) in the tests/ directory. Each step has a corresponding test file containing tests specific to that step. The runtest.py test harness launches a mal step implementation, then feeds the tests one at a time to the implementation and compares the output/return value to the expected output/return value.

  • To run all the tests across all implementations (be prepared to wait):
make test
  • To run all tests against a single implementation:
make "test^IMPL"

# e.g.
make "test^clojure"
make "test^js"
  • To run tests for a single step against all implementations:
make "test^stepX"

# e.g.
make "test^step2"
make "test^step7"
  • To run tests for a specific step against a single implementation:
make "test^IMPL^stepX"

# e.g
make "test^ruby^step3"
make "test^ps^step4"

Self-hosted functional tests

  • To run the functional tests in self-hosted mode, specify mal as the test implementation and use the MAL_IMPL make variable to change the underlying host language (default is JavaScript):
make MAL_IMPL=IMPL "test^mal^step2"

# e.g.
make "test^mal^step2"   # js is default
make MAL_IMPL=ruby "test^mal^step2"
make MAL_IMPL=python "test^mal^step2"

Starting the REPL

  • To start the REPL of an implementation at a specific step:
make "repl^IMPL^stepX"

# e.g
make "repl^ruby^step3"
make "repl^ps^step4"
  • If you omit the step, then stepA is used:
make "repl^IMPL"

# e.g
make "repl^ruby"
make "repl^ps"
  • To start the REPL of a self-hosted implementation, specify mal as the REPL implementation and use the MAL_IMPL make variable to change the underlying host language (default is JavaScript):
make MAL_IMPL=IMPL "repl^mal^stepX"

# e.g.
make "repl^mal^step2"   # js is default
make MAL_IMPL=ruby "repl^mal^step2"
make MAL_IMPL=python "repl^mal"

Performance tests

Warning: these performance tests are neither statistically valid nor comprehensive; runtime performance is not a primary goal of mal. If you draw any serious conclusions from these performance tests, then please contact me about some amazing oceanfront property in Kansas that I'm willing to sell you for cheap.

  • To run performance tests against a single implementation:
make "perf^IMPL"

# e.g.
make "perf^js"
  • To run performance tests against all implementations:
make "perf"

Generating language statistics

  • To report line and byte statistics for a single implementation:
make "stats^IMPL"

# e.g.
make "stats^js"

Docker tests

Each implementation directory contains a Dockerfile to create a docker image containing all the dependencies for that implementation. In addition, the top-level Makefile supports running the test target (and perf, stats, repl, etc.) within a docker container for that implementation by passing "DOCKERIZE=1" on the make command line. For example:

make DOCKERIZE=1 "test^js^step3"

Existing implementations already have docker images built and pushed to the docker registry. However, if you wish to build or rebuild a docker image locally, the top level Makefile provides a rule for building docker images:

make "docker-build^IMPL"

Notes:

  • Docker images are named "kanaka/mal-test-IMPL"
  • JVM-based language implementations (Groovy, Java, Clojure, Scala): you will probably need to run this command once manually first: make DOCKERIZE=1 "repl^IMPL" before you can run the tests, because the runtime dependencies need to be downloaded to avoid the tests timing out. These dependencies are downloaded to dot-files in the /mal directory, so they will persist between runs.

License

Mal (make-a-lisp) is licensed under the MPL 2.0 (Mozilla Public License 2.0). See LICENSE.txt for more details.