The essential Linux/Unix commands. All of these are available through the subprocess library, but subprocess isn't always the best first choice for running external commands. Also look at shutil, which covers operations that exist as separate Linux commands but that you could implement directly in your Python scripts. Another huge batch of Linux commands lives in the os library; you can do these more simply in Python.
And, as a bonus, more quickly: each separate Linux command in the shell (with a few exceptions) forks a subprocess. By using Python's shutil and os modules, you don't fork a subprocess at all.
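For example, a minimal sketch of the difference (the file and directory names here are made up):

import os
import shutil
import subprocess

# Shelling out forks a separate process for each command:
subprocess.run(["mkdir", "-p", "build/out"], check=True)
subprocess.run(["cp", "config.ini", "build/out/"], check=True)

# The same work stays in-process with os and shutil:
os.makedirs("build/out", exist_ok=True)
shutil.copy("config.ini", "build/out/")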
The shell environment features. This includes stuff that sets a command’s environment (current directory and environment variables and what-not). You can easily manage this from Python directly.
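A small sketch of managing that environment directly from Python (the values are illustrative):

import os

os.chdir("/tmp")            # set the current directory
os.environ["LC_ALL"] = "C"  # visible to any child process you spawn later
print(os.getcwd(), os.environ["LC_ALL"])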
The shell programming features. This is all the process status code checking, the various logic constructs (if, while, for, etc.), the test command and all of its relatives, and the function definition machinery. This is all much, much easier in Python. This is one of the huge victories of getting rid of bash and doing it in Python.
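As a rough illustration, status checking and the test command's territory could look like this in Python (the paths and the failing command are placeholders):

import os
import subprocess

# No $? juggling: inspect the return code directly
result = subprocess.run(["ls", "/nonexistent"], capture_output=True, text=True)
if result.returncode != 0:
    print("command failed:", result.stderr.strip())

# The test command's relatives map onto os.path and os.access
if os.path.isdir("/tmp") and os.access("/tmp", os.W_OK):
    print("/tmp exists and is writable")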
Interaction features. This includes command history and what-not. You don't need any of this for writing shell scripts; it exists only for human interaction.
The shell file management features. This includes redirection and pipelines. This is trickier. Much of it can be done with subprocess, but some things that are easy in the shell are unpleasant in Python. Specifically, something like (a | b; c) | something >result. This runs two processes in a pipeline (with the output of a as input to b), followed by a third process, c. The combined output of that sequence runs in parallel with something, which reads it as input, and the final output is collected in a file named result. That's just complex to express in any other language.
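For comparison, here is a minimal subprocess sketch of just the a | b > result building block (a and b are the placeholder commands from the example); the grouping and sequencing in the full example is exactly the part that gets unpleasant:

import subprocess

with open("result", "w") as out:
    a = subprocess.Popen(["a"], stdout=subprocess.PIPE)
    b = subprocess.Popen(["b"], stdin=a.stdout, stdout=out)
    a.stdout.close()  # so a receives SIGPIPE if b exits first
    b.wait()
    a.wait()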
Specific programs (awk, sed, grep, etc.) can often be rewritten as Python modules. Don't go overboard: replace what you need and let your "grep" module evolve. Don't start out by writing a Python module that replaces all of "grep".
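A minimal starting point for such a module might look like this (grep_lines is a hypothetical name; grow it only as your needs grow):

import re
import sys

def grep_lines(pattern, lines):
    # yield only the lines that match, like a bare-bones grep
    regex = re.compile(pattern)
    return (line for line in lines if regex.search(line))

if __name__ == "__main__":
    for line in grep_lines(sys.argv[1], sys.stdin):
        sys.stdout.write(line)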
The best thing is that you can do this in steps.
Replace AWK and PERL with Python. Leave everything else alone.
Look at replacing GREP with Python. This can be a bit more complex, but your version of GREP can be tailored to your processing needs.
Look at replacing FIND with Python loops that use os.walk (see the sketch after this list). This is a big win because you don't spawn as many processes.
Look at replacing common shell logic (loops, decisions, etc.) with Python scripts.
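Here is the os.walk sketch promised above; it replaces something like find . -name '*.log' without spawning a single extra process (the pattern is just an example):

import fnmatch
import os

for dirpath, dirnames, filenames in os.walk("."):
    for name in fnmatch.filter(filenames, "*.log"):
        print(os.path.join(dirpath, name))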
Also, if you want to replace awk, sed and grep with something Python-based, then I recommend pyp:
"The Pyed Piper", or pyp, is a Linux command-line text manipulation tool similar to awk or sed, but which uses standard Python string and list methods as well as custom functions evolved to generate fast results in an intense production environment.
I just discovered how to combine the best parts of bash and IPython. So far this seems more comfortable to me than using subprocess and so on. You can easily copy big parts of existing bash scripts and, e.g., add error handling the Python way :)
And here is my result:
#!/usr/bin/env ipython3
# *** How to have the most comfort scripting experience of your life ***
# ######################################################################
#
# … by using ipython for scripting combined with subcommands from bash!
#
# 1. echo "#!/usr/bin/env ipython3" > scriptname.ipy # creates new ipy-file
#
# 2. chmod +x scriptname.ipy # make it executable
#
# 3. starting with line 2, write normal python or do some of
# the ! magic of ipython, so that you can use unix commands
# within python and even assign their output to a variable via
# var = !cmd1 | cmd2 | cmd3 # enjoy ;)
#
# 4. run via ./scriptname.ipy - if it fails to recognize % and !
# but parses raw python fine, please check again for the .ipy suffix
# ugly example, please go and find more in the wild
files = !ls *.* | grep "y"
for file in files:
    !echo $file | grep "p"
# sorry for this nonsense example ;)
If you want to use Python as a shell, why not have a look at IPython? It is also good for learning the language interactively.
If you do a lot of text manipulation, and you use Vim as your text editor, you can also write plugins for Vim directly in Python. Just type ":help python" in Vim and follow the instructions, or have a look at this presentation. It is so easy and powerful to write functions that you will use directly in your editor!
In the beginning there was sh, sed, and awk (and find, and grep, and…). It was good. But awk can be an odd little beast, and hard to remember if you don't use it often. Then the great camel created Perl. Perl was a system administrator's dream. It was like shell scripting on steroids; text processing, including regular expressions, was just part of the language. Then it got ugly… People tried to build big applications with Perl. Now, don't get me wrong, Perl can do applications, but it can (can!) look like a mess if you're not really careful. Then there is all this flat data business. It's enough to drive a programmer nuts.
Enter Python, Ruby, et al. These are really very good general purpose languages. They support text processing, and do it well (though perhaps not as tightly entwined in the basic core of the language). But they also scale up very well, and still have nice looking code at the end of the day. They also have developed pretty hefty communities with plenty of libraries for most anything.
Now, much of the negativity towards Perl is a matter of opinion, and certainly some people can write very clean Perl, but with this many people complaining about how easy it is to create obfuscated code, you know there is some grain of truth to it. The question really becomes: are you ever going to use this language for more than simple bash-script replacements? If not, learn some more Perl; it is absolutely fantastic for that. If, on the other hand, you want a language that will grow with you as you want to do more, may I suggest Python or Ruby.
I suggest the awesome online book Dive Into Python. It’s how I learned the language originally.
Beyond teaching you the basic structure of the language, and a whole lot of useful data structures, it has a good chapter on file handling and subsequent chapters on regular expressions and more.
One reason I love Python is that it is much better standardized than the POSIX tools. With shell scripts, I have to double- and triple-check that each bit is compatible with other operating systems: a program written on a Linux system might not work the same on a BSD system or OS X. With Python, I just have to check that the target system has a sufficiently modern version of Python.
Even better, a program written in standard Python will even run on Windows!
For bash:

shell can very easily turn into write-only code: write it, and when you come back to it you will never figure out what you did again. This is very easy to accomplish.
shell can do A LOT of text processing, splitting, etc. in one line with pipes.
it is the best glue language when it comes to integrating calls to programs written in different programming languages.

For python:

if you want portability to Windows included, use Python.
Python can be better when you must manipulate more than just text, such as collections of numbers.

I usually choose bash for most things, but when something must cross Windows boundaries, I just use Python.
I have built semi-long shell scripts (300-500 lines) and Python code with similar functionality. When many external commands are being executed, I find the shell easier to use. Perl is also a good option when there is lots of text manipulation.
Your best bet is a tool that is specifically geared towards your problem. If it's processing text files, then Sed, Awk and Perl are the top contenders. Python is a general-purpose dynamic language; as with any general-purpose language, there's support for file manipulation, but that isn't its core purpose. I would consider Python or Ruby if I had a particular requirement for a dynamic language.
In short, learn Sed and Awk really well, plus all the other goodies that come with your flavour of *nix (All the Bash built-ins, grep, tr and so forth). If it’s text file processing you’re interested in, you’re already using the right stuff.
You can use Python instead of bash with the ShellPy library.
Here is an example that downloads the avatar of the Python user from GitHub:
import json
import os
import tempfile

# get the api answer with curl
answer = `curl https://api.github.com/users/python

# syntactic sugar for checking returncode of executed process for zero
if answer:
    answer_json = json.loads(answer.stdout)
    avatar_url = answer_json['avatar_url']
    destination = os.path.join(tempfile.gettempdir(), 'python.png')

    # execute curl once again, this time to get the image
    result = `curl {avatar_url} > {destination}
    if result:
        # if there were no problems show the file
        p`ls -l {destination}
        print('Avatar downloaded')
    else:
        print('Failed to download avatar')
else:
    print('Failed to access github api')
As you can see, everything after the grave accent ( ` ) symbol is executed in the shell, and in your Python code you can capture the result of that execution and act on it. For example:
log = `git log --pretty=oneline --grep='Create'
This line will first execute git log --pretty=oneline --grep='Create' in the shell and then assign the result to the log variable. The result has the following properties:
stdout: the whole text from stdout of the executed process
stderr: the whole text from stderr of the executed process
returncode: the return code of the execution
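A short follow-up sketch that uses only those three properties on the log result from above:

if log.returncode == 0:
    commits = log.stdout.splitlines()
    print(len(commits), "commits mention 'Create'")
else:
    print('git log failed:', log.stderr)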
This is a general overview of the library; a more detailed description with examples can be found here.
If your text-file manipulation is usually one-off, possibly done at the shell prompt, you will not get anything better from Python.
On the other hand, if you usually have to do the same (or similar) task over and over, and you have to write your scripts for doing that, then python is great – and you can easily create your own libraries (you can do that with shell scripts too, but it’s more cumbersome).
A very simple example to get a feel for it:
import popen2

stdout_text, stdin_text = popen2.popen2("your-shell-command-here")
for line in stdout_text:
    # skip comment lines
    if line.startswith("#"):
        continue
    jobID = int(line.split(",")[0].split()[1].lstrip("<").rstrip(">"))
    # do something with jobID
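Note that popen2 was removed in Python 3; a sketch of the same loop with subprocess would be (the command string is still a placeholder):

import subprocess

proc = subprocess.Popen("your-shell-command-here", shell=True,
                        stdout=subprocess.PIPE, text=True)
for line in proc.stdout:
    # skip comment lines
    if line.startswith("#"):
        continue
    jobID = int(line.split(",")[0].split()[1].lstrip("<").rstrip(">"))
    # do something with jobID
proc.wait()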
Also check out the sys and getopt modules; they are the first ones you will need.
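For instance, a minimal sketch of option handling with getopt (the -v/--verbose flag is made up for illustration):

import getopt
import sys

opts, args = getopt.getopt(sys.argv[1:], "v", ["verbose"])
verbose = any(opt in ("-v", "--verbose") for opt, _ in opts)
print("verbose:", verbose, "remaining args:", args)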
I have published a package on PyPI: ez.
Use pip install ez to install it.
It packs up common shell commands, and my lib uses basically the same syntax as the shell. For example, cp(source, destination) can handle both files and folders! (It wraps shutil.copy and shutil.copytree and decides which one to use.) Even more nicely, it supports vectorization like R!
Another example: no more os.walk; use fls(path, regex) to recursively find files and filter them with a regular expression, and it returns a list of files with or without the full path.
Final example: you can combine them to write very simple scripts: files = fls('.', 'py$'); cp(files, myDir)
Definitely check it out! It has cost me hundreds of hours to write/improve it!