I use IPython notebooks and would like to be able to choose whether to create a Python 2.x or 3.x notebook in IPython.
I initially had Anaconda. With Anaconda, a global environment variable had to be changed to select which version of Python you want, and then IPython could be started. This is not what I was looking for, so I uninstalled Anaconda and have now set up my own installation using MacPorts and pip. It seems that I still have to use
port select --set python <python version>
to toggle between Python 2.x and 3.x, which is no better than the Anaconda solution.
Is there a way to select what version of python you want to use after you start an IPython notebook, preferably with my current MacPorts build?
The idea here is to install multiple ipython kernels. Here are instructions for anaconda. If you are not using anaconda, I recently added instructions using pure virtualenvs.
Anaconda >= 4.1.0
Since version 4.1.0, anaconda includes a special package nb_conda_kernels that detects conda environments with notebook kernels and automatically registers them. This makes using a new python version as easy as creating new conda environments:
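For example (the environment names and Python versions below are only illustrations; any names work):
conda create -n py27 python=2.7 ipykernel
conda create -n py36 python=3.6 ipykernel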
After a restart of jupyter notebook, the new kernels are available over the graphical interface. Please note that new packages have to be explicitly installed into the new environments. The Managing environments section in conda’s docs provides further information.
Manually registering kernels
Users who do not want to use nb_conda_kernels or still use older versions of anaconda can use the following steps to manually register ipython kernels.
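A sketch of the manual steps, assuming a conda environment named py36 (repeat analogously for a Python 2 environment; on Windows use activate/deactivate without source):
conda create -n py36 python=3.6 ipykernel
source activate py36
ipython kernel install --user
source deactivate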
After that you should be able to choose between python2
and python3 when creating a new notebook in the interface.
Additionally you can pass the --name and --display-name options to ipython kernel install if you want to change the names of your kernels. See ipython kernel install --help for more information.
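For example (the name and display name are arbitrary):
ipython kernel install --user --name py36 --display-name "Python 3.6"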
These instructions explain how to install a python2 and python3 kernel in separate virtual environments for non-anaconda users. If you are using anaconda, please find my other answer for a solution directly tailored to anaconda.
I assume that you already have jupyter notebook installed.
First make sure that you have a python2 and a python3 interpreter with pip available.
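The setup itself might look roughly like this; the interpreter names python2/python3 and the ~/py2_kernel and ~/py3_kernel paths are assumptions chosen to match the aliases below:
python2 -m pip install virtualenv
python2 -m virtualenv ~/py2_kernel
source ~/py2_kernel/bin/activate
pip install ipykernel
ipython kernel install --name py2 --user
deactivate
python3 -m venv ~/py3_kernel
source ~/py3_kernel/bin/activate
pip install ipykernel
ipython kernel install --name py3 --user
deactivate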
To make things easier, you may want to add shell aliases for the activation command to your shell config file. Depending on the system and shell you use, this can be e.g. ~/.bashrc, ~/.bash_profile or ~/.zshrc
alias kernel2='source ~/py2_kernel/bin/activate'
alias kernel3='source ~/py3_kernel/bin/activate'
After restarting your shell, you can now install new packages after activating the environment you want to use.
With a current version of the Notebook/Jupyter, you can create a Python 3 kernel. After starting a new notebook application from the command line with Python 2 you should see an entry "Python 3" in the dropdown menu "New". This gives you a notebook that uses Python 3. So you can have two notebooks side-by-side with different Python versions.
The Details
Create this directory: mkdir -p ~/.ipython/kernels/python3
Create this file ~/.ipython/kernels/python3/kernel.json with this content:
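A minimal kernel spec might look like the following sketch (on recent ipykernel versions the launcher module is ipykernel_launcher; older IPython releases used a different argv, so adjust to your installation):
{
 "display_name": "Python 3",
 "language": "python",
 "argv": [
  "python3",
  "-m", "ipykernel_launcher",
  "-f", "{connection_file}"
 ]
}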
Use sudo pip3 install jupyter to install Jupyter for Python 3, and sudo pip install jupyter to install Jupyter for Python 2. Then run the ipython kernel install command so that both kernel types can be chosen in the Jupyter notebook.
Under Windows 7 I had anaconda and anaconda3 installed.
I went into \Users\me\anaconda\Scripts and executed
.\ipython kernelspec install-self
then I went into \Users\me\anaconda3\Scripts and executed
.\ipython kernel install
(I got jupyter kernelspec install-self is DEPRECATED as of 4.0. You probably want 'ipython kernel install' to install the IPython kernelspec.)
After starting jupyter notebook (in anaconda3) I got a neat dropdown menu in the upper right corner under "New" letting me choose between Python 2 or Python 3 kernels.
I’ve recently moved over to using IPython notebooks as part of my workflow. However, I’ve not been successful in finding a way to import .py files into the individual cells of an open IPython notebook so that they can edited, run and then saved. Can this be done?
I’ve found this in the documentation which tells me how to import .py files as new notebooks but this falls short of what I want to achieve.
EDIT: Starting from IPython 3 (now Jupyter project), the notebook has a text editor that can be used as a more convenient alternative to
load/edit/save text files.
A text file can be loaded in a notebook cell with the magic command %load.
If you execute a cell containing:
%load filename.py
the content of filename.py will be loaded in the next cell. You can edit and execute it as usual.
To save the cell content back into a file add the cell-magic %%writefile filename.py at the beginning of the cell and run it. Beware that if a file with the same name already exists it will be silently overwritten.
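For instance, a cell like the following writes everything below the magic line to filename.py when executed (the function body is only an illustration):
%%writefile filename.py
def greet():
    print("Hello from filename.py")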
To see the help for any magic command add a ?: like %load? or %%writefile?.
For general help on magic functions type “%magic”
For a list of the available magic functions, use %lsmagic. For a description
of any of them, type %magic_name?, e.g. ‘%cd?’.
To write/save
%%writefile myfile.py
write/save cell contents into myfile.py (use -a to append). Another alias: %%file myfile.py
To run
%run myfile.py
run myfile.py and output results in the current cell
To load/import
%load myfile.py
load “import” myfile.py into the current cell
For more magic and help
%lsmagic
list all the other cool cell magic commands.
%COMMAND-NAME?
for help on how to use a certain command. i.e. %run?
Note
Besides the cell magic commands, the IPython notebook (now Jupyter notebook) is so cool that it allows you to use any unix command right from the cell (this is also equivalent to using the %%bash cell magic command).
To run a unix command from the cell, just precede your command with a ! mark. For example:
!python --version see your python version
!python myfile.py run myfile.py and output results in the current cell, just like %run (see the difference between !python and %run in the comments below).
Also, see this nbviewer for further explanation with examples.
Hope this helps.
Drag and drop a Python file onto the IPython notebook "home" page file table and click Upload. This will create a new notebook with only one cell containing your .py file content.
I have found it satisfactory to use ls and cd within ipython notebook to find the file. Then type cat your_file_name into the cell, and you’ll get back the contents of the file, which you can then paste into the cell as code.
If you only want the image of your figure to appear larger without changing the general appearance of your figure, increase the figure resolution. Changing the figure size as suggested in most other answers will change the appearance, since font sizes do not scale accordingly.
import matplotlib.pyplot as plt
plt.rcParams['figure.dpi'] = 200
Currently I am working on a Python project that contains submodules and uses numpy/scipy. IPython is used as an interactive console. Unfortunately I am not very happy with the workflow that I am using right now, and I would appreciate some advice.
In IPython, the framework is loaded by a simple import command. However, it is often necessary to change code in one of the submodules of the framework. At this point a model is already loaded and I use IPython to interact with it.
Now, the framework contains many modules that depend on each other, i.e. when the framework is initially loaded the main module imports and configures the submodules. Changes to the code only take effect if the module is reloaded using reload(main_mod.sub_mod). This is cumbersome, as I need to reload all changed modules individually using the full path. It would be very convenient if reload(main_module) would also reload all submodules, but without reloading numpy/scipy.
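IPython's autoreload extension covers this use case; in the notebook (or IPython shell) run:
%load_ext autoreload
%autoreload 2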
It will reload all changed modules every time before executing a new line. The way this works is slightly different than dreload. Some caveats apply, type %autoreload? to see what can go wrong.
If you want to always enable these settings, modify your IPython configuration file ~/.ipython/profile_default/ipython_config.py [1] and append:
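(These are the standard InteractiveShellApp options; the comments in the generated config file show the exact names for your version.)
c.InteractiveShellApp.extensions = ['autoreload']
c.InteractiveShellApp.exec_lines = ['%autoreload 2']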
[1]
If you don’t have the file ~/.ipython/profile_default/ipython_config.py, you need to call ipython profile create first. Or the file may be located at $IPYTHONDIR.
For some reason, neither %autoreload, nor dreload seem to work for the situation when you import code from one notebook to another. Only plain Python reload works:
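For example (the module name here is just a placeholder for the notebook module you imported):
from importlib import reload  # on Python 2, reload is a builtin
reload(my_imported_notebook)  # placeholder for the imported notebook module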
IPython offers dreload() to recursively reload all submodules. Personally, I prefer to use the %run magic command (though it does not perform a deep reload, as pointed out by John Salvatier in the comments).
To avoid typing those magic functions again and again, they can be put in an IPython startup script (name it with a .py suffix under ~/.ipython/profile_default/startup/; all Python scripts in that folder are loaded in lexical order), which looks like the following:
import inspect

try:
    from importlib import reload  # Python 3; on Python 2, reload is a builtin
except ImportError:
    pass

def recursively_reload_all_submodules(module, loaded=None):
    # loaded tracks modules that have already been reloaded; prime it with an empty set
    if loaded is None:
        loaded = set()
    loaded.add(module)  # mark before recursing to avoid infinite recursion on circular imports
    for name in dir(module):
        member = getattr(module, name)
        if inspect.ismodule(member) and member not in loaded:
            recursively_reload_all_submodules(member, loaded)
    reload(module)

import mymodule
recursively_reload_all_submodules(mymodule, set())
This should effectively reload the entire tree of modules and submodules you give it. You can also put this function in your .ipythonrc (I think) so it is loaded every time you start the interpreter.
My standard practice for reloading is to combine both methods following first opening of IPython:
from IPython.lib.deepreload import reload
%load_ext autoreload
%autoreload 2
Loading modules before doing this will cause them not to be reloaded, even with a manual reload(module_name). I still, very rarely, get inexplicable problems with class methods not reloading that I’ve not yet looked into.
Note that the above mentioned autoreload only works in IntelliJ if you manually save the changed file (e.g. using ctrl+s or cmd+s). It doesn’t seem to work with auto-saving.
The autoreload extension is already loaded. To reload it, use:
%reload_ext autoreload
It looks like it’s preferable to do:
%reload_ext autoreload
%autoreload 2
Version information:
The version of the notebook server is 5.0.0 and is running on:
Python 3.6.2 |Anaconda, Inc.| (default, Sep 20 2017, 13:35:58) [MSC v.1900 32 bit (Intel)]
I would like to increase the width of the ipython notebook in my browser. I have a high-resolution screen, and I would like to expand the cell width/size to make use of this extra space.
If you don’t want to change your default settings, and you only want to change the width of the current notebook you’re working on, you can enter the following into a cell:
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:100% !important; }</style>"))
To get this to work with jupyter (version 4.0.6) I created ~/.jupyter/custom/custom.css containing:
/* Make the notebook cells take almost all available width */
.container {
width: 99% !important;
}
/* Prevent the edit cell highlight box from getting clipped;
* important so that it also works when cell is in edit mode*/
div.cell.selected {
border-left-width: 1px !important;
}
What I usually do after a new installation is modify the main CSS file where all visual styles are stored. I use Miniconda, but the location is similar for other distributions: C:\Miniconda3\Lib\site-packages\notebook\static\style\style.min.css
The width is defined for more than one screen resolution, so there is more than one value to change. To be on the safe side I change them all to 98%, so that if I disconnect from my external screens I still get 98% width on my laptop screen.
Then just replace each 1140px value with 98% of the screen width.
(As of 2018, I would advise trying out JupyterHub/JupyterLab. It uses the full width of the monitor. If this is not an option, maybe since you are using one of the cloud-based Jupyter-as-a-service providers, keep reading)
(Stylish is accused of stealing user data, I have moved on to using Stylus plugin instead)
I recommend using Stylish Browser Plugin. This way you can override css for all notebooks, without adding any code to notebooks.
We don’t like to change configuration in .ipython/profile_default, since we are running a shared Jupyter server for the whole team and width is a user preference.
I made a style specifically for vertically-oriented high-res screens, that makes cells wider and adds a bit of empty-space in the bottom, so you can position the last cell in the centre of the screen.
https://userstyles.org/styles/131230/jupyter-wide
You can, of course, modify my css to your liking, if you have a different layout, or you don’t want extra empty-space in the end.
Last but not least, Stylish is a great tool to have in your toolset, since you can easily customise other sites/tools to your liking (e.g. Jira, Podio, Slack, etc.)
For Chrome users, I recommend Stylebot, which lets you override any CSS on any page and also lets you search for and install custom CSS shared by others. However, for our purpose we don't need any advanced theme. Open Stylebot and change to Edit CSS. Jupyter captures some keystrokes, so you will not be able to type the code below directly; just copy and paste it, or use your editor:
#notebook-container.container {
width: 90%;
}
Change the width as you like; I find 90% looks nicer than 100%, but it is totally up to your eye.
I made some modifications to @jvd10's solution. The '!important' seemed too strong: the container doesn't adapt well when the TOC sidebar is displayed. I removed it and added 'min-width' to limit the minimal width.
Here is my ~/.jupyter/custom/custom.css:
/* Make the notebook cells take almost all available width and limit minimal width to 1110px */
.container {
width: 99%;
min-width: 1110px;
}
/* Prevent the edit cell highlight box from getting clipped;
* important so that it also works when cell is in edit mode*/
div.cell.selected {
border-left-width: 1px;
}
I tried everything, but nothing worked for me; in the end I displayed the dataframe as HTML, like this:
from IPython.display import HTML
HTML(df.to_html())
%matplotlib is a magic function in IPython. I’ll quote the relevant documentation here for you to read for convenience:
IPython has a set of predefined 'magic functions' that you can call with a command line style syntax. There are two kinds of magics, line-oriented and cell-oriented. Line magics are prefixed with the % character and work much like OS command-line calls: they get as an argument the rest of the line, where arguments are passed without parentheses or quotes. Line magics can return results and can be used in the right hand side of an assignment. Cell magics are prefixed with a double %%, and they are functions that get as an argument not only the rest of the line, but also the lines below it in a separate argument.
With this backend, the output of plotting commands is displayed inline within frontends like the Jupyter notebook, directly below the code cell that produced it. The resulting plots will then also be stored in the notebook document.
When using the ‘inline’ backend, your matplotlib graphs will be included in your notebook, next to the code. It may be worth also reading How to make IPython notebook matplotlib plot inline for reference on how to use it in your code.
If you want interactivity as well, you can use the nbagg backend with %matplotlib notebook (in IPython 3.x), as described here.
To set this up, before any plotting or import of matplotlib is
performed you must execute the %matplotlib magic command. This
performs the necessary behind-the-scenes setup for IPython to work
correctly hand in hand with matplotlib; it does not, however,
actually execute any Python import commands, that is, no names are
added to the namespace.
A particularly interesting backend, provided by IPython, is the
inline backend. This is available only for the Jupyter Notebook and
the Jupyter QtConsole. It can be invoked as follows:
%matplotlib inline
With this backend, the output of plotting commands is displayed inline
within frontends like the Jupyter notebook, directly below the code
cell that produced it. The resulting plots will then also be stored in
the notebook document.
If you want to add plots to your Jupyter notebook, then %matplotlib inline is a standard solution. And there are other magic commands that use matplotlib interactively within Jupyter.
%matplotlib: any plt plot command will now cause a figure window to open, and further commands can be run to update the plot. Some changes will not draw automatically; to force an update, use plt.draw()
%matplotlib notebook: will lead to interactive plots embedded within the notebook, you can zoom and resize the figure
%matplotlib inline: only draw static images in the notebook
Starting with IPython 5.0 and matplotlib 2.0 you can avoid the use of
IPython’s specific magic and use
matplotlib.pyplot.ion()/matplotlib.pyplot.ioff() which have the
advantages of working outside of IPython as well.
Some people use matplotlib interactively from the python shell and
have plotting windows pop up when they type commands. Some people run
Jupyter notebooks and draw inline plots for quick data analysis.
Others embed matplotlib into graphical user interfaces like wxpython
or pygtk to build rich applications. Some people use matplotlib in
batch scripts to generate postscript images from numerical
simulations, and still others run web application servers to
dynamically serve up graphs. To support all of these use cases,
matplotlib can target different outputs, and each of these
capabilities is called a backend; the “frontend” is the user facing
code, i.e., the plotting code, whereas the “backend” does all the hard
work behind-the-scenes to make the figure.
So when you type %matplotlib inline, it activates the inline backend. As discussed in the previous posts:
With this backend, the output of plotting commands is displayed inline
within frontends like the Jupyter notebook, directly below the code
cell that produced it. The resulting plots will then also be stored in
the notebook document.
It just means that any graph we create as part of our code will appear in the same notebook and not in a separate window, which would happen if we had not used this magic statement.
IPython kernel has the ability to display plots by executing code. The IPython kernel is designed to work seamlessly with the matplotlib plotting library to provide this functionality.
%matplotlib is a magic command which performs the necessary behind-the-scenes setup for IPython to work correctly hand-in-hand with matplotlib;
it does not execute any Python import commands, that is, no names are added to the namespace.
Display output in separate window
%matplotlib
Display output inline (available only for the Jupyter Notebook and the Jupyter QtConsole)
%matplotlib inline
Example – GTK3Agg – An Agg rendering to a GTK 3.x canvas (requires PyGObject and pycairo or cairocffi).
More details about matplotlib interactive backends: here
Starting with IPython 5.0 and matplotlib 2.0 you can avoid the use of
IPython’s specific magic and use matplotlib.pyplot.ion()/matplotlib.pyplot.ioff()
which have the advantages of working outside of IPython as well.
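A minimal sketch of that approach, which also works in a plain Python shell (the plotted data is arbitrary):
import matplotlib.pyplot as plt

plt.ion()                       # interactive mode on: figures show and update without blocking
plt.plot([1, 3, 4], [1, 5, 3])  # the window appears immediately and the prompt stays responsive
plt.ioff()                      # back to non-interactive mode; subsequent figures need plt.show()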
It is not mandatory to write that. It worked fine for me without the %matplotlib magic function.
I am using the Spyder IDE, the one that comes with Anaconda.
c.IPKernelApp.matplotlib = <CaselessStrEnum>
    Default: None
    Choices: ['auto', 'gtk', 'gtk3', 'inline', 'nbagg', 'notebook', 'osx', 'qt', 'qt4', 'qt5', 'tk', 'wx']
    Configure matplotlib for interactive use with the default matplotlib backend.
I have to agree with foobarbecue (I don’t have enough recs to be able to simply insert a comment under his post):
It’s now recommended that the Python notebook isn’t started with the argument --pylab, and according to Fernando Perez (creator of the IPython notebook) %matplotlib inline should be the initial notebook command.
However, by leaving the () off the end of the plot type you receive a somewhat ambiguous non-error.
Erroneous code:
df_randNumbers1.ix[:,["A","B"]].plot.kde
Example error:
<bound method FramePlotMethods.kde of <pandas.tools.plotting.FramePlotMethods object at 0x000001DDAF029588>>
Other than this one line message, there is no stack trace or other obvious reason to think you made a syntax error. The plot doesn’t print.
I had the same problem when I was running the plotting commands in separate cells in Jupyter:
In [1]: %matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
In [2]: x = np.array([1, 3, 4])
y = np.array([1, 5, 3])
In [3]: fig = plt.figure()
<Figure size 432x288 with 0 Axes> #this might be the problem
In [4]: ax = fig.add_subplot(1, 1, 1)
In [5]: ax.scatter(x, y)
Out[5]: <matplotlib.collections.PathCollection at 0x12341234> # CAN'T SEE ANY PLOT :(
In [6]: plt.show() # STILL CAN'T SEE IT :(
The problem was solved by merging the plotting commands into a single cell:
In [1]: %matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
In [2]: x = np.array([1, 3, 4])
y = np.array([1, 5, 3])
In [3]: fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.scatter(x, y)
Out[3]: <matplotlib.collections.PathCollection at 0x12341234>
# AND HERE APPEARS THE PLOT AS DESIRED :)
Having been involved in the python world for so long, we are all aware of pip, easy_install, and virtualenv, but these tools did not meet all of our specific requirements. The main problem is that they are focused around Python, neglecting non-Python library dependencies, such as HDF5, MKL, LLVM, etc., which do not have a setup.py in their source code and also do not install files into Python’s site-packages directory.
So Conda is a packaging tool and installer that aims to do more than what pip does: handle library dependencies outside of the Python packages as well as the Python packages themselves. Conda also creates a virtual environment, like virtualenv does.
As such, Conda should be compared to Buildout perhaps, another tool that lets you handle both Python and non-Python installation tasks.
Because Conda introduces a new packaging format, you cannot use pip and Conda interchangeably; pip cannot install the Conda package format. You can use the two tools side by side (by installing pip with conda install pip) but they do not interoperate either.
This highlights a key difference between conda and pip. Pip installs Python packages whereas conda installs packages which may contain software written in any language. For example, before using pip, a Python interpreter must be installed via a system package manager or by downloading and running an installer. Conda on the other hand can install Python packages as well as the Python interpreter directly.
and further on
Occasionally a package is needed which is not available as a conda package but is available on PyPI and can be installed with pip. In these cases, it makes sense to try to use both conda and pip.
pip
Compiles everything from source. EDIT: pip now installs binary wheels, if they are available.
Blessed by the core Python community (i.e., Python 3.4+ includes code that automatically bootstraps pip).
conda
Python agnostic. The main focus of existing packages is Python, and indeed Conda itself is written in Python, but you can also have Conda packages for C libraries, or R packages, or really anything.
Installs binaries. There is a tool called conda build that builds packages from source, but conda install itself installs things from already built Conda packages.
External. Conda is the package manager of Anaconda, the Python distribution provided by Continuum Analytics, but it can be used outside of Anaconda too. You can use it with an existing Python installation by pip installing it (though this is not recommended unless you have a good reason to use an existing installation).
In both cases:
Written in Python
Open source (Conda is BSD and pip is MIT)
The first two bullet points of Conda are really what make it advantageous over pip for many packages. Since pip installs from source, it can be painful to install things with it if you are unable to compile the source code (this is especially true on Windows, but it can even be true on Linux if the packages have some difficult C or FORTRAN library dependencies). Conda installs from binary, meaning that someone (e.g., Continuum) has already done the hard work of compiling the package, and so the installation is easy.
There are also some differences if you are interested in building your own packages. For instance, pip is built on top of setuptools, whereas Conda uses its own format, which has some advantages (like being static, and again, Python agnostic).
The other answers give a fair description of the details, but I want to highlight some high-level points.
pip is a package manager that facilitates installation, upgrade, and uninstallation of python packages. It also works with virtual python environments.
conda is a package manager for any software (installation, upgrade and uninstallation). It also works with virtual system environments.
One of the goals with the design of conda is to facilitate package management for the entire software stack required by users, of which one or more python versions may only be a small part. This includes low-level libraries such as linear algebra, compilers such as mingw on Windows, editors, version control tools like Hg and Git, or whatever else requires distribution and management.
For version management, pip allows you to switch between and manage multiple python environments.
Conda allows you to switch between and manage multiple general purpose environments across which multiple other things can vary in version number, like C-libraries, or compilers, or test-suites, or database engines and so on.
Conda is not Windows-centric, but on Windows it is by far the superior solution currently available when complex scientific packages requiring compilation are required to be installed and managed.
I want to weep when I think of how much time I have lost trying to compile many of these packages via pip on Windows, or debug failed pip install sessions when compilation was required.
As a final point, Continuum Analytics also hosts (free) binstar.org (now called anaconda.org) to allow regular package developers to create their own custom (built!) software stacks that their package-users will be able to conda install from.
Not to confuse you further, but you can also use pip within your conda environment, which validates the general vs. Python-specific manager comments above.
Python programmers are probably familiar with pip to download packages from PyPI and manage their requirements. Although both conda and pip are package managers, they are very different:
Pip is specific for Python packages and conda is language-agnostic, which means we can use conda to manage packages from any language
Pip compiles from source and conda installs binaries, removing the burden of compilation
Conda creates language-agnostic environments natively whereas pip relies on virtualenv to manage only Python environments
Though it is recommended to always use conda packages, conda also includes pip, so you don’t have to choose between the two. To install a Python package that does not have a conda package but is available through pip, just run, for example:
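(The package name below is a placeholder for whichever PyPI-only package you need.)
pip install some-package-from-pypi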
Reality: Conda and pip serve different purposes, and only directly compete in a small subset of tasks: namely installing Python packages in isolated environments.
Pip, which stands for Pip Installs Packages, is Python’s officially-sanctioned package manager, and is most commonly used to install packages published on the Python Package Index (PyPI). Both pip and PyPI are governed and supported by the Python Packaging Authority (PyPA).
In short, pip is a general-purpose manager for Python packages; conda is a language-agnostic cross-platform environment manager. For the user, the most salient distinction is probably this: pip installs python packages within any environment; conda installs any package within conda environments. If all you are doing is installing Python packages within an isolated environment, conda and pip+virtualenv are mostly interchangeable, modulo some difference in dependency handling and package availability. By isolated environment I mean a conda-env or virtualenv, in which you can install packages without modifying your system Python installation.
Even setting aside Myth #2, if we focus on just installation of Python packages, conda and pip serve different audiences and different purposes. If you want to, say, manage Python packages within an existing system Python installation, conda can’t help you: by design, it can only install packages within conda environments. If you want to, say, work with the many Python packages which rely on external dependencies (NumPy, SciPy, and Matplotlib are common examples), while tracking those dependencies in a meaningful way, pip can’t help you: by design, it manages Python packages and only Python packages.
Conda and pip are not competitors, but rather tools focused on different groups of users and patterns of use.
The mingwpy project may one day bring a ‘compilation’ package to Windows users, allowing everything to be installed from source when needed.
“Conda” packaging remains better for the market it serves, and highlights areas where the “standard” should improve.
(Also, the duplicated effort of specifying dependencies in the standard wheel system, in the conda system, or in buildout is not very Pythonic; it would be nice if all these packaging ‘core’ techniques could converge via a sort of PEP.)
You can manually download IPython from GitHub or PyPI. To install one
of these versions, unpack it and run the following from the top-level
source directory using the Terminal:
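With a reasonably recent pip, that is typically:
pip install .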
conda is only for Anaconda + other scientific packages like R dependencies etc. NOT everyone needs Anaconda, which already comes with Python. Anaconda is mostly for those who do machine learning/deep learning etc. A casual Python dev won’t run Anaconda on their laptop.
I may have found one further difference of a minor nature. I have my python environments under /usr rather than /home or whatever. In order to install to it, I would have to use sudo pip install. For me, the undesired side effect of sudo pip install was slightly different than what is widely reported elsewhere: after doing so, I had to run python with sudo in order to import any of the sudo-installed packages. I gave up on that and eventually found I could use sudo conda to install packages to an environment under /usr which then imported normally without needing sudo permission for python. I even used sudo conda to fix a broken pip rather than using sudo pip uninstall pip or sudo pip install --upgrade pip.
# Pin IPython below version 6 on Python 2, since IPython 6+ dropped Python 2 support.
import sys
from setuptools import setup

install_req = ['ipython']
if sys.version_info[0] < 3 and 'bdist_wheel' not in sys.argv:
    install_req.remove('ipython')
    install_req.append('ipython<6')

setup(
    ...
    install_requires=install_req,
)