You need to adjust your install to use relative paths. virtualenv provides for this with the --relocatable option. From the docs:
Normally environments are tied to a
specific path. That means that you
cannot move an environment around or
copy it to another computer. You can
fix up an environment to make it
relocatable with the command:
$ virtualenv --relocatable ENV
NOTE: ENV is the name of the virtual environment and you must run this from outside the ENV directory.
This will make some of the files
created by setuptools or distribute
use relative paths, and will change
all the scripts to use
activate_this.py instead of using the
location of the Python interpreter to
select the environment.
Note: you must run this after you’ve
installed any packages into the
environment. If you make an
environment relocatable, then install
a new package, you must run virtualenv
--relocatable again.
However, you have just renamed myproject to project, so the paths hard-coded into the environment's scripts no longer exist and those commands fail to execute.
That is why it says pip is not installed: you haven't installed pip in the system's global environment, and the virtualenv's pip is no longer resolved correctly.
If you want to fix this manually, this is the way:
With your favorite editor (Vim, for example), modify /tmp/project/env/bin/activate, usually around line 42:
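The line in question is the one that sets the VIRTUAL_ENV variable; a minimal sketch of the edit, assuming the environment was moved from /tmp/myproject to /tmp/project as in the question:

# before (still pointing at the old location)
VIRTUAL_ENV="/tmp/myproject/env"
# after
VIRTUAL_ENV="/tmp/project/env"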
NOTE: As @jb. points out, this solution only applies to easily (re)created virtualenvs. If an environment takes several hours to install, this solution is not recommended.
Virtualenvs are great because they are easy to make and switch around; they keep you from getting locked into a single configuration. If you know the project requirements, or can get them, make a new virtualenv:
Create a requirements.txt file
(env)$ pip freeze > requirements.txt
If you can’t create the requirements.txt file, check env/lib/pythonX.X/site-packages before removing the original env.
Delete the existing (env)
deactivate && rm -rf env
Create a new virtualenv, activate it, and install requirements
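A minimal sketch of those steps, using the same names as above:

$ virtualenv env
$ source env/bin/activate
(env)$ pip install -r requirements.txt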
I always install virtualenvwrapper to help out. From the shell prompt:
pip install virtualenvwrapper
There is a way documented in the virtualenvwrapper documentation: cpvirtualenv.
This is what you do. Make sure you are out of your environment and back to the shell prompt. Type in this with the names required:
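The invocation looks roughly like this, assuming the environments are managed by virtualenvwrapper (names are placeholders; the second command removes the old copy to complete the rename):

$ cpvirtualenv myproject project
$ rmvirtualenv myproject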
virtualenv --relocatable ENV is not a desirable solution. I assume most people want the ability to rename a virtualenv without any long-term side effects.
So I’ve created a simple tool to do just that. The project page for virtualenv-mv outlines it in a bit more detail, but essentially you can use virtualenv-mv just like you’d use a simple implementation of mv (without any options).
For example:
virtualenv-mv myproject project
Please note however that I just hacked this up. It could break under unusual circumstances (e.g. symlinked virtualenvs) so please be careful (back up what you can’t afford to lose) and let me know if you encounter any problems.
I wrote this when I was young and naive; update-alternatives is the better way to do this. See @Pardhu's answer.
Open your .bashrc file with nano ~/.bashrc. Type alias python=python3
onto a new line at the top of the file, then save the file with ctrl+o
and close the file with ctrl+x. Then, back at your command line type
source ~/.bashrc. Now your alias should be permanent.
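An equivalent way to do the same thing from the command line (this appends to the end of the file rather than the top, which works just as well):

echo "alias python=python3" >> ~/.bashrc
source ~/.bashrc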
This gives a priority of 10 to the path of python3 (the update-alternatives command itself is shown below). The disadvantage of editing the .bashrc file is that the alias will not apply when running commands with sudo.
Update: please run the command with sudo, like this:
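The command being referred to is presumably the update-alternatives registration; a sketch, using the priority of 10 mentioned above (adjust /usr/bin/python3 to your actual interpreter path):

$ sudo update-alternatives --install /usr/bin/python python /usr/bin/python3 10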
Just follow these steps to help change the default python to the newly upgrade python version. Worked well for me.
sudo apt install python3.7 Install the version of Python you want
cd /usr/bin Enter the directory where the python symlinks live
sudo unlink python or sudo unlink python3. Unlink the current default python
sudo ln -sv /usr/bin/python3.7 python Link the newly installed Python version
python --version Check the new Python version and you're good to go
Answer 11
For another non-invasive, current-user only approach:
# First, make $HOME/bin, which will be automatically added to user's PATH
mkdir -p ~/bin
# link the actual python binaries into it
ln -s "$(which python3)" ~/bin/python
ln -s "$(which pip3)" ~/bin/pip
The best way in Ubuntu 18.04, which will work for all users, is:
sudo vim /etc/bash.bashrc
add lines
alias python=python3
alias pip=pip3
Save the changes and restart your shell.
After restarting, whatever version of Python 3 you have on the system (alongside Python 2.7) will be used as the default. You can be more specific in the alias if you have multiple versions of Python 3:
sudo vim /etc/bash.bashrc
add lines
alias python=python3.6
alias pip=pip3.6
Change the path /usr/bin/python3 to your desired python version accordingly.
The last argument specifies the priority: if no alternative is selected manually, the one with the highest priority number is used. In our case we set a priority of 2 for /usr/bin/python3.6.8, so update-alternatives made /usr/bin/python3.6.8 the default python automatically.
We can switch between the listed python alternatives at any time by running the command below and entering a selection number:
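The selection command in question is presumably:

$ sudo update-alternatives --config python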
When performing pip install -r requirements.txt, I get the following error during the stage where it is installing matplotlib:
REQUIRED DEPENDENCIES AND EXTENSIONS
numpy: yes [not found. pip may install it below.]
dateutil: yes [dateutil was not found. It is required for date
axis support. pip/easy_install may attempt to
install it after matplotlib.]
tornado: yes [tornado was not found. It is required for the
WebAgg backend. pip/easy_install may attempt to
install it after matplotlib.]
pyparsing: yes [pyparsing was not found. It is required for
mathtext support. pip/easy_install may attempt to
install it after matplotlib.]
pycxx: yes [Couldn't import. Using local copy.]
libagg: yes [pkg-config information for 'libagg' could not
be found. Using local copy.]
freetype: no [pkg-config information for 'freetype2' could
not be found.]
…
The following required packages can not be built:
* freetype
Shouldn’t pip install -r requirements.txt also install freetype? How should freetype be installed in Ubuntu 12.04 so it works with matplotlib?
I had the same issue with Python 3.6 on Windows, but then I switched to Python 3.5.2 and everything works fine.
Answer 8
This command, sudo apt-get install libfreetype6-dev, failed for me on Ubuntu 16.04 with:
The following packages have unmet dependencies:
libfreetype6-dev : Depends: libfreetype6 (= 2.6.1-0.1ubuntu2) but
2.6.1-0.1ubuntu2.3 is to be installed
So I downloaded and installed freetype from source (credit to this guide):
$ tar -xvjf freetype-x.y.tar.bz2 # extract the downloaded version file
$ cd freetype-x.y/
$ ./configure
$ make
$ sudo make install
Then I switched to the virtualenv, ran pip install matplotlib, and everything worked.
I wanted to install eventlet on my system in order to have “Herd” for software deployment, but the terminal is showing a gcc error:
root@agrover-OptiPlex-780:~# easy_install -U eventlet
Searching for eventlet
Reading http://pypi.python.org/simple/eventlet/
Reading http://wiki.secondlife.com/wiki/Eventlet
Reading http://eventlet.net
Best match: eventlet 0.9.16
Processing eventlet-0.9.16-py2.7.egg
eventlet 0.9.16 is already the active version in easy-install.pth
Using /usr/local/lib/python2.7/dist-packages/eventlet-0.9.16-py2.7.egg
Processing dependencies for eventlet
Searching for greenlet>=0.3
Reading http://pypi.python.org/simple/greenlet/
Reading https://github.com/python-greenlet/greenlet
Reading http://bitbucket.org/ambroff/greenlet
Best match: greenlet 0.3.4
Downloading http://pypi.python.org/packages/source/g/greenlet/greenlet- 0.3.4.zip#md5=530a69acebbb0d66eb5abd83523d8272
Processing greenlet-0.3.4.zip
Writing /tmp/easy_install-_aeHYm/greenlet-0.3.4/setup.cfg
Running greenlet-0.3.4/setup.py -q bdist_egg --dist-dir /tmp/easy_install-_aeHYm/greenlet-0.3.4/egg-dist-tmp-t9_gbW
In file included from greenlet.c:5:0:
greenlet.h:8:20: fatal error: Python.h: No such file or directory
compilation terminated.
error: Setup script exited with error: command 'gcc' failed with exit status 1
On macOS I had trouble installing fbprophet, which requires pystan, which in turn needs gcc to compile. I consistently got the same error: command 'gcc' failed with exit status 1
I think I fixed the problem for myself thus:
I used brew install gcc to install the newest version, which ended up being gcc-8
Then I made sure that when gcc ran it would use gcc-8 instead.
It either worked because I added alias gcc='gcc-8' in my .zshrc (same as .bashrc but for zsh), or because I ran export PATH=/usr/local/bin:$PATH (see comment)
Also: all my attempts were inside a virtual environment and I only succeeded by installing fbprophet globally (with pip), but still no success inside a venv
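A rough sketch of those steps, assuming Homebrew installs gcc-8 as it did at the time:

$ brew install gcc                        # installed gcc-8 for me
$ echo "alias gcc='gcc-8'" >> ~/.zshrc    # or: export PATH=/usr/local/bin:$PATH
$ source ~/.zshrc
$ pip install fbprophet                   # installed globally, not inside a venv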
This depends on how you installed TensorFlow. I am going to use the same headings used by TensorFlow’s installation instructions to structure this answer.
Pip installation
Run:
python -c 'import tensorflow as tf; print(tf.__version__)' # for Python 2
python3 -c 'import tensorflow as tf; print(tf.__version__)' # for Python 3
Note that python is symlinked to /usr/bin/python3 in some Linux distributions, so use python instead of python3 in these cases.
pip list | grep tensorflow for Python 2 or pip3 list | grep tensorflow for Python 3 will also show the version of Tensorflow installed.
Virtualenv installation
Run:
python -c 'import tensorflow as tf; print(tf.__version__)' # for both Python 2 and Python 3
pip list | grep tensorflow will also show the version of Tensorflow installed.
For example, I have installed TensorFlow 0.9.0 in a virtualenv for Python 3. So, I get:
$ python -c 'import tensorflow as tf; print(tf.__version__)'
0.9.0
$ pip list | grep tensorflow
tensorflow (0.9.0)
Almost every normal package in Python assigns the variable __version__ to the current version. So if you want to find the version of some package, you can do the following:
import a
a.__version__
For tensorflow it will be
import tensorflow as tf
tf.version.VERSION
For old versions of tensorflow (below 0.10), use tf.__version__
Answer 2
If you installed it via pip, just run the following:
$ pip show tensorflow
Name: tensorflow
Version: 1.5.0
Summary: TensorFlow helps the tensors flow
I have installed TensorFlow on Ubuntu 16.04 using the second answer here, with Ubuntu's built-in apt CUDA installation.
Now my question is: how can I test whether TensorFlow is really using the GPU? I have a GTX 960M GPU. When I import tensorflow this is the output:
I tensorflow/stream_executor/dso_loader.cc:105] successfully opened CUDA library libcublas.so locally
I tensorflow/stream_executor/dso_loader.cc:105] successfully opened CUDA library libcudnn.so locally
I tensorflow/stream_executor/dso_loader.cc:105] successfully opened CUDA library libcufft.so locally
I tensorflow/stream_executor/dso_loader.cc:105] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:105] successfully opened CUDA library libcurand.so locally
Is this output enough to check whether TensorFlow is using the GPU?
Apart from using sess = tf.Session(config=tf.ConfigProto(log_device_placement=True)), which is outlined in other answers as well as in the official TensorFlow documentation, you can try to assign a computation to the GPU and see whether you get an error.
import tensorflow as tf
with tf.device('/gpu:0'):
    a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
    b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
    c = tf.matmul(a, b)
with tf.Session() as sess:
    print(sess.run(c))
Here
“/cpu:0”: The CPU of your machine.
“/gpu:0”: The GPU of your machine, if you have one.
If you have a GPU and can use it, you will see the result. Otherwise you will see an error with a long stack trace. In the end you will have something like this:
Cannot assign a device to node ‘MatMul’: Could not satisfy explicit
device specification ‘/device:GPU:0’ because no devices matching that
specification are registered in this process
import tensorflow as tf
if tf.test.gpu_device_name():
    print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
else:
    print("Please install GPU version of TF")
It usually prints like
Default GPU Device: /device:GPU:0
This seems easier to me rather than those verbose logs.
Ok, first launch an ipython shell from the terminal and import TensorFlow:
$ ipython --pylab
Python 3.6.5 |Anaconda custom (64-bit)| (default, Apr 29 2018, 16:14:56)
Type 'copyright', 'credits' or 'license' for more information
IPython 6.4.0 -- An enhanced Interactive Python. Type '?' for help.
Using matplotlib backend: Qt5Agg
In [1]: import tensorflow as tf
Now, we can watch the GPU memory usage in a console using the following command:
# realtime update for every 2s
$ watch -n 2 nvidia-smi
Since we’ve only imported TensorFlow but have not used any GPU yet, the usage stats will be:
Notice how the GPU memory usage is quite low (~700 MB); sometimes it might even be as low as 0 MB.
Now, let’s load the GPU in our code. As indicated in tf documentation, do:
In [2]: sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
Now, the watch stats should show an updated GPU usage memory as below:
Observe now how our Python process from the ipython shell is using ~ 7 GB of the GPU memory.
P.S. You can continue watching these stats as the code is running, to see how intense the GPU usage is over time.
I prefer to use nvidia-smi to monitor GPU usage. If it goes up significantly when you start your program, it's a strong sign that TensorFlow is using the GPU.
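For example, refreshing every second while the program runs:

$ watch -n 1 nvidia-smi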
You can check if you are currently using the GPU by running the following code:
import tensorflow as tf
tf.test.gpu_device_name()
If the output is '', it means you are using the CPU only;
If the output is something like /device:GPU:0, it means the GPU is being used.
And use the following code to check which GPU you are using:
from tensorflow.python.client import device_lib
device_lib.list_local_devices()
Answer 14
将其放在jupyter笔记本顶部附近。注释掉您不需要的内容。
# confirm TensorFlow sees the GPUfrom tensorflow.python.client import device_lib
assert'GPU'in str(device_lib.list_local_devices())# confirm Keras sees the GPU (for TensorFlow 1.X + Keras)from keras import backend
assert len(backend.tensorflow_backend._get_available_gpus())>0# confirm PyTorch sees the GPUfrom torch import cuda
assert cuda.is_available()assert cuda.device_count()>0print(cuda.get_device_name(cuda.current_device()))
Put this near the top of your jupyter notebook. Comment out what you don’t need.
# confirm TensorFlow sees the GPU
from tensorflow.python.client import device_lib
assert 'GPU' in str(device_lib.list_local_devices())
# confirm Keras sees the GPU (for TensorFlow 1.X + Keras)
from keras import backend
assert len(backend.tensorflow_backend._get_available_gpus()) > 0
# confirm PyTorch sees the GPU
from torch import cuda
assert cuda.is_available()
assert cuda.device_count() > 0
print(cuda.get_device_name(cuda.current_device()))
NOTE: With the release of TensorFlow 2.0, Keras is now included as part of the TF API.
The recommended way in which to check if TensorFlow is using GPU is the following:
tf.config.list_physical_devices('GPU')
As of TensorFlow 2.1, tf.test.gpu_device_name() has been deprecated in favour of the aforementioned.
Then, in the terminal you can use nvidia-smi to check how much GPU memory has been allotted; at the same time, watch -n K nvidia-smi will tell you, every K seconds, how much memory you are using (you may want to use K = 1 for near real-time).
Answer 17
This is the line I am using to list devices available to tf.session directly from bash:
python -c "import os; os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'; import tensorflow as tf; sess = tf.Session(); [print(x) for x in sess.list_devices()]; print(tf.__version__);"
It will print the available devices and the TensorFlow version.
I found the snippet below very handy for testing the GPU.
Tensorflow 2.0 Test
import tensorflow as tf
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
with tf.device('/gpu:0'):
    a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
    b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
    c = tf.matmul(a, b)
with tf.Session() as sess:
    print(sess.run(c))
Tensorflow 1 Test
import tensorflow as tf
with tf.device('/gpu:0'):
    a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
    b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
    c = tf.matmul(a, b)
with tf.Session() as sess:
    print(sess.run(c))
You have some options to test whether GPU acceleration is being used by your TensorFlow installation.
You can type in the following commands in three different platforms.
import tensorflow as tf
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
Jupyter Notebook – Check the console which is running the Jupyter Notebook. You will be able to see the GPU being used.
Python Shell – You will be able to directly see the output. (Note- do not assign the output of the second command to the variable ‘sess’; if that helps).
Spyder – Type in the following command in the console.
import tensorflow as tf
tf.test.is_gpu_available()
Answer 22
Tensorflow 2.1
A simple calculation that can be verified with nvidia-smi for memory usage on the GPU.
import tensorflow as tf
c1 = []
n = 10

def matpow(M, n):
    if n < 1:  # Abstract cases where n < 1
        return M
    else:
        return tf.matmul(M, matpow(M, n-1))

with tf.device('/gpu:0'):
    a = tf.Variable(tf.random.uniform(shape=(10000, 10000)), name="a")
    b = tf.Variable(tf.random.uniform(shape=(10000, 10000)), name="b")
    c1.append(matpow(a, n))
    c1.append(matpow(b, n))
Answer 23
>>> import tensorflow as tf
>>> tf.config.list_physical_devices('GPU')
2020-05-10 14:58:16.243814: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1
2020-05-10 14:58:16.262675: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-05-10 14:58:16.263119: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1555] Found device 0 with properties:
pciBusID: 0000:01:00.0 name: GeForce GTX 1060 6GB computeCapability: 6.1
coreClock: 1.7715GHz coreCount: 10 deviceMemorySize: 5.93GiB deviceMemoryBandwidth: 178.99GiB/s
2020-05-10 14:58:16.263143: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
2020-05-10 14:58:16.263188: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
2020-05-10 14:58:16.264289: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10
2020-05-10 14:58:16.264495: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10
2020-05-10 14:58:16.265644: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10
2020-05-10 14:58:16.266329: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10
2020-05-10 14:58:16.266357: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-05-10 14:58:16.266478: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-05-10 14:58:16.266823: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-05-10 14:58:16.267107: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1697] Adding visible gpu devices: 0
[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
As suggested by @AmitaiIrron:
This section indicates that a gpu was found
2020-05-10 14:58:16.263119: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1555] Found device 0 with properties:
pciBusID: 0000:01:00.0 name: GeForce GTX 1060 6GB computeCapability: 6.1
coreClock: 1.7715GHz coreCount: 10 deviceMemorySize: 5.93GiB deviceMemoryBandwidth: 178.99GiB/s
And here, that it was added as an available physical device (the "Adding visible gpu devices: 0" line above).
I’m having difficulty installing lxml with easy_install on Ubuntu 11.
When I type $ easy_install lxml I get:
Searching for lxml
Reading http://pypi.python.org/simple/lxml/
Reading http://codespeak.net/lxml
Best match: lxml 2.3
Downloading http://lxml.de/files/lxml-2.3.tgz
Processing lxml-2.3.tgz
Running lxml-2.3/setup.py -q bdist_egg --dist-dir /tmp/easy_install-7UdQOZ/lxml-2.3/egg-dist-tmp-GacQGy
Building lxml version 2.3.
Building without Cython.
ERROR: /bin/sh: xslt-config: not found
** make sure the development packages of libxml2 and libxslt are installed **
Using build configuration of libxslt
In file included from src/lxml/lxml.etree.c:227:0:
src/lxml/etree_defs.h:9:31: fatal error: libxml/xmlversion.h: No such file or directory
compilation terminated.
If I try wget ftp://xmlsoft.org/libxml2/libxml2-sources-2.6.27.tar.gz I get
<successful connection info>
==> SYST ... done. ==> PWD ... done.
==> TYPE I ... done. ==> CWD (1) /libxml2 ... done.
==> SIZE libxml2-sources-2.6.27.tar.gz ... done.
==> PASV ... done. ==> RETR libxml2-sources-2.6.27.tar.gz ...
No such file `libxml2-sources-2.6.27.tar.gz'.
If I try the other first, I’ll get to ./configure --prefix=/usr/local/libxslt --with-libxml-prefix=/usr/local/libxml2 and that will fail eventually with:
checking for libxml libraries >= 2.6.27... configure: error: Could not find libxml2 anywhere, check ftp://xmlsoft.org/.
I’ve tried both versions 2.6.27 and 2.6.29 of libxml2 with no difference.
Leaving no stone unturned, I have successfully done sudo apt-get install libxml2-dev, but this changes nothing.
I installed lxml with pip in Vagrant, using Ubuntu 14.04, and had the same problem. Even though all requirements were installed, I got the same error again and again. It turned out my VM had too little memory by default. With 1024 MB everything works fine.
Add this to your Vagrantfile and lxml should compile and install properly:
config.vm.provider "virtualbox" do |vb|
  vb.memory = 1024
end
Ubuntu 12.10+ and Fedora 13+ have a package called python3-pip which will install pip-3.2 (or pip-3.3, pip-3.4 or pip3 for newer versions) without needing to jump through these hoops.
I came across this and fixed this without needing the likes of wget or virtualenvs (assuming Ubuntu 12.04):
Install package python3-setuptools: run sudo aptitude install python3-setuptools, this will give you the command easy_install3.
Install pip using Python 3’s setuptools: run sudo easy_install3 pip, this will give you the command pip-3.2 like kev’s solution.
Install your PyPI packages: run sudo pip-3.2 install <package> (installing python packages into your base system requires root, of course).
Check also Tobu’s answer if you want an even more upgraded version of Python.
I want to add that using a virtual environment is usually the preferred way to develop a Python application, so @felixyan's answer is probably the best in an ideal world. But if you really want to install that package globally, or if you need to test/use it frequently without activating a virtual environment, I suppose installing it as a global package is the way to go.
Old question, but none of the answers satisfies me. One of my systems is running Ubuntu 12.04 LTS and for some reason there’s no package python3-pip or python-pip for Python 3. So here is what I’ve done (all commands were executed as root):
Install setuptools for Python3 in case you haven’t.
apt-get install python3-setuptools
or
aptitude install python3-setuptools
With Python 2.4+ you can invoke easy_install with specific Python version by using python -m easy_install. So pip for Python 3 could be installed by:
python3 -m easy_install pip
That’s it, you got pip for Python 3. Now just invoke pip with the specific version of Python to install package for Python 3. For example, with Python 3.2 installed on my system, I used:
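Presumably the invocation was along these lines (the package name is a placeholder):

$ pip-3.2 install somepackage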
Firstly, you need to install pip for the Python 3 installation that you want. Then you run that pip to install packages for that Python version.
Since you have both pip and python 3 in /usr/bin, I assume they are both installed with a package manager of some sort. That package manager should also have a Python 3 pip. That’s the one you should install.
Felix’ recommendation of virtualenv is a good one. If you are only testing, or you are doing development, then you shouldn’t install the package in the system python. Using virtualenv, or even building your own Pythons for development, is better in those cases.
But if you actually do want to install this package in the system python, installing pip for Python 3 is the way to go.
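On Debian or Ubuntu, for example, that would be the python3-pip package mentioned elsewhere in this thread:

$ sudo apt-get install python3-pip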
Although the question relates to Ubuntu, let me contribute by saying that I’m on Mac and my python command defaults to Python 2.7.5. I have Python 3 as well, accessible via python3, so knowing the pip package origin, I just downloaded it and issued sudo python3 setup.py install against it and, surely enough, only Python 3 has now this module inside its site packages. Hope this helps a wandering Mac-stranger.
Answer 13
Execute the pip binary directly.
First, locate the desired pip version:
jon-mint python3.3 # whereis ip
ip: /bin/ip /sbin/ip /usr/share/man/man8/ip.8.gz /usr/share/man/man7/ip.7.gz
To install packages for Python 3, you should use pip3 instead of pip.
To install Python on Ubuntu 18.04 (Bionic):
sudo apt-get install python3.7
To install the required pip package on Ubuntu:
sudo apt-get install python3-pip
Answer 16
Another way to install python3 is using wget. Below are the steps for installation.
wget http://www.python.org/ftp/python/3.3.5/Python-3.3.5.tar.xz
tar xJf ./Python-3.3.5.tar.xz
cd ./Python-3.3.5
./configure --prefix=/opt/python3.3
make && sudo make install