I have Keras installed with the Tensorflow backend and CUDA. I'd like to sometimes on demand force Keras to use CPU. Can this be done without say installing a separate CPU-only Tensorflow in a virtual environment? If so how? If the backend were Theano, the flags could be set, but I have not heard of Tensorflow flags accessible via Keras.
6 Answers
#1
49
If you want to force Keras to use the CPU:
Way 1
import os
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID" # see issue #152
os.environ["CUDA_VISIBLE_DEVICES"] = ""
before Keras / Tensorflow is imported.
Way 2
Run your script as
$ CUDA_VISIBLE_DEVICES="" ./your_keras_code.py
See also
- https://github.com/keras-team/keras/issues/152
- https://github.com/fchollet/keras/issues/4613
#2
31
A rather graceful and separable way of doing this is to use
import tensorflow as tf
from keras import backend as K

CPU = True        # example toggles; the answer assumes you define these
GPU = not CPU
num_cores = 4

if GPU:
    num_GPU = 1
    num_CPU = 1
if CPU:
    num_CPU = 1
    num_GPU = 0

config = tf.ConfigProto(intra_op_parallelism_threads=num_cores,
                        inter_op_parallelism_threads=num_cores,
                        allow_soft_placement=True,
                        device_count={'CPU': num_CPU, 'GPU': num_GPU})
session = tf.Session(config=config)
K.set_session(session)
Here, with the booleans GPU and CPU, you can specify whether to use a GPU or a CPU when running your code. Notice that I do this by specifying that there are 0 GPU devices when I want to use only the CPU. As an added bonus, this method also lets you specify how many GPUs and CPUs to use, and via num_cores you can set the number of CPU cores to use.
All of this is executed in the constructor of my class, before any other operations, and is completely separable from any model, or other code I use.
The only thing to note is that you'll need tensorflow-gpu and cuda/cudnn installed, because you're always giving yourself the option of using a GPU.
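For what it's worth, ConfigProto and Session are TensorFlow 1.x APIs. Under TensorFlow 2.x the equivalent knobs look roughly like this (a sketch, assuming TF >= 2.1; like the 1.x version, it must run before any tensors or ops are created):

```python
import tensorflow as tf

# Hide all GPUs from the runtime: the TF2 counterpart
# of device_count={'GPU': 0}.
tf.config.set_visible_devices([], 'GPU')

# Thread-pool sizes: the TF2 counterparts of
# intra_op_parallelism_threads / inter_op_parallelism_threads.
tf.config.threading.set_intra_op_parallelism_threads(4)
tf.config.threading.set_inter_op_parallelism_threads(4)
```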
#3
22
This worked for me (win10); place it before you import keras:
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '-1'
#4
15
As per the keras tutorial, you can simply use the same tf.device scope as in regular tensorflow:
with tf.device('/gpu:0'):
    x = tf.placeholder(tf.float32, shape=(None, 20, 64))
    y = LSTM(32)(x)  # all ops in the LSTM layer will live on GPU:0

with tf.device('/cpu:0'):
    x = tf.placeholder(tf.float32, shape=(None, 20, 64))
    y = LSTM(32)(x)  # all ops in the LSTM layer will live on CPU:0
#5
6
Just import tensorflow and use keras; it's that easy.
import tensorflow as tf

# your code here
with tf.device('/gpu:0'):
    model.fit(X, y, epochs=20, batch_size=128, callbacks=callbacks_list)
#6
2
I just spent some time figuring this out. Thoma's answer is not complete. Say your program is test.py, and you want to use gpu0 to run it while keeping the other GPUs free. You should write CUDA_VISIBLE_DEVICES=0 python test.py
Notice it's DEVICES, not DEVICE.
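To make the mechanism concrete: CUDA_VISIBLE_DEVICES is an ordinary environment variable, inherited by the child process and read by the CUDA runtime at startup. You can check what a launched process will see without involving Keras at all (a quick sanity check; python3 is assumed to be on PATH):

```shell
# The value set on the command line is inherited by the python process;
# the CUDA runtime reads it at startup to decide which GPUs exist.
CUDA_VISIBLE_DEVICES=0 python3 -c 'import os; print(os.environ["CUDA_VISIBLE_DEVICES"])'
```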