from tensorflow.python.client import device_lib

def get_available_gpus():
    # List every device TensorFlow has registered on this machine
    devices = device_lib.list_local_devices()
    # return [x.name for x in devices if x.device_type == 'GPU']
    return [x.name for x in devices]

print(get_available_gpus())
['/cpu:0']
Currently, TensorFlow can see and execute only on the CPU.
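Two related checks that may help narrow this down (a minimal sketch, assuming a TensorFlow 1.x install; tf.test.is_built_with_cuda() and tf.test.gpu_device_name() are part of the standard API): whether the installed wheel was compiled with CUDA at all, and whether it can find a GPU at runtime.

import tensorflow as tf

# False here means the installed wheel is a CPU-only build,
# so no GPU will ever show up regardless of drivers.
print(tf.test.is_built_with_cuda())

# An empty string here means TensorFlow found no usable GPU device.
print(tf.test.gpu_device_name())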
The machine is a MacBook Pro (i7, Late 2013); deviceQuery reports:
Device 0: "GeForce GT 750M"
  CUDA Driver Version / Runtime Version:        8.0 / 8.0
  CUDA Capability Major/Minor version number:   3.0
  Total amount of global memory:                2048 MBytes (2147024896 bytes)
  ( 2) Multiprocessors, (192) CUDA Cores/MP:    384 CUDA Cores
  GPU Max Clock rate:                           926 MHz (0.93 GHz)
  Memory Clock rate:                            2508 MHz
  Memory Bus Width:                             128-bit
  L2 Cache Size:                                262144 bytes
I also tried disabling automatic GPU switching, as described at http://osxdaily.com/2017/01/08/disable-gpu-switching-macbook-pro/, but TensorFlow still does not see the GPU.
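One way to get a more informative failure (a sketch with hypothetical shapes, separate from the CPU timing test below): pin an op to '/gpu:0' and build the session with allow_soft_placement=False, so TensorFlow raises an error describing the registered devices instead of silently falling back to the CPU.

import tensorflow as tf

with tf.device('/gpu:0'):
    a = tf.constant([1.0, 2.0, 3.0], shape=[1, 3], name='a')
    b = tf.constant([1.0, 2.0, 3.0], shape=[3, 1], name='b')
    c = tf.matmul(a, b)

# With soft placement disabled, running c raises an InvalidArgumentError
# listing the available devices if no GPU is registered in this process.
config = tf.ConfigProto(allow_soft_placement=False, log_device_placement=True)
sess = tf.Session(config=config)
print(sess.run(c))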
import timeit
import tensorflow as tf

# Note: timeit.timeit() with no arguments benchmarks an empty statement;
# it does not read a clock (see the corrected timing sketch after the output).
start = timeit.timeit()
print("starting")
with tf.device('/cpu:0'):
    # Two constant matrices with shapes [2, 3] and [3, 2]
    a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
    b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
    c = tf.matmul(a, b)

# Creates a session with log_device_placement set to True.
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
# Runs the op.
print(sess.run(c))
end = timeit.timeit()
print("elapsed", end - start)
starting
[[ 22.  28.]
 [ 49.  64.]]
elapsed -0.0033593010011827573
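The negative elapsed value comes from the measurement, not from TensorFlow: timeit.timeit() with no arguments times one million runs of an empty statement and returns that duration, so subtracting two such calls gives noise. A minimal sketch of the timing done with timeit.default_timer() instead, with the rest of the script unchanged:

import timeit

start = timeit.default_timer()  # reads a monotonic wall-clock timer
print("starting")
# ... build the graph and call sess.run(c) as above ...
end = timeit.default_timer()
print("elapsed", end - start)   # now a small positive number of seconds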