Can Keras with Tensorflow backend be forced to use CPU or GPU at will?

  • I have Keras installed with the TensorFlow backend and CUDA. I'd like to sometimes, on demand, force Keras to use the CPU. Can this be done without, say, installing a separate CPU-only TensorFlow in a virtual environment? If so, how? If the backend were Theano, the flags could be set, but I have not heard of TensorFlow flags being accessible via Keras.
      September 23, 2021 11:04 PM IST
    0
  • Just import tensorflow and use Keras inside a device scope; it's that easy:

    import tensorflow as tf

    # your code here
    with tf.device('/gpu:0'):
        model.fit(X, y, epochs=20, batch_size=128, callbacks=callbacks_list)
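    The same with tf.device(...) scope also forces the CPU. A minimal, self-contained sketch (assuming TF 2.x eager mode; a toy matmul stands in for the model.fit call above, since X, y, and model are not defined here):

    ```python
    import tensorflow as tf

    # Pin this computation to the CPU even when a GPU is available.
    with tf.device('/cpu:0'):
        x = tf.constant([[1.0, 2.0], [3.0, 4.0]])
        y = tf.matmul(x, x)

    print(y.numpy())
    ```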
      September 24, 2021 12:29 PM IST
    0
  • A rather separable way of doing this is to use

    import tensorflow as tf
    from keras import backend as K
    
    num_cores = 4

    # Set exactly one of these booleans to choose the device:
    GPU = False
    CPU = True

    if GPU:
        num_GPU = 1
        num_CPU = 1
    if CPU:
        num_CPU = 1
        num_GPU = 0
    
    # Note: ConfigProto and Session are TF 1.x APIs; under TF 2.x use
    # tf.compat.v1.ConfigProto and tf.compat.v1.Session.
    config = tf.ConfigProto(intra_op_parallelism_threads=num_cores,
                            inter_op_parallelism_threads=num_cores,
                            allow_soft_placement=True,
                            device_count={'CPU': num_CPU,
                                          'GPU': num_GPU})
    
    session = tf.Session(config=config)
    K.set_session(session)


    Here, with the booleans GPU and CPU, we indicate whether we would like to run our code on the GPU or the CPU by rigidly defining the number of GPUs and CPUs the TensorFlow session is allowed to access. The variables num_GPU and num_CPU define this value, and num_cores sets the number of CPU cores available for use via intra_op_parallelism_threads and inter_op_parallelism_threads.

    The intra_op_parallelism_threads variable dictates the number of threads that a single parallelizable operation (one node in the computation graph) is allowed to use (intra), while the inter_op_parallelism_threads variable defines the number of threads available for running independent operations across the nodes of the computation graph (inter).

    allow_soft_placement allows operations to be run on the CPU if any of the following criteria are met:

    there is no GPU implementation for the operation

    there are no GPU devices known or registered

    there is a need to co-locate with other inputs from the CPU

    All of this is executed in the constructor of my class before any other operations, and is completely separable from any model or other code I use.

    Note: this requires tensorflow-gpu and CUDA/cuDNN to be installed, because the option to use a GPU is offered.
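    Under TensorFlow 2.x, where ConfigProto and Session no longer exist, a rough equivalent of the snippet above (assuming TF ≥ 2.1; these calls must run before any op executes) would be:

    ```python
    import tensorflow as tf

    # Hide all GPUs from this process (the TF 2.x counterpart of
    # device_count={'GPU': 0}); must run before any tensor is created.
    tf.config.set_visible_devices([], 'GPU')

    # Counterparts of intra_op/inter_op_parallelism_threads:
    tf.config.threading.set_intra_op_parallelism_threads(4)
    tf.config.threading.set_inter_op_parallelism_threads(4)
    ```

    No session or backend hookup is needed; eager execution picks the settings up globally.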

    Refs:

    What do the options in ConfigProto like allow_soft_placement and log_device_placement mean?

    Meaning of inter_op_parallelism_threads and intra_op_parallelism_threads

      September 24, 2021 1:46 PM IST
    0
  • Set an environment variable before TensorFlow is imported to hide all GPUs:

    import os
    os.environ['CUDA_VISIBLE_DEVICES'] = '-1'
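    A self-contained sketch of the ordering caveat (assuming TF 2.x): the variable only takes effect if it is set before the first import of TensorFlow, after which TF reports no GPUs:

    ```python
    import os

    # Hide every CUDA device; effective only if set BEFORE TensorFlow
    # is imported anywhere in this process.
    os.environ['CUDA_VISIBLE_DEVICES'] = '-1'

    import tensorflow as tf

    print(tf.config.list_physical_devices('GPU'))  # prints []
    ```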
      September 25, 2021 12:00 AM IST
    0