I tensorflow/core/common_runtime/gpu/gpu_device.cc:885] Found device 0 with properties:
name: GeForce GTX TITAN Black
major: 3 minor: 5 memoryClockRate (GHz) 0.98
pciBusID 0000:01:00.0
Total memory: 5.94GiB
Free memory: 5.31GiB
I tensorflow/core/common_runtime/gpu/gpu_device.cc:906] DMA: 0
I tensorflow/core/common_runtime/gpu/gpu_device.cc:916] 0: Y
I tensorflow/core/common_runtime/gpu/gpu_device.cc:975] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX TITAN Black, pci bus id: 0000:01:00.0)
By default, a TensorFlow session allocates nearly all of the GPU memory at startup, so that it can bypass the CUDA allocator.
Do not run more than one CUDA-using library in the same process, or strange failures (like this stream executor error) will happen.
This encapsulation has nothing to do with OOP encapsulation. A slightly better definition (in terms of understanding for a newcomer) is in the Session documentation:
A Session object encapsulates the environment in which Operation objects are executed, and Tensor objects are evaluated.
This means that none of the operations and variables defined in the graph-definition part are executed. For example, nothing is executed or calculated here:
import tensorflow as tf

a = tf.Variable(tf.random_normal([3, 3], stddev=1.))
b = tf.Variable(tf.random_normal([3, 3], stddev=1.))
c = a + b
You will not get the values of the tensors a/b/c here. Their values will be evaluated only inside a Session.