I ran the following code:
import tensorflow as tf

W = tf.Variable(tf.zeros([1, 3]), dtype=tf.float32, name="W")
B = tf.constant([[1, 2, 3]], dtype=tf.float32, name="B")
act = tf.add(W, B)

init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    sess.run(act)
    writer = tf.summary.FileWriter("./graphs", sess.graph)
    writer.close()
And verified it with TensorBoard:
What confuses me is the Read operation and the node before it, labeled (W). Constant B feeds directly into the Add operation, while the tf.Variable has all of these operation nodes inside it. Here are my questions:
What is the (W) operation? Constant B is drawn as a small circle, which denotes a constant, while oval nodes denote operations. (W) doesn't seem like any operation, yet it is drawn with the same oval shape. What is that node's job?
The Add node consumes (W) through an explicit Read operation, whereas constant B is consumed directly. Why is a Read operation necessary for variable nodes?
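For reference, the nodes I see in TensorBoard can also be listed programmatically. This is a minimal sketch of what I tried; it assumes TF 2.x with the v1 compatibility layer (under pure TF 1.x the import would just be `import tensorflow as tf`), and the exact op names/types may differ between TF versions:

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

# Same graph as above
W = tf.Variable(tf.zeros([1, 3]), name="W")
B = tf.constant([[1, 2, 3]], dtype=tf.float32, name="B")
act = tf.add(W, B)

# List every operation the graph actually contains, including the
# nodes that tf.Variable creates internally (assign, read, etc.)
for op in tf.get_default_graph().get_operations():
    print(op.name, "-", op.type)
```

This shows that `tf.Variable` alone expands into several graph operations (the variable node itself plus its initializer/assign and read ops), while `tf.constant` contributes a single Const node.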