TensorFlow Basic Concepts



Overview of TensorFlow Basic Concepts

TensorFlow 2.0 simplifies many of the complexities of earlier versions, offering a more intuitive, Pythonic, and user-friendly API than TensorFlow 1.0 while keeping the core concepts of deep learning computation. However, understanding key concepts such as computational graphs, sessions, variables, placeholders, constants, and operations is still fundamental to using TensorFlow effectively.

This overview will also help you understand and differentiate between TensorFlow 1 and TensorFlow 2.

Within this content, we cover each of these basic concepts in turn.

What are Computational Graphs?

In TensorFlow, a computational graph is a way to represent mathematical computations. Nodes in the graph represent operations (e.g., addition, multiplication), and edges represent tensors, which are multi-dimensional arrays used to pass data between nodes.

In TensorFlow 1.0, users had to build and run this graph explicitly. In TensorFlow 2.0, thanks to eager execution, you interact with the graph more implicitly and naturally.

Static Graph (TF 1.0 style)

You can still create static graphs using the tf.function decorator, which transforms a Python function into a TensorFlow graph.

In the example below, the decorated function executes as a graph and returns the result.

import tensorflow as tf

# The decorator compiles the Python function into a TensorFlow graph
@tf.function
def my_function(x, y):
    return x * y + y

# 3 * 4 + 4 = 16
result = my_function(3, 4)

print(result)

# Outcome: tf.Tensor(16, shape=(), dtype=int32)
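
If you want to confirm that tf.function really does build a graph behind the scenes, the sketch below (a minimal illustration using the public get_concrete_function method of a tf.function) prints the underlying graph and the operations it contains.

import tensorflow as tf

@tf.function
def my_function(x, y):
    return x * y + y

# Tracing against tensor specs produces a concrete function backed by a tf.Graph
concrete = my_function.get_concrete_function(
    tf.TensorSpec(shape=(), dtype=tf.int32),
    tf.TensorSpec(shape=(), dtype=tf.int32),
)

print(concrete.graph)
for op in concrete.graph.get_operations():
    print(op.name, op.type)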

Eager Execution

In TensorFlow 2.0, eager execution is enabled by default, which means operations are evaluated immediately. You don’t need to explicitly create or execute a graph, which makes the code easier to debug and more Pythonic.

Eager execution is the default in TensorFlow 2.0, so the following code immediately returns the result (5.0).

import tensorflow as tf

a = tf.constant(2.0)
b = tf.constant(3.0)
c = a + b

print(c)

# Outcome: tf.Tensor(5.0, shape=(), dtype=float32)

Sessions in TensorFlow

In TensorFlow 1.0, a session was used to execute operations within a computational graph. You first had to build a graph and then run it in a session to get the results. However, with TensorFlow 2.0, sessions have been replaced with eager execution, so you can simply run operations as Python code.

TensorFlow 1.0 Session

Please be aware that the following code produces the error “module ‘tensorflow’ has no attribute ‘Session’” in TensorFlow 2. It only runs under TensorFlow 1 and is shown here purely for demonstration.

import tensorflow as tf

a = tf.constant(3.0)
b = tf.constant(4.0)
c = a + b

with tf.Session() as sess:
    result = sess.run(c)
    print(result) 

# error in TF2 - module 'tensorflow' has no attribute 'Session'
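
If you need to run session-style code on a TensorFlow 2 installation, the tf.compat.v1 module still exposes the old API. The following is a minimal sketch of that legacy path, assuming eager execution is disabled first.

import tensorflow as tf

# Switch back to graph mode so the TF1-style session API can be used
tf.compat.v1.disable_eager_execution()

a = tf.constant(3.0)
b = tf.constant(4.0)
c = a + b

with tf.compat.v1.Session() as sess:
    result = sess.run(c)
    print(result)  # 7.0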

TensorFlow 2.0 Eager Execution (no session required)

import tensorflow as tf

a = tf.constant(3.0)
b = tf.constant(4.0)
c = a + b

print(c.numpy())

# Output: 7.0

Variables in TensorFlow

A variable in TensorFlow is a special type of tensor that can change its value during training. It’s typically used for storing weights and biases in a machine learning model.

First, we define a variable my_var. Then, we update its value through the .assign method, changing it from (1.0, 2.0, 3.0) to (3.0, 2.0, 1.0).

Create a Variable in TensorFlow

import tensorflow as tf

my_var = tf.Variable([1.0, 2.0, 3.0], dtype=tf.float32)

print(my_var)

Update a Variable in TensorFlow

my_var.assign([3.0, 2.0, 1.0])

print(my_var)
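
In practice, you rarely call .assign by hand. During training, variables are typically updated from gradients. The following is a minimal sketch of that idea; the toy loss and learning rate are made up purely for illustration.

import tensorflow as tf

w = tf.Variable(2.0)   # a trainable parameter
learning_rate = 0.1    # illustrative value

# Record operations so a gradient can be computed
with tf.GradientTape() as tape:
    loss = (w - 5.0) ** 2   # toy loss, minimised when w == 5

grad = tape.gradient(loss, w)
w.assign_sub(learning_rate * grad)   # gradient-descent style update

print(w.numpy())   # w has moved from 2.0 towards 5.0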

Placeholders in TensorFlow

In TensorFlow 1.0, placeholders were used to feed data into a TensorFlow graph at runtime. They were placeholders for tensors that would be provided during the execution. In TensorFlow 2.0, placeholders are no longer necessary because you can directly pass data into the operations during eager execution.

As with sessions, the placeholder code below produces the error “module ‘tensorflow’ has no attribute ‘placeholder’” in TensorFlow 2. It is shown only for demonstration.

Placeholder (TensorFlow 1.0)

import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None, 3])
y = x * 2

with tf.Session() as sess:
    result = sess.run(y, feed_dict={x: [[1, 2, 3], [4, 5, 6]]})
    print(result)

Placeholder (TensorFlow 2.0 – no placeholders required)

import tensorflow as tf

def multiply_by_two(x):
    return x * 2

result = multiply_by_two(tf.constant([[1, 2, 3], [4, 5, 6]]))

print(result)
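
If you still want to declare an expected shape and dtype up front, the closest TensorFlow 2 analogue is a tf.function with an input_signature, which plays a role similar to the old placeholder declaration. A minimal sketch:

import tensorflow as tf

# Declare the expected shape and dtype up front, similar in spirit to a placeholder
@tf.function(input_signature=[tf.TensorSpec(shape=[None, 3], dtype=tf.float32)])
def multiply_by_two(x):
    return x * 2

result = multiply_by_two(tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]))

print(result)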

Constants in TensorFlow

A constant in TensorFlow is a fixed value tensor that does not change during execution. It defines values that will remain unchanged throughout the computation.

Create Constants

First, we create 2 constants (with values 5 and 6). Then, we perform an addition operation.

import tensorflow as tf

const_a = tf.constant(5)
const_b = tf.constant(6)

result = const_a + const_b

print(result.numpy())

# Output: 11
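
Unlike variables, constants cannot be modified after creation; a constant tensor has no assign method, so any “change” means creating a new tensor. A short sketch of the contrast:

import tensorflow as tf

const_a = tf.constant(5)

# Constants are immutable: there is no assign method on a constant tensor
# const_a.assign(7)   # raises AttributeError

# To "change" the value, create a new tensor instead
const_a = tf.constant(7)

print(const_a.numpy())

# Output: 7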

Operations in TensorFlow

Operations in TensorFlow define the computational tasks you want to perform. Examples include addition, multiplication, and other mathematical operations.

TensorFlow also supports more complex operations like matrix multiplication.

Basic Operations in TensorFlow

import tensorflow as tf

# Define constants
a = tf.constant(10)
b = tf.constant(5)

# Operations
add = tf.add(a, b)  # Addition
subtract = tf.subtract(a, b)  # Subtraction
multiply = tf.multiply(a, b)  # Multiplication
divide = tf.divide(a, b)  # Division

print("Addition:", add.numpy())
print("Subtraction:", subtract.numpy())
print("Multiplication:", multiply.numpy())
print("Division:", divide.numpy())

Matrix Operations

import tensorflow as tf

# Define two matrices
matrix1 = tf.constant([[1, 2], [3, 4]])
matrix2 = tf.constant([[5, 6], [7, 8]])

# Perform matrix multiplication
result = tf.matmul(matrix1, matrix2)

print(result.numpy())

# Output:
# [[19 22]
#  [43 50]]
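
Beyond element-wise arithmetic and matrix multiplication, TensorFlow provides many other operations, such as reductions that aggregate values across tensor dimensions. A short sketch using tf.reduce_sum and tf.reduce_mean:

import tensorflow as tf

values = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])

total = tf.reduce_sum(values)                 # sum of all elements -> 21.0
row_means = tf.reduce_mean(values, axis=1)    # mean of each row -> [2.0, 5.0]

print(total.numpy())
print(row_means.numpy())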
