The Guide to Deep Learning with TensorFlow and Keras - The Beginning


Learn how to build a neural network and how to train, evaluate and optimize it with TensorFlow.


This is a multi-part course in which we will cover everything from the basics to putting models into production.


Today’s TensorFlow tutorial for beginners will introduce you to performing deep learning in an interactive way:



You’ll first learn more about tensors;


Then, you’ll briefly go over some of the ways that you can install TensorFlow on your system so that you’re able to get started and load data into your workspace;


After this, you’ll go over some of the TensorFlow basics: you’ll see how you can easily get started with simple computations.





What is TensorFlow?


TensorFlow is a popular open source library that's used for implementing machine learning and deep learning.


It was initially built at Google for internal consumption and was released publicly on November 9, 2015.


Since then, TensorFlow has been extensively used to develop machine learning and deep learning models in several business domains.




To use TensorFlow in our projects, we need to learn how to program using the TensorFlow API. 



TensorFlow has multiple APIs that can be used to interact with the library. The TensorFlow APIs are divided into two levels:




Low-level API: The API known as TensorFlow core provides fine-grained, lower-level functionality. Because of this, the low-level API offers complete control over how models are defined and executed. We will cover TensorFlow core in this post.




High-level API: These APIs provide high-level functionalities that have been built on TensorFlow core and are comparatively easier to learn and implement. Some high-level APIs include Estimators, Keras, TFLearn, TFSlim, and Sonnet.
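To give a rough feel for the difference, here is a minimal sketch of what a model definition looks like with the high-level Keras API. The layer sizes and input shape below are arbitrary, picked purely for illustration; we won't need this in the rest of the post, since we are focusing on TensorFlow core.

from tensorflow import keras

# A tiny fully-connected model, for illustration only.
model = keras.Sequential([
    keras.layers.Dense(32, activation='relu', input_shape=(10,)),
    keras.layers.Dense(1, activation='sigmoid')
])

# Compiling wires the model to an optimizer and a loss function.
model.compile(optimizer='adam', loss='binary_crossentropy')
model.summary()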



The TensorFlow core


The TensorFlow core is the lower-level API on which the higher-level TensorFlow modules are built. In this section, we will go over a quick overview of TensorFlow core and learn about its basic elements.



Here we go! Let’s begin with the fundamentals of TensorFlow.🙏🙏



Setting up TensorFlow

TensorFlow is tested and supported on the following 64-bit systems:




  • Ubuntu 16.04 or later
  • Windows 7 or later
  • macOS 10.12.6 (Sierra) or later (no GPU support)
  • Raspbian 9.0 or later


##### Current release for CPU-only
pip install tensorflow

##### Nightly build for CPU-only (unstable)
pip install tf-nightly

##### GPU package for CUDA-enabled GPU cards
pip install tensorflow-gpu

##### Nightly build with GPU support (unstable)
pip install tf-nightly-gpu
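
Once the install finishes, a quick sanity check is to import the library and print its version (the exact version string will depend on which of the packages above you picked):

import tensorflow as tf
print(tf.__version__)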

Let's spice things up with a Hello World! example.


import tensorflow as tf
hello = tf.constant("hello world")
sess = tf.Session()
print(sess.run(hello))
print('--------------')
--------------------
b'hello world'
--------------

We will go through, line by line, what we have written and how we implemented our Hello World! example.
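
Before we move on, here is the same snippet once more, with a comment against each line (this session-based workflow assumes TensorFlow 1.x, which is what this post uses throughout):

import tensorflow as tf               # import the TensorFlow library

hello = tf.constant("hello world")    # define a constant string tensor; nothing is executed yet
sess = tf.Session()                   # open a session, the connection to the TensorFlow runtime
print(sess.run(hello))                # run the node inside the session and print its value
sess.close()                          # release the session's resources when we are done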

TensorFlow fundamentals


First, we’re going to take a look at the tensor object type. Then we’ll build a graphical understanding of how TensorFlow defines computations. Finally, we’ll run those graphs with sessions and see how to feed in values at runtime.



Tensors



Tensors are the basic components in TensorFlow. A tensor is a multidimensional collection of data elements.




It is generally identified by shape, type, and rank. Rank refers to the number of dimensions of a tensor, while shape refers to the size of each dimension.




You may have seen several examples of tensors before, such as a zero-dimensional collection (also known as a scalar), a one-dimensional collection (also known as a vector), and a two-dimensional collection (also known as a matrix).






A scalar value is a tensor of rank 0 and shape []. A vector, or one-dimensional array, is a tensor of rank 1 and shape [number_of_columns] or [number_of_rows]. A matrix, or two-dimensional array, is a tensor of rank 2 and shape [number_of_rows, number_of_columns].
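
As a small sketch, here is how those ranks and shapes show up in code; printing the shape attribute gives the static shape without running a session:

scalar = tf.constant(5)                    # rank 0, shape ()
vector = tf.constant([1, 2, 3])            # rank 1, shape (3,)
matrix = tf.constant([[1, 2], [3, 4]])     # rank 2, shape (2, 2)

print(scalar.shape)   # ()
print(vector.shape)   # (3,)
print(matrix.shape)   # (2, 2)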


Let's create some constants with the following code:




const1 = tf.constant(34, name='x1')
const2 = tf.constant(59.0, name='y1')
const3 = tf.constant(32.0, dtype=tf.float16, name='z1')

print('const1 (x): ', const1)
print('const2 (y): ', const2)
print('const3 (z): ', const3)




Let's take a look at the preceding code in detail:




  • The first line of code defines a constant tensor, const1, that stores a value of 34 and is named x1.
  • The second line of code defines a constant tensor, const2, that stores a value of 59.0 and is named y1.
  • The third line of code sets the data type of const3 to tf.float16. You can use the dtype parameter, or pass the data type as the second positional argument, to specify the data type.

Example:


hello = tf.constant("Hello ")
world = tf.constant("World")
type(hello)
print(hello)


with tf.Session() as sess:
    result = sess.run(hello + world)


print(result)

--------------------
tensorflow.python.framework.ops.Tensor
Tensor("Const_1:0", shape=(), dtype=string)
b'Hello World'


Constants


Constant-valued tensors are created using the tf.constant() function, which has the following signature:


Syntax:





tf.constant(
    value,
    dtype=None,
    shape=None,
    name='Const',
    verify_shape=False
)
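
One parameter worth calling out is shape: if you pass a single value together with a shape, the value is repeated to fill a tensor of that shape. A small sketch (the name 'filled' is just an illustrative choice):

filled = tf.constant(7.0, shape=(2, 3), name='filled')   # a 2x3 tensor, every element 7.0
print(filled)   # Tensor("filled:0", shape=(2, 3), dtype=float32)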


Let's create some constants with the following code:





a = tf.constant(10)
b = tf.constant(20)

const1 = tf.constant(34, name='x1')
const2 = tf.constant(59.0, name='y1')
const3 = tf.constant(32.0, dtype=tf.float16, name='z1')

print(a)
print(b)
print('const1 (x): ', const1)
print('const2 (y): ', const2)
print('const3 (z): ', const3)
------------------------------
Tensor("Const_1:0", shape=(), dtype=int32)
Tensor("Const_2:0", shape=(), dtype=int32)
const1 (x): Tensor("x1:0", shape=(), dtype=int32)
const2 (y): Tensor("y1:0", shape=(), dtype=float32)
const3 (z): Tensor("z1:0", shape=(), dtype=float16)



Operations


How can we do addition and multiplication in TensorFlow?




The TensorFlow library contains several built-in operations that can be applied on tensors. 




An operation node can be defined by passing input values and saving the output in another tensor. To understand this better, let's define two operations.



a = tf.constant(10)
b = tf.constant(20)
type(a)

with tf.Session() as sess:
    result = sess.run(a + b)
    mul = sess.run(tf.multiply(a, b))

print(result)
print(mul)
------------------
30
200





Some of the built-in operations of TensorFlow include arithmetic operations, math functions, and complex number operations.
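
To give a feel for these, here is a small sketch that runs a few of the arithmetic and math functions on scalar constants (the values are arbitrary):

x = tf.constant(9.0)
y = tf.constant(2.0)

with tf.Session() as sess:
    print(sess.run(tf.subtract(x, y)))   # 7.0
    print(sess.run(tf.divide(x, y)))     # 4.5
    print(sess.run(tf.pow(x, y)))        # 81.0
    print(sess.run(tf.sqrt(x)))          # 3.0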




Working with Matrices in TensorFlow



We can easily create n×n matrices in TensorFlow using the built-in functions. We'll dive directly into an example to understand the code.




const = tf.constant(10)



## Building a 4*4 matrix with every element set to 10
## We use tf.fill() to fill the matrix with a default value
## tf.fill((rows, cols), value) fills a tensor of the given shape with value.
fill_mat = tf.fill((4, 4), 10)


## Creating a 4*4 matrix of zeros
## tf.zeros() creates a zero matrix
myzeros = tf.zeros((4, 4))


## Creating a matrix of ones
## tf.ones() creates a matrix filled with ones

myones = tf.ones((4, 4))

## Creating a random normally-distributed matrix.
## tf.random_normal outputs random values drawn from a normal distribution.
## mean: a 0-D Tensor or Python value of type dtype. stddev: a 0-D Tensor or Python value of type dtype.

myrand = tf.random_normal((4, 4), mean=0, stddev=1.0)

myrandu = tf.random_uniform((4, 4), minval=0, maxval=1)

my_ops = [const, fill_mat, myzeros, myones, myrand, myrandu]

sess = tf.InteractiveSession()


for op in my_ops:
    print(sess.run(op))
    print("\n")


--------------------------------------------------
10


[[10 10 10 10]
[10 10 10 10]
[10 10 10 10]
[10 10 10 10]]


[[0. 0. 0. 0.]
[0. 0. 0. 0.]
[0. 0. 0. 0.]
[0. 0. 0. 0.]]


[[1. 1. 1. 1.]
[1. 1. 1. 1.]
[1. 1. 1. 1.]
[1. 1. 1. 1.]]


[[-1.1485276 0.6817215 1.6923033 1.0417686 ]
[ 0.75727195 -0.6906906 1.382049 0.26310864]
[-1.3289255 -2.0204604 0.9086128 -1.6753776 ]
[ 0.83860254 0.8221855 -0.01571688 0.33962643]]


[[0.4758848 0.02705026 0.45411873 0.9472964 ]
[0.03372979 0.04275322 0.51311064 0.1727488 ]
[0.38706803 0.29606903 0.17789984 0.97908235]
[0.2033397 0.9660599 0.6367506 0.9244758 ]]
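
Once you have matrices, the operation you will reach for most often is matrix multiplication. Here is a minimal sketch using tf.matmul (the values are arbitrary):

a = tf.constant([[1, 2],
                 [3, 4]])
b = tf.constant([[10, 20],
                 [30, 40]])

# tf.matmul performs standard matrix multiplication on rank-2 tensors.
product = tf.matmul(a, b)

with tf.Session() as sess:
    print(sess.run(product))
    # [[ 70 100]
    #  [150 220]]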



Placeholders


While constants store the value at the time of defining the tensor, placeholders allow you to create empty tensors so that the values can be provided at runtime. 




The TensorFlow library provides the tf.placeholder() function with the following signature to create placeholders:


Syntax:




tf.placeholder(
  dtype,
  shape=None,
  name=None
  )

Let's see an example:




p1 = tf.placeholder(tf.float32)
p2 = tf.placeholder(tf.float32)
print('p1 : ', p1)
print('p2 : ', p2)

--------------------
p1 :  Tensor("Placeholder:0", dtype=float32)
p2 :  Tensor("Placeholder_1:0", dtype=float32)
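
The placeholders above are still empty; the actual values are supplied when the graph is run, through the feed_dict argument of Session.run. A minimal sketch (the numbers are arbitrary):

p1 = tf.placeholder(tf.float32)
p2 = tf.placeholder(tf.float32)
total = p1 + p2

with tf.Session() as sess:
    # feed_dict maps each placeholder to the value it should take for this run.
    print(sess.run(total, feed_dict={p1: 10.0, p2: 32.0}))   # 42.0
    print(sess.run(total, feed_dict={p1: 1.5, p2: 2.5}))     # 4.0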


Computation graph / TensorFlow Graph


A computation graph is the basic unit of computation in TensorFlow. A computation graph consists of nodes and edges. Each node represents an instance of tf.Operation,


while each edge represents an instance of tf.Tensor that gets transferred between the nodes.


A model in TensorFlow contains a computation graph. First, you must create the graph with the nodes representing variables, constants, placeholders, and operations, and then provide the graph to the TensorFlow execution engine.




The TensorFlow execution engine finds the first set of nodes that it can execute. The execution of these nodes then triggers the execution of the nodes that depend on them, following the sequence of the computation graph.


Thus, TensorFlow-based programs are made up of performing two types of activities on computation graphs:



  • Defining the computation graph
  • Executing the computation graph



A TensorFlow program starts execution with a default graph. Unless another graph is explicitly specified, a new node gets implicitly added to the default graph. Explicit access to the default graph can be obtained using the following command:




graph = tf.get_default_graph()




n1 = tf.constant(1)
n2 = tf.constant(2)
n3 = n1 + n2
with tf.Session() as sess:
    result = sess.run(n3)

result
print(tf.get_default_graph())
g = tf.Graph()
print(g)
graph_one = tf.get_default_graph()
print(graph_one)
graph_two = tf.Graph()
with graph_two.as_default():
    print(graph_two is tf.get_default_graph())
--------------------------------------

<tensorflow.python.framework.ops.Graph object at 0x0000022C46285CF8>
<tensorflow.python.framework.ops.Graph object at 0x0000022C47F8D390>
<tensorflow.python.framework.ops.Graph object at 0x0000022C46285CF8>

True


Here are the advantages of organizing the computations as a graph:




  • Parallelism. By using explicit edges to represent dependencies between operations, it is easy for the system to identify operations that can execute in parallel.
  • Distributed execution. By using explicit edges to represent the values that flow between operations, it is possible for TensorFlow to partition your program across multiple devices (CPUs, GPUs, and TPUs) attached to different machines. TensorFlow inserts the necessary communication and coordination between devices.
  • Compilation. TensorFlow’s XLA compiler can use the information in your dataflow graph to generate faster code, for example, by fusing together adjacent operations.


Session


We have seen in all the earlier examples how we run our TensorFlow graph inside a TensorFlow session.



TensorFlow uses the tf.Session class to represent a connection between the client program (typically a Python program, although a similar interface is available in other languages) and the C++ runtime.



A tf.Session object provides access to devices in the local machine, and remote devices using the distributed TensorFlow runtime. It also caches information about your tf.Graph so that you can efficiently run the same computation multiple times.


Syntax:


If you are using the low-level TensorFlow API, you can create a tf.Session for the current default graph as follows:

# Create a default in-process session.
with tf.Session() as sess:
  # ...

# Create a remote session.
with tf.Session("grpc://example.org:2222"):
  # ...


Since a tf.Session owns physical resources (such as GPUs and network connections), it is typically used as a context manager (in a with block) that automatically closes the session when you exit the block. It is also possible to create a session without using a with block, but you should then explicitly call tf.Session.close when you are finished with it to free the resources.
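
For completeness, here is a sketch of the explicit form without a with block:

sess = tf.Session()                  # create the session explicitly
result = sess.run(tf.constant(42))   # run a node in the graph
print(result)                        # 42
sess.close()                         # free the session's resources when finished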


That's all for the start. I hope you can begin practicing TensorFlow along with me, and soon we can dive deeper and explore more about it.



We’ll be back with more exciting discussions, not just on building deep learning models but also on building a robust infrastructure to store, consume, and process data at scale.


 Till then, happy coding!💓💓

Hey, I'm Venkat
Developer, blogger, thinker, and data scientist. nintyzeros [at] gmail.com. I love data and problems. An Indian living in the US. If you have any questions, do reach out to me via the social media links below.

