TensorFlow — An Introduction

A brief introduction to TensorFlow

(Image Courtesy: tensorflow.org)

A question often asked when getting started on the Machine Learning (ML) journey is: which tools and frameworks should we use to minimize ramp-up time? The follow-up question is: how much do we build from scratch, and which pieces of the pipeline should we develop with already available libraries? Not surprisingly, the usual advice is to use as many libraries as possible.

In the present-day ML landscape, there are many tools, libraries and frameworks available which can do the heavy lifting, allowing us to focus on the problem at hand.

In general, the steps involved in a typical ML workflow can be described as follows:

  • Gather data
  • Explore data
  • Prepare data
  • Build, train and evaluate model
  • Tune hyperparameters
  • Deploy model

TensorFlow is one of the leading options, providing utilities that help with most of these steps. By using those utilities, we can reduce coding time, maximize reusability and improve the overall workflow.

What is TensorFlow ?

TensorFlow is an end-to-end open source platform for machine learning. It was initially developed by the Google Brain team for internal use and later released as open source. It provides many tools that make it easy for us to build and deploy ML models for desktop, mobile, web and cloud. In addition, it has a good ecosystem of tools and libraries and an active open source community supporting it.

In this article, let’s explore a little more to understand what TensorFlow is. (Of course, this just scratches the surface. For detailed information, the best source is the official site.)

Why TensorFlow ?

Let’s look at the top three aspects that make TensorFlow a good choice for machine learning practitioners.

  • Ease of use

TensorFlow offers multiple levels of abstraction for building and training models. We can use TensorFlow’s high-level Keras API to build and train models quickly. If more flexibility is needed, it provides options to extend and customize the core classes. This allows us to build custom models with minimal overhead and iterate faster.
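As a quick illustration, here is a minimal sketch of the Keras workflow: a single-layer model fitted on a handful of synthetic points. The data and architecture are purely illustrative, not a recommendation.

```python
import numpy as np
import tensorflow as tf

# Toy data: learn the relationship y = 2x + 1 from a few points.
x = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0], dtype=np.float32)
y = 2.0 * x + 1.0

# Build: a single dense layer is enough for a linear relationship.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(1,)),
    tf.keras.layers.Dense(units=1),
])

# Train: compile with an optimizer and loss, then fit.
model.compile(optimizer="sgd", loss="mean_squared_error")
model.fit(x, y, epochs=500, verbose=0)

# Evaluate informally: the prediction for x=10 should be close to 21.
print(model.predict(np.array([[10.0]], dtype=np.float32), verbose=0))
```

The same build/compile/fit pattern scales up to deep networks; only the layers and data change.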

  • Smooth transition from experimentation to production

Machine learning projects are highly iterative. Once the trained model is ready, TensorFlow offers mechanisms to move it to production easily, regardless of the type of production environment, be it edge devices, dedicated servers, mobile or web.

  • Open source and rich ecosystem

When choosing a framework/platform, we also need to consider the ecosystem around it. TensorFlow has a good number of utilities and add-ons available that significantly reduce the effort needed in an ML workflow. Needless to say, its open source nature and the thriving community behind it help a lot.

I’m pretty sure you have a few questions in mind after reading this far. Hold them for now! They will become much clearer as we go through the following sections.

High-level components of TensorFlow

Now, let’s briefly explore the various components of the TensorFlow platform.

  • TensorFlow (Core)

The core part of TensorFlow is used for building and evaluating models. It provides high-level APIs for model building, supports Estimators (pre-made or custom, which encapsulate actions such as training, evaluation, prediction and export for serving), and exposes high-level APIs for distributed training as well as low-level APIs for advanced use cases.
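For a taste of the low-level side, here is a tiny sketch using tf.GradientTape, the automatic-differentiation primitive on which the higher-level training loops are built:

```python
import tensorflow as tf

# A trainable value, as used in custom training loops.
x = tf.Variable(3.0)

# Record operations on the tape so gradients can be computed.
with tf.GradientTape() as tape:
    y = x * x  # y = x^2

# dy/dx = 2x, so at x = 3 the gradient is 6.
dy_dx = tape.gradient(y, x)
print(dy_dx.numpy())  # 6.0
```

Custom models use exactly this mechanism under the hood: compute a loss inside the tape, then apply the resulting gradients with an optimizer.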

At a high level, the TensorFlow programming stack looks like this:

(Image Courtesy: tensorflow.org)
  • TensorFlow.js

TensorFlow.js is a library for machine learning in JavaScript. Using it, we can develop ML models in JavaScript and run them directly in the browser or in Node.js. We can also use off-the-shelf JavaScript models, or convert existing Python TensorFlow models so that they run in the browser. We can re-train existing models or build and train new models directly in JavaScript as well.

  • TensorFlow Lite

TensorFlow Lite can be used to deploy machine learning models on different types of devices. It can run inference on mobile (Android, iOS) and on embedded devices like the Raspberry Pi.

After training a new TensorFlow model (or re-training an existing one), we can convert the model into a compressed FlatBuffer using the TensorFlow Lite Converter. This compressed model can be loaded onto a mobile or embedded device. If further optimization is needed, 32-bit floats can be converted to more efficient 8-bit integers.
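A sketch of that conversion step, with a tiny untrained Keras model standing in for the real trained one:

```python
import tensorflow as tf

# Stand-in for a real trained model.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(2),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
# Optional: let the converter apply its default optimizations,
# which can include quantizing 32-bit floats toward 8-bit integers.
converter.optimizations = [tf.lite.Optimize.DEFAULT]

# The result is a FlatBuffer as a bytes object.
tflite_model = converter.convert()

# This file is what gets shipped to the mobile or embedded device.
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```

On the device, the TensorFlow Lite interpreter loads this file and runs inference on it.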

  • TensorFlow Extended (TFX)

When the ML model is ready to move to production, TFX can be used to create and manage a production pipeline. The pipeline components are built using TFX libraries. These libraries can also be used individually.

Following are some of the underlying libraries:

TensorFlow Data Validation (TFDV)

One of the more time-consuming activities when preparing an ML experiment is exploring and validating data. TensorFlow Data Validation helps developers perform these activities at scale.

TensorFlow Transform

When working with datasets, a good amount of time is spent pre-processing the data and converting it into a suitable format. This involves many steps, such as converting between formats, applying numerical operations and so on. TensorFlow Transform comes in handy for this purpose.
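TensorFlow Transform applies such transformations at scale inside a pipeline; the underlying idea can be sketched with plain TensorFlow ops. The min-max scaling below is just one common example of this kind of pre-processing:

```python
import tensorflow as tf

# Raw feature values, wrapped in a tf.data pipeline.
raw = tf.data.Dataset.from_tensor_slices([10.0, 20.0, 30.0, 40.0])

# Scale values into the [0, 1] range using the known min and max.
lo, hi = 10.0, 40.0
scaled = raw.map(lambda v: (v - lo) / (hi - lo))

print([round(float(v), 2) for v in scaled])  # [0.0, 0.33, 0.67, 1.0]
```

TF Transform's value is that it computes statistics like these min/max values over the full dataset and applies the same transformation consistently at training and serving time.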

TensorFlow Model Analysis (TFMA)

To decide whether a model is working as expected, it needs to be evaluated against defined metrics. TensorFlow Model Analysis helps developers compute evaluation metrics and visualize them. This allows developers to understand the model’s performance better and plan the future course of action accordingly.

TensorFlow Serving

When going through multiple iterations, we may want to version our models and have the ability to roll back to a previous version if the need arises. We may also want to evaluate multiple models at the same time. TensorFlow Serving provides options to do all of this.

Other tools and libraries

  • TensorBoard

Visualization is an essential tool in a machine learning practitioner’s arsenal. It allows us to monitor training progress and verify different metrics, and it helps us decide on the next steps in the training cycle.

TensorBoard is TensorFlow’s visualization toolkit. It provides a large array of options to visualize and track various aspects such as the model graph, metrics (loss, accuracy, etc.) and histograms (for weights, biases, etc.), and it helps profile the program as well.
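Hooking TensorBoard into a Keras training run takes only a callback. A minimal sketch, using throwaway random data:

```python
import numpy as np
import tensorflow as tf

# Throwaway data, just to have something to train on.
x = np.random.rand(32, 4).astype("float32")
y = np.random.rand(32, 1).astype("float32")

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# The callback writes event logs for every epoch into log_dir.
tb = tf.keras.callbacks.TensorBoard(log_dir="logs")
model.fit(x, y, epochs=2, callbacks=[tb], verbose=0)
```

After the run, `tensorboard --logdir logs` serves the dashboards in the browser.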

(Image Courtesy: tensorflow.org)
  • Datasets

Data is the lifeblood of any ML experiment. The availability of a well-defined dataset in a ready-to-use state removes a lot of friction during the initial stages. TensorFlow provides a collection of datasets which can be used directly within TensorFlow programs. These datasets are exposed as tf.data.Dataset objects, facilitating ease of use. The collection contains audio, image, text and video datasets.
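These ready-made datasets plug into the tf.data input-pipeline API. Here is a minimal sketch of such a pipeline built from scratch, showing the typical shuffle/batch/prefetch chain:

```python
import tensorflow as tf

ds = tf.data.Dataset.range(10)       # elements 0..9
ds = ds.shuffle(buffer_size=10)      # randomize the order each epoch
ds = ds.batch(4)                     # group elements into batches
ds = ds.prefetch(tf.data.AUTOTUNE)   # overlap data prep with training

for batch in ds:
    print(batch.numpy())
```

A dataset loaded from the TensorFlow datasets collection supports exactly the same chain of transformations before being fed to `model.fit`.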

(As a side note, if you are looking for additional datasets, a few good places to look at are here, here and here)

  • TensorFlow Hub

Wouldn’t it be nice if we could find reusable parts of machine learning models? How about sharing parts of a good model we have already fine-tuned with the wider community?

TensorFlow Hub helps developers publish and discover reusable parts of machine learning models. Each published artefact, called a module, is a self-contained entity consisting of a TensorFlow graph along with its weights and other relevant information. These modules facilitate transfer learning, which allows us to make use of pre-trained parts of models. This, needless to say, saves a significant amount of time and resources.

To see the available modules, take a look here

  • Model optimization

When we have a rough model ready, the next step is to fine-tune it until it reaches a satisfactory level.

The TensorFlow Model Optimization Toolkit is a set of tools to optimize ML models for deployment and execution. Among other uses, it provides techniques to reduce latency and inference cost.

  • TensorFlow Federated

When we need to train models on data that resides locally on clients and cannot be exported to our servers, the standard ML approach hits a roadblock. Federated Learning is a mechanism where a shared global model is trained across many clients that keep their training data locally. For example, Federated Learning has been used to train prediction models for mobile keyboards without uploading typing data to servers.

The TensorFlow Federated framework can be used to facilitate Federated Learning.

  • Reference Models

Reference implementations always help when we want to follow best practices. The TensorFlow community maintains a collection of example models that use TensorFlow’s high-level APIs. These example models act as good reference points.

Some of the example models available at the time of writing are:

  • bert (Bidirectional Encoder Representations from Transformers): A pre-trained language representation model
  • mnist: A basic model to classify digits from the MNIST dataset
  • resnet: An implementation for the ResNet models
  • transformer: An implementation of the Transformer translation model
  • ncf: An implementation of the Neural Collaborative Filtering (NCF) framework


In this article, we tried to get a high-level understanding of the TensorFlow platform. As you can imagine, it is very difficult to cover all aspects in one article. I plan to write a few deep-dive articles covering specific areas of TensorFlow later. Keep watching this space. 😉

Till we meet again, happy coding! 👍


The best source of information on TensorFlow is the excellent official documentation. In fact, the primary references for this article are the official documentation and the related GitHub repos.

In case you would like to explore more, I highly encourage you to go over the following official sources:

“TensorFlow, the TensorFlow logo and any related marks are trademarks of Google Inc.”