Keras API Overview

Keras is a high-level neural network API, written in Python.

A common theme of this blog is going to be the notion that while building things from atoms is certainly possible, if you want to get things done quickly, it's better to use well-thought-out, well-developed pre-existing APIs for the task you are interested in, assuming they exist for the problems you are trying to solve.

This is definitely true when building deep learning neural network systems. Yes, you could set out to write your own stochastic gradient descent code, your own neural network layer code, and so on. But why not take advantage of pre-existing libraries and their associated API calls, libraries that have already been extensively tested and debugged?

Keras was developed by Google researcher François Chollet. At the time it was written, the major deep learning libraries available to researchers included Torch, Theano, and Caffe. Having these tools was great, but working with them was also kind of tedious and time consuming. Keras was designed to be easy and fast to use, so it was built as a high-level front end that sat on top of an underlying deep learning library acting as the backend.

When Keras was initially released, its backend was Theano. After Google released TensorFlow, Keras started supporting TensorFlow as a backend as well. With Keras v1.1.0, TensorFlow became the default backend for Keras.

When Google announced TensorFlow 2.0 in 2019, they declared that Keras was now the official high-level API for TensorFlow. Which was great, because raw TensorFlow is kind of tedious to work with.

Keras 2.3.0 is the last release that still supports multiple backends. From now on, if you want to use Keras, you will also need to be using TensorFlow 2.0, and you will want to use the tf.keras subpackage, which is the version of Keras that is directly integrated into TensorFlow 2.0.
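If you do go the tf.keras route, the imports change slightly. As a minimal sketch, these are the tf.keras equivalents of the standalone keras imports used in the examples later in this post:

# tf.keras equivalents of "from keras.models import Sequential" etc.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense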

The Keras home page is at https://keras.io.


Keras is designed for user friendliness, modularity, and easy extensibility, and it works natively with Python.

The core data structure in Keras is called a model. It provides a way to organize layers in a neural network.

Let's consider a linear stack of layers, which can be specified as a Sequential model. To build your sequential neural network in Python using Keras, you do the following:

Specify your sequential model.

from keras.models import Sequential

model = Sequential()


Stack on some layers.

from keras.layers import Dense

model.add(Dense(units=64, activation='relu', input_dim=100))
model.add(Dense(units=10, activation='softmax'))
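If you want to sanity-check what you just stacked (this step isn't part of the original walkthrough, but it is handy), Keras can print a summary of the model:

# Prints each layer, its output shape, and its parameter count.
model.summary()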


Configure the model's learning process by compiling it.

model.compile(loss='categorical_crossentropy', optimizer='sgd', metrics=['accuracy'])
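If the 'sgd' string shortcut is too coarse, you can instead pass a configured optimizer object. Here is a minimal sketch using the tf.keras optimizer API (the learning rate and momentum values are just illustrative, and this assumes you are using the tf.keras imports mentioned earlier):

from tensorflow.keras.optimizers import SGD

# Same compile step, but with an explicitly configured SGD optimizer.
model.compile(loss='categorical_crossentropy',
              optimizer=SGD(learning_rate=0.01, momentum=0.9, nesterov=True),
              metrics=['accuracy'])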


Wow, that was easy.


You are now ready to iterate over your training data. The training data is stored in NumPy arrays.
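If you just want to exercise the code, here is a minimal sketch that generates random placeholder data shaped to match the network above (100 input features, 10 one-hot encoded classes); real training would of course use an actual dataset:

import numpy as np

# 1000 random training samples with 100 features each, plus random one-hot labels.
x_train = np.random.random((1000, 100))
y_train = np.eye(10)[np.random.randint(0, 10, size=1000)]

# A smaller random test set for the evaluate/predict calls below.
x_test = np.random.random((200, 100))
y_test = np.eye(10)[np.random.randint(0, 10, size=200)]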

model.fit(x_train, y_train, epochs=5, batch_size=32)

You can evaluate your trained neural network's performance.

loss_and_metrics = model.evaluate(x_test, y_test, batch_size=128)

You can then run new data through your trained neural network to generate predictions.

classes = model.predict(x_test, batch_size=128)

Again, wow, that was easy.
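One note: with a softmax output layer, predict returns a per-class probability vector for each sample rather than a class label. A small sketch of the usual follow-up step (the variable name is just illustrative):

import numpy as np

# Pick the highest-probability class index for each sample.
predicted_labels = np.argmax(classes, axis=1)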


Keep in mind that your real-world neural network is most likely going to have more than 2 sequential layers. For example, the 2014 Simonyan and Zisserman VGG ImageNet networks had 16 and 19 layers. Current research work with ImageNet may be using 50-200 layers.
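Going deeper with the Sequential API is just more add calls. A quick sketch (the layer count and sizes here are arbitrary placeholders, not a recommended architecture):

# Stack several hidden layers in a loop.
deep_model = Sequential()
deep_model.add(Dense(units=256, activation='relu', input_dim=100))
for _ in range(5):
    deep_model.add(Dense(units=256, activation='relu'))
deep_model.add(Dense(units=10, activation='softmax'))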

We'll talk about deploying your trained neural net in a future post.

We will also talk about how to speed up training in a future post, especially how to take advantage of cloud-based GPU clusters for training.
