
How To Deploy Kubeflow On Lightbend Platform With OpenShift - Part 4: JupyterHub

Boris Lublinsky, Principal Architect, Lightbend, Inc.

How To Use JupyterHub Provided By Kubeflow

In Part 3 of “How To Deploy And Use Kubeflow On OpenShift”, we looked at Kubeflow support components like Argo, Ambassador, Minio, and Spartakus. In this post, we look at using JupyterHub.

JupyterHub is a multi-user hub that spawns, manages, and proxies multiple instances of the single-user Jupyter notebook server. I would say it is one of the most important components of Kubeflow. Jupyter notebooks allow developers to capture the whole computation process: developing, documenting, and executing code, as well as communicating the results.

In order to access JupyterHub, go to the main Kubeflow page (as described in the previous post of this series):

and click on the JupyterHub button, which will bring up a login screen:

You can sign in with any user/password here. I signed in as admin/admin and got the following options screen:

This allows me to pick the image that I want to use. I picked the latest CPU-only image, though Kubeflow provides both CPU and GPU images. It takes a couple of minutes to start the server, and once it has started, a “standard” JupyterHub screen is presented:

Here we can logout, stop the server, or create and run a new notebook using either Python 2 or 3 (all the code in this post is using Python 3).

To test the install, I used a basic hello world sample (adapted from mnist_softmax.py), which worked exactly as expected.
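
As a quick reference, a minimal smoke test in this spirit (not the mnist_softmax.py sample itself, just a check that TensorFlow is importable and can run a trivial graph inside the notebook) looks like this:

import tensorflow as tf

# Build and run a trivial graph to confirm the notebook's TensorFlow install works
hello = tf.constant('Hello from Kubeflow JupyterHub')
with tf.Session() as sess:
    print(sess.run(hello))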

Additionally, Kubeflow’s JupyterHub notebooks are capable of submitting Kubernetes resources. The Jupyter notebook pods run under the jupyter-notebook service account, which is bound to the jupyter-notebook role with namespace-scoped permissions on the following Kubernetes resources:

  • pods
  • deployments
  • services
  • jobs
  • tfjobs
  • pytorchjobs

This means that these Kubernetes resources can be created directly from a Jupyter notebook. Kubectl is also installed in the notebook, so it is possible to create Kubernetes resources by running the following command in a Jupyter notebook cell:

!kubectl create -f myspec.yaml

This allows notebook developers to execute all of the Kubernetes commands without leaving the notebook environment.
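
For example, a resource spec can be written and submitted entirely from notebook cells. The Job below is purely illustrative (the name, image, and command are hypothetical and not part of the Kubeflow install); any resource type the jupyter-notebook role is allowed to create would work the same way:

%%writefile myspec.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: hello-job              # hypothetical example resource
spec:
  template:
    spec:
      containers:
      - name: hello
        image: busybox         # hypothetical image
        command: ["echo", "hello from a notebook-created Job"]
      restartPolicy: Never

!kubectl create -f myspec.yaml
!kubectl get jobs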

A More Complex Jupyter Example

A more complex Jupyter notebook example, adapted from this code, shows how to build a model and save it directly to S3 for subsequent serving. Enter the following code into a notebook:

import os

# Setting up environment for S3 access
# (AWS credentials are assumed to be available to the notebook, e.g. via
#  AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY or an attached IAM role)
os.environ['AWS_REGION'] = 'eu-west-1'
os.environ['S3_REGION'] = 'eu-west-1'
os.environ['S3_ENDPOINT'] = 's3.eu-west-1.amazonaws.com'
os.environ['S3_USE_HTTPS'] = 'true'
os.environ['S3_VERIFY_SSL'] = 'true'

import tensorflow as tf
from tensorflow.python.lib.io import file_io
from tensorflow.examples.tutorials.mnist import input_data
# define input arguments
arg_version = 1
arg_steps = 2000
s3_path = 's3://fdp-killrweather-data/kubeflow/mnist1/' + str(arg_version)
# network parameters
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
feature_size = 784
num_classes = 10
hidden_size = feature_size - 100
hidden_size2 = hidden_size - 100
batch_size = 50
learning_rate = 1e-4
# build the graph
def create_layer(shape, prev_layer, is_output):
    W = tf.Variable(tf.truncated_normal(shape, stddev=0.1))
    b = tf.Variable(tf.constant(0.1, shape=[shape[1]]))
    activation = tf.matmul(prev_layer, W) + b
    if is_output:
        new_layer = tf.nn.softmax(activation)
    else:
        new_layer = tf.nn.relu(activation)
        # apply dropout; dropout_prob is the keep probability fed at run time
        new_layer = tf.nn.dropout(new_layer, dropout_prob)
    return new_layer


# define inputs
x = tf.placeholder(tf.float32, [None, feature_size], name='x-input')
y = tf.placeholder(tf.float32, [None, num_classes], name='y-input')
dropout_prob = tf.placeholder(tf.float32)

# define layer structure
layer1 = create_layer([feature_size, hidden_size], x, False)
layer2 = create_layer([hidden_size, hidden_size2], layer1, False)
outlayer = create_layer([hidden_size2, num_classes], layer2, True)
prediction = tf.argmax(outlayer, 1)

# training ops
total_cost = -tf.reduce_sum(y * tf.log(outlayer), reduction_indices=[1])
mean_cost = tf.reduce_mean(total_cost)
train = tf.train.AdamOptimizer(learning_rate).minimize(mean_cost)

# accuracy ops
correct_prediction = tf.equal(prediction, tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# train
sess = tf.Session()
init = tf.global_variables_initializer()
sess.run(init)

for i in range(arg_steps):
    batch_x, batch_y = mnist.train.next_batch(batch_size)
    feed_dict = {x: batch_x, y: batch_y, dropout_prob: 0.5}
    sess.run(train, feed_dict=feed_dict)
    if i % 100 == 0:
        feed_dict = {x: batch_x, y: batch_y, dropout_prob: 0.5}
        train_acc = sess.run(accuracy, feed_dict=feed_dict)
        print("step %d/%d, training accuracy %g" % (i, arg_steps, train_acc))
# print final accuracy on test images
feed_dict = {x: mnist.test.images, y: mnist.test.labels, dropout_prob: 1.0}
print(sess.run(accuracy, feed_dict=feed_dict))
#export trained model
# create signature for TensorFlow Serving
tensor_info_x = tf.saved_model.utils.build_tensor_info(x)
tensor_info_pred = tf.saved_model.utils.build_tensor_info(prediction)
tensor_info_scores = tf.saved_model.utils.build_tensor_info(outlayer)
tensor_info_ver = tf.saved_model.utils.build_tensor_info(tf.constant([str(arg_version)]))
prediction_signature = (tf.saved_model.signature_def_utils.build_signature_def(
        inputs={'images': tensor_info_x},
        outputs={'prediction': tensor_info_pred, 'scores': tensor_info_scores,
                 'model-version': tensor_info_ver},
        method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME))
legacy_init_op = tf.group(tf.tables_initializer(), name='legacy_init_op')

print("saving model to S3")
# save model to s3
export_path = s3_path
builder = tf.saved_model.builder.SavedModelBuilder(export_path)
builder.add_meta_graph_and_variables(
      sess, [tf.saved_model.tag_constants.SERVING],
      signature_def_map={
           'predict_images': prediction_signature
      },
      legacy_init_op=legacy_init_op)
builder.save()
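
As an optional sanity check (not part of the original notebook), you can list what the SavedModelBuilder wrote to the export path using the same file_io module imported above:

# Expect 'saved_model.pb' plus a 'variables/' directory under the export path
print(file_io.list_directory(export_path))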

After the notebook runs, you can see the corresponding directories created in S3. If you do not have access to S3, you can use Minio instead; TensorFlow’s S3 support works with Minio as well. Here is a simple example:



import os

os.environ['AWS_ACCESS_KEY_ID'] = 'minio'
os.environ['AWS_SECRET_ACCESS_KEY'] = 'minio123'
os.environ['AWS_REGION'] = 'us-west-1' 
os.environ['S3_REGION'] = 'us-west-1' 
os.environ['S3_ENDPOINT'] = 'minio-service:9000'
os.environ['S3_USE_HTTPS'] = '0'
os.environ['S3_VERIFY_SSL'] = '0'

import tensorflow as tf
from tensorflow.python.lib.io import file_io

print(file_io.stat('s3://mlpipeline/pipelines/1c2575c4-d5af-43b9-b3bd-580632c9cc0b'))

Alternatively, you can use the Minio Python API directly in a Jupyter notebook:

# install the Minio client library
!pip3 install minio --upgrade
>>> Successfully installed minio-4.0.11
# create client
from minio import Minio
from minio.error import ResponseError
minioClient = Minio('minio-service:9000',
                    access_key='minio',
                    secret_key='minio123',
                    secure=False)
# get buckets
buckets = minioClient.list_buckets()
for bucket in buckets:
    print(bucket.name, bucket.creation_date)
>>> mlpipeline 2019-02-11 14:41:20.414000+00:00
# List all object paths in bucket that begin with pipeline.
objects = minioClient.list_objects('mlpipeline', prefix='pipeline', recursive=True)
for obj in objects:
    print(obj.bucket_name, obj.object_name.encode('utf-8'), obj.last_modified,
          obj.etag, obj.size, obj.content_type)
>>>mlpipeline b'pipelines/1c2575c4-d5af-43b9-b3bd-580632c9cc0b' 2019-02-08 19:22:08.663000+00:00 8dc604768f49c1077eba8fdd387562dd-1 4638 None
mlpipeline b'pipelines/544829ff-bbc9-45bb-b892-3efa83f805f3' 2019-02-08 19:22:05.708000+00:00 491dedb107fa232cbb781bf64aad3da1-1 5049 None
mlpipeline b'pipelines/70923f47-7e94-44c0-8b70-4e2f9f7561ee' 2019-02-08 19:22:02.718000+00:00 2715fa739e9bbf7976cee509bcc270bc-1 22224 None
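
The same client can also download an object for closer inspection. The local file name below is just an illustrative choice; the object name comes from the listing above:

# Download one of the listed pipeline packages to a local file
minioClient.fget_object('mlpipeline',
                        'pipelines/1c2575c4-d5af-43b9-b3bd-580632c9cc0b',
                        '/tmp/pipeline-package.tar.gz')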

That’s all for this part. Check out the next post on ML with TensorFlow, and thanks for reading!

p.s. If you’d like to get professional guidance on best-practices and how-tos with Machine Learning, simply contact us to learn how Lightbend can help.

PART 5: ML WITH TENSORFLOW
