Deploy Keras Models on Google Cloud Platform (GCP) and make predictions over the web

How to deploy Keras Models on GCP using Colab notebook and make custom predictions using Postman

Overview

Once you have successfully built and trained your Deep Learning model, what is the next step that you would take? Make predictions on it of course. If you have done a good job of training your model, you should get great results on your test data.

But is this enough? You surely want to take it out into the world and get feedback from your end users. How would you go about doing that? My suggestion is to first deploy it on your localhost to see whether your pre- and post-processing is up to the task of facing the real world. How to do that? Well, I have something for you here.

But what next? How do you take it to your end user? Your awesome model sitting cozily on your laptop is not helping to solve the problems of your users. The solution is to deploy it on a Cloud Computing service platform.

Google Cloud Platform

Why use Google Cloud Platform? For me this answer was simple.

Having never worked previously on any open Cloud Computing service, I had no bias towards any of the Cloud solutions. However, I use Google Drive to store my data and datasets, TensorFlow for building my Deep Learning models and Google Colab for training these models. I typed the draft version of this post on Google Docs on the Google Chrome browser. You can see where I am going with this.

Apart from this, there are great courses on Coursera for Deep Learning with GCP. And to top it all off, GCP provides $300 worth of credit, valid for one year, in your trial account. If you don’t have a GCP account, you could look into getting one if it suits your purpose. But if you are reading this post, maybe you are already thinking of getting one.

Prerequisites for this post

Well, for starters, you should already have a Google Cloud Platform account. We shall be using Google Cloud Storage and the AI Platform Prediction service in this post: Google Cloud Storage to store our model and the AI Platform to make predictions with it. The AI Platform is a paid service, so I suggest you check its pricing first. Google Cloud Storage is free up to 50 GB-months (which means you could store, for example, 5 GB of data for 10 months or 50 GB of data for 1 month). Again, I would suggest you go through the pricing for both services to avoid unnecessary surprises later.

Let’s dive in!

Building or importing your Keras Model

I shall be using Google Colab, as this makes uploading data to GCP much easier. In case you are not comfortable using Google Colab for training your model, you can use any other IDE of your choice. Just ensure that once your model is ready to be deployed, you convert it into the SavedModel format using the code below and zip the output folder:

# Export path with version
import os
import shutil
import tensorflow as tf

export_path = '/<path>/model/' # Your choice
version = '1' # Your choice
# Replace 'model' with your model
tf.saved_model.save(model, os.path.join(export_path, version))

# Zip the exported folder so it can be downloaded and re-used
zip_name = 'model' # Your choice
shutil.make_archive(zip_name, 'zip', export_path)

In case you wish to have a head start on the deployment stage, you can run my Colab notebook, which will download the zip file for you. I have covered the creation of this notebook in this post.

Now create a new Colab notebook and add the necessary libraries as shown below. Ensure that your TensorFlow version is 1.15 (or lower, if you wish), since the AI Platform on Google Cloud Platform does not support TensorFlow 2.0 yet. The source code for my notebook is available here. You can also run it in Colab.

We first load the necessary libraries:

try:
  %tensorflow_version 1.x
except:
  pass

# TensorFlow
import tensorflow as tf
# Helper libraries
import numpy as np
import os
from google.colab import auth as google_auth
google_auth.authenticate_user()
print(tf.__version__)

Once you authenticate yourself, you can go ahead and build and train your model. In case you are using an existing, previously trained model, as I am doing here, just unzip your zip file and load your model:

from google.colab import files
import shutil
# You can either upload your zip file in this way or go to 'Files' and upload there
zip_name = files.upload()
# This is because my zip file contains the model folder inside it.
# Based on your zip file structure, adjust the path accordingly.
extract_path = '/content/'
shutil.unpack_archive(list(zip_name.keys())[0], extract_path, 'zip')

You should now see a folder with your model name in the ‘Files’ tab. Open it to verify that it contains another folder with your version name, which in turn contains two folders, ‘assets’ and ‘variables’, and your ProtoBuf (.pb) file.
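If you prefer to check this from code rather than the ‘Files’ tab, a quick directory walk does the job. This is just a convenience sketch; the path below assumes the ‘linear_model’ folder name used later in this post:

import os
# Walk the extracted SavedModel directory and print what it contains.
# Expect a version folder (e.g. '1') holding 'assets', 'variables'
# and a 'saved_model.pb' file.
for root, dirs, files in os.walk('/content/linear_model'):
 print(root, dirs, files)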

Once your folder is ready, load the model into a variable using:

model_path = '/content/linear_model/1' # Your model version path
model = tf.keras.models.load_model(model_path)

You can try predicting some values to verify whether your model has loaded correctly (optional, but useful):

round(model.predict([0]).tolist()[0][0]) # Expected value: 1

The output I got was 1.

Everything looks great!

Deploying the model to the AI Platform on Google Cloud Platform

Make sure that you have enabled the following services on Google Cloud Platform:

  1. Google Cloud Storage
  2. AI Platform Training and Prediction
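If you prefer doing this from the command line (or from Colab), both APIs can also be enabled with gcloud. This is a sketch; the service names below are the standard identifiers for these two APIs, so verify them against your project's API dashboard:

# Enable the Cloud Storage and AI Platform APIs for your project
! gcloud services enable storage-component.googleapis.com
! gcloud services enable ml.googleapis.com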

To save our model to Google Cloud Storage, we need to use buckets. Buckets are containers inside GCS that help organize the data. You can create the buckets either from Colab or through the Cloud Console. I personally prefer using the Cloud Console, as it provides more options and flexibility. Also, having one bucket for all my models makes more sense to me than creating a separate bucket for each model, but this is just an individual preference.
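For reference, creating a bucket from the Colab notebook itself is a one-liner with gsutil. The bucket name and region below are the example values used in the next code block, so replace them with your own (bucket names must be globally unique):

# Create a regional bucket in Google Cloud Storage
! gsutil mb -l us-central1 gs://bucket-2711298/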

Once you have created the bucket using either of the above options, add the following lines to your Colab notebook. Replace each of the variables below with the values relevant to your project.

BUCKET_NAME='bucket-2711298'
PROJECT_NAME = 'project-2711298'
MODEL_NAME = 'model'
MODEL_VERSION = 'v1'
TF_VERSION = '1.15'
PYTHON_VERSION = '3.7'
REGION = 'us-central1'

Create the path for storing your model in Google Cloud Storage and save the model to it:

GS_PATH = 'gs://' + BUCKET_NAME + '/' + MODEL_NAME
# Export the model directly to the bucket
tf.saved_model.save(model, GS_PATH)

Once you see the message “INFO:tensorflow:Assets written to: gs://bucket-2711298/model/assets”, you should be able to view a folder named ‘model’ in your bucket. Now deploy the model to the AI Platform so that you can make predictions from it.

! gcloud config set project $PROJECT_NAME
! gcloud ai-platform models create $MODEL_NAME --regions $REGION
! gcloud ai-platform versions create $MODEL_VERSION \
--model $MODEL_NAME \
--runtime-version $TF_VERSION \
--python-version $PYTHON_VERSION \
--framework tensorflow \
--origin $GS_PATH

Once successfully executed, you should be able to see your model and its version on the AI Platform Dashboard.
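You can also confirm this from the notebook itself by listing what is deployed; these listing commands belong to the same gcloud ai-platform command group used above:

# List deployed models and the versions registered under our model
! gcloud ai-platform models list
! gcloud ai-platform versions list --model $MODEL_NAME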

Making a prediction from Google Colab

To make predictions from Google Colab, you have to use JSON files. You can make multiple predictions in a single batch with this approach. Create a JSON file and add your input data to it. You can add multiple inputs, where each input is your test data. Make sure your data is in the same format that you would normally pass to the ‘predict’ method of your model. In my case, I shall pass the value 0 and expect the prediction to be close to 1. Since the input shape for my model is [1], my JSON file should look like [0].

with open('/content/test_data.json', 'w') as f:
 f.write('[0]')
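Since the --json-instances flag used below treats each line of the file as one instance, batching several inputs is just a matter of writing one JSON value per line. A quick sketch (the file name batch_data.json is only for illustration; the rest of this post sticks with the single-input file above):

# Three instances, one per line, for a single batch prediction call
with open('/content/batch_data.json', 'w') as f:
 f.write('[0]\n[1]\n[2]\n')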

Once your JSON file is ready, add the below code to execute your query:

! gcloud ai-platform predict \
--model $MODEL_NAME \
--version $MODEL_VERSION \
--json-instances '/content/test_data.json'

For me, the output looked like:

SINGLE
[0.9999986886978149]

Our model is now successfully deployed on GCP!

Making a prediction from Postman

I used Postman for testing my queries, but you can use any client that can make API calls. One key difference between Colab and any other client calling the REST API is that Colab can be authenticated quite easily, as we saw earlier. If we intend to make calls via other clients, we need a Bearer token obtained through OAuth 2.0. If you try calling your API without the token, you will get a 401 error. Generating a Bearer token on GCP is quite easy. I shall explore two approaches here:

  1. On your GCP Console, go to the APIs & Services Dashboard and navigate to the Credentials page. Then click on the ‘Create Credentials’ button at the top-center and choose ‘OAuth client ID’. Next, choose your Application Type; I chose ‘Web application’. Then click on ‘Create’. You should now see your ‘Client ID’ and ‘Client Secret’. Copy them both to a secure place. Once you have your ID and Secret, you can create Bearer tokens and use them in your API calls.
  2. Go to the OAuth 2.0 Playground and select ‘AI Platform Training & Prediction API v1’. Choose https://www.googleapis.com/auth/cloud-platform and click on ‘Authorize APIs’. Once you authenticate yourself, you should see the access token that was generated for you. This token is only valid for 1 hour.
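As a quick shortcut, since we already authenticated the Colab runtime earlier, you may also be able to print a short-lived access token straight from the notebook with the standard gcloud command below and paste it into Postman. This is a sketch of an alternative route, separate from the two approaches above:

# Print a short-lived OAuth 2.0 access token for the authenticated account;
# paste the output into Postman's Bearer Token field
! gcloud auth print-access-token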

Now go to Postman and enter your REST API end-point. The end-point would be in the form:

https://ml.googleapis.com/v1/projects/{PROJECT_NAME}/models/{MODEL_NAME}:predict

So, using the values from this post, the API end-point looks like:

https://ml.googleapis.com/v1/projects/project-2711298/models/model:predict

Choose the request type as ‘POST’. Under Authorization, select ‘Bearer Token’ and enter the token that you received above. Add ‘Content-Type: application/json’ to your headers and set the body to:

{
 "instances": [[
 0
 ]]
}

More generally, your input would be of the form:

{
 "instances": [
 <YOUR DATA>
 ]
}

Execute the query. For me, the output was:

{
 "predictions": [{
 "Single": [
 0.9999986886978149
 ]
 }]
}

Well, that looks pretty much like 1 to me. Play around with your model by making POST queries. Remember the pricing!
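If you would rather script the call than click through Postman, here is a minimal sketch using the requests library. The project and model names are the example values from this post, and TOKEN is a placeholder for the access token you generated above:

import requests

PROJECT_NAME = 'project-2711298' # Replace with your project
MODEL_NAME = 'model' # Replace with your model
TOKEN = '<your access token>' # Replace with a valid Bearer token

url = ('https://ml.googleapis.com/v1/projects/'
 + PROJECT_NAME + '/models/' + MODEL_NAME + ':predict')
headers = {
 'Authorization': 'Bearer ' + TOKEN,
 'Content-Type': 'application/json',
}
body = {'instances': [[0]]}

# Send the prediction request and print the JSON response
response = requests.post(url, headers=headers, json=body)
print(response.status_code)
print(response.json())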

Conclusion

We were able to deploy our own model on the Google Cloud Platform and make predictions on it. We learned how to use the Cloud Console, Google Cloud Storage and the AI Platform for prediction.

If you have reached this far, thanks for reading. If you have any queries, suggestions, or comments, please feel free to comment on this post. I appreciate any and all feedback.

