Deploy a model on a web service

This tutorial is about deploying a machine learning model behind a web service endpoint, so that you can send HTTP requests to the server and receive prediction results generated by the model.

Project structure

Let's take a glance at our project's directory structure to get an overview:

More detail:

  • The .azureml subdirectory includes the config.json file, which contains the configuration information for our workspace
  • The data directory contains the training and test datasets for the model
  • The Utils file contains functions to read the dataset
  • A compute-creation script takes responsibility for creating a compute target on our Azure Machine Learning workspace to run our submitted code
  • A data script downloads the data to our project's directory and registers it on our workspace
  • An environment script creates an environment that specifies the packages needed to run our code
  • The sklearn-mnist directory will be submitted to our remote workspace and executed by the compute target
  • A submission script contains the code that submits the sklearn-mnist directory to the remote workspace
  • A deployment script deploys our model on a web service
  • A scoring script defines the behavior of the web service
  • A test script is used to exercise the web service endpoint

Create a compute target

In the compute-creation script, we have:
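A sketch of what this script may contain, assuming the Azure ML SDK v1; the cluster name, VM size, and environment-variable names below are illustrative assumptions, not names from the original:

```python
import os

from azureml.core import Workspace
from azureml.core.compute import AmlCompute, ComputeTarget
from azureml.core.compute_target import ComputeTargetException

# Load the workspace from the config.json stored in the .azureml directory
ws = Workspace.from_config()

# Fall back to defaults when the environment variables are not defined
cluster_name = os.environ.get("AML_COMPUTE_CLUSTER_NAME", "cpu-cluster")
vm_size = os.environ.get("AML_COMPUTE_CLUSTER_SKU", "STANDARD_D2_V2")

try:
    # Reuse the compute target if it has already been created
    compute_target = ComputeTarget(workspace=ws, name=cluster_name)
    print("Found existing compute target.")
except ComputeTargetException:
    # Otherwise, provision a new cluster
    compute_config = AmlCompute.provisioning_configuration(vm_size=vm_size, max_nodes=4)
    compute_target = ComputeTarget.create(ws, cluster_name, compute_config)
    compute_target.wait_for_completion(show_output=True)

# Print the information of the compute target
print(compute_target.get_status().serialize())
```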

More details:

  • We create a workspace object with the configuration information defined in the .azureml directory
  • os.environ.get("A", B) returns the value of the environment variable named A; if that variable is not defined, B is returned instead
  • We check whether the compute target has already been created; if not, we create a new one. Finally, we print the compute target's status

Define the environment to run our code

These lines of code define an environment named "tutorial-env" together with the needed packages, then register it on our workspace:
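A possible sketch of this step (SDK v1; the exact package list is an assumption):

```python
from azureml.core import Workspace
from azureml.core.conda_dependencies import CondaDependencies
from azureml.core.environment import Environment

ws = Workspace.from_config()

# Define an environment and the packages the training code needs
env = Environment("tutorial-env")
cd = CondaDependencies.create(
    pip_packages=["azureml-defaults"],
    conda_packages=["scikit-learn"],
)
env.python.conda_dependencies = cd

# Register the environment on the workspace so later steps can reuse it by name
env.register(workspace=ws)
```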

Create the dataset

We will download the data from Azure Machine Learning open datasets, then register it on our workspace:
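A sketch using the MNIST class from azureml-opendatasets (SDK v1); the local data path is an assumption:

```python
import os

from azureml.core import Workspace
from azureml.opendatasets import MNIST

ws = Workspace.from_config()

# Download the MNIST open dataset into the local data directory
data_folder = os.path.join(os.getcwd(), "data")
os.makedirs(data_folder, exist_ok=True)

mnist_file_dataset = MNIST.get_file_dataset()
mnist_file_dataset.download(data_folder, overwrite=True)

# Register the dataset on the workspace under the name used in this tutorial
mnist_file_dataset = mnist_file_dataset.register(
    workspace=ws,
    name="mnist_opendataset",
    description="training and test dataset",
    create_new_version=True,
)
```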

After running this code, check the data directory and you will see the downloaded files.

Open your web browser and access your workspace; on the Datasets tab, you can see that a dataset named "mnist_opendataset" has just been created.

In the Utils file, we define some functions to read the data. The downloaded dataset consists of several .gz files; these functions take responsibility for reading those files and returning NumPy arrays.
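Such a reader can be sketched as follows for the IDX format used inside the MNIST .gz files; the function name load_data matches the later references to the Utils file, but the exact implementation here is an assumption:

```python
import gzip
import struct

import numpy as np


def load_data(filename, label=False):
    """Read an MNIST-style IDX .gz file into a NumPy array.

    Image files yield shape (n_items, n_rows * n_cols);
    label files yield shape (n_items, 1).
    """
    with gzip.open(filename) as gz:
        struct.unpack("I", gz.read(4))                 # skip the magic number
        n_items = struct.unpack(">I", gz.read(4))[0]   # item count (big-endian)
        if not label:
            n_rows = struct.unpack(">I", gz.read(4))[0]
            n_cols = struct.unpack(">I", gz.read(4))[0]
            res = np.frombuffer(gz.read(n_items * n_rows * n_cols), dtype=np.uint8)
            res = res.reshape(n_items, n_rows * n_cols)
        else:
            res = np.frombuffer(gz.read(n_items), dtype=np.uint8)
            res = res.reshape(n_items, 1)
    return res
```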

Define code to train model

Create the training script file in the sklearn-mnist directory; remember that all files in this folder will be submitted to our remote workspace and executed by the pre-defined compute target.

Import necessary modules for training:
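The imports may look like this sketch; the module name utils for the Utils file is an assumption:

```python
import argparse
import os

import joblib
import numpy as np
from sklearn.linear_model import LogisticRegression

from azureml.core import Run

# load_data lives in the copied Utils file (module name is an assumption)
from utils import load_data
```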

Because we call functions defined in the Utils file, remember to copy that file into this directory; otherwise your submitted code will fail to run on the Azure Machine Learning workspace.

Next, get the arguments for our program; these arguments are set by the submission script:
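A minimal sketch of the argument parsing; the argument names below are assumptions kept consistent with the later steps (parse_known_args is used so extra arguments are tolerated):

```python
import argparse

parser = argparse.ArgumentParser()
# Folder that holds the mounted/downloaded dataset
parser.add_argument("--data-folder", type=str, dest="data_folder",
                    help="data folder mounting point")
# Regularization rate passed to the classifier
parser.add_argument("--regularization", type=float, dest="reg", default=0.5,
                    help="regularization rate")
args, _ = parser.parse_known_args()
```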

Use the functions defined in the Utils file to get the training and test data:
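A sketch of this step, continuing the training script; the .gz file names are the standard MNIST names and are assumptions about how the download is laid out:

```python
# args and load_data come from the earlier steps of this script
data_folder = args.data_folder
print("Data folder:", data_folder)

# Load train and test sets as flat arrays, scaling pixel values to [0, 1]
X_train = load_data(os.path.join(data_folder, "train-images-idx3-ubyte.gz"), False) / 255.0
X_test = load_data(os.path.join(data_folder, "t10k-images-idx3-ubyte.gz"), False) / 255.0
y_train = load_data(os.path.join(data_folder, "train-labels-idx1-ubyte.gz"), True).reshape(-1)
y_test = load_data(os.path.join(data_folder, "t10k-labels-idx1-ubyte.gz"), True).reshape(-1)
```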

We also need to create a run object so that we can trace some metrics during the run for later comparison. To get the current run, use this method:
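In SDK v1 this is:

```python
from azureml.core import Run

# Get the run context of the current submitted execution
run = Run.get_context()
```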

Define the model, train and test:
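A minimal sketch of the training and evaluation step, wrapped in a function for clarity; mapping the regularization rate to scikit-learn's C parameter as 1/reg is an assumption:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression


def train_and_evaluate(X_train, y_train, X_test, y_test, reg=0.5):
    """Fit a logistic-regression classifier and return it with its test accuracy."""
    clf = LogisticRegression(C=1.0 / reg, solver="liblinear", random_state=42)
    clf.fit(X_train, y_train)
    y_hat = clf.predict(X_test)
    acc = np.average(y_hat == y_test)
    return clf, acc
```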

As mentioned above, we can keep track of some metrics for observation.
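For example, continuing the script (this fragment assumes run, args, and the computed accuracy acc from the previous steps):

```python
# Log the regularization rate and the accuracy so they show up in the workspace UI
run.log("regularization rate", float(args.reg))
run.log("accuracy", float(acc))
print("Accuracy is", acc)
```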

Finally, save the model in the outputs subdirectory, then register the model:
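A sketch of this step; the model file name and registered model name are assumptions carried through the rest of this tutorial:

```python
import os

import joblib

# Files written to ./outputs are automatically uploaded with the run
os.makedirs("outputs", exist_ok=True)
joblib.dump(value=clf, filename="outputs/sklearn_mnist_model.pkl")

# Register the trained model on the workspace
run.register_model(model_name="sklearn_mnist",
                   model_path="outputs/sklearn_mnist_model.pkl")
```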

Submit the code

After defining the code to train a model, we can submit it to the Azure Machine Learning workspace and run it with cloud computing resources, instead of running it on your local computer. The submission script helps you do that.

Firstly, create a workspace object and the experiment your code will be executed under:
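A sketch of this step (SDK v1); the compute-cluster name is an assumption matching the earlier compute-creation sketch:

```python
from azureml.core import Environment, Experiment, Workspace

ws = Workspace.from_config()

# The experiment groups every run submitted under this name
exp = Experiment(workspace=ws, name="tutorial_experiment")

# Reuse the compute target and the registered environment from the earlier steps
compute_target = ws.compute_targets["cpu-cluster"]
env = Environment.get(workspace=ws, name="tutorial-env")
```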

As you can see, we also specify the computing resource and the environment that includes the necessary packages.

The script_folder variable stores the location of the directory that will be submitted to the remote workspace. We also set the values of the arguments for our program.
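This can be sketched with a ScriptRunConfig; the training-script name train.py is an assumption, since the original does not name the file:

```python
from azureml.core import Dataset, ScriptRunConfig

# Retrieve the dataset registered earlier and mount it on the compute target
dataset = Dataset.get_by_name(ws, name="mnist_opendataset")

# Directory whose contents get uploaded and run remotely
script_folder = "./sklearn-mnist"

src = ScriptRunConfig(
    source_directory=script_folder,
    script="train.py",  # the training script created earlier (name is an assumption)
    arguments=["--data-folder", dataset.as_mount(), "--regularization", 0.5],
    compute_target=compute_target,
    environment=env,
)
```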

Submit our code:
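Assuming the experiment and run configuration from the previous sketches:

```python
# Submit the configured run and stream logs until it finishes
run = exp.submit(config=src)
run.wait_for_completion(show_output=True)
```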

Open your workspace; in the Experiments tab, you can see an experiment named "tutorial_experiment".

More details:

Click on Run 1; on the Outputs+logs tab, you can see a directory named outputs which stores your pkl file. Open the 70_driver_log.txt file to see the output of our program:

Deploy model on an end-point service

Define the behavior of the service

The scoring script is used by the web service call to show how to use the model. You must include two required functions in the scoring script:

  • The init() function, which typically loads the model into a global object. This function runs only once, when the Docker container is started
  • The run(input_data) function, which uses the model to predict a value based on the input data. Inputs and outputs of run typically use JSON for serialization and de-serialization, though other formats are also supported
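A sketch of such a scoring script; the .pkl file name must match what the training script saved, and the name used here is an assumption:

```python
import json
import os

import joblib
import numpy as np


def init():
    global model
    # AZUREML_MODEL_DIR points at the folder where the registered model is placed;
    # the file name sklearn_mnist_model.pkl is an assumption
    model_path = os.path.join(os.getenv("AZUREML_MODEL_DIR"), "sklearn_mnist_model.pkl")
    model = joblib.load(model_path)


def run(raw_data):
    # Deserialize the JSON payload, predict, and return a JSON-serializable list
    data = np.array(json.loads(raw_data)["data"])
    y_hat = model.predict(data)
    return y_hat.tolist()
```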

Create an end-point service

Import the necessary packages to deploy the model:
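In SDK v1 these are typically:

```python
from azureml.core import Environment, Workspace
from azureml.core.model import InferenceConfig, Model
from azureml.core.webservice import AciWebservice

ws = Workspace.from_config()
```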

Next, create a deployment configuration and specify the number of CPU cores and gigabytes of RAM needed for our Azure Container Instance:
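For this small scikit-learn model, one core and one gigabyte are plenty (the tags and description below are illustrative):

```python
aciconfig = AciWebservice.deploy_configuration(
    cpu_cores=1,
    memory_gb=1,
    tags={"data": "MNIST", "method": "sklearn"},
    description="Predict MNIST with sklearn",
)
```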

Get the registered model on our workspace:
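Assuming the model was registered under the name used in the training sketch:

```python
# Retrieve the model registered by the training run (model name is an assumption)
model = Model(ws, "sklearn_mnist")
```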

Create the inference configuration needed to deploy the model as a web service, using the scoring file and the pre-defined environment object:
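A sketch of this step; the entry-script name score.py is an assumption for the scoring script described above:

```python
# Bind the scoring script to the registered environment
env = Environment.get(workspace=ws, name="tutorial-env")
inference_config = InferenceConfig(entry_script="score.py", environment=env)
```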

Finally, deploy the model as a web service; you need to specify a name for the service:
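Putting the pieces together (the service name below is an assumption):

```python
service = Model.deploy(
    workspace=ws,
    name="sklearn-mnist-svc",
    models=[model],
    inference_config=inference_config,
    deployment_config=aciconfig,
)
# Block until the container instance is up, then print the endpoint URL
service.wait_for_deployment(show_output=True)
print(service.scoring_uri)
```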

Test the service

Import required packages:
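A typical set of imports for calling the endpoint over HTTP (the requests library is assumed here):

```python
import json
import os

import requests
```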

Get the test data:
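A sketch reusing the reader from the Utils file; the module name utils and the .gz file names are assumptions:

```python
from utils import load_data  # the Utils file from earlier (module name is an assumption)

# Load the test images the same way the training script does
data_folder = os.path.join(os.getcwd(), "data")
X_test = load_data(os.path.join(data_folder, "t10k-images-idx3-ubyte.gz"), False) / 255.0
y_test = load_data(os.path.join(data_folder, "t10k-labels-idx1-ubyte.gz"), True).reshape(-1)
```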

Make a request to our web service:
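Assuming the deployed service object and test arrays from the previous steps; the payload shape must match what the scoring script's run() function expects:

```python
# Score the first 30 test images
input_data = json.dumps({"data": X_test[:30].tolist()})
headers = {"Content-Type": "application/json"}

resp = requests.post(service.scoring_uri, input_data, headers=headers)
```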

Print the result:
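Continuing the previous sketch, print the true labels next to the service's response for a quick comparison:

```python
print("POST to url", service.scoring_uri)
print("label:", y_test[:30].tolist())
print("prediction:", resp.text)
```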