
3 Stages of Machine Learning and Model Deployment

5/21/2020

A machine learning project can range from finding data and building a single model all the way to running a completely automated end-to-end system that retrains and redeploys models every hour.

Stage 1:  Building a model 

The steps to building a good model follow the ML lifecycle:
  1. Raw data: Gather and store raw data relevant to your ML use case.
  2. Pre-processing: Data filtering, transformation, variable selection, etc.
  3. Feature engineering: Use the processed data to build features for the model. Normalization, padding, and PCA are common feature-engineering methods.
  4. Model training cycle (a minimal sketch follows this list):
    1. Train
    2. Evaluate
    3. Error analysis
    4. Hyper-parameter optimization
    5. Return to step 1 of this cycle until the model is ready
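
For concreteness, here is a minimal sketch of that training cycle using scikit-learn on a synthetic dataset. The dataset, model choice, and hyper-parameter grid are placeholders, not recommendations:

# Minimal sketch of the train/evaluate/tune loop (steps 1-4 of the cycle).
# The synthetic dataset, model, and hyper-parameter grid are illustrative only.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report

# Stand-in for the output of feature engineering: features X and labels y
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Hyper-parameter optimization: GridSearchCV retrains and re-evaluates
# the model for every parameter combination (the loop in step 4)
param_grid = {"n_estimators": [100, 300], "max_depth": [5, 10, None]}
search = GridSearchCV(RandomForestClassifier(random_state=42), param_grid, cv=5)
search.fit(X_train, y_train)

# Evaluate the best model on held-out data; the report is a starting point
# for error analysis (which classes are being missed, and why)
print(search.best_params_)
print(classification_report(y_test, search.predict(X_test)))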

We won’t go into detail on each step here - but if you’re looking for a deeper dive, be sure to check out our article on the model-building process.


Stage 2:  Deploying a model 

Once you have a model ready for deployment, the next step is to deploy it so other applications (websites, mobile apps, etc.) can use it.  There are many ways to deploy a model, but most deployments are some type of service that accepts requests with input data and returns model predictions.

These deployments can range from a simple Flask app that takes input data, runs it through the model, and returns a prediction, to a large backend service involving many servers, containers, databases, polling queues, and more. How big or small to make your service depends on how many requests your model will get, how fast the model needs to respond to those requests, and the engineering architecture of the existing systems at your company.
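
To make the simple end of that range concrete, here is a minimal sketch of a Flask prediction service. It assumes a trained model has already been pickled to model.pkl, and the file name, route, and payload shape are placeholders:

# Minimal Flask prediction service: accepts JSON features, returns a prediction.
# Assumes a trained model has been pickled to model.pkl (placeholder name).
import pickle

from flask import Flask, jsonify, request

app = Flask(__name__)

with open("model.pkl", "rb") as f:
    model = pickle.load(f)

@app.route("/predict", methods=["POST"])
def predict():
    # Expects a JSON body like {"features": [[0.1, 0.2, ...]]}
    payload = request.get_json()
    prediction = model.predict(payload["features"])
    return jsonify({"prediction": prediction.tolist()})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)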

A good first step is to start simple so you can see how your model behaves once it’s deployed and work through any issues that arise.  It’s also essential to work closely with the teams that will consume your model and agree on exactly how they will interact with it.  For example, if your model will be used in a website, you’ll want to work closely with the front-end developers who will incorporate it into the site.
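
That agreement usually comes down to a simple request/response contract. The endpoint URL and payload shape below are hypothetical; the point is that both sides settle on this contract before the model ships:

# Example of a consuming application calling the deployed model service.
# The URL, payload shape, and timeout are hypothetical placeholders.
import requests

response = requests.post(
    "http://model-service.internal:5000/predict",  # placeholder endpoint
    json={"features": [[0.1, 0.2, 0.3]]},          # agreed input format
    timeout=2,                                     # consumers usually need a latency budget
)
print(response.json())  # e.g. {"prediction": [1]}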


Stage 3:  Automated end-to-end ML pipeline 

Having a deployed model is great, but what if you want to retrain and redeploy your model every day, or every hour?  If you deployed your model as a Flask app running on EC2, you would need to manually retrain the model, copy the new model to the server, and relaunch the service every time you want to update it.  That quickly becomes a time-consuming manual process.

Instead of manually training and deploying your model, a better option is to create an automated end-to-end ML pipeline that you can run on demand. Then all you need to do is schedule the pipeline to run at whatever frequency you need.

Automated pipelines are usually some form of linked containers, where each container handles one (or more) parts of the ML lifecycle.  You might have a container that fetches raw data and passes it to another container that processes the data into a model-ready format, and so on.  These pipelines require much more engineering effort than just deploying a model via SageMaker or Flask, but they provide more flexibility and automation for model deployment, especially scheduled deployments.
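
One way to picture such a pipeline is as a chain of stages, each reading the previous stage's output and writing its own, so the whole chain can be re-run at any time. The skeleton below is illustrative; the stage names and artifact paths are placeholders, and in practice each function would run in its own container:

# Skeleton of an end-to-end pipeline: each function stands in for one
# containerized stage that reads the previous stage's artifact and writes its own.
# Stage names and artifact paths are illustrative placeholders.

def fetch_raw_data(output_path: str) -> str:
    ...  # pull raw data from your data store and write it to output_path
    return output_path

def preprocess(raw_path: str, output_path: str) -> str:
    ...  # filter/transform the raw data into model-ready features
    return output_path

def train_model(features_path: str, model_path: str) -> str:
    ...  # train, evaluate, and serialize the model
    return model_path

def deploy(model_path: str) -> None:
    ...  # push the new model artifact to the serving infrastructure

def run_pipeline() -> None:
    raw = fetch_raw_data("artifacts/raw.csv")
    features = preprocess(raw, "artifacts/features.csv")
    model = train_model(features, "artifacts/model.pkl")
    deploy(model)

if __name__ == "__main__":
    run_pipeline()  # a scheduler or orchestrator calls this on a timer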

AWS has orchestration services like Step Functions that can coordinate and automate these pipelines.
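
For example, once a state machine describing the pipeline exists, kicking off a run from code is a short boto3 call; the state machine ARN and input payload below are placeholders:

# Start an execution of an existing Step Functions state machine with boto3.
# The state machine ARN and input payload are placeholders.
import json

import boto3

sfn = boto3.client("stepfunctions")
response = sfn.start_execution(
    stateMachineArn="arn:aws:states:us-east-1:123456789012:stateMachine:ml-pipeline",
    input=json.dumps({"training_date": "2020-05-21"}),
)
print(response["executionArn"])

Pairing the state machine with a scheduled trigger (for example, an EventBridge rule) gives you the "retrain every hour" behavior described above without any manual steps.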