
Key MLOps processes (part 5): Project initiation, or project initialization

In this article, we describe the block of the scheme devoted to project initiation, also called project initialization.

The whole scheme, which describes the key MLOps processes, can be found here. Its main parts are horizontal blocks, inside which the procedural aspects of MLOps are described. Each block is designed to solve specific tasks within the framework of ensuring the uninterrupted operation of the company's ML services.

(Figure: MLOps — ML project initiation)

Taking all of the above into account, it turns out that the ML team:

  • forms datasets,
  • conducts experiments on ML models with them,
  • develops new features to expand datasets and improve model performance,
  • saves the best models in the Model Registry for further reuse,
  • configures the processes of Serving and Deploying models,
  • configures model monitoring in production and automatic processes for retraining the current model or creating new ones (a brief sketch of this workflow is shown below the list).
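
As an illustration, the sketch below shows the experiment-tracking and model-registry part of this workflow. It assumes MLflow and scikit-learn, which are only example choices (the article does not prescribe specific tools), and the experiment and model names are hypothetical.

```python
# A minimal, illustrative sketch of the workflow above, assuming MLflow
# (with a registry-capable tracking backend) and scikit-learn. Names such
# as "churn-prediction-poc" and "churn-classifier" are placeholders.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# 1. Form a dataset (synthetic data stands in for a real source here).
X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("churn-prediction-poc")  # hypothetical experiment name

with mlflow.start_run():
    # 2. Run an experiment with the current feature set.
    model = RandomForestClassifier(n_estimators=100, random_state=42)
    model.fit(X_train, y_train)
    accuracy = accuracy_score(y_test, model.predict(X_test))

    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("accuracy", accuracy)

    # 3. Save the candidate model to the Model Registry for reuse;
    #    serving, monitoring, and retraining would build on this artifact.
    mlflow.sklearn.log_model(
        model,
        artifact_path="model",
        registered_model_name="churn-classifier",  # hypothetical registry name
    )
```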

This entire workflow looks very expensive and is not always justified. Therefore, the scheme has a separate MLOps Project Initiation block (A), which is responsible for rational goal-setting.

(Figure: MLOps Project Initiation block A)

It has five stages:

  1. analyzing the business problem,
  2. designing the architecture and choosing the technologies to be used,
  3. deriving ML problems from business goals,
  4. understanding what data is required to solve these problems,
  5. connecting to raw data for initial data analysis (a minimal sketch of such an analysis follows this list).
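
As an illustration of the last stage, the sketch below shows what a first pass of initial data analysis might look like. It assumes pandas and a hypothetical CSV file; the path, column set, and quality thresholds are placeholders, since the article does not specify a data source or tooling.

```python
# A minimal sketch of stage 5: connecting to raw data for initial analysis.
# The file path and the 50% threshold are hypothetical choices.
import pandas as pd

RAW_DATA_PATH = "raw/customer_events.csv"  # placeholder path

df = pd.read_csv(RAW_DATA_PATH)

# Basic profiling: shape, column types, and summary statistics.
print(df.shape)
print(df.dtypes)
print(df.describe(include="all"))

# Quick data-quality checks that feed the "is the data sufficient?" question.
missing_share = df.isna().mean().sort_values(ascending=False)
duplicate_rows = df.duplicated().sum()

print("Share of missing values per column:\n", missing_share)
print("Number of fully duplicated rows:", duplicate_rows)

# Example decision rule: flag columns that are mostly empty as unusable.
unusable = missing_share[missing_share > 0.5].index.tolist()
print("Columns with more than 50% missing values:", unusable)
```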


The thought process here can be demonstrated with the example of an IT director at a company. An enthusiastic project manager comes to him and requests a new installation of a platform for building an ML system. If both act in the interests of the company, the IT director will ask clarifying questions:

  • What business problem are you trying to solve with the new ML system?
  • Why did you decide that this problem needs to be solved with a new ML system? Perhaps it would be easier and cheaper to change processes or hire more people for technical support.
  • Where can we see a comparative analysis of the components of the ML system, based on which you chose the current set?
  • How will the selected architecture of the ML system help solve the business problem?
  • Are you sure that ML has the necessary mathematical apparatus to solve the stated problem? What is the problem statement for the solution?
  • Do you know where you will get the data for model training? Do you know what data you need for model training?
  • Have you already studied the available data? Is its quality sufficient for a model to solve the problem?

The IT director will question the project manager like a college professor at an exam, but this will save the company money. If all the questions can be answered, then there is a real need for the ML system.

The next question is: should MLOps be implemented for it?

It depends on the task. If you need a one-time solution, for example a Proof of Concept (PoC), then MLOps is not needed. If it is important to process a large number of incoming requests on an ongoing basis, then MLOps is needed; a minimal serving sketch is shown below. Essentially, the approach is similar to optimizing any corporate process.
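
To make the contrast more concrete, the sketch below shows what "processing incoming requests" typically implies: a trained model placed behind an HTTP endpoint whose lifecycle then has to be managed. FastAPI and a joblib-serialized scikit-learn model are assumptions for the sake of the example, not tools prescribed by the article, and the model file and feature schema are hypothetical.

```python
# A minimal serving sketch: a model behind an HTTP endpoint.
# Run with, for example: uvicorn serve:app --workers 4
# (the module name "serve" and the model path are placeholders).
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("models/churn-classifier.joblib")  # placeholder artifact


class PredictionRequest(BaseModel):
    features: list[float]  # flat feature vector for a single object


@app.post("/predict")
def predict(request: PredictionRequest) -> dict:
    # Wrap the single feature vector into the 2D shape scikit-learn expects.
    prediction = model.predict([request.features])[0]
    return {"prediction": int(prediction)}
```

As soon as a model has to be served this way, its monitoring, retraining, and deployment automation, that is, the MLOps processes described in this series, become relevant.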

To justify the need for MLOps to management, you need to prepare answers to the following questions:

  • What will improve?
  • How much money will we save?
  • Do we need to expand the staff?
  • What do we need to buy?
  • Where can we get expertise?

And then take the IT director’s exam again.

But the difficulties don’t end there: the team also needs to be convinced of the need to change processes and the technology stack. Sometimes this is more difficult than asking management for a budget.

To convince the team, prepare answers to the following questions:

  • Why can’t we continue to work as before?
  • What is the goal of the changes?
  • What will be the technology stack?
  • What and who do we need to learn?
  • How will the company help in implementing the changes?
  • Within what time frame do we need to make changes to the ML approach?
  • What will happen to those who don’t adapt in time?

Conclusion

With this, we have finished studying the MLOps scheme in detail. However, these are only the theoretical aspects; practical implementation always reveals additional details that can change a lot. Some of these implementation problems can be solved by a ready-made MLOps platform: a pre-configured infrastructure for training and deploying ML models.

💡 You might also be interested in our article ‘Key MLOps processes (part 4): Serving and monitoring machine learning models’ → https://optscale.ai/key-mlops-processes-part-4-serving-and-monitoring-machine-learning-models.

✔️ OptScale, a FinOps & MLOps open source platform, which helps companies optimize cloud costs and bring more cloud usage transparency, is fully available under Apache 2.0 on GitHub → https://github.com/hystax/optscale.

