
ML/AI Leaderboards to optimize your ML workflow

Ensure consistent evaluation datasets and metrics across all model runs for apples-to-apples model comparisons

Evaluation protocol

The OptScale Evaluation protocol defines a set of rules by which candidates are compared, filtered, and discarded. It ensures that trained models are tested in a consistent, transparent, and repeatable way.

Users can define a priority metric for ranking candidates on the leaderboard, set conditions on the values of other metrics to filter out unsuitable candidates, and select the datasets on which the candidates will be evaluated.

For example, ML specialists may consider only models with an Accuracy above 0.95, or set similar thresholds on Runtime, Loss, Precision, Sensitivity, F1 Score, Cost, and so on.
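
As an illustration only (plain Python, not OptScale's actual API), such a protocol can be thought of as a small object that holds the priority metric, the metric thresholds, and the evaluation datasets. The EvaluationProtocol class and its field names below are hypothetical.

```python
# A minimal sketch (hypothetical, not OptScale's API) of an evaluation protocol:
# a priority metric for ranking, thresholds on other metrics for filtering,
# and the datasets every candidate is evaluated on.
from dataclasses import dataclass, field


@dataclass
class EvaluationProtocol:
    priority_metric: str                           # metric used to rank candidates
    filters: dict = field(default_factory=dict)    # metric -> minimum acceptable value
    datasets: list = field(default_factory=list)   # datasets every candidate is scored on

    def accept(self, run_metrics: dict) -> bool:
        """Keep a candidate only if it meets every metric threshold."""
        return all(run_metrics.get(m, float("-inf")) >= v for m, v in self.filters.items())

    def rank(self, runs: list) -> list:
        """Filter out unsuitable candidates, then rank by the priority metric."""
        accepted = [r for r in runs if self.accept(r["metrics"])]
        return sorted(accepted, key=lambda r: r["metrics"][self.priority_metric], reverse=True)


# Example mirroring the text: only models with Accuracy above 0.95,
# ranked by F1 Score, evaluated on a fixed validation dataset.
protocol = EvaluationProtocol(
    priority_metric="f1_score",
    filters={"accuracy": 0.95},
    datasets=["validation_v2"],
)

candidates = [
    {"run": "run-17", "metrics": {"accuracy": 0.97, "f1_score": 0.94}},
    {"run": "run-18", "metrics": {"accuracy": 0.93, "f1_score": 0.96}},  # filtered out
]
leaderboard = protocol.rank(candidates)
```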

The evaluation protocol ensures that the evaluation can be repeated with the same results. 

For example, a particular model may show excellent accuracy on a specific dataset; with the same protocol, ML specialists can re-evaluate that model on another dataset and compare the results directly.
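
Continuing the hypothetical sketch above, the same protocol object can be replayed on metrics collected from a different dataset, with no per-dataset tweaking of thresholds; the dataset names and metric values here are illustrative only.

```python
# Reuses the hypothetical `protocol` object defined in the sketch above.
# Because the protocol is fixed, the same filtering and ranking rules are
# applied to every dataset, which makes the evaluation repeatable.
metrics_by_dataset = {
    "validation_v2": [
        {"run": "run-17", "metrics": {"accuracy": 0.97, "f1_score": 0.94}},
    ],
    "holdout_2024": [
        {"run": "run-17", "metrics": {"accuracy": 0.91, "f1_score": 0.88}},
    ],
}

for dataset_name, runs in metrics_by_dataset.items():
    # Identical rules on every dataset: no per-dataset tweaking of thresholds.
    print(dataset_name, [r["run"] for r in protocol.rank(runs)])
```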

Apples-to-apples comparison

With OptScale, ML specialists can make a fair comparison between models, ensuring that the differences in performance are due to the models themselves and not external factors.

Users can compare models using the same datasets, data preprocessing steps (such as normalization, scaling, or feature engineering), hyperparameters, evaluation metrics, and training conditions. 

OptScale Leaderboards guarantee a consistent evaluation dataset and metrics across all model runs, enforcing an apples-to-apples comparison.
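
The short sketch below shows, in generic scikit-learn code rather than OptScale's API, what pinning these factors looks like in practice: the dataset, split, preprocessing, and metric are fixed, so only the model varies between candidates.

```python
# Illustrative only: every factor other than the model itself is pinned,
# so metric differences can be attributed to the models being compared.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Shared, fixed evaluation setup: same dataset, same split, same seed.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

candidates = {
    "logreg": LogisticRegression(max_iter=1000, random_state=42),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=42),
}

for name, model in candidates.items():
    # Identical preprocessing (scaling) and identical metric for every candidate.
    pipeline = make_pipeline(StandardScaler(), model)
    pipeline.fit(X_train, y_train)
    print(name, "F1:", round(f1_score(y_test, pipeline.predict(X_test)), 4))
```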

Supported platforms

AWS
MS Azure
Google Cloud Platform
Alibaba Cloud
Kubernetes
Databricks
PyTorch
Kubeflow
TensorFlow
Apache Spark

News & Reports

MLOps open source platform

A full description of OptScale as an MLOps open source platform.

Enhance the ML process in your company with OptScale capabilities, including:

  • ML/AI Leaderboards
  • Experiment tracking
  • Hyperparameter tuning
  • Dataset and model versioning
  • Cloud cost optimization

How to use OptScale to optimize RI/SP usage for ML/AI teams

Find out how to: 

  • enhance RI/SP utilization by ML/AI teams with OptScale
  • see RI/SP coverage
  • get recommendations for optimal RI/SP usage

Why MLOps matters

Bridging the gap between Machine Learning and Operations, this article covers:

  • The driving factors for MLOps
  • The overlapping issues between MLOps and DevOps
  • The unique challenges in MLOps compared to DevOps
  • The integral parts of an MLOps structure