
Automated machine learning, also referred to as automated ML, is the process of automating the time consuming, iterative tasks of machine learning model development. It allows data scientists, analysts, and developers to build ML models with high scale, efficiency, and productivity all while sustaining model quality. Automated ML is based on a breakthrough from our Microsoft Research division.

Traditional machine learning model development is resource-intensive, requiring significant domain knowledge and time to produce and compare dozens of models. With automated machine learning, you'll accelerate the time it takes to get production-ready ML models with great ease and efficiency.


When to use automated ML

Apply automated ML when you want Azure Machine Learning to train and tune a model for you using the target metric you specify. Automated ML democratizes the machine learning model development process, and empowers its users, no matter their data science expertise, to identify an end-to-end machine learning pipeline for any problem.

Data scientists, analysts, and developers across industries can use automated ML to:

  • Implement machine learning solutions without extensive programming knowledge
  • Save time and resources
  • Leverage data science best practices
  • Provide agile problem-solving

The following table lists common automated ML use cases.

Classification | Time series forecasting | Regression
Fraud Detection | Sales Forecasting | CPU Performance Prediction
Marketing Prediction | Demand Forecasting |
Newsgroup Data Classification | Beverage Production Forecast |

Design automated ML experiments

Using Azure Machine Learning, you can design and run your automated ML training experiments with these steps:

  1. Identify the ML problem to be solved: classification, forecasting, or regression

  2. Specify the source and format of the labeled training data: Numpy arrays or Pandas dataframe

  3. Configure the compute target for model training, such as your local computer, Azure Machine Learning Computes, remote VMs, or Azure Databricks. Learn about automated training on a remote resource.

  4. Configure the automated machine learning parameters that determine how many iterations to run over different models and hyperparameter settings, which advanced preprocessing/featurization options to apply, and what metrics to use when determining the best model. You can configure the settings for the automated training experiment in Azure Machine Learning studio, or with the SDK.

    Important

    The functionality in this studio, https://ml.azure.com, is accessible from Enterprise workspaces only. Learn more about editions and upgrading.

  5. Submit the training run.
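
For the SDK path, the following is a minimal sketch of steps 1-5 with the Azure Machine Learning Python SDK (v1). The workspace configuration, CSV file, label column name, and specific option values are placeholder assumptions, and exact AutoMLConfig option names can vary between SDK versions.

```python
# Hedged sketch: configure and submit an automated ML experiment with the Python SDK.
import pandas as pd
from azureml.core import Workspace, Experiment
from azureml.train.automl import AutoMLConfig

ws = Workspace.from_config()                    # assumes a local config.json for your workspace
training_data = pd.read_csv("training.csv")     # hypothetical labeled training data (pandas dataframe)

automl_config = AutoMLConfig(
    task="classification",                      # step 1: classification, regression, or forecasting
    training_data=training_data,                # step 2: labeled training data
    label_column_name="target",                 # hypothetical label column
    primary_metric="AUC_weighted",              # step 4: target metric used to rank models
    iterations=30,                              # number of model/hyperparameter combinations to try
    experiment_timeout_minutes=60,              # exit criterion
    featurization="auto",                       # automatic preprocessing/featurization
)                                               # step 3: defaults to local compute; set compute_target for remote training

experiment = Experiment(ws, "automl-example")
run = experiment.submit(automl_config, show_output=True)   # step 5: submit the training run
```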

How automated ML works

During training, Azure Machine Learning creates a number of pipelines in parallel that try different algorithms and parameters. The service iterates through ML algorithms paired with feature selections, where each iteration produces a model with a training score. The higher the score, the better the model is considered to 'fit' your data. The service stops once it hits the exit criteria defined in the experiment.

You can also inspect the logged run information, which contains metrics gathered during the run. The training run produces a Python serialized object (.pkl file) that contains the model and data preprocessing.

While model building is automated, you can also learn how important or relevant features are to the generated models.
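
Once the run finishes, a short sketch like the following (assuming a v1 SDK AutoMLRun object named run, as produced by the submission above) retrieves the logged metrics and the best fitted model:

```python
# Hedged sketch: inspect run information and retrieve the best model.
best_run, fitted_model = run.get_output()   # best child run and its fitted pipeline
print(best_run.get_metrics())               # metrics gathered during the run
# fitted_model is the deserialized counterpart of the .pkl artifact; it includes
# the data preprocessing steps that were applied during training.
```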

Preprocessing

In every automated machine learning experiment, your data is preprocessed using the default methods and optionally through advanced preprocessing.

Note

Automated machine learning pre-processing steps (feature normalization, handling missing data, converting text to numeric, etc.) become part of the underlying model. When using the model for predictions, the same pre-processing steps applied during training are applied to your input data automatically.

Automatic preprocessing (standard)

In every automated machine learning experiment, your data is automatically scaled or normalized to help algorithms perform well. During model training, one of the following scaling or normalization techniques will be applied to each model.

Scaling & normalization | Description
StandardScaleWrapper | Standardize features by removing the mean and scaling to unit variance
MinMaxScalar | Transforms features by scaling each feature by that column's minimum and maximum
MaxAbsScaler | Scale each feature by its maximum absolute value
RobustScalar | Scales features by their quantile range
PCA | Linear dimensionality reduction using Singular Value Decomposition of the data to project it to a lower dimensional space
TruncatedSVDWrapper | This transformer performs linear dimensionality reduction by means of truncated singular value decomposition (SVD). Contrary to PCA, this estimator does not center the data before computing the singular value decomposition, which means it can work with scipy.sparse matrices efficiently
SparseNormalizer | Each sample (that is, each row of the data matrix) with at least one non-zero component is rescaled independently of other samples so that its norm (l1 or l2) equals one
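
The wrappers above correspond to scikit-learn transformers. As a rough illustration only (not the service's internal code), the following sketch shows what each technique does to a small matrix:

```python
# Illustration of the scaling/normalization techniques listed above, using the
# underlying scikit-learn transformers directly (not the automated ML wrappers).
import numpy as np
from sklearn.decomposition import PCA, TruncatedSVD
from sklearn.preprocessing import (MaxAbsScaler, MinMaxScaler, Normalizer,
                                   RobustScaler, StandardScaler)

X = np.array([[1.0, 200.0], [2.0, 300.0], [3.0, 1000.0]])

print(StandardScaler().fit_transform(X))        # zero mean, unit variance per column
print(MinMaxScaler().fit_transform(X))          # rescale each column by its min and max
print(MaxAbsScaler().fit_transform(X))          # divide each column by its max absolute value
print(RobustScaler().fit_transform(X))          # center and scale by the quantile range
print(Normalizer(norm="l2").fit_transform(X))   # rescale each row to unit norm
print(PCA(n_components=1).fit_transform(X))            # project onto the top principal component
print(TruncatedSVD(n_components=1).fit_transform(X))    # SVD without centering (works with sparse input)
```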

Advanced preprocessing: optional featurization

Additional advanced preprocessing and featurization are also available, such as data guardrails, encoding, and transforms. Learn more about what featurization is included. Enable this setting with:

  • Azure Machine Learning studio: Enable Automatic featurization in the View additional configuration section with these steps.

  • Python SDK: Specify 'featurization': 'auto' / 'off' / 'FeaturizationConfig' for the AutoMLConfig class.
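
As a sketch only (v1 SDK; the class and method names here are taken from the SDK reference and may differ across versions), a customized featurization configuration can be passed in place of 'auto' or 'off':

```python
# Hedged sketch: customize featurization instead of using 'auto' or 'off'.
import pandas as pd
from azureml.automl.core.featurization import FeaturizationConfig
from azureml.train.automl import AutoMLConfig

training_data = pd.read_csv("training.csv")     # hypothetical labeled training data, as before

featurization_config = FeaturizationConfig()
featurization_config.add_column_purpose("comments", "Text")   # hypothetical free-text column

automl_config = AutoMLConfig(
    task="classification",
    training_data=training_data,
    label_column_name="target",
    featurization=featurization_config,         # or simply "auto" / "off"
)
```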

Classification & regression

Classification and regression are the most common types of machine learning tasks. Both are types of supervised learning in which models learn using training data, and apply those learnings to new data. Azure Machine Learning offers featurizations specifically for these tasks, such as deep neural network text featurizers for classification. Learn more about featurization options.

The main goal of classification models is to predict which categories new data will fall into based on learnings from its training data. Common classification examples include fraud detection, handwriting recognition, and object detection. Learn more and see an example of classification with automated machine learning.

Different from classification, where predicted output values are categorical, regression models predict numerical output values based on independent predictors. In regression, the objective is to help establish the relationship among those independent predictor variables by estimating how one variable impacts the others. For example, predicting automobile price based on features such as gas mileage and safety rating. Learn more and see an example of regression with automated machine learning.

Time-series forecasting

Building forecasts is an integral part of any business, whether it's revenue, inventory, sales, or customer demand. You can use automated ML to combine techniques and approaches and get a recommended, high-quality time-series forecast.

An automated time-series experiment is treated as a multivariate regression problem. Past time-series values are 'pivoted' to become additional dimensions for the regressor together with other predictors. This approach, unlike classical time series methods, has an advantage of naturally incorporating multiple contextual variables and their relationship to one another during training. Automated ML learns a single, but often internally branched model for all items in the dataset and prediction horizons. More data is thus available to estimate model parameters and generalization to unseen series becomes possible.

Learn more and see an example of automated machine learning for time series forecasting. Or, see the energy demand notebook for detailed code examples of advanced forecasting configuration including:

  • holiday detection and featurization
  • time-series and DNN learners (Auto-ARIMA, Prophet, ForecastTCN)
  • support for many models through grouping
  • rolling-origin cross validation
  • configurable lags
  • rolling window aggregate features
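
A hedged sketch of such a forecasting configuration with the v1 Python SDK follows; the time column, target, and data file are placeholders, and several of these options have been renamed across SDK versions (for example, forecast_horizon was previously max_horizon):

```python
# Hedged sketch: forecasting task with lags, a rolling window, and DNN learners enabled.
import pandas as pd
from azureml.train.automl import AutoMLConfig

training_data = pd.read_csv("demand_history.csv")   # hypothetical time-indexed training data

automl_config = AutoMLConfig(
    task="forecasting",
    training_data=training_data,
    label_column_name="demand",             # hypothetical target column
    time_column_name="timestamp",           # hypothetical time column
    forecast_horizon=24,                    # number of periods to predict
    target_lags=[1, 2, 3],                  # configurable lags on the target
    target_rolling_window_size=12,          # rolling window aggregate features
    enable_dnn=True,                        # allow DNN learners such as ForecastTCN
)
```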

Ensemble models

Automated machine learning supports ensemble models, which are enabled by default. Ensemble learning improves machine learning results and predictive performance by combining multiple models as opposed to using single models. The ensemble iterations appear as the final iterations of your run. Automated machine learning uses both voting and stacking ensemble methods for combining models:

  • Voting: predicts based on the weighted average of predicted class probabilities (for classification tasks) or predicted regression targets (for regression tasks).
  • Stacking: stacking combines heterogeneous models and trains a meta-model based on the output from the individual models. The current default meta-models are LogisticRegression for classification tasks and ElasticNet for regression/forecasting tasks.

The Caruana ensemble selection algorithm with sorted ensemble initialization is used to decide which models to use within the ensemble. At a high level, this algorithm initializes the ensemble with up to five models that have the best individual scores, and verifies that these models are within a 5% threshold of the best score to avoid a poor initial ensemble. Then, for each ensemble iteration, a new model is added to the existing ensemble and the resulting score is calculated. If the new model improves the existing ensemble score, the ensemble is updated to include the new model.
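
A simplified Python sketch of this greedy, Caruana-style selection is shown below; it illustrates the idea rather than the service's implementation, and models, individual_score, and ensemble_score are hypothetical callables you would supply:

```python
# Simplified sketch of Caruana-style greedy ensemble selection.
def greedy_ensemble_selection(models, individual_score, ensemble_score,
                              init_size=5, threshold=0.05, max_iterations=15):
    # Sort candidate models by their individual scores, best first.
    ranked = sorted(models, key=individual_score, reverse=True)
    best_single = individual_score(ranked[0])
    # Initialize with up to `init_size` models whose scores are within the threshold of the best.
    ensemble = [m for m in ranked[:init_size]
                if individual_score(m) >= best_single * (1 - threshold)]
    best = ensemble_score(ensemble)
    for _ in range(max_iterations):
        # Try adding each candidate model and keep the addition that helps the most.
        score, model = max(((ensemble_score(ensemble + [m]), m) for m in models),
                           key=lambda pair: pair[0])
        if score <= best:          # stop when no addition improves the ensemble
            break
        ensemble.append(model)
        best = score
    return ensemble
```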

See the how-to for changing default ensemble settings in automated machine learning.

Use with ONNX

With Azure Machine Learning, you can use automated ML to build a Python model and have it converted to the ONNX format. Once the models are in the ONNX format, they can be run on a variety of platforms and devices. Learn more about accelerating ML models with ONNX.

See how to convert to ONNX format in this Jupyter notebook example. Learn which algorithms are supported in ONNX.

The ONNX runtime also supports C#, so you can use the model built automatically in your C# apps without any need for recoding or any of the network latencies that REST endpoints introduce. Learn more about inferencing ONNX models with the ONNX runtime C# API.
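
For example, scoring an exported model from Python with the ONNX Runtime looks roughly like this (the model path and input shape are placeholders):

```python
# Hedged sketch: run an automated ML model exported to ONNX with the ONNX Runtime.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("automl_model.onnx")      # hypothetical exported model file
input_name = session.get_inputs()[0].name
features = np.random.rand(1, 10).astype(np.float32)      # shape must match the model's input
predictions = session.run(None, {input_name: features})
print(predictions)
```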

Automated ML in Azure Machine Learning

Azure Machine Learning offers two experiences for working with automated ML:

  • For code-experienced customers, the Azure Machine Learning Python SDK

  • For limited/no-code experience customers, Azure Machine Learning studio at https://ml.azure.com

The following summarizes the high-level automated ML capabilities supported across the two experiences.

Experiment settings

The following settings allow you to configure your automated ML experiment.

  • Split data into train/validation sets
  • Supports ML tasks: classification, regression, and forecasting
  • Optimizes based on primary metric
  • Supports AML compute as compute target
  • Configure forecast horizon, target lags & rolling window
  • Set exit criteria
  • Set concurrent iterations
  • Drop columns
  • Block algorithms
  • Cross validation
  • Supports training on Azure Databricks clusters
  • View engineered feature names
  • Featurization summary
  • Holiday featurization
  • Verbosity level for log files

Model settings

These settings can be applied to the best model as a result of your automated ML experiment.

  • Best model registration
  • Best model deployment
  • Best model explainability
  • Enable voting ensemble & stack ensemble models
  • Show best model based on non-primary metric
  • Enable/disable ONNX model compatibility
  • Test the model

Run control settings

These settings allow you to review and control your experiment runs and their child runs.

  • Run summary table
  • Cancel run
  • Cancel child run
  • Get guardrails
  • Pause run
  • Resume run

Next steps

See examples and learn how to build models using automated machine learning:

  • Follow the Tutorial: Automatically train a regression model with Azure Machine Learning

  • Configure the settings for the automated training experiment:

    • In Azure Machine Learning studio, use these steps.
    • With the Python SDK, use these steps.
  • To learn how to auto-train using time-series data, use these steps.

  • Try out Jupyter Notebook samples for automated machine learning

  • Automated ML is also available in other Microsoft solutions such as ML.NET, HDInsight, Power BI, and SQL Server

Performs a parameter sweep on a model to determine the optimum parameter settings

Category: Machine Learning / Train

Note

Applies to: Machine Learning Studio (classic)

This content pertains only to Studio (classic). Similar drag-and-drop modules have been added to Azure Machine Learning designer (preview). Learn more in this article comparing the two versions.

Module overview

This article describes how to use the Tune Model Hyperparameters module in Azure Machine Learning Studio (classic) to determine the optimum hyperparameters for a given machine learning model. The module builds and tests multiple models, using different combinations of settings, and compares metrics over all models to get the best combination of settings.

The terms parameter and hyperparameter can be confusing. The model's parameters are what you set in the properties pane. Basically, this module performs a parameter sweep over the specified parameter settings, and learns an optimal set of hyperparameters, which might be different for each specific decision tree, dataset, or regression method. The process of finding the optimal configuration is sometimes called tuning.

The module supports two methods for finding the optimum settings for a model:

  • Integrated train and tune: You configure a set of parameters to use, and then let the module iterate over multiple combinations, measuring accuracy until it finds a 'best' model. With most learner modules, you can choose which parameters should be changed during the training process, and which should remain fixed.

    Depending on how long you want the tuning process to run, you might decide to exhaustively test all combinations, or you could shorten the process by establishing a grid of parameter combinations and testing a randomized subset of the parameter grid.

  • Cross validation with tuning: With this option, you divide your data into some number of folds and then build and test models on each fold. This method provides the best accuracy and can help find problems with the dataset; however, it takes longer to train.

Both methods generate a trained model that you can save for re-use.

Related tasks

  • If you are building a clustering model, use Sweep Clustering to automatically determine the optimum number of clusters and other parameters.

  • Before tuning, apply feature selection to determine the columns or variables that have the highest information value. For more information, see Feature Selection.

How to configure Tune Model Hyperparameters

Generally, learning the optimal hyperparameters for a given machine learning model requires considerable experimentation. This module supports both the initial tuning process, and cross-validation to test model accuracy:

Train a model using a parameter sweep

This section describes how to perform a basic parameter sweep, which trains a model by using the Tune Model Hyperparameters module.

  1. Add the Tune Model Hyperparameters module to your experiment in Studio (classic).

  2. Connect an untrained model (a model in the iLearner format) to the leftmost input.

  3. Set the Create trainer mode option to Parameter Range and use the Range Builder to specify a range of values to use in the parameter sweep.

    Almost all the classification and regression modules support an integrated parameter sweep. For those learners that do not support configuring a parameter range, only the available parameter values can be tested.

    You can manually set the value for one or more parameters, and then sweep over the remaining parameters. This might save some time.

  4. Add the dataset you want to use for training and connect it to the middle input of Tune Model Hyperparameters.

    Optionally, if you have a tagged dataset, you can connect it to the rightmost input port (Optional validation dataset). This lets you measure accuracy while training and tuning.

  5. In the Properties pane of Tune Model Hyperparameters, choose a value for Parameter sweeping mode. This option controls how the parameters are selected.

    • Entire grid: When you select this option, the module loops over a grid predefined by the system, to try different combinations and identify the best learner. This option is useful for cases where you don't know what the best parameter settings might be and want to try all possible combinations of values.

    You can also reduce the size of the grid and run a random grid sweep. Research has shown that this method yields the same results, but is more efficient computationally.

    • Random sweep: When you select this option, the module will randomly select parameter values over a system-defined range. You must specify the maximum number of runs that you want the module to execute. This option is useful for cases where you want to increase model performance using the metrics of your choice but still conserve computing resources.
  6. For Label column, launch the column selector to choose a single label column.

  7. Choose a single metric to use when ranking the models.

    When you run a parameter sweep, all applicable metrics for the model type are calculated and are returned in the Sweep results report. Separate metrics are used for regression and classification models.

    However, the metric you choose determines how the models are ranked. Only the top model, as ranked by the chosen metric, is output as a trained model to use for scoring.

  8. For Random seed, type a number to use when initializing the parameter sweep.

    If you are training a model that supports an integrated parameter sweep, you can also set a range of seed values to use and iterate over the random seeds as well. This can be useful for avoiding bias introduced by seed selection.

  9. Run the experiment.

Results of hyperparameter tuning

When training is complete:

  • To view a set of accuracy metrics for the best model, right-click the module, select Sweep results, and then select Visualize.

    All accuracy metrics applicable to the model type are output, but the metric that you selected for ranking determines which model is considered 'best'. Metrics are generated only for the top-ranked model.

  • To view the settings derived for the 'best' model, right-click the module, select Trained best model, and then click Visualize. The report includes parameter settings and feature weights for the input columns.

  • To use the model for scoring in other experiments, without having to repeat the tuning process, right-click the model output and select Save as Trained Model.

Perform cross-validation with a parameter sweep

This section describes how to combine a parameter sweep with cross-validation. This process takes longer, but you can specify the number of folds, and you get the maximum amount of information about your dataset and the possible models.
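
Conceptually, this is equivalent to wrapping a parameter search around k-fold cross-validation. The scikit-learn sketch below is only an analogy for the idea (it is not the Studio module); the Studio steps that follow achieve the same result with modules:

```python
# Analogy only: a parameter sweep combined with stratified k-fold cross-validation.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV, StratifiedKFold

X, y = load_breast_cancer(return_X_y=True)
folds = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)   # like Assign to Folds with a stratified split
param_grid = {"n_estimators": [50, 100, 200], "learning_rate": [0.05, 0.1, 0.2]}

search = GridSearchCV(GradientBoostingClassifier(random_state=0),
                      param_grid, scoring="accuracy", cv=folds)
search.fit(X, y)                                  # every combination is scored on every fold
print(search.best_params_, search.best_score_)    # settings and score of the 'best' model
```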

  1. Add the Partition and Sample module to your experiment, and connect the training data.

  2. Choose the Assign to Folds option and specify some number of folds to divide the data into. If you don't specify a number, by default 10 folds are used. Rows are apportioned randomly into these folds, without replacement.

  3. To balance the sampling on some column, set the Stratified split to TRUE, and then select the strata column. For example, if you have an imbalanced dataset, you might want to divide the dataset such that each fold gets the same number of minority cases.

  4. Add the Tune Model Hyperparameters module to your experiment.

  5. Connect one of the machine learning modules in this category to the left-hand input of Tune Model Hyperparameters.

  6. In the Properties pane for the learner, set the Create trainer mode option to Parameter Range and use the Range Builder to specify a range of values to use in the parameter sweep.

    You don’t need to specify a range for all values. You can manually set the value for some parameters, and then sweep over the remaining parameters. This might save some time.

    For a list of learners that don't support this option, see the Technical Notes section.

  7. Connect the output of Partition and Sample to the labeled Training dataset input of Tune Model Hyperparameters.

  8. Optionally, you can connect a validation dataset to the rightmost input of Tune Model Hyperparameters. For cross-validation, you need only a training dataset.

  9. In the Properties pane of Tune Model Hyperparameters, indicate whether you want to perform a random sweep or a grid sweep. A grid sweep is exhaustive, but more time-consuming. A random parameter search can get good results without taking quite so much time.

    Maximum number of runs on random sweep: If you choose a random sweep, you can specify how many times the model should be trained, using a random combination of parameter values.

    Maximum number of runs on random grid: This option also controls the number of iterations over a random sampling of parameter values, but the values are not generated randomly from the specified range; instead, a matrix is created of all possible combinations of parameter values and a random sampling is taken over the matrix. This method is more efficient and less prone to regional oversampling or undersampling.

    Tip

    For a more in-depth discussion of these options, see the Technical notes section.

  10. Choose a single label column.

  11. Choose a single metric to use in ranking the model. Many metrics are computed, so select the most important one to use in ordering the results.

  12. For Random seed, type a number to use when initializing the parameter sweep.

    If you are training a model that supports an integrated parameter sweep, you can also set a range of seed values to use and iterate over the random seeds as well. This is optional, but can be useful for avoiding bias introduced by seed selection.

  13. Add the Cross-Validate Model module. Connect the output of Partition and Sample to the Dataset input, and connect the output of Tune Model Hyperparameters to the Untrained model input.

  14. Run the experiment.

Results of cross-validation

When cross-validation is complete:

  • To view the evaluation results, right-click the module, select Evaluation results by fold, and then select Visualize.

    The accuracy metrics are calculated from the cross-validation pass, and may vary slightly depending on how many folds you selected.

  • To see how the dataset was divided, and how the 'best' model would score each row in the dataset, right-click the module, select Scored results, and then select Visualize.

  • If you save this dataset for later re-use, the fold assignments are preserved. For example, the saved dataset might look like this:

    Fold assignments | Class | Age (1st feature column)
    2 | 0 | 35
    1 | 1 | 17
    3 | 0 | 62
  • To get the parameter settings for the 'best' model, right-click the Tune Model Hyperparameters module, select Trained best model, and then select Visualize.

Examples

For examples of how this module is used, see the Azure AI Gallery:

  • Prediction of student performance: Uses the Two-Class Boosted Decision Tree algorithm with different parameters to generate a model with the best possible root mean squared error (RMSE).

  • Learning with Counts: Binary Classification: Generates a compact set of features using count-based learning, and then applies a parameter sweep to find the best model parameters.

  • Binary Classification: Network intrusion detection: Uses Tune Model Hyperparameters in cross-validation mode, with a custom split into five folds, to find the best hyperparameters for a Two-Class Logistic Regression model.

Technical notes

This section contains implementation details, tips, and answers to frequently asked questions.

How a parameter sweep works

This section describes how parameter sweep works in general, and how the options in this module interact.

When you set up a parameter sweep, you define the scope of your search, to use either a finite number of parameters selected randomly, or an exhaustive search over a parameter space you define.

  • Random sweep: This option trains a model using a set number of iterations.

    You specify a range of values to iterate over, and the module uses a randomly chosen subset of those values. Values are chosen with replacement, meaning that numbers previously chosen at random are not removed from the pool of available numbers. Thus, the chance of any value being selected remains the same across all passes.

  • Grid sweep: This option creates a matrix, or grid, that includes every combination of the parameters in the value range you specify. When you start tuning with this module, multiple models are trained using combinations of these parameters.

  • Entire grid: The option to use the entire grid means just that: each and every combination is tested. This option can be considered the most thorough, but requires the most time.

  • Random grid: If you select this option, the matrix of all combinations is calculated and values are sampled from the matrix, over the number of iterations you specified.

    Recent research has shown that random sweeps can perform better than grid sweeps.
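
As an illustration of the difference (again using scikit-learn as an analogy, not the Studio module), an entire-grid sweep tries every combination, while a random sweep draws a fixed number of settings from the space:

```python
# Analogy only: exhaustive grid search versus a fixed number of random draws.
from scipy.stats import uniform
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV

X, y = load_breast_cancer(return_X_y=True)

# Entire grid: every combination in the grid is trained and scored.
grid = GridSearchCV(LogisticRegression(max_iter=5000),
                    {"C": [0.01, 0.1, 1.0, 10.0]}, cv=5)

# Random sweep: a set number of settings sampled from a distribution; each draw is
# independent of the previous ones (sampling with replacement).
rand = RandomizedSearchCV(LogisticRegression(max_iter=5000),
                          {"C": uniform(0.01, 10.0)}, n_iter=10, cv=5, random_state=0)

for search in (grid, rand):
    search.fit(X, y)
    print(type(search).__name__, search.best_params_, round(search.best_score_, 3))
```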

Controlling the length and complexity of training

Iterating over many combinations of settings can be time-consuming, so the module provides several ways to constrain the process:

  • Limit the number of iterations used to test a model
  • Limit the parameter space
  • Limit both the number of iterations and the parameter space

We recommend that you experiment with the settings to determine the most efficient method of training on a particular dataset and model.

Choosing an evaluation metric

A report containing the accuracy for each model is presented at the end so that you can review the metric results. A uniform set of metrics is used for all classification models, and a different set of metrics is used for regression models. However, during training, you must choose a single metric to use in ranking the models that are generated during the tuning process. You might find that the best metric varies, depending on your business problem and the cost of false positives and false negatives.

For more information, see How to evaluate model performance in Azure Machine Learning

Metrics used for classification

  • Accuracy The proportion of true results to total cases.

  • Precision The proportion of true results to positive results.

  • Recall The proportion of actual positive cases that are correctly identified.

  • F-score A measure that balances precision and recall.

  • AUC A value that represents the area under the curve when false positives are plotted on the x-axis and true positives are plotted on the y-axis.

  • Average Log Loss The difference between two probability distributions: the true one, and the one in the model.

  • Train Log Loss The improvement provided by the model over a random prediction.

Metrics used for regression

  • Mean absolute error Averages all the error in the model, where error means the distance of the predicted value from the true value. Often abbreviated as MAE.

  • Root of mean squared error Measures the average of the squares of the errors, and then takes the root of that value. Often abbreviated as RMSE

  • Relative absolute error Represents the error as a percentage of the true value.

  • Relative squared error Normalizes the total squared error by dividing it by the total squared error of the predicted values.

  • Coefficient of determination A single number that indicates how well data fits a model. A value of 1 means that the model exactly matches the data; a value of 0 means that the data is random or otherwise cannot be fit to the model. Often referred to as r2, R2, or r-squared.
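
For reference, most of these metrics are straightforward to compute with scikit-learn; the toy values below are for illustration only:

```python
# Illustration only: computing several of the metrics defined above with scikit-learn.
from sklearn.metrics import (accuracy_score, f1_score, log_loss, mean_absolute_error,
                             mean_squared_error, precision_score, r2_score,
                             recall_score, roc_auc_score)

# Classification (toy values)
y_true, y_pred = [0, 1, 1, 0], [0, 1, 0, 0]
y_prob = [0.2, 0.9, 0.4, 0.1]                      # predicted probability of the positive class
print(accuracy_score(y_true, y_pred), precision_score(y_true, y_pred),
      recall_score(y_true, y_pred), f1_score(y_true, y_pred),
      roc_auc_score(y_true, y_prob), log_loss(y_true, y_prob))

# Regression (toy values)
y_true_r, y_pred_r = [3.0, 5.0, 2.5], [2.8, 5.3, 2.1]
print(mean_absolute_error(y_true_r, y_pred_r),
      mean_squared_error(y_true_r, y_pred_r) ** 0.5,    # RMSE
      r2_score(y_true_r, y_pred_r))
```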

Modules that do not support a parameter sweep

Almost all learners in Azure Machine Learning support cross-validation with an integrated parameter sweep, which lets you choose the parameters to experiment with. If the learner doesn't support setting a range of values, you can still use it in cross-validation. In this case, some range of allowed values is selected for the sweep.

The following learners do not support setting a range of values to use in a parameter sweep:

Expected inputs

Name | Type | Description
Untrained model | ILearner interface | Untrained model for parameter sweep
Training dataset | Data Table | Input dataset for training
Validation dataset | Data Table | Input dataset for validation (for Train/Test validation mode). This input is optional.

Module parameters

Name | Range | Type | Default | Description
Specify parameter sweeping mode | List | Sweep Methods | Random sweep | Sweep the entire grid on the parameter space, or sweep using a limited number of sample runs
Maximum number of runs on random sweep | [1;10000] | Integer | 5 | Maximum number of runs to execute using a random sweep
Random seed | any | Integer | 0 | Provide a value to seed the random number generator
Label column | any | ColumnSelection | | Label column
Metric for measuring performance for classification | List | Binary Classification Metric Type | Accuracy | Select the metric used for evaluating classification models
Metric for measuring performance for regression | List | Regression Metric Type | Mean absolute error | Select the metric used for evaluating regression models

Outputs

Name | Type | Description
Sweep results | Data Table | Results metric for parameter sweep runs
Trained best model | ILearner interface | Model with best performance on the training dataset

See also

A-Z Module List
Train
Cross-Validate Model
