4. Train and assess model

Train and assess a set of models to help find the best model for production.

4.1 Train models

Model developer: Train and fine-tune several models. Document a set of promising models. Note the location of the models and modeling assets:
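A minimal sketch of this step, assuming open-source modeling in Python with scikit-learn (the candidate models, storage paths, and registry structure are illustrative, not part of any specific platform):

```python
# Illustrative sketch: train several candidate models and record where
# each model and its assets are stored. Names and paths are hypothetical.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(random_state=0),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
}

registry = []  # one entry per promising model
for name, model in candidates.items():
    model.fit(X_train, y_train)
    registry.append({
        "name": name,
        "holdout_accuracy": model.score(X_test, y_test),
        "location": f"models/{name}.pkl",  # illustrative storage path
    })

for entry in registry:
    print(entry["name"], round(entry["holdout_accuracy"], 3))
```

A registry like this, however it is implemented, gives reviewers one place to find each promising model and its assets.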










4.2 Document model evaluation metrics

Model developer: Document all fit statistics used for model evaluation. Note resulting values for promising models.
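The fit statistics can be collected in a single structure per model so that values are documented consistently. A hedged sketch, assuming a binary classifier and scikit-learn metrics (the specific metrics are illustrative; use those defined in your testing strategy):

```python
# Illustrative sketch: compute a consistent set of fit statistics for a
# candidate model. Metric choices here are examples only.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
pred = model.predict(X_test)
prob = model.predict_proba(X_test)[:, 1]

fit_statistics = {
    "accuracy": accuracy_score(y_test, pred),
    "f1": f1_score(y_test, pred),
    "auc": roc_auc_score(y_test, prob),
}
for metric, value in fit_statistics.items():
    print(f"{metric}: {value:.3f}")
```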










4.3 Assess model bias

4.3.1 Does the model need a bias evaluation?

Model developer: Decide whether the model requires a bias evaluation, based on the implications of the AI system's use, among other factors. For example, a bias evaluation is needed if the AI system could perform differently for different subgroups of people or under varying conditions.

  • Yes. If selected, continue to the next step.
  • No. If selected, move to step 4.3.4.

If applicable, add any additional details:









4.3.2 Compare and document subgroup model performance

Model developer: Calculate and compare model performance values and additional fairness metrics for each protected class or subgroup. Subgroups are often protected classes: groups of people who are legally protected from discrimination based on a shared characteristic, such as a disability, sexual orientation, or race. Depending on the model use case, however, subgroups can also be other important groups within the data that are not legally defined as protected classes. To calculate model performance values, use the metrics that your organization defined in the testing strategy outlined in step 2.1.6. Fairness metrics might include equal opportunity, demographic parity, predictive parity, equal accuracy, or equalized odds.

Document or save results.
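The subgroup comparison above can be sketched in plain Python. This example computes per-subgroup accuracy plus two of the fairness metrics named in this step: demographic parity difference (the gap in positive prediction rates) and equal opportunity difference (the gap in true positive rates). The data and subgroup labels are synthetic:

```python
# Illustrative sketch: per-subgroup performance and two fairness gaps.
import numpy as np

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)
# Noisy predictions: correct about 80% of the time.
y_pred = np.where(rng.random(1000) < 0.8, y_true, 1 - y_true)
group = rng.choice(["A", "B"], size=1000)  # hypothetical subgroup labels

rates = {}
for g in ("A", "B"):
    mask = group == g
    rates[g] = {
        "accuracy": (y_true[mask] == y_pred[mask]).mean(),
        "positive_rate": y_pred[mask].mean(),             # P(pred = 1 | group)
        "tpr": y_pred[mask][y_true[mask] == 1].mean(),    # P(pred = 1 | y = 1, group)
    }

demographic_parity_diff = abs(rates["A"]["positive_rate"] - rates["B"]["positive_rate"])
equal_opportunity_diff = abs(rates["A"]["tpr"] - rates["B"]["tpr"])
print(rates)
print("demographic parity diff:", round(demographic_parity_diff, 3))
print("equal opportunity diff:", round(equal_opportunity_diff, 3))
```

Gaps close to zero suggest similar treatment across subgroups; the acceptable threshold is a policy decision for your organization, not a property of the metric.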









4.3.3 Compare and document average model predictions per subgroup

Model developer: Calculate and compare average model predictions for each protected class or subgroup.

Document or save results.
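A minimal sketch of this comparison, assuming scored data with a subgroup column and a predicted probability column (both column names are illustrative):

```python
# Illustrative sketch: average model predictions per subgroup.
import pandas as pd

scored = pd.DataFrame({
    "subgroup": ["A", "A", "B", "B", "B", "A"],
    "predicted_prob": [0.70, 0.60, 0.40, 0.50, 0.30, 0.80],
})

avg_by_subgroup = scored.groupby("subgroup")["predicted_prob"].mean()
print(avg_by_subgroup)

# A large gap between subgroup averages may warrant further review.
gap = avg_by_subgroup.max() - avg_by_subgroup.min()
print("gap:", round(gap, 3))
```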









4.3.4 Bias metrics differences approval

Model owner: Is the documentation provided by the model developer satisfactory? If bias evaluation is required, are the differences in bias metric values satisfactory?

  • Yes
  • No

If no, which areas need additional review?

  • Retrain models with new data. If selected, return to step 3.
  • Fine-tune the models with existing data. If selected, return to step 4.
  • Set a new champion model. If selected, return to step 5.
  • Update the project documentation. If selected, return to step 2.
  • End the workflow. If selected, deprecate the project and update step 2.1.1.
  • Move forward with the model. If selected, provide additional details or a justification, and then continue to the next step.








4.4 Assess model explainability

4.4.1 Is model explainability important?

Model developer: Is model explainability or interpretability important for this use case? An explainable model allows human users to understand and trust its output. Explainability is important in most use cases.

Is explainability important for this use case?

  • Yes. If selected, continue to the next step.
  • No. If selected, move to step 4.4.3.

If applicable, add any additional details:









4.4.2 Document model explainability

Model developer: Document the model explainability method and results. Ensure that explainability information is made available to the model end user. Select the most appropriate explainability methods for the use case and model type. SAS Viya includes explainability tools such as Partial Dependence (PD) plots, Individual Conditional Expectation (ICE) plots, Local Interpretable Model-Agnostic Explanations (LIME), and Kernel Shapley values (Kernel SHAP). These techniques are model-agnostic, meaning they can be applied to any model that is generated by a supervised learning node.

Document or save results.
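To illustrate the idea behind one of these model-agnostic techniques, the sketch below computes a partial dependence curve by hand: hold one feature at a fixed value across all rows, score the model, and average the predictions. This is the concept behind PD plots, shown with a synthetic model and data rather than any platform-specific tool:

```python
# Illustrative sketch of model-agnostic partial dependence (PD):
# sweep one feature over a grid, holding the data otherwise fixed,
# and average the model's predicted probabilities at each grid point.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=300, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

def partial_dependence_curve(model, X, feature, grid):
    """Average predicted probability as `feature` sweeps over `grid`."""
    pd_values = []
    for value in grid:
        X_mod = X.copy()
        X_mod[:, feature] = value  # force the feature to a single value
        pd_values.append(model.predict_proba(X_mod)[:, 1].mean())
    return np.array(pd_values)

grid = np.linspace(X[:, 0].min(), X[:, 0].max(), 10)
pd_curve = partial_dependence_curve(model, X, feature=0, grid=grid)
print(np.round(pd_curve, 3))
```

Plotting `pd_curve` against `grid` shows how the model's average prediction responds to that one feature, which is the kind of evidence this step asks you to document.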









4.4.3 Model explanations approval

Model owner: Is the documentation provided by the model developer satisfactory? If explainability is required, is the models' level of explainability acceptable?

  • Yes
  • No

If no, which areas need additional review?

  • Retrain models with new data. If selected, return to step 3.
  • Fine-tune the models with existing data. If selected, return to step 4.
  • Set a new champion model. If selected, return to step 5.
  • Update the project documentation. If selected, return to step 2.
  • End the workflow. If selected, deprecate the project and update step 2.1.1.
  • Move forward with the model. If selected, provide additional details or a justification, and then continue to the next step.