This paper investigates model validation under a variety of data scenarios and clarifies which validation metrics are appropriate for each. In the presence of multiple uncertainty sources, validation metrics that compare the distributions of model prediction and observation are considered. Both ensemble and point-by-point validation approaches are discussed, and it is shown how applying the model reliability metric point by point enables the separation of contributions from aleatory and epistemic uncertainty sources. After individual validation assessments are made at different input conditions, it may be desirable to obtain an overall measure of model validity across the entire domain. This paper proposes an integration approach that weights the validation results according to the relevance of each validation test condition to the intended use of the model in prediction. Because uncertainty propagation for probabilistic validation is often computationally unaffordable for complex models, surrogate models are commonly used; this paper proposes an approach to account for the additional uncertainty that the uncertain fit of the surrogate model introduces into the validation result. The proposed methods are demonstrated with a microelectromechanical system (MEMS) example.

Highlights:
• Appropriate validation comparisons for different data scenarios are identified.
• Point-by-point comparisons are used to separate aleatory and epistemic uncertainty.
• An aggregation approach to obtain an overall metric across inputs is developed.
• A method to include surrogate model uncertainty in validation results is proposed.
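As a rough illustration of two of the ideas summarized above, the sketch below computes the model reliability metric point by point and then aggregates the results with relevance weights. It uses the common closed form of the metric under independent Gaussian prediction and observation uncertainty; the tolerance eps, the toy data, the variance-inflation treatment of surrogate uncertainty, and the Gaussian-kernel relevance weights are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
from scipy.stats import norm

def reliability_metric(mu_m, sig_m, mu_d, sig_d, eps):
    """Model reliability metric r = P(|y_model - y_obs| < eps),
    assuming independent Gaussian prediction and observation
    uncertainty (the paper's general form compares full
    distributions; this is the Gaussian closed form)."""
    mu = mu_m - mu_d
    sig = np.sqrt(sig_m**2 + sig_d**2)
    return norm.cdf((eps - mu) / sig) - norm.cdf((-eps - mu) / sig)

# --- point-by-point validation at three input conditions (toy data) ---
x_val = np.array([0.2, 0.5, 0.8])      # validation input sites
mu_m  = np.array([1.00, 1.40, 1.90])   # model prediction means
sig_m = np.array([0.05, 0.06, 0.08])   # model prediction std devs
sig_s = np.array([0.02, 0.02, 0.03])   # surrogate-fit std devs (hypothetical)
mu_d  = np.array([1.02, 1.35, 2.05])   # observation means
sig_d = np.array([0.04, 0.04, 0.05])   # observation std devs
eps   = 0.1                            # accuracy tolerance

# Fold surrogate-fit uncertainty into the prediction spread by simple
# variance inflation -- a sketch, not necessarily the paper's treatment.
sig_pred = np.sqrt(sig_m**2 + sig_s**2)
r = reliability_metric(mu_m, sig_pred, mu_d, sig_d, eps)

# Aggregate across the input domain with relevance weights. Here the
# weights come from a hypothetical Gaussian kernel centered on an
# intended-use condition x_use; the paper derives its weights from the
# relevance of each test condition to the prediction domain.
x_use = 0.6
w = np.exp(-0.5 * ((x_val - x_use) / 0.2)**2)
R_overall = np.sum(w * r) / np.sum(w)

print("point-wise reliability:", np.round(r, 3))
print("weighted overall reliability:", round(R_overall, 3))
```

Validation sites near the intended-use condition dominate the weighted sum, so a model that is accurate where it will actually be exercised scores higher overall than one that is accurate only at distant test conditions.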