Interval estimates are a common way to express experts' uncertain knowledge. To model them and to aggregate multiple judgments, probability theory and possibility theory are both applicable. Previous studies showed that the aggregated distributions obtained by the two approaches perform similarly on average; however, few works have investigated how to choose between them in specific cases. The distribution of expert interval estimates along the latent Dunning-Kruger curve, i.e., the correlation between their accuracy (estimation error) and their confidence/precision (interval width), was determined. The judgments were modeled with both the probabilistic and the possibilistic approach, and the estimation errors of the resulting aggregated distributions were compared using an advantage score. The dependence of this score on the confidence-accuracy relationship of the expert judgments was investigated on estimates of multiple variables. Our basic intuition is that narrower interval estimates are more accurate than wider ones; in this case, the probabilistic approach to modeling expert knowledge is appropriate. However, as the Dunning-Kruger effect highlights, sometimes the reverse is true; then the possibilistic approach tends to be preferable, as it does not amplify the effect of narrow estimates. The results show that the choice between the two concepts can be based on the correlation trend between the accuracy and the precision of the judgments, which can be deduced, e.g., from the composition of the expert group.
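As a minimal, illustrative sketch (not the paper's implementation), the two modeling routes can be contrasted as follows: each expert interval is turned into a uniform probability density or into a crisp possibility distribution, the individual distributions are pooled by simple averaging, and the point estimates of the two aggregates are compared against an assumed true value. The interval bounds, the pooling rules, and the sign convention of the illustrative advantage score below are all assumptions.

```python
import numpy as np

def probabilistic_aggregate(intervals, grid):
    """Linear opinion pool: each interval becomes a uniform PDF; the pool is their average."""
    pdfs = [np.where((grid >= lo) & (grid <= hi), 1.0 / (hi - lo), 0.0)
            for lo, hi in intervals]
    return np.mean(pdfs, axis=0)

def possibilistic_aggregate(intervals, grid):
    """Each interval becomes a crisp membership function (1 inside, 0 outside);
    the aggregate is the normalized mean of the membership functions."""
    memberships = [np.where((grid >= lo) & (grid <= hi), 1.0, 0.0) for lo, hi in intervals]
    agg = np.mean(memberships, axis=0)
    return agg / agg.max()

def centre_of_gravity(weights, grid):
    """Point estimate (centre of gravity) of an aggregated distribution."""
    return np.sum(grid * weights) / np.sum(weights)

# Three expert intervals for the same quantity; the true value is assumed to be 10.
intervals = [(9.0, 11.0), (8.0, 14.0), (4.0, 16.0)]
grid = np.linspace(0.0, 20.0, 2001)
true_value = 10.0

prob_est = centre_of_gravity(probabilistic_aggregate(intervals, grid), grid)
poss_est = centre_of_gravity(possibilistic_aggregate(intervals, grid), grid)

# Illustrative advantage score: positive values favour the probabilistic aggregate.
advantage = abs(poss_est - true_value) - abs(prob_est - true_value)
print(prob_est, poss_est, advantage)
```

In this toy setting the narrow interval (9, 11) dominates the probabilistic pool through its high density, while the possibilistic aggregate treats the intervals more evenly, mirroring the intuition discussed above.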
This paper introduces a methodology for handling different types of uncertainty during robust optimization. In real-world industrial optimization problems, many types of uncertainty emerge, e.g., inaccurate setting of the control variables, and the parameters of the system model are usually not known precisely. For these reasons, the global optimum computed for the nominal parameter values may not give the best performance in practice. This paper presents a widely applicable sampling-based methodology by improving the Particle Filter Optimization (PFO) algorithm. Case studies on benchmark functions, as well as on a practical example of a styrene reactor, are presented to verify the applicability of the proposed method in finding a robust optimum and to show how users can tune the algorithm according to their requirements. The results verify that the proposed method is able to find robust optima efficiently under both parameter and decision-variable uncertainty.
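The sketch below illustrates the general idea of sampling-based robust optimization with a particle scheme; it is not the improved PFO algorithm of the paper. The toy objective (a stand-in for the reactor model), the noise magnitudes, and the resampling/perturbation rules are assumptions, chosen only to show how a Monte Carlo averaged robust cost can replace the nominal cost inside a particle-based search.

```python
import numpy as np

rng = np.random.default_rng(0)

def objective(x, theta):
    """Illustrative model: a shifted quadratic whose optimum depends on the
    uncertain model parameter theta."""
    return (x - theta) ** 2 + 0.1 * np.abs(x)

def robust_cost(x, n_mc=200, x_noise=0.05, theta_mean=2.0, theta_std=0.3):
    """Monte Carlo estimate of the expected cost under decision-variable noise
    (inaccurate setting of x) and model-parameter uncertainty in theta."""
    x_samples = x + rng.normal(0.0, x_noise, n_mc)            # implementation inaccuracy
    theta_samples = rng.normal(theta_mean, theta_std, n_mc)   # parameter uncertainty
    return np.mean(objective(x_samples, theta_samples))

def particle_search(n_particles=30, n_iter=50, lo=-5.0, hi=5.0):
    """Generic particle-based random search: resample particles in proportion to
    their (negated) robust cost, then perturb them with a shrinking step size."""
    particles = rng.uniform(lo, hi, n_particles)
    for it in range(n_iter):
        costs = np.array([robust_cost(x) for x in particles])
        weights = np.exp(-(costs - costs.min()))
        weights /= weights.sum()
        idx = rng.choice(n_particles, size=n_particles, p=weights)
        step = 0.5 * (0.95 ** it)
        particles = particles[idx] + rng.normal(0.0, step, n_particles)
    costs = np.array([robust_cost(x) for x in particles])
    return particles[np.argmin(costs)]

print(particle_search())
```

Tuning in this generic scheme would amount to choosing the number of particles and Monte Carlo samples, the cost-to-weight mapping, and the step-size schedule, which is the kind of user-side trade-off between accuracy and computational effort the abstract refers to.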
Machine learning classifier-based metrics have promising potential to provide information about the performance of separation systems. An industrial separation system can be regarded as performing a classification task. Starting from this analogy, existing metrics used to qualify classifiers in the machine learning field (e.g., entropy and information gain) can be used to evaluate the effectiveness of such systems. Our research investigates this idea in general and also introduces a case study of an industrial manual waste sorting system. The contributions of the paper are the following: (1) an overview of possible applications of classifier-based metrics for process development; (2) entropy and information gain are shown to be applicable to evaluating the efficiency of separation systems as well as of their individual operation units; (3) Monte Carlo simulation is applied to produce robust results for a separation system with stochastic phenomena; (4) the ROC curve is shown to be applicable to determining the optimal cut point in a separation system. These ideas are verified by simulation experiments conducted on the stochastic model of a waste sorting system.
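A minimal sketch of the classifier analogy, assuming a two-class (target vs. non-target) separation characterized by a true-positive and a false-positive rate: the feed composition, the recovery rates, and the score distributions used for the ROC part are illustrative assumptions, and Youden's J is only one common cut-point criterion, not necessarily the one used in the paper.

```python
import numpy as np

def entropy(p):
    """Shannon entropy (bits) of a two-class composition, p = fraction of target material."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * np.log2(p) + (1.0 - p) * np.log2(1.0 - p))

def information_gain(p_feed, tpr, fpr):
    """Information gain of a binary separation unit viewed as a classifier: the
    reduction of composition entropy from the feed to the two output streams."""
    accept = p_feed * tpr + (1.0 - p_feed) * fpr        # mass fraction routed to the accept stream
    p_accept = p_feed * tpr / accept                    # purity of the accept stream
    p_reject = p_feed * (1.0 - tpr) / (1.0 - accept)    # target content left in the reject stream
    h_out = accept * entropy(p_accept) + (1.0 - accept) * entropy(p_reject)
    return entropy(p_feed) - h_out

# A feed with 30 % target material and a sorter recovering 85 % of the target
# while wrongly accepting 10 % of the rest (illustrative numbers).
print(information_gain(0.30, tpr=0.85, fpr=0.10))

# ROC-style cut-point selection: sweep a sensor threshold and pick the one that
# maximizes Youden's J = TPR - FPR.
rng = np.random.default_rng(1)
target_scores = rng.beta(5, 2, 1000)   # hypothetical sensor scores of target particles
other_scores = rng.beta(2, 5, 1000)    # hypothetical sensor scores of non-target particles
thresholds = np.linspace(0.0, 1.0, 101)
tprs = np.array([(target_scores >= t).mean() for t in thresholds])
fprs = np.array([(other_scores >= t).mean() for t in thresholds])
print(thresholds[np.argmax(tprs - fprs)])
```

A higher information gain indicates output streams that are purer than the feed, so the same quantity can be computed per unit or for the whole system, which is the sense in which the abstract uses it for efficiency evaluation.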