> Any CAEBM project starts with setting up the specific list of parameters for the problem at hand. First the desired Output Parameters (OP) and then the Input Parameters (IP) have to be identified, and each of them is specified by its (working) name, its dimension, and its range or set of values.
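Such a parameter specification can be captured in a simple data structure. The following Python sketch is purely illustrative; all parameter names, units, and ranges are invented for this example and are not taken from a real CAEBM project.

```python
# Minimal sketch of a parameter specification (illustrative names and values only).
from dataclasses import dataclass
from typing import Optional, Sequence, Tuple

@dataclass
class Parameter:
    name: str                                            # parameter name
    role: str                                            # "IP" (input) or "OP" (output)
    unit: str                                            # physical dimension / unit
    value_range: Optional[Tuple[float, float]] = None    # numeric range, if numeric
    value_set: Optional[Sequence[str]] = None            # discrete value set, if nonnumeric

# Hypothetical specification: two input parameters, one output parameter.
spec = [
    Parameter("temperature", "IP", "degC", value_range=(20.0, 200.0)),
    Parameter("material",    "IP", "-",    value_set=("steel", "aluminum", "plastic")),
    Parameter("lifetime",    "OP", "h",    value_range=(0.0, 10_000.0)),
]
```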
> Availability of "Big Data" is a helpful but not sufficient basis to setup good and enough examples. With respect to the parameter space to be covered, additional data from available data sources and missing parameter values for "incomplete examples" have to be added. Then, if appropriate, the data for nonnumeric parameters have to be coded into numerical ones (because our computers at this time are specialized for numerics!), and all raw data for convenience are normalized then to show comparable value ranges in every dimension. Computers of course are a big help or even a need to fullfil these tasks, often automatically.
> As "associative models", we deploy a special type of Neural Networks, which's representation capability is unlimited (see eg the " approximation theorem" at Wikipedia), and which show high scalability as well in size (= complexity of the problem at hand) as in accuracy of the resulting models at the same time. Training of the neural networks, including optimization of their size, is done with the aid of appropriate Genetic Algorithms, which ensure, that the actual complexity of the multi-dimensional relationships between all of the parameters is met, and strong generalization requirements can be fullfiled. Again, the deployment of computers makes this modeling process possible.
> As "generalization" and "quality assurance" are most important aspects of the modeling process, we deal very carefully with those aspects. Generalization is addressed by "rotating subsets" of training and test examples, resulting in clear info about the complexity of the underlying problem and the stability of it's modeling. And of course any meaningful (eg 2-3 dim) partial derivativa of the parameters can be used to be compared to practical experiences of domain experts. Additionally, we use a special "reliability indicator" for any single answer drawn from the models at deployment time, to ensure, that the extrapolation capabilities of the models are not over-strained.
> And of course, again: computers are the tools of choice to make these generalization and QA measures possible.
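To make these two QA ideas concrete, here is a hedged sketch with assumed, simplified stand-ins rather than the actual method: "rotating subsets" rendered as k-fold rotation of train/test splits, and the "reliability indicator" rendered as the distance of a query point to its nearest training example, which flags answers that would require strong extrapolation.

```python
# Sketch of two QA measures: rotating train/test subsets and a simple
# distance-based reliability indicator (placeholder data and model only).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import KFold

rng = np.random.default_rng(1)
X = rng.uniform(size=(200, 4))
y = np.sin(X[:, 0] * 3) + X[:, 1] ** 2

# Rotating subsets: every example serves once as a test case; the spread of the
# scores indicates how stable the modeling of the underlying problem is.
scores = []
for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=1).split(X):
    model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=1)
    model.fit(X[train_idx], y[train_idx])
    scores.append(model.score(X[test_idx], y[test_idx]))
print("score per rotation:", np.round(scores, 3))

# Reliability indicator: distance of a query point to its nearest training example.
def reliability(x_query, X_train):
    return np.min(np.linalg.norm(X_train - x_query, axis=1))

inside  = reliability(np.full(4, 0.5), X)   # well inside the covered parameter space
outside = reliability(np.full(4, 3.0), X)   # far outside -> extrapolation warning
print("reliability distances:", round(inside, 3), round(outside, 3))
```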
SMS / WhatsApp: +49 160 843 5298
Mail: rst.tbus@gmail.com