The ML core is the vital center of the BZML, researching methods of deep learning. These techniques make it possible to represent complex information flows and hierarchically structured data through learning and optimization at different levels of abstraction. Only in this way can statistical models
with unprecedented generalization ability be built to solve challenging scientific problems. The efficient use of a priori knowledge (topologies, symmetries, known network structures underlying the data, and other side information) in the learning process, as well as the effects of erroneous or incomplete data, will be investigated both theoretically and practically. The aim is to increase accuracy and reduce uncertainty while requiring less training data. As a further key technique, the interpretability and explainability of complex nonlinear learning methods will be investigated in order to achieve more robust and trustworthy models.
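One simple way such a priori knowledge can enter a learning method is by enforcing a known symmetry of the data directly in the model. The sketch below is purely illustrative and not part of the project description: it assumes a hypothetical prediction task whose labels are known to be invariant under horizontal flips, and symmetrizes an arbitrary model by averaging its predictions over that symmetry group.

```python
import numpy as np

def symmetrized_predict(model, x, group_ops):
    """Average predictions over a known symmetry group, making the model invariant."""
    preds = [model(op(x)) for op in group_ops]
    return float(np.mean(preds))

# Toy model with no built-in symmetry: an asymmetrically weighted pixel sum.
weights = np.array([[1.0, 2.0],
                    [3.0, 4.0]])
model = lambda x: float(np.sum(weights * x))

# A priori knowledge (assumed here): labels are invariant under horizontal flips.
group_ops = [lambda x: x, lambda x: np.fliplr(x)]

x = np.array([[1.0, 0.0],
              [0.0, 0.0]])

# The symmetrized model gives identical outputs for x and its flipped version,
# while the raw model does not.
y = symmetrized_predict(model, x, group_ops)
```

The same averaging idea underlies more sophisticated approaches (equivariant architectures, symmetry-based data augmentation) in which the prior knowledge reduces the amount of training data needed.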
The integration of a priori knowledge and the interpretability of ML are decisive requirements in biomedicine, where modern learning methods face particular challenges. Biomedicine offers a spectrum of interdisciplinary ML challenges that ranges from basic scientific questions to complex gene regulation mechanisms and networks. The project will focus on the integration of microscopic-histological image data and proteogenomic “omics” data in translational cancer research, of radiological and proteomic data in cardiology, and of heterogeneous clinical and highly noisy real-time data from intensive care medicine.
A central question in the Digital Humanities is how to efficiently use complex a priori knowledge to develop powerful interactive methods for dealing with highly structured, heterogeneous data. Typically, patterns and pattern groups are to be recognized and statistically analyzed in characteristics of the historical sources such as layout, images in texts, text-image constellations, word combinations, or word clusters. On the one hand, this allows the exploration of models and the simulation of their consequences; on the other hand, it advances the generation of heuristics that emulate scholarly analysis methods. Both significantly increase the predictive power of the models and, using methods for model interpretation, hypotheses might even be formed automatically. The fundamental ML problems of complex a priori knowledge and of graph structures in the Digital Humanities are to be investigated together with the ML core.
In the application area of communication, ML methods for coordination, compression, and caching in ultra-dense meshed networks are to be explored. In this way, reliability as well as spectral and energy efficiency shall be increased and latency reduced. Special challenges arise because in many cases only limited, decentralized, and online data is available. The development and application of novel methods will therefore be necessary to enable distributed learning from minimal data sets during the runtime of the system. The practical boundary conditions of communication applications and the statistical processes to be considered require novel algorithms to overcome these problems.
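Distributed learning from small decentralized data sets can be illustrated by a federated-averaging-style scheme: each node performs a few local gradient steps on its own small data set, and only the model parameters are averaged across nodes, never the raw data. The example below is a minimal sketch under assumed conditions (a noiseless linear regression task, three nodes, hand-chosen step size and round count), not the project's actual algorithm.

```python
import numpy as np

def local_update(w, X, y, lr=0.1, steps=5):
    """A few gradient steps of linear least squares on one node's small local data set."""
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def federated_round(w_global, node_data):
    """One communication round: each node trains locally, then models are averaged."""
    local_models = [local_update(w_global.copy(), X, y) for X, y in node_data]
    return np.mean(local_models, axis=0)

rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])

# Three nodes, each holding only a handful of samples (decentralized, minimal data).
node_data = []
for _ in range(3):
    X = rng.normal(size=(8, 2))
    node_data.append((X, X @ w_true))

w = np.zeros(2)
for _ in range(50):
    w = federated_round(w, node_data)
```

In a real ultra-dense network the scheme would additionally have to cope with noisy measurements, stragglers, and online data arriving during system runtime, which is precisely where novel algorithms are needed.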