Deep neural networks have considerable applications in services, industry, and science, but they remain black boxes whose properties are not well understood. Robustness and interpretability have become major issues for their deployment. Understanding deep networks involves many branches of mathematics, including statistics, harmonic analysis, geometry, and high-dimensional optimization, together with algorithmic experiments on real data. Working on very different types of data and applications gives access to generic mathematical and algorithmic properties of these networks. Simplifying network architectures while preserving performance is an important direction of this research.

