My research aims to develop optimization methods for artificial intelligence that leverage existing methodology and advances from scientific computing, along two axes. On the one hand, we motivate the use of standard algorithmic frameworks from scientific computing in modern learning tasks by proposing practical schemes with complexity guarantees. Our research will analyze the complexity of classical second-order methods used in scientific computing, so as to design frameworks with theoretical grounding and practical appeal for artificial intelligence. On the other hand, we develop derivative-free algorithms for automated parameter tuning of complex data science models; a small sketch of this setting follows below. Our setting is that of expensive, black-box systems in which a number of parameters require calibration.
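To make the second axis concrete, the sketch below shows a minimal derivative-free direct (coordinate) search of the kind such parameter-tuning methods build on: it probes the objective along coordinate directions using only function values, which is what makes it applicable to expensive black-box systems. The function and parameter names (`direct_search`, `validation_error`) and the toy objective are illustrative assumptions, not part of the original text or of any specific method proposed in this research.

```python
import numpy as np

def direct_search(objective, x0, step=1.0, tol=1e-6, max_evals=200):
    """Basic coordinate (compass) search: probe +/- step along each
    coordinate, move to an improving point, otherwise shrink the step.
    Uses only function values, so it applies to black-box objectives."""
    x = np.asarray(x0, dtype=float)
    fx = objective(x)
    evals = 1
    while step > tol and evals < max_evals:
        improved = False
        for i in range(x.size):
            for sign in (+1.0, -1.0):
                trial = x.copy()
                trial[i] += sign * step
                f_trial = objective(trial)
                evals += 1
                if f_trial < fx:      # accept any improving trial point
                    x, fx = trial, f_trial
                    improved = True
        if not improved:
            step *= 0.5               # no improvement: refine the step size
    return x, fx

if __name__ == "__main__":
    # Hypothetical tuning problem: two parameters of a black-box model,
    # stood in for by a cheap smooth test function (purely illustrative).
    def validation_error(params):
        return (params[0] - 0.3) ** 2 + 2.0 * (params[1] + 1.2) ** 2

    best_params, best_err = direct_search(validation_error, x0=[0.0, 0.0])
    print("best parameters:", best_params, "error:", best_err)
```

In a realistic calibration task, `validation_error` would wrap an expensive model evaluation, and the point of the research direction described above is to design and analyze such derivative-free schemes with complexity guarantees under that evaluation budget.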

