Universal approximation theorems show that a multitude of different neural architectures can represent any function. Thus, the intricate architecture of biological neural networks probably determines not what they can learn, but rather how they encode information in order to provide good inductive biases that enable robust and efficient learning. Focusing on small animals, such as the Drosophila larva, whose neural wiring has been mapped at full resolution and whose neurons can be individually controlled in freely behaving animals, will allow us to link the structures of neural microcircuits to their functions. This will help uncover how biological neural networks differ from artificial neural networks and may provide inspiration for more efficient deep learning architectures.
