Description
Within deep neural networks, simplifying and pruning highly over-parameterized models has been shown to improve accuracy and to yield sparse networks that are also more computationally efficient and sometimes easier to interpret. This is important for next-generation machine learning algorithms. In this project we aim to explore Bayesian approaches to sparsification/pruning of neural networks. The focus will be on prior specifications, statistical properties and efficient algorithms. Exploring the pruning approach in other machine/statistical learning settings will also be of interest. While sparse models are usually obtained by going from large to small, procedures going from small to large will also be considered.
Specific project requirements
- Master’s degree in statistics, mathematics, computer science or a related quantitative subject, with proven competence in statistics.
- Documented experience in scientific programming is an advantage.
Supervisors
- Professor Geir Olve Storvik, geirs@math.uio.no (contact person for inquiries about the position)