Bigmath Advanced Course 4: Large-scale and distributed optimization
The goal of the training course, realized within the H2020 Marie Skłodowska-Curie project Big Data Challenges for Mathematics, Grant Agreement No 812912, is to provide an overview of tools and algorithms in the area of large-scale and distributed optimization. Illustrative examples that help in understanding how optimization-based modelling can be applied to machine learning problems will be part of the course. The topics covered by the course are the following: Machine learning and optimization; Optimality conditions for unconstrained problems; Convexity; Line search methods; Gradient methods; Second order methods; Optimality conditions for constrained problems; Augmented Lagrangian methods; Parallel methods: duality theory and dual subgradient method; primal decomposition; dual decomposition; augmented Lagrangian; alternating direction method of multipliers. Distributed methods: distributed gradient descent; stochastic distributed methods. Part of the course will be devoted to computer labs for a software/implementation tutorial. The course is carried out over 4 days, with 6 hours of lectures each day.
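As a flavour of the kind of topic covered, here is a minimal sketch (not course material, all names and parameters are illustrative assumptions) of one of the listed methods, fixed-step gradient descent, applied to a least-squares problem min_x ||Ax - b||²:

```python
import numpy as np

def gradient_descent(A, b, step=0.01, iters=500):
    """Minimize f(x) = 0.5 * ||Ax - b||^2 by fixed-step gradient descent.

    The step size and iteration count are illustrative choices, not
    values prescribed by the course.
    """
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)  # gradient of 0.5 * ||Ax - b||^2
        x = x - step * grad
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((20, 3))
    x_true = np.array([1.0, -2.0, 0.5])
    b = A @ x_true
    x = gradient_descent(A, b)
    print(x)  # should be close to x_true for a small enough step size
```

For a consistent, well-conditioned system like this one, the iterates converge to the least-squares solution provided the step size is below 2/L, where L is the largest eigenvalue of AᵀA; line search methods, also listed among the topics, automate that choice.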
The morning lectures will be livestreamed while the afternoon sessions will be devoted to practical tutorials.
Venue: University of Novi Sad, Main Rectorate Building, room 1/9, Dr. Zorana Djindjica Street No. 1, 21000 Novi Sad, Serbia
January 27, 9.30-13, 14-17
January 28, 9.30-13, 14-17
January 30, 9.30-13, 14-17
January 31, 9.30-13, 14-17
The course is open to everybody; if you plan to attend, please contact Natasa Krejic at email@example.com