Optimal feedback synthesis for nonlinear dynamics, a fundamental problem in optimal control, is enabled by solving the fully nonlinear Hamilton-Jacobi-Bellman (HJB) type PDEs arising in dynamic programming. While our theoretical understanding of dynamic programming and HJB PDEs has seen remarkable development over the last decades, the numerical approximation of HJB-based feedback laws has remained largely an open problem due to the curse of dimensionality. More precisely, the associated HJB PDE must be solved over the state space of the dynamics, which is extremely high-dimensional in applications such as distributed parameter systems or agent-based models. In this talk we will review recent approaches to the effective numerical approximation of very high-dimensional HJB PDEs via data-driven schemes in supervised and semi-supervised learning environments. We will discuss the use of representation formulas as synthetic data generators, and different architectures for the value function, such as polynomial approximation, tensor decompositions, and deep neural networks.
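The supervised-learning pipeline mentioned above can be illustrated on a toy problem. The sketch below is purely illustrative and not part of the talk: it assumes a hypothetical 1-D linear-quadratic regulator, where the Riccati equation plays the role of a representation formula generating exact synthetic value-function data, and a polynomial surrogate is then fit by least squares (the simplest of the architectures listed).

```python
import numpy as np

# Hypothetical 1-D LQR problem (illustrative parameters, not from the talk):
# dynamics dx/dt = a*x + b*u, running cost q*x^2 + r*u^2.
a, b, q, r = 1.0, 1.0, 1.0, 1.0

# The scalar algebraic Riccati equation 2*a*p - (b**2/r)*p**2 + q = 0
# yields the exact value function V(x) = p*x^2, serving here as the
# "representation formula" used as a synthetic data generator.
p = r * (a + np.sqrt(a**2 + b**2 * q / r)) / b**2

# Generate synthetic training data: sampled states and exact values.
rng = np.random.default_rng(0)
x = rng.uniform(-2.0, 2.0, size=200)
v = p * x**2

# Supervised learning step: fit a degree-2 polynomial surrogate of V
# by least squares over the sampled states.
coeffs = np.polyfit(x, v, deg=2)  # coefficients [c2, c1, c0]

# The quadratic coefficient recovers p; the feedback law then follows
# by differentiating the surrogate: u(x) = -(b/r) * p * x.
print(coeffs[0], p)
```

In high dimensions the same structure persists: a solver or representation formula produces (state, value) pairs offline, and a scalable architecture (tensor decomposition or neural network instead of a global polynomial) is trained on them, sidestepping a grid over the full state space.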