Wednesday, 03 November 2021, 14:00 at Virtual (Zoom details under abstract) – To address the poor scalability of training algorithms for orthogonal recurrent neural networks, we propose to use a coordinate descent method on the orthogonal group. This algorithm has a per-iteration cost that scales linearly with the number of recurrent states, in contrast with the cubic dependency of typical algorithms …
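
A minimal NumPy sketch of the idea behind such a coordinate step, assuming the coordinates are Givens (plane) rotations acting on pairs of rows of the recurrent weight matrix — the standard coordinate parametrisation of the orthogonal group. The function and variable names are illustrative and not taken from the talk; the point is only that one update touches two rows of W, costing O(n) instead of the O(n^3) of a full retraction.

```python
import numpy as np

def givens_coordinate_step(W, grad_loss, i, j, lr=0.1):
    """One illustrative coordinate-descent step on the orthogonal group.

    Left-multiplies W by a Givens rotation G(i, j, theta), which only mixes
    rows i and j of W -- an O(n) update that preserves orthogonality exactly,
    compared with the O(n^3) cost of a full matrix exponential / QR retraction.

    grad_loss is the Euclidean gradient dL/dW at the current W.
    """
    # Derivative of the loss w.r.t. the rotation angle theta at theta = 0,
    # for the convention  row_i <- cos(t)*row_i - sin(t)*row_j,
    #                     row_j <- sin(t)*row_i + cos(t)*row_j.
    dtheta = np.dot(grad_loss[j], W[i]) - np.dot(grad_loss[i], W[j])

    theta = -lr * dtheta              # simple gradient step on the angle
    c, s = np.cos(theta), np.sin(theta)

    # Rotate rows i and j in place: O(n) work, W stays exactly orthogonal.
    Wi, Wj = W[i].copy(), W[j].copy()
    W[i] = c * Wi - s * Wj
    W[j] = s * Wi + c * Wj
    return W

# Quick check on a random orthogonal matrix (hypothetical usage):
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 8
    W, _ = np.linalg.qr(rng.standard_normal((n, n)))
    G = rng.standard_normal((n, n))   # stand-in for a loss gradient
    W = givens_coordinate_step(W, G, i=2, j=5)
    print(np.allclose(W @ W.T, np.eye(n)))   # orthogonality is preserved
```

Cycling such steps over pairs (i, j) gives one full sweep through the coordinates while never leaving the orthogonal group, which is where the linear per-iteration cost claimed in the abstract comes from.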