Deep autoencoder neural networks can generate highly accurate, low-order
representations of turbulence. We design a new family of autoencoders that
combine a 'dense-block' encoder-decoder structure (Page et al., J. Fluid Mech.
991, 2024), an 'implicit rank minimization' series of linear layers acting on
the embeddings (Zeng et al., Mach. Learn. Sci. Tech. 5, 2024), and a full
discrete+continuous symmetry reduction. These models are applied to
two-dimensional turbulence in Kolmogorov flow for a range of Reynolds numbers
$25 \le Re \le 400$, and used to estimate the dimension of the chaotic
attractor, $d_A(Re)$. We find that the dimension scales like $d_A \sim
Re^{1/3}$ -- much weaker than known bounds on the global attractor, which grow
like $Re^{4/3}$. In addition, two-dimensional maps of the latent space in our
models reveal a rich structure not seen in previous studies, including multiple
classes of high-dissipation events at lower $Re$ which guide bursting
trajectories. We visualize the embeddings of large numbers of "turbulent"
unstable periodic orbits, which the model indicates are distinct (in terms of
features) from any flow snapshot in a large turbulent dataset, suggesting their
dynamical irrelevance. This is in sharp contrast to more traditional
low-dimensional projections, in which the same orbits appear to lie within the
turbulent attractor.
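
The 'implicit rank minimization' ingredient can be illustrated with a minimal
NumPy sketch: a chain of bias-free linear layers is inserted between the
(nonlinear) encoder and decoder, acting only on the embedding. Gradient
descent on such a product of matrices is known to bias the composite map
toward low rank. All layer sizes and the single-layer tanh encoder/decoder
below are hypothetical placeholders, not the architecture of the paper; this
is a forward-pass sketch only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 64-dim flow snapshot -> 8-dim embedding,
# with a chain of 4 square linear layers on the embedding.
n_in, n_lat, n_lin = 64, 8, 4

W_enc = rng.standard_normal((n_in, n_lat)) / np.sqrt(n_in)
# Implicit rank minimization: bias-free linear layers on the embedding;
# no nonlinearity between them, so they compose to one (low-rank-biased) map.
W_lin = [rng.standard_normal((n_lat, n_lat)) / np.sqrt(n_lat)
         for _ in range(n_lin)]
W_dec = rng.standard_normal((n_lat, n_in)) / np.sqrt(n_lat)

def autoencode(x):
    z = np.tanh(x @ W_enc)      # toy nonlinear encoder (hypothetical)
    for W in W_lin:             # linear chain acting on the embedding
        z = z @ W
    return np.tanh(z @ W_dec)   # toy nonlinear decoder (hypothetical)

x = rng.standard_normal((5, n_in))   # batch of 5 toy "snapshots"
x_hat = autoencode(x)
print(x_hat.shape)  # (5, 64)
```

Because the chain is purely linear, its layers could be collapsed into a
single matrix at inference time; keeping them separate matters only for the
training dynamics that induce the low-rank bias.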