Assessing Super-Resolution GANs For Randomly-Seeded Particle Fields
A. Güemes (1), C. Sanmiguel Vila (2), S. Discetti (1)
(1) Aerospace Engineering Research Group, Universidad Carlos III de Madrid, Spain
(2) Sub-Directorate General of Terrestrial Systems, Spanish National Institute for Aerospace Technology (INTA), Ctra. M-301, Km 10,500, 28330, San Martín de la Vega, Spain
In this work, we demonstrate and assess the performance of a novel super-resolution generative adversarial network (GAN) framework. The algorithm, recently introduced by the authors, leverages random spatial sampling in particle images to provide incomplete views of the underlying high-resolution fields. The main novelty is that the architecture, named Randomly Seeded GAN (RaSeedGAN), does not require full high-resolution training samples. Training is performed directly on the sparse sensors (e.g. particles) available in each snapshot, reduced onto a regular grid by spatial averaging in bins; bins without vectors are simply skipped during training. Provided that the particles randomly sample the space across the dataset, the mapping from low-resolution input to high-resolution output can be reconstructed from such incomplete ``gappy'' views. The proposed technique is tested on several synthetic datasets based on simulations, on ocean surface temperature measurements, and on particle image velocimetry data of a zero-pressure-gradient turbulent boundary layer flow. While the applications in this work target fluid mechanics examples, the proposed method can be applied to more general frameworks in which mapping is performed by moving sensors and/or sensors with a random on-off status. The results show increased accuracy compared to standard processing and to direct cubic interpolation of the scattered velocity vectors. An analysis of the turbulent flow features, turbulence statistics, and spectra demonstrates an increase in spatial resolution of at least a factor of 3 in terms of cut-off frequency, with physically consistent estimated fields.
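The bin-averaging step described above can be sketched as follows. This is an illustrative NumPy implementation, not the authors' code: the function name `bin_average`, the grid layout, and the NaN convention for empty bins are assumptions introduced here. Scattered samples (e.g. particle velocity vectors) are averaged into regular bins, and bins that receive no samples are flagged so a training loop can skip them, mimicking the ``gappy'' high-resolution targets.

```python
import numpy as np

def bin_average(x, y, values, grid_shape, domain):
    """Average scattered samples into regular bins (illustrative sketch).

    Returns the binned field and a boolean mask; empty bins are NaN in the
    field and False in the mask, so a loss function can exclude them.
    """
    ny, nx = grid_shape
    (x0, x1), (y0, y1) = domain
    # Map each scattered sample to a bin index on the regular grid.
    ix = np.clip(((x - x0) / (x1 - x0) * nx).astype(int), 0, nx - 1)
    iy = np.clip(((y - y0) / (y1 - y0) * ny).astype(int), 0, ny - 1)
    flat = iy * nx + ix
    sums = np.bincount(flat, weights=values, minlength=ny * nx)
    counts = np.bincount(flat, minlength=ny * nx)
    with np.errstate(invalid="ignore", divide="ignore"):
        field = sums / counts  # NaN wherever counts == 0 (gappy bin)
    return field.reshape(ny, nx), counts.reshape(ny, nx) > 0

# Example: five scattered samples averaged onto a 2x2 grid;
# one bin receives no samples and is marked as gappy.
x = np.array([0.1, 0.2, 0.9, 0.8, 0.7])
y = np.array([0.1, 0.2, 0.1, 0.9, 0.8])
v = np.array([1.0, 3.0, 5.0, 2.0, 4.0])
field, mask = bin_average(x, y, v, (2, 2), ((0.0, 1.0), (0.0, 1.0)))
```

During training, the mask would then restrict the loss to valid bins only, e.g. `loss = ((pred - field)[mask] ** 2).mean()`, which is one simple way to realize the "skip empty bins" idea.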