Speaker
Description
Modern scientific simulations generate petabyte-scale datasets that exceed available memory, forcing researchers to compromise between simulation duration and resolution. We present Adaptive Quantization Networks (AQN), a neural compression method that learns to identify scientifically important features and allocate bits accordingly, rather than spreading error uniformly like traditional codecs. Our shallow convolutional encoder-decoder predicts spatially varying quantization steps and importance weights without supervision, discovering structure purely from data patterns. Using cosmological density fields as a testbed, we demonstrate that AQN achieves ~53× compression while maintaining 99% halo detection accuracy with 0.04-pixel positional error, significantly outperforming the state-of-the-art scientific compressor SZ3 (94% accuracy, 0.2-pixel error) and JPEG at comparable ratios. Critically, AQN preserves the cosmic web's filamentary structure that traditional compressors destroy, correctly recovering all 10 most massive halos with accurate positions and masses, whereas SZ3 misses halos and misattributes mass. Unlike uniform compression methods, which produce their largest errors at density peaks and thereby corrupt the mass functions and clustering statistics essential for constraining cosmological parameters, our importance-aware approach maintains the scientific fidelity of downstream measurements. Ongoing work will validate that compressed data preserves the information needed to restart simulations without degrading physical evolution, enabling a 50× checkpoint reduction that could allow simulations to run 4× longer at 4× higher resolution. Extensions to 3D and integration with entropy coding could push compression ratios to 100×.
This learned compression paradigm generalizes beyond cosmology to any domain with rare critical events in massive datasets, including climate modeling, plasma physics, molecular dynamics, and observational astronomy, establishing a new approach where neural networks learn what makes data scientifically valuable rather than merely minimizing reconstruction error.
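The core mechanism can be illustrated with a minimal NumPy sketch. This is not the AQN implementation: the field, the importance map, and the step-size formula below are hand-crafted stand-ins for what the paper's learned encoder would predict. The sketch only shows why spatially varying quantization steps concentrate reconstruction error in low-importance regions (voids) and away from high-importance regions (density peaks).

```python
import numpy as np

# Toy 2-D "density field": a smooth background plus one sharp peak,
# standing in for a halo in a cosmological density slice.
x = np.linspace(-1.0, 1.0, 64)
X, Y = np.meshgrid(x, x)
field = (0.2 * np.exp(-(X**2 + Y**2) / 0.5)
         + np.exp(-((X - 0.3)**2 + (Y + 0.2)**2) / 0.005))

# In AQN the per-pixel step sizes come from a learned encoder; here we
# hand-craft them from a normalized importance map: fine steps (more
# bits) where the field is dense, coarse steps in voids.
importance = field / field.max()
step = 0.05 * (1.0 - 0.9 * importance) + 1e-4

# Spatially varying uniform quantization and reconstruction.
codes = np.round(field / step)   # integer codes to be entropy-coded
recon = codes * step

# Absolute error is bounded by step/2, so peaks are reconstructed far
# more accurately than voids.
err = np.abs(field - recon)
peak = importance > 0.5
print("max error at peaks:", err[peak].max())
print("max error in voids:", err[~peak].max())
```

A uniform codec corresponds to a constant `step`, which spreads the same error bound over peaks and voids alike; the spatially varying map trades void accuracy, which matters little scientifically, for peak accuracy, which drives halo masses and positions.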
| Affiliation of the submitter | University of Washington |
|---|---|
| Attendance | |