DINR: Dynamical Implicit Neural Representations

1University of California, Los Angeles
2Brookhaven National Laboratory

Abstract

Implicit Neural Representations (INRs) provide a powerful continuous framework for modeling complex visual and geometric signals, but spectral bias remains a fundamental challenge, limiting their ability to capture high-frequency details. Orthogonal to existing remedies, we introduce Dynamical Implicit Neural Representations (DINR), a new INR modeling framework that treats feature evolution as a continuous-time dynamical system rather than a discrete stack of layers. This dynamical formulation mitigates spectral bias by enabling richer, more adaptive frequency representations through continuous feature evolution. Theoretical analysis based on Rademacher complexity and the Neural Tangent Kernel demonstrates that DINR enhances expressivity and improves training dynamics. Moreover, regularizing the complexity of the underlying dynamics provides a principled way to balance expressivity and generalization. Extensive experiments on image representation, field reconstruction, and data compression confirm that DINR delivers more stable convergence, higher signal fidelity, and stronger generalization than conventional static INRs.

Introduction

Modern science is awash with rich, high-resolution data, from turbulent flows and weather fields to 3D volumes and complex visual scenes, yet turning these signals into compact, faithful digital representations remains surprisingly hard. Our project tackles this challenge with Dynamical Implicit Neural Representations (DINR), a new approach that treats learned features not as the static output of a network, but as a smooth trajectory evolving in time under a learned dynamical system. By letting representations “flow” through this continuous evolution, DINR naturally captures the fine, high-frequency details that standard neural fields struggle with, while remaining compact, stable, and data-efficient.

Method

The proposed method, Dynamical Implicit Neural Representations (DINR), upgrades standard coordinate-based INRs by replacing their one-shot latent mapping with a learned continuous-time flow in feature space. Given an input coordinate, DINR first embeds it into a latent vector and then treats that vector as the initial condition of a neural ODE: a small network defines a vector field that drives how the latent state evolves over an artificial “time” axis, and an ODE solver follows this trajectory to a final state that is decoded into the target signal value. This viewpoint turns the middle of the network into many small, coupled transformations rather than a fixed stack of layers, greatly enlarging the class of functions the model can represent and allowing fine, high-frequency structure to emerge gradually along the trajectory.

To keep these dynamics smooth, stable, and physically intuitive, we introduce a kinetic-energy regularizer that penalizes unnecessarily fast or twisting motion in feature space, encouraging efficient latent paths that retain expressive power while adding almost no computational overhead. The full DINR model is trained end-to-end with a standard reconstruction loss plus this regularizer, yielding a plug-and-play, model-agnostic module that injects rich latent dynamics into existing INR backbones without increasing their depth or parameter count.
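To make the pipeline concrete, the sketch below illustrates this flow in PyTorch: a coordinate encoder produces the initial latent state, a small vector-field network is integrated with a fixed-step Euler solver over an artificial time axis while accumulating a kinetic-energy penalty, and a decoder maps the final state to the signal value. All names and hyperparameters here (CoordEncoder, LatentODE, n_steps, kinetic_weight, the layer sizes) are illustrative assumptions, not the authors' implementation, which may use a different coordinate embedding (e.g., Fourier features or SIREN) and a different ODE solver.

import torch
import torch.nn as nn

class CoordEncoder(nn.Module):
    """Embeds an input coordinate into the initial latent state z(0)."""
    def __init__(self, in_dim=2, latent_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, latent_dim), nn.GELU(),
            nn.Linear(latent_dim, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class LatentODE(nn.Module):
    """Small network defining the vector field dz/dt = f(z, t)."""
    def __init__(self, latent_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + 1, latent_dim), nn.Tanh(),
            nn.Linear(latent_dim, latent_dim),
        )

    def forward(self, z, t):
        t_col = torch.full_like(z[:, :1], t)      # broadcast the scalar "time" onto the batch
        return self.net(torch.cat([z, t_col], dim=-1))

class DINR(nn.Module):
    """Encoder -> latent ODE flow -> decoder, trained end to end."""
    def __init__(self, in_dim=2, latent_dim=64, out_dim=3, n_steps=8):
        super().__init__()
        self.encoder = CoordEncoder(in_dim, latent_dim)
        self.field = LatentODE(latent_dim)
        self.decoder = nn.Linear(latent_dim, out_dim)
        self.n_steps = n_steps

    def forward(self, coords):
        z = self.encoder(coords)                  # initial condition z(0)
        dt = 1.0 / self.n_steps
        kinetic = torch.zeros((), device=coords.device)
        for k in range(self.n_steps):             # fixed-step Euler integration of the latent flow
            v = self.field(z, k * dt)
            kinetic = kinetic + dt * (v ** 2).sum(dim=-1).mean()   # kinetic-energy penalty
            z = z + dt * v
        return self.decoder(z), kinetic

# Usage sketch: fit an RGB image as a function of (x, y) pixel coordinates.
model = DINR()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
coords = torch.rand(1024, 2)                      # sampled coordinates in [0, 1]^2
targets = torch.rand(1024, 3)                     # their RGB values (placeholder data)
kinetic_weight = 1e-3                             # assumed regularization strength

optimizer.zero_grad()
pred, kinetic = model(coords)
loss = nn.functional.mse_loss(pred, targets) + kinetic_weight * kinetic
loss.backward()
optimizer.step()

Swapping the fixed-step Euler loop for a higher-order or adaptive solver (for example, odeint from the torchdiffeq package) is a natural variant, and relates to the efficiency direction noted under Next Steps.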

Results

[Figure: qualitative reconstruction comparisons for FFNet, Dynamical FFNet, SIREN, and Dynamical SIREN.]

Next Steps

For future work, it is important to address the increased computational cost of the dynamical structure, for example by exploring adaptive step-size or higher-order ODE integrators to balance expressivity and efficiency. Extending DINR to multi-modal and large-scale datasets could broaden its practical applicability. Given the observed effectiveness of dynamical neural architectures in time-series and generative modeling, a unified understanding of such architectures could guide the design of more effective models across diverse tasks. Finally, a more rigorous theoretical study of the interplay between latent dynamics, expressivity, and generalization would deepen our understanding of the framework and guide the design of even more powerful INRs.

BibTeX

@article{park2025dynamical,
  title={Dynamical Implicit Neural Representations},
  author={Park, Yesom and Kan, Kelvin and Flynn, Thomas and Huang, Yi and Yoo, Shinjae and Osher, Stanley and Luo, Xihaier},
  journal={arXiv preprint arXiv:2511.21787},
  year={2025}
}