FINER: Flexible spectral-bias tuning in Implicit NEural Representation by Variable-periodic Activation Functions
CVPR 2024

1Nanjing University     2Tencent AI Lab    
*Denotes Equal Contribution

Abstract


Implicit Neural Representation (INR), which utilizes a neural network to map coordinate inputs to corresponding attributes, is causing a revolution in the field of signal processing. However, current INR techniques suffer from a restricted capability to tune their supported frequency set, resulting in imperfect performance when representing complex signals with multiple frequencies. We have identified that this frequency-related problem can be greatly alleviated by introducing variable-periodic activation functions, for which we propose FINER. By initializing the bias of the neural network within different ranges, sub-functions with various frequencies in the variable-periodic function are selected for activation. Consequently, the supported frequency set of FINER can be flexibly tuned, leading to improved performance in signal representation. We demonstrate the capabilities of FINER in the contexts of 2D image fitting, 3D signed distance field representation, and 5D neural radiance fields optimization, and we show that it outperforms existing INRs.
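To make the idea concrete, below is a minimal sketch of a FINER-style layer in PyTorch, assuming the variable-periodic activation sin(omega * (|x| + 1) * x) and a bias drawn uniformly from [-k, k]; the class name `FinerLayer` and the parameters `omega` and `first_bias_scale` are illustrative choices, not the authors' reference implementation.

```python
import math
import torch
import torch.nn as nn


class FinerLayer(nn.Module):
    """Sketch of a fully connected layer with a variable-periodic activation."""

    def __init__(self, in_features, out_features, omega=30.0,
                 first_bias_scale=None, is_first=False):
        super().__init__()
        self.omega = omega
        self.linear = nn.Linear(in_features, out_features)

        with torch.no_grad():
            # SIREN-style weight initialization, scaled by omega for hidden layers.
            if is_first:
                bound = 1.0 / in_features
            else:
                bound = math.sqrt(6.0 / in_features) / omega
            self.linear.weight.uniform_(-bound, bound)
            # Initializing the bias over a wider range [-k, k] selects
            # higher-frequency sub-functions of the variable-periodic activation.
            if first_bias_scale is not None:
                self.linear.bias.uniform_(-first_bias_scale, first_bias_scale)

    def forward(self, x):
        z = self.linear(x)
        # Variable-periodic activation: the local frequency grows with |z|.
        return torch.sin(self.omega * (torch.abs(z) + 1.0) * z)
```

A larger bias scale k shifts pre-activations into regions where the activation oscillates faster, which is how the supported frequency set is tuned without changing the network architecture.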

Flexible spectral-bias tuning

Experiments

Citation

@inproceedings{liu2024finer,
    title={FINER: Flexible spectral-bias tuning in Implicit NEural Representation by Variable-periodic Activation Functions},
    author={Liu, Zhen and Zhu, Hao and Zhang, Qi and Fu, Jingde and Deng, Weibing and Ma, Zhan and Guo, Yanwen and Cao, Xun},
    booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
    pages={2713--2722},
    year={2024}
}

Acknowledgements

The website template was borrowed from Michaël Gharbi.