V3: Viewing Volumetric Videos on Mobiles via Streamable 2D Dynamic Gaussians


SIGGRAPH Asia 2024 (TOG)

Penghao Wang*, Zhirui Zhang*, Liao Wang*, Kaixin Yao, Siyuan Xie, Jingyi Yu†, Minye Wu†, Lan Xu†

Paper Video Code (Training) Code (Viewer) Code (iOS Viewer)

Code will be released before October

Experiencing high-fidelity volumetric video as seamlessly as 2D video is a long-held dream. However, current dynamic 3DGS methods, despite their high rendering quality, face challenges in streaming on mobile devices due to computational and bandwidth constraints. In this paper, we introduce V^3 (Viewing Volumetric Videos), a novel approach that enables high-quality mobile rendering through the streaming of dynamic Gaussians. Our key innovation is to view dynamic 3DGS as 2D videos, facilitating the use of hardware video codecs. Additionally, we propose a two-stage training strategy that reduces storage requirements while maintaining rapid training speed. The first stage employs hash encoding and a shallow MLP to learn motion, then prunes Gaussians to meet the streaming requirements; the second stage fine-tunes the remaining Gaussian attributes using a residual entropy loss and a temporal loss to improve temporal continuity. This strategy, which disentangles motion and appearance, maintains high rendering quality with compact storage. Meanwhile, we design a multi-platform player to decode and render 2D Gaussian videos. Extensive experiments demonstrate the effectiveness of V^3, outperforming other methods by enabling high-quality rendering and streaming on common devices, a capability not achieved before. As the first method to stream dynamic Gaussians on mobile devices, our companion player offers users an unprecedented volumetric video experience, including smooth scrolling and instant sharing.
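To make the key innovation concrete, the sketch below shows one plausible way to lay per-frame Gaussian attributes out as 2D image planes so a hardware video codec can compress them. The attribute layout, pixel mapping, and 8-bit quantization here are illustrative assumptions, not the paper's exact format:

```python
import numpy as np

def pack_gaussians_to_frame(attrs, width):
    """Pack a per-frame attribute array of shape (N, C) into a 2D image.

    Each Gaussian occupies one pixel, and each channel plane stores one
    attribute dimension, quantized to 8 bits so standard hardware video
    codecs can compress the resulting frame sequence. This mapping is an
    assumption for illustration only.
    """
    n, c = attrs.shape
    height = -(-n // width)  # ceil(n / width) rows of pixels
    # Normalize each attribute channel to [0, 1] before 8-bit quantization.
    lo, hi = attrs.min(axis=0), attrs.max(axis=0)
    norm = (attrs - lo) / np.where(hi > lo, hi - lo, 1.0)
    frame = np.zeros((height, width, c), dtype=np.uint8)
    frame.reshape(-1, c)[:n] = np.round(norm * 255).astype(np.uint8)
    # The per-channel ranges must travel with the stream so the player
    # can dequantize each attribute after video decoding.
    return frame, (lo, hi)
```

Keeping the same Gaussian at the same pixel across frames is what lets the codec's inter-frame prediction exploit temporal redundancy.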

Overview



First, we divide the long sequence into groups of frames for training. In the first stage, we use hash encoding followed by a shallow MLP, taking position as input, to estimate the motion of the human subjects. In the second stage, we fine-tune the attributes of the warped Gaussians from stage 1 with a residual entropy loss and a temporal loss, which yields 2D Gaussian videos with high temporal consistency that video codecs can compress efficiently.
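The stage-1 motion estimator can be sketched as a hash-encoded position lookup feeding a shallow MLP. The toy single-level hash grid below stands in for a multiresolution hash encoding; the table size, resolution, feature width, and MLP shape are illustrative assumptions rather than the paper's configuration:

```python
import torch
import torch.nn as nn

class HashMotionField(nn.Module):
    """Sketch of stage 1: hash-encoded positions -> shallow MLP -> motion.

    A single-level spatial hash replaces the full multiresolution hash
    encoding for brevity; all sizes here are assumed, not from the paper.
    """
    def __init__(self, table_size=2**14, feat_dim=4, res=64):
        super().__init__()
        # Learnable feature table indexed by a spatial hash of the voxel.
        self.table = nn.Parameter(torch.zeros(table_size, feat_dim))
        self.res = res
        self.primes = torch.tensor([1, 2654435761, 805459861])
        # Shallow MLP mapping hashed features to a per-Gaussian 3D offset.
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 3))

    def forward(self, xyz):
        # xyz: Gaussian centers in [0, 1]^3, shape (N, 3).
        idx = (xyz * self.res).long()                          # voxel index
        h = (idx * self.primes).sum(-1) % self.table.shape[0]  # spatial hash
        return self.mlp(self.table[h])                         # (N, 3) motion
```

Because only the compact hash table and MLP are optimized in this stage, motion is learned separately from appearance, matching the motion/appearance disentanglement described above.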