Pre-Rendered Multi-View VR

Perspective-Based Rendering via AI Interpolation

SIGGRAPH Asia

Authors: Danyal Sarfraz, Prof. Raja Mubashir Karim, Prof. KwanMyung Kim

We propose a novel VR rendering technique that enables head-movement-driven perspective shifts on an HMD without requiring actual 3D geometry. Our method leverages a grid of pre-rendered frames captured from various viewpoints, paired with AI-based frame interpolation to generate intermediate frames. Each frame corresponds to a specific head position, allowing the system to dynamically select the closest perspective-aligned render based on real-time tracking. This simulates realistic perspective changes while significantly reducing computational demands.
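
To make the runtime step concrete, here is a minimal C++ sketch of nearest-frame selection for the single-axis (X) case. The FrameGrid type, its field names, and the spacing value are illustrative assumptions for this sketch, not the paper's actual implementation:

    #include <algorithm>
    #include <cmath>
    #include <cstddef>
    #include <string>
    #include <vector>

    // Hypothetical single-axis (X) grid of pre-rendered frames. The grid is
    // densified offline by AI frame interpolation, so "spacing" is the
    // effective distance between neighboring views after interpolation.
    struct FrameGrid {
        std::vector<std::string> frames;  // render file paths / texture handles
        float origin_x = 0.0f;            // head position mapped to frames[0]
        float spacing = 0.01f;            // meters between neighboring views

        // Pick the pre-rendered view captured closest to the tracked head
        // position: one division, one round, one clamp; no geometry involved.
        const std::string& Closest(float head_x) const {
            long i = std::lround((head_x - origin_x) / spacing);
            i = std::clamp(i, 0L, static_cast<long>(frames.size()) - 1);
            return frames[static_cast<std::size_t>(i)];
        }
    };

At display time the selected frame simply replaces the previous one, so per-frame cost is independent of scene complexity, which is where the reduction in computational load comes from.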

The approach supports deployment across varying movement scopes, from single-axis (X) tracking to full 3D positional interpolation (XYZ), and is especially well-suited for static VR objects such as product visualizations, background props in games, and virtual windows. We implemented our approach with PCVR on Unreal Engine 5 and evaluated it against two baselines: image-based VR and geometry-based VR. In the user study, our technique achieved significantly higher interactivity ratings than image-based VR and higher photo-realism scores than geometry-based VR, while maintaining substantially lower computational load. These findings suggest that AI-interpolated, perspective-aligned renders can offer a compelling middle ground between photo-realism, interactivity, and performance in VR environments.
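
Extending the same lookup from X to XYZ only changes the indexing. Again a hedged sketch with assumed names, using a flat x-major frame layout:

    #include <algorithm>
    #include <array>
    #include <cmath>
    #include <cstddef>
    #include <string>
    #include <vector>

    // Hypothetical XYZ lattice of viewpoints (original renders plus
    // AI-interpolated intermediates), stored flat in x-major order.
    struct FrameGrid3D {
        std::vector<std::string> frames;    // size = dims[0] * dims[1] * dims[2]
        std::array<std::size_t, 3> dims{};  // views along X, Y, Z
        std::array<float, 3> origin{};      // head position of frames[0]
        std::array<float, 3> spacing{};     // effective per-axis view spacing

        const std::string& Closest(const std::array<float, 3>& head) const {
            std::array<std::size_t, 3> idx{};
            for (int a = 0; a < 3; ++a) {
                long i = std::lround((head[a] - origin[a]) / spacing[a]);
                i = std::clamp(i, 0L, static_cast<long>(dims[a]) - 1);
                idx[a] = static_cast<std::size_t>(i);
            }
            // Flatten (x, y, z) into the x-major frame array.
            return frames[(idx[2] * dims[1] + idx[1]) * dims[0] + idx[0]];
        }
    };

Grid density trades storage for smoother perspective shifts: each additional tracked axis multiplies the frame count, which suggests why single-axis deployments are the lightest and full XYZ the heaviest, while the runtime cost remains a constant-time lookup either way.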

Flowframes: RIFE-based AI interpolation

Unreal Engine: deployment tool

Lower computational load than conventional solutions

45% higher photo-realism score than geometry-based VR

15% higher interactivity rating than image-based VR

Read Full Paper

Copyright 2025

Mail: Danialsarfraz2000@gmail.com