ClonAR enables users to rapidly clone and edit real-world objects. First, real-world objects are scanned using KinectFusion. Second, users edit the scanned objects in our visuo-haptic Augmented Reality environment. Our entire pipeline operates not on mesh representations of objects but on Signed Distance Fields, the native output of KinectFusion, which we render directly both haptically and visually. Direct haptic rendering of Signed Distance Fields is faster and more flexible than rendering meshes. Visual rendering is performed by our custom-built raymarcher, which facilitates realistic illumination effects such as ambient occlusion and soft shadows. Our prototype demonstrates the whole pipeline, and we present several results of redesigned real-world objects.
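To illustrate the core idea of rendering Signed Distance Fields directly, here is a minimal sphere-tracing sketch in Python. This is not the paper's implementation: the function names, the analytic sphere SDF, and the cheap ambient-occlusion estimate (comparing field values along the normal against the unoccluded distance) are illustrative assumptions; a real system would evaluate a voxelized SDF from KinectFusion on the GPU.

```python
import math

def sdf_sphere(p, center=(0.0, 0.0, 3.0), radius=1.0):
    # Signed distance from point p to a sphere: negative inside, positive outside.
    # (Illustrative analytic SDF; KinectFusion produces a sampled voxel SDF.)
    dx, dy, dz = p[0] - center[0], p[1] - center[1], p[2] - center[2]
    return math.sqrt(dx * dx + dy * dy + dz * dz) - radius

def raymarch(origin, direction, sdf, max_steps=128, eps=1e-4, max_dist=100.0):
    # Sphere tracing: the SDF value is a safe step size, so we advance by it
    # until we are within eps of the surface (hit) or leave the scene (miss).
    t = 0.0
    for _ in range(max_steps):
        p = (origin[0] + t * direction[0],
             origin[1] + t * direction[1],
             origin[2] + t * direction[2])
        d = sdf(p)
        if d < eps:
            return t          # hit: distance along the ray
        t += d
        if t > max_dist:
            break
    return None               # miss

def ambient_occlusion(p, n, sdf, samples=5, step=0.1):
    # Cheap SDF-based AO: at increasing offsets h along the normal, compare the
    # field value to h; nearby geometry makes sdf(q) < h and darkens the point.
    occ = 0.0
    for i in range(1, samples + 1):
        h = step * i
        q = (p[0] + h * n[0], p[1] + h * n[1], p[2] + h * n[2])
        occ += (h - sdf(q)) / (2 ** i)
    return max(0.0, 1.0 - occ)
```

A haptic loop can reuse the same SDF: the field value at the proxy position gives the penetration depth, and its gradient gives the force direction, which is why no mesh is needed.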
Global Illumination for Augmented Reality on Mobile Phones
The goal of our work is to create highly realistic graphics for Augmented Reality on mobile phones. One of the greatest challenges is to provide realistic lighting of the virtual objects that matches the real-world lighting. This becomes even more difficult with the limited capabilities of mobile phone GPUs. Our approach differs from previous attempts in the following important aspects: (1) most previous work has relied on rasterization, while our approach is based on raytracing; (2) we perform distributed rendering to address the limited mobile GPU capabilities; (3) we use image-based lighting from a pre-captured panorama to incorporate real-world lighting. We utilize two markers: one for object tracking and one for registering the panorama. Our initial results are encouraging, as the visual quality resembles real objects as well as reference renderings created offline. However, we still need to validate our approach in human subject studies, especially with regard to the trade-off between the latency of remote rendering and visual quality.
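The image-based-lighting step above amounts to looking up incoming radiance from the pre-captured panorama by direction. As a sketch only, assuming an equirectangular panorama layout (the paper does not specify the format), a direction-to-pixel mapping might look like this; the function name and axis conventions are assumptions for illustration.

```python
import math

def direction_to_equirect(d, width, height):
    # Map a unit direction vector (x, y, z) to pixel coordinates in an
    # equirectangular panorama. Assumed conventions: +y is up, azimuth is
    # measured in the x/z plane; u spans azimuth, v spans the polar angle.
    x, y, z = d
    u = 0.5 + math.atan2(x, -z) / (2.0 * math.pi)    # azimuth -> [0, 1]
    v = math.acos(max(-1.0, min(1.0, y))) / math.pi  # polar angle -> [0, 1]
    return (int(u * (width - 1)), int(v * (height - 1)))
```

At shading time, a raytracer would sample this map along reflected or scattered ray directions to pick up real-world lighting; the second marker in the setup registers the panorama's orientation relative to the scene.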
Published: 2014 IEEE Virtual Reality (VR), Minneapolis, MN, USA