TY - GEN
T1 - Synthesis of environment maps for mixed reality
AU - Walton, David R.
AU - Thomas, Diego
AU - Steed, Anthony
AU - Sugimoto, Akihiro
N1 - Publisher Copyright:
© 2017 IEEE.
PY - 2017/11/20
AB - When rendering virtual objects in a mixed reality application, it is helpful to have access to an environment map that captures the appearance of the scene from the perspective of the virtual object. It is straightforward to render virtual objects into such maps, but capturing and correctly rendering the real components of the scene into the map is much more challenging. This information is often recovered from physical light probes, such as reflective spheres or fisheye cameras, placed at the location of the virtual object in the scene. For many application areas, however, real light probes would be intrusive or impractical. Ideally, all of the information necessary to produce detailed environment maps could be captured using a single device. We introduce a method using an RGBD camera and a small fisheye camera, contained in a single unit, to create environment maps at any location in an indoor scene. The method combines the output from both cameras to correct for their limited field of view and the displacement from the virtual object, producing complete environment maps suitable for rendering the virtual content in real time. Our method improves on previous probeless approaches by its ability to recover high-frequency environment maps. We demonstrate how this can be used to render virtual objects which shadow, reflect and refract their environment convincingly.
UR - http://www.scopus.com/inward/record.url?scp=85041656241&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85041656241&partnerID=8YFLogxK
DO - 10.1109/ISMAR.2017.24
M3 - Conference contribution
AN - SCOPUS:85041656241
T3 - Proceedings of the 2017 IEEE International Symposium on Mixed and Augmented Reality, ISMAR 2017
SP - 72
EP - 81
BT - Proceedings of the 2017 IEEE International Symposium on Mixed and Augmented Reality, ISMAR 2017
A2 - Broll, Wolfgang
A2 - Regenbrecht, Holger
A2 - Swan, J. Edward
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 16th IEEE International Symposium on Mixed and Augmented Reality, ISMAR 2017
Y2 - 9 October 2017 through 13 October 2017
ER -