A Real Time Generation Method of Depth Fused 3D Images Considering the Viewing Position

  • Tamura Tohru
    Graduate School of Engineering, Tokyo Polytechnic University
  • Hirano Toshizo
    Graduate School of Engineering, Tokyo Polytechnic University

Bibliographic Information

Other Title
  • 視点位置を考慮したDepth Fused 3D画像のリアルタイム生成法

Abstract

An object is displayed at corresponding positions on two displays placed at different viewing distances from the observer. The depth of the object is then perceived at a position determined by the ratio of the object's display luminance between the two displays. This is called the depth-fused 3D illusion. We report a real-time method of generating three-dimensional images for a Depth Fused 3D display based on this illusion. In this study, we used the KINECT, which provides an inexpensive and easy way to measure the distance between the camera and objects. In addition, for the three-dimensional image produced by the Depth Fused 3D illusion to be perceived correctly, the front and rear images must appear overlapped from the observer's face position. In the present study, the viewer's face position is tracked using Haar-like features, the proportion of skin-color pixels, and the size of the face candidate region. We also show a method of correcting the positions of the front and rear images according to the observer's face position.
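The abstract describes two computations: splitting each pixel's luminance between the front and rear displays according to depth, and shifting the front image so that it overlaps the rear image as seen from the tracked face position. A minimal sketch of both, assuming normalized depths and a 1D horizontal geometry (the paper's actual coordinate conventions and units are not given here, so all names and parameters below are illustrative):

```python
def split_luminance(luminance, depth):
    """Split one pixel's luminance between the two display planes.

    depth: 0.0 = front plane, 1.0 = rear plane (normalized assumption).
    The DFD illusion places the perceived depth at the luminance-weighted
    position between the planes, so the split is linear and the two parts
    sum to the original luminance.
    """
    depth = min(max(depth, 0.0), 1.0)      # clamp to the valid range
    rear = luminance * depth               # farther -> more light on the rear panel
    front = luminance * (1.0 - depth)      # nearer -> more light on the front panel
    return front, rear

def front_panel_x(x_rear, viewer_x, d_front, d_rear):
    """Horizontal position on the front panel that overlaps a rear-panel
    pixel at x_rear for a viewer at lateral offset viewer_x.

    d_front, d_rear: viewing distances to the front and rear panels
    (d_rear > d_front). By similar triangles, the ray from the viewer to
    the rear-panel pixel crosses the front panel at this x; applying it
    per column re-aligns the front image whenever the tracked face moves.
    """
    return viewer_x + (x_rear - viewer_x) * (d_front / d_rear)
```

For an on-axis viewer (`viewer_x = 0`) the correction reduces to the usual perspective scaling `x_rear * d_front / d_rear`.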
