Authors
- Lei Xiao, Facebook Reality Labs
- Anton Kaplanyan, Facebook Reality Labs
- Alexander Fix, Facebook Reality Labs
- Matthew Chapman, Facebook Reality Labs
- Douglas Lanman, Facebook Reality Labs
Bibliographic Information
Alternative Title
- Learned image synthesis for computational displays
Description
Addressing vergence-accommodation conflict in head-mounted displays (HMDs) requires resolving two interrelated problems. First, the hardware must support viewing sharp imagery over the full accommodation range of the user. Second, HMDs should accurately reproduce retinal defocus blur to correctly drive accommodation. A multitude of accommodation-supporting HMDs have been proposed, with three architectures receiving particular attention: varifocal, multifocal, and light field displays. These designs all extend depth of focus, but rely on computationally expensive rendering and optimization algorithms to reproduce accurate defocus blur (often limiting content complexity and interactive applications). To date, no unified framework has been proposed to support driving these emerging HMDs using commodity content. In this paper, we introduce DeepFocus, a generic, end-to-end convolutional neural network designed to efficiently solve the full range of computational tasks for accommodation-supporting HMDs. This network is demonstrated to accurately synthesize defocus blur, focal stacks, multilayer decompositions, and multiview imagery using only commonly available RGB-D images, enabling real-time, near-correct depictions of retinal blur with a broad set of accommodation-supporting HMDs.
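The record contains no implementation detail beyond this abstract, so the following is only a minimal sketch of the general idea it describes: an end-to-end convolutional network that maps an RGB-D input and a target focus distance to an image with synthesized defocus blur. The layer sizes, the `RGBDToDefocus` name, and the broadcast `focus_dist` conditioning are illustrative assumptions for this sketch, not the DeepFocus architecture.

```python
# Illustrative sketch (NOT the DeepFocus network): a small encoder-decoder CNN
# that maps an RGB-D image plus a target focus distance to an RGB image with
# synthesized defocus blur. All layer choices are assumptions for demonstration.
import torch
import torch.nn as nn

class RGBDToDefocus(nn.Module):
    def __init__(self, base_ch: int = 32):
        super().__init__()
        # Input: 3 RGB channels + 1 depth channel + 1 broadcast focus-distance channel
        self.encoder = nn.Sequential(
            nn.Conv2d(5, base_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(base_ch, base_ch * 2, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(base_ch * 2, base_ch * 2, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(base_ch * 2, base_ch, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(base_ch, 3, 3, padding=1),
        )

    def forward(self, rgbd: torch.Tensor, focus_dist: torch.Tensor) -> torch.Tensor:
        # rgbd: (N, 4, H, W); focus_dist: (N,) hypothetical target focus distance
        n, _, h, w = rgbd.shape
        focus_plane = focus_dist.view(n, 1, 1, 1).expand(n, 1, h, w)
        x = torch.cat([rgbd, focus_plane], dim=1)
        return self.decoder(self.encoder(x))

if __name__ == "__main__":
    net = RGBDToDefocus()
    rgbd = torch.rand(1, 4, 128, 128)   # RGB-D frame
    focus = torch.tensor([1.5])         # hypothetical focus distance
    out = net(rgbd, focus)
    print(out.shape)                    # torch.Size([1, 3, 128, 128])
```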
Journal
- ACM Transactions on Graphics, 37(6), 1-13, 2018-12-04
- Association for Computing Machinery (ACM)