
Cue vetoing in depth estimation: Physical and virtual stimuli

Motion parallax and binocular disparity contribute to the perceived depth of three-dimensional (3D) objects. However, depth is often misperceived, even when both cues are available. This may be due in part to conflicts with unmodelled cues endemic to computerized displays. Here we evaluated the impact of display-based cue conflicts on depth cue integration by comparing perceived depth for physical and virtual objects. Truncated square pyramids were rendered using Blender and 3D printed. We assessed perceived depth using a discrimination task with motion parallax, binocular disparity, and their combination. Physical stimuli were presented with precise control over position and lighting. Virtual stimuli were viewed using a head-mounted display. To generate motion parallax, observers made lateral head movements using a chin rest on a motion platform. Observers indicated whether the width of the front face appeared greater or less than the distance between this surface and the base. We found that accuracy was similar for virtual and physical pyramids. All estimates were more precise when depth was defined by binocular disparity than by motion parallax. Probabilistic modelling showed that a linear combination of cues does not adequately describe performance in either physical or virtual conditions. While there was inter-observer variability in weights, performance in all conditions was best predicted by a veto model that excludes the less reliable depth cue, in this case motion parallax.
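The two integration schemes contrasted in the abstract can be sketched as follows. This is a minimal illustration, not the authors' model: the depth estimates and cue variances below are made-up numbers, and "reliability" is taken as the standard inverse-variance definition from the cue-combination literature.

```python
# Two cue-integration schemes for depth from motion parallax and binocular
# disparity. Values passed in are hypothetical, for illustration only.

def linear_combination(d_parallax, d_disparity, var_parallax, var_disparity):
    """Reliability-weighted linear combination: each cue's weight is
    proportional to its reliability (inverse variance)."""
    r_p = 1.0 / var_parallax
    r_d = 1.0 / var_disparity
    w_p = r_p / (r_p + r_d)
    w_d = r_d / (r_p + r_d)
    return w_p * d_parallax + w_d * d_disparity

def veto(d_parallax, d_disparity, var_parallax, var_disparity):
    """Veto model: the less reliable cue is discarded entirely and the
    estimate comes from the more reliable cue alone."""
    return d_disparity if var_disparity < var_parallax else d_parallax

# Example: disparity is the more reliable cue (smaller variance),
# as the study found.
combined = linear_combination(3.0, 4.0, 2.0, 0.5)  # pulled toward 4.0
vetoed = veto(3.0, 4.0, 2.0, 0.5)                  # disparity alone
```

Under these illustrative numbers the linear model yields a compromise estimate, whereas the veto model returns the disparity-defined depth unchanged, which is the qualitative distinction the study's data favoured.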

Keywords: motion; physical virtual; virtual stimuli; cue; motion parallax

Journal Title: Vision Research
Year Published: 2021
