Visual Psychophysics on Motion and Stereo

A vision system is so complicated that many of its problems cannot be solved within the intellectual domain and methodology of any one discipline. A major component of my research lies in the use of rigorous computational models to account for psychophysical results on depth perception arising from dynamic cues and stereo, and to predict new perceptual phenomena and guide experimentation in this area. Although others had explained some of the well-known psychophysical results, our studies produced a full and quantitative explanation of a wide range of psychophysical data, such as apparent distance bisection (ADB), the apparent fronto-parallel plane (AFPP), and anisotropy in perceived space. Currently, we are using concepts from uncalibrated vision to study the psychophysics of viewing scenarios under changes of focal length, situations of importance in virtual and augmented reality.
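As an illustration of the kind of viewing geometry such models build on (a textbook sketch, not the models themselves), the small-angle relation between binocular disparity and depth can be written as δ = B(1/Z − 1/Z_fix); the baseline and fixation distance below are illustrative values, not parameters from our experiments:

```python
B = 0.065      # illustrative interocular baseline (m), a typical adult value
Z_FIX = 1.0    # illustrative fixation distance (m)

def disparity(Z, baseline=B, z_fix=Z_FIX):
    """Horizontal disparity (radians, small-angle approximation) of a
    point at depth Z metres, relative to a point fixated at z_fix."""
    return baseline * (1.0 / Z - 1.0 / z_fix)
```

Under this convention, points nearer than fixation carry positive (crossed) disparity and farther points negative (uncrossed) disparity, with zero disparity at the fixation distance itself.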

Using the setup shown in the figures below, we have evaluated psychophysical responses such as slant and tilt perception, as well as the perception of second-order shape, under both forward and lateral motions. The tests were conducted under self- and object-motion with various visual fields of view. In this area, I have been working with researchers from the LPPA lab, CNRS in France, chiefly with Dr Valérie Cornilleau-Peres. Currently, I am also working with Dr Yen Shih-Cheng of our ECE Department to continue research on visual psychophysics.



Fig. (a) Microscribe device to test human subjects' shape perception under self-motion. (b) Wide-screen projector to test wide-field vision.
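The slant and tilt probed in these experiments have standard geometric definitions: slant is the angle between the surface normal and the line of sight, and tilt is the direction of the normal's projection in the image plane. A minimal sketch, using these textbook definitions rather than any code from the experiments:

```python
import numpy as np

def slant_tilt(normal, view=np.array([0.0, 0.0, 1.0])):
    """Return (slant, tilt) in degrees for a planar surface.
    Slant: angle between the surface normal and the viewing direction.
    Tilt: orientation of the normal's image-plane projection, measured
    from the image x-axis. Standard definitions only."""
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    slant = np.degrees(np.arccos(abs(np.dot(n, view))))
    tilt = np.degrees(np.arctan2(n[1], n[0]))
    return slant, tilt

# A plane whose normal is rotated 45 degrees away from the line of
# sight, toward the image x-axis:
s, t = slant_tilt([1.0, 0.0, 1.0])
```

A fronto-parallel plane (normal along the line of sight) has zero slant, and its tilt is undefined, which is why tilt judgments degrade as slant approaches zero.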

Recently, I have also looked at the grouping of motion cues with other cues such as random dots, occlusions, and textures, using the paradigm of the rotating ellipse. I have also looked at whether human subjects detect independent motion primarily from 2D or 3D motion cues.
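The geometric ambiguity that makes the rotating-ellipse paradigm useful can be sketched in a few lines: under orthographic projection, a rigid circular disc rotating in depth and a non-rigid 2D ellipse deforming in the image plane produce the same outline. This is an illustrative sketch, not the actual stimulus code used in the experiments:

```python
import numpy as np

def projected_ellipse_axes(radius, theta):
    """Orthographic projection of a circular disc of the given radius,
    rotated by theta (radians) about a vertical axis lying in its
    plane. The image outline is an ellipse whose semi-major axis keeps
    the true radius while the semi-minor axis shrinks with |cos(theta)|,
    so the outline alone cannot distinguish rigid rotation in depth
    from non-rigid 2D deformation."""
    return radius, radius * abs(np.cos(theta))
```

For example, a disc of radius 1 rotated by 60 degrees projects to an ellipse with semi-axes 1 and 0.5; adding cues such as texture or occlusion can disambiguate the two interpretations, which is what the grouping experiments exploit.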