Lab Seminars


Seminar on Amplified Head Rotation in Virtual Reality and the Effects on 3D Search, Training Transfer, and Spatial Orientation
Date: 2 MAR 2018
Time: 1:00 PM
Speaker: Breawn Schoun

Summary: Many types of virtual reality (VR) systems allow users to view a 3D environment through natural, physical head movements. In some situations, such as when using systems that lack a fully surrounding display or when opting for convenient low-effort interaction, view control can be enabled through a combination of physical and virtual turns, but the reduced realism could interfere with the ability to maintain spatial orientation. One solution to this problem is to amplify head rotations such that smaller physical turns are mapped to larger virtual turns, allowing trainees to view the entire surrounding environment with small head movements. This solution is attractive because it allows semi-natural physical view control rather than requiring complete physical rotations or a fully surrounding display. However, the effects of amplified head rotations on spatial orientation and many practical tasks are not well understood. In this paper, we present an experiment that evaluates the influence of amplified head rotation on 3D search, spatial orientation, and cybersickness. In the study, we varied the amount of amplification and the type of display used (head-mounted display or surround-screen CAVE) for the VR search task. By evaluating participants first with amplification and then without, we were also able to study training transfer effects. The findings demonstrate the feasibility of using amplified head rotation to view 360 degrees of virtual space, but noticeable problems were identified when using high amplification with a head-mounted display. In addition, participants were able to more easily maintain a sense of spatial orientation when using the CAVE version of the application, which suggests that visibility of the user’s body and awareness of the CAVE’s physical environment may have helped participants use the amplification technique while keeping track of orientation.
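
The core technique is a simple gain applied to physical head yaw. A minimal sketch in Python, with an illustrative amplification factor (the study's exact amplification values are not given in the summary above):

def amplified_yaw(physical_yaw_deg, amplification=2.0):
    """Map a physical head yaw to a larger virtual yaw.

    With amplification=2.0, a +/-90 degree physical turn covers the
    full 360 degrees of virtual space (+/-180 degrees virtually).
    """
    return physical_yaw_deg * amplification

# Example: a 45-degree physical turn becomes a 90-degree virtual turn.
print(amplified_yaw(45.0))  # 90.0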

Download PDF


Seminar on Behavior Recognition Using Multiple Depth Cameras Based on a Time-Variant Skeleton Vector Projection
Date: 23 FEB 2018
Time: 1:00 PM
Speaker: Hawkar Oagaz

Summary: User behavior recognition in a smart office environment is a challenging research task. Wearable sensors can be used to recognize behaviors, but such sensors could go unworn, making the recognition task unreliable. Cameras are also used to recognize behaviors, but occlusions and unstable lighting conditions reduce such methods’ recognition accuracy. To address these problems, we propose a time-variant skeleton vector projection scheme using multiple infrared-based depth cameras for behavior recognition. The contribution of this paper is threefold: 1) The proposed method can extract reliable projected skeleton vector features by compensating occluded data using nonoccluded data; 2) the proposed occlusion-based weighting element generation can be employed to train support-vector-machine-based classifiers to recognize behaviors in a multiple-view environment; and 3) the proposed method achieves superior behavior recognition accuracy and involves less computational complexity compared with other state-of-the-art methods for practical testing environments.
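
To make the occlusion-compensation idea concrete, here is an illustrative sketch (not the paper's actual algorithm) that fills joints occluded in one camera view using the views that did observe them, then trains a support vector machine on the fused, flattened skeleton vectors. It uses NumPy and scikit-learn; all names are hypothetical:

import numpy as np
from sklearn.svm import SVC

def fuse_skeletons(views):
    """views: list of (J, 3) joint arrays, one per camera, with NaN
    where a joint is occluded. Each joint is averaged over only the
    cameras that actually observed it."""
    stacked = np.stack(views)                  # (C, J, 3)
    observed = ~np.isnan(stacked[..., 0])      # (C, J) visibility mask
    weights = observed / np.maximum(observed.sum(axis=0, keepdims=True), 1)
    return np.nansum(stacked * weights[..., None], axis=0)  # (J, 3)

def train_behavior_classifier(per_frame_views, labels):
    """per_frame_views: per-frame lists of per-camera skeletons;
    labels: one behavior label per frame."""
    X = np.array([fuse_skeletons(v).ravel() for v in per_frame_views])
    return SVC(kernel="rbf").fit(X, labels)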

Download PDF


Seminar on AR Feels “Softer” than VR: Haptic Perception of Stiffness in Augmented versus Virtual Reality
Date: 9 FEB 2018
Time: 1:00 PM
Speaker: Sayed Mohsin Reza

Summary: Does it feel the same when you touch an object in Augmented Reality (AR) or in Virtual Reality (VR)? In this paper, we study and compare the haptic perception of stiffness of a virtual object in two situations: (1) a purely virtual environment versus (2) a real and augmented environment. We designed an experimental setup based on a Microsoft HoloLens and a haptic force-feedback device that enables participants to press a virtual piston and compare its stiffness successively in either Augmented Reality (the virtual piston is surrounded by several real objects, all located inside a cardboard box) or in Virtual Reality (the same virtual piston is displayed in a fully virtual scene composed of the same other objects). We conducted a psychophysical experiment with 12 participants. Our results show a surprising bias in perception between the two conditions: the virtual piston is, on average, perceived as stiffer in the VR condition than in the AR condition. For instance, when the piston had the same stiffness in AR and VR, participants selected the VR piston as the stiffer one in 60% of cases. This suggests a psychological effect in which objects in AR feel “softer” than in pure VR. Taken together, our results open new perspectives on perception in AR versus VR and pave the way for future studies aimed at characterizing potential perceptual biases.
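
For context, a force-feedback device typically renders stiffness as a spring force proportional to penetration depth (Hooke's law); the stiffness constant k is the quantity participants compared. A minimal sketch with illustrative numbers (the study's actual stiffness values are not given above):

def piston_force(probe_pos_m, piston_surface_m, k=300.0):
    """Restoring force (N) pushing the haptic probe back out of the
    virtual piston; k is stiffness in N/m, positions in metres."""
    penetration = piston_surface_m - probe_pos_m   # > 0 while pressing in
    return k * penetration if penetration > 0 else 0.0

# A stiffer piston (larger k) resists the same 1 cm press with more force:
print(piston_force(0.49, 0.50, k=300.0))  # ~3.0 N
print(piston_force(0.49, 0.50, k=600.0))  # ~6.0 N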

Download PDF


Seminar on Emulation of Physician Tasks in Eye-tracked Virtual Reality for Remote Diagnosis of Neurodegenerative Disease
Date: 26 JAN 2018
Time: 1:00 PM
Speaker: Shane Transue

Summary: For neurodegenerative conditions like Parkinson’s disease, early and accurate diagnosis is still a difficult task. Evaluations can be time consuming, patients must often travel to metropolitan areas or different cities to see experts, and misdiagnosis can result in improper treatment. To date, only a handful of assistive or remote methods exist to help physicians evaluate patients with suspected neurological disease in a convenient and consistent way. In this paper, we present a low-cost VR interface designed to support evaluation and diagnosis of neurodegenerative disease and test its use in a clinical setting. Using a commercially available VR display with an infrared camera integrated into the lens, we have constructed a 3D virtual environment designed to emulate common tasks used to evaluate patients, such as fixating on a point, conducting smooth pursuit of an object, or executing saccades. These virtual tasks are designed to elicit eye movements commonly associated with neurodegenerative disease, such as abnormal saccades, square wave jerks, and ocular tremor. Next, we conducted experiments with 9 patients with a diagnosis of Parkinson’s disease and 7 healthy controls to test the system’s potential to emulate tasks for clinical diagnosis. We then applied eye tracking algorithms and image enhancement to the eye recordings taken during the experiment and conducted a short follow-up study with two physicians for evaluation. Results showed that our VR interface was able to elicit five common types of movements usable for evaluation, physicians were able to confirm three out of four abnormalities, and visualizations were rated as potentially useful for diagnosis.
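
As an illustration of how such eye recordings can be analyzed, the sketch below applies a standard velocity-threshold (I-VT) saccade detector to a gaze trace; the paper's actual eye tracking algorithms are not specified above, and the threshold shown is a common but assumed value:

import numpy as np

def detect_saccades(gaze_deg, sample_rate_hz, threshold_deg_per_s=30.0):
    """Mark samples whose angular velocity exceeds the saccade threshold.
    gaze_deg: 1D gaze angle trace in degrees at a fixed sample rate."""
    velocity = np.abs(np.diff(gaze_deg)) * sample_rate_hz  # deg/s
    return np.concatenate([[False], velocity > threshold_deg_per_s])

# Example: a 120 Hz trace with one fast jump is flagged as a saccade.
trace = np.array([0.0, 0.1, 0.1, 5.0, 5.1, 5.1])
print(detect_saccades(trace, 120))  # [False False False True False False]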

Download PDF