Stanford University researchers use Vizard and PPT to study human interaction within the confines of VR

client: Stanford University, Virtual Human Interaction Lab

research field: Human Interaction

equipment used: WorldViz Vizard virtual reality software toolkit, WorldViz PPT-X 8 optical/inertial hybrid wide-area tracking system, enterprise-grade VR headset, Complete Characters avatar software package.

The mission of the Virtual Human Interaction Lab (VHIL) is to understand the dynamics and implications of interactions among people in immersive virtual reality (VR) simulations and other forms of media (e.g., digital communication systems and video games). Researchers in the lab are most concerned with understanding the social interaction that occurs within the confines of VR, and the majority of their work centers on using empirical behavioral science methodologies to observe people as they interact in these digital worlds. Answering these basic social questions sometimes requires developing new gesture-tracking systems, three-dimensional modeling techniques, or agent-behavior algorithms. Consequently, VHIL researchers also engage in research geared toward developing new ways to produce these VR simulations.

Our research programs tend to fall under one of three larger questions:

What new social issues arise from the use of immersive VR communication systems?

How can VR be used as a basic research tool to study the nuances of face-to-face interaction?

How can VR be applied to improve everyday life, such as in legal practices or communications systems?


Avatars and Behavioral Modeling

Virtual reality enables us to create a powerful and persuasive stimulus: the virtual self. Using digital photographs, we can create avatars that bear a striking resemblance to the self. We can then manipulate the virtual self in myriad ways that would be difficult or even impossible in the real world. The virtual self can modify its appearance or perform a behavior that the real self cannot, thus serving as a novel type of model. According to social cognitive theory, models can be valuable stimuli for encouraging the imitation of particular behaviors. Thus, we are investigating how using self-models and virtually manipulating social cognitive constructs such as identification, self-efficacy, and vicarious reinforcement can influence imitation, particularly in the context of health and consumer behaviors. Is seeing the virtual self engage in a healthful activity more or less effective than seeing a virtual other do so? When an avatar demonstrates the benefits of using a product in the third person, does the consumer then go out and buy that product? Can behaviors be encouraged by seeing the virtual self model health-related rewards and punishments, such as weight loss or weight gain?

The Proteus Effect

Cyberspace grants us great control over our self-representations. At the click of a button, we can alter our gender, age, attractiveness, and skin tone. But as we choose our avatars online, do our avatars change us in turn? In a series of studies, we have explored how placing people in avatars of different attractiveness or height changes how they behave in a virtual environment.

Transformed Social Interaction

In collaboration with the Research Center for Virtual Environments and Behavior, we are interested in the experience of social presence as well as task performance within collaborative virtual environments. We are utilizing virtual reality simulations in which people interact in real time within a collaborative virtual environment. Specifically, we seek to: 1) learn more about the behaviors that occur during collaboration, and 2) explore the idea of transforming social interaction by selectively augmenting and decrementing these behaviors in order to provide the interactants with novel tools during interaction. In other words, by selectively rendering behaviors that were not actually performed, or alternatively by not rendering behaviors that were in fact performed, immersive virtual environments allow for conversational strategies that are not possible in face-to-face interaction or videoconferencing. We are examining the effect of implementing these novel strategies and testing their influence on conversation in terms of task performance, learning, and persuasion. See our Wikipedia entry on TSI.

Avatar Identity

What are the implications of having an avatar, that is, a digital model that represents you in virtual reality? We are studying the ties that individuals have to an avatar. Specifically, how much does an avatar need to resemble its owner, both visually and behaviorally, in order for person-specific influences to take effect? Using a variety of affective, behavioral, and cognitive measures, we are exploring the phenomenon of the virtual self and examining the implications of avatar representation.

Learning in Immersive VR

In collaboration with Berkeley’s CITRIS lab, we are exploring how immersive virtual reality extends the benefits of video learning by allowing the user to enter the same world as the teacher. First, immersive settings allow users to see in full three dimensions, greatly increasing detail, presence (i.e., learners feel psychologically as if they are in the digital learning environment, as opposed to the physical space), and social presence (i.e., they feel as if the digital reconstruction of the instructor is a real person). Second, as opposed to stationary video, immersive virtual settings allow users to control how they view the environment by letting them change aspects such as camera position and orientation, even allowing a real-time disconnect between their own representation and their point of view. Third, video settings only allow users to watch the instructor; immersive virtual reality allows the user to interact with the instructor and the environment, as well as to perform novel functions such as sharing body space with the instructor during learning. In the first experiment completed using this paradigm, in which participants learned tai chi moves, both subjective self-reports from the learners and more objective measures, namely expert-coder ratings of learners later performing the moves in physical space, showed that people learned more in the immersive virtual reality system than in the 2D video system.