A selection of recent projects (2005-2008) conducted at NII is briefly
described below. For a detailed discussion of ongoing projects, including cooperative
work with students from Tokyo University, please visit the publications page of
Helmut Prendinger.
Visual Attentive Presentation Agents (2006-08)
The purpose of visual attentive presentation agents is
to exploit users’ natural expressions of visual interest in the presented
material, detected by analyzing their eye gaze patterns, and to have
life-like virtual presenters adapt their presentation accordingly.
The video clip is a GALA Award 2006 winner in the category "Lifelike
Agent Application". The GALA (Gathering of Animated Lifelike Agents) 2006
Festival was held in conjunction with the 6th International Conference on
Intelligent Virtual Agents (IVA).
Detailed information is available at the
Visual Attentive Presentation Agents Website.
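The basic mechanism, estimating interest from gaze and letting the presenter react to it, can be sketched roughly as follows. This is a minimal illustration, not the system's actual implementation; the screen regions and the dwell threshold are invented for the example.

    # Minimal sketch of gaze-contingent presentation adaptation (illustrative only).
    # Assumes a stream of (timestamp, x, y) gaze samples from an eye tracker and
    # hypothetical screen regions for the presented items.

    REGIONS = {                              # screen areas of the presented items (pixels)
        "left_item":  (0, 0, 640, 1024),
        "right_item": (640, 0, 1280, 1024),
    }
    DWELL_THRESHOLD = 1.5                    # seconds of accumulated gaze counted as interest

    def region_of(x, y):
        for name, (x0, y0, x1, y1) in REGIONS.items():
            if x0 <= x < x1 and y0 <= y < y1:
                return name
        return None

    def item_to_elaborate(gaze_samples, sample_period=1.0 / 60):
        """Accumulate dwell time per region; return the item the presenter should focus on."""
        dwell = {name: 0.0 for name in REGIONS}
        for _t, x, y in gaze_samples:
            name = region_of(x, y)
            if name is not None:
                dwell[name] += sample_period
        focus = max(dwell, key=dwell.get)
        return focus if dwell[focus] >= DWELL_THRESHOLD else None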
MPML3D
MPML3D is our first candidate for the next generation of
authoring languages aimed at supporting digital content creators
in providing highly appealing and highly interactive content with
little effort. The language is based on our previously developed
family of Multimodal Presentation Markup Languages (MPML) that
broadly followed the "sequential" and "parallel" tagging
structure scheme for generating pre-synchronized presentations
featuring life-like characters and interactions with the user. The
new markup language MPML3D deviates from this design framework and
proposes a reactive model instead, which is apt to handle
interaction-rich scenarios with highly realistic 3D characters.
Interaction in previous versions of MPML could be handled only at
the cost of considerable scripting effort due to branching. By
contrast, MPML3D advocates a reactive model that allows
perceptions of other characters or the user to interfere with the
presentation flow at any time, and thus facilitates natural and
unrestricted interaction.
MPML3D is designed as a powerful and
flexible language that is easy to use by non-experts, but it is
also extensible, as it allows content creators to add functionality
such as a narrative model by using popular scripting languages.
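The difference between pre-scripted branching and the reactive model can be illustrated with a small sketch. This is plain Python pseudocode with invented event names, not MPML3D syntax or the actual runtime.

    # Illustrative sketch of the reactive idea: perceptions may interrupt the
    # scripted presentation flow at any time, instead of being handled only at
    # pre-authored branch points. Not actual MPML3D syntax or runtime code.

    import queue

    events = queue.Queue()                   # perceptions of the user or of other characters

    def speak(text):
        print("Agent:", text)

    def on_user_question(event):
        speak("Let me address your question about %s first." % event["topic"])

    REACTIONS = {"user_question": on_user_question}

    def present(utterances):
        for utterance in utterances:
            # poll perceptions before every step, so a reaction can cut in at any time
            while not events.empty():
                event = events.get()
                handler = REACTIONS.get(event["type"])
                if handler:
                    handler(event)
            speak(utterance)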
AutoSelect: What You Want Is What You Get. Real-Time Processing of
Visual Attention and Affect (2006)
While the objects of our focus of attention ("where we are
looking") and the accompanying affective responses to those
objects are part of our daily experience, little research has
investigated the relation between attention and positive
affective evaluation. The purpose of our research is to process
users' emotion and attention in real time, with the goal of
designing systems that may recognize a user's affective response
to a particular visually presented stimulus in the presence of
other stimuli, and respond accordingly. In this work, we
introduce the AutoSelect system, which automatically
detects a user's preference based on eye movement data and
physiological signals in a two-alternative forced choice task. In
an exploratory study involving the selection of neckties, the
system could correctly classify subjects' choice in 81% of cases. In this
instance of AutoSelect, the gaze 'cascade effect' played a
dominant role, whereas pupil size could not be shown to be a reliable
predictor of preference.
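A rough sketch of how the gaze 'cascade effect' can be turned into a preference guess in a two-alternative choice is given below. The window length and the majority rule are assumptions for illustration, not the actual AutoSelect classifier.

    # Sketch of a gaze-cascade based preference guess for a two-alternative choice.
    # Input: a time-ordered list recording which item ("A" or "B") the gaze was on
    # at each sample, or None when it was on neither. The 2-second window and the
    # majority rule are assumptions for illustration only.

    def predict_preference(gaze_targets, sample_rate=60, window_s=2.0):
        """Predict the chosen item from the gaze distribution in the final time window."""
        recent = gaze_targets[-int(sample_rate * window_s):]
        share_a = sum(1 for t in recent if t == "A")
        share_b = sum(1 for t in recent if t == "B")
        return "A" if share_a >= share_b else "B"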
Demo: The video clip (21MB) provides an introduction to the work on automated
preference detection. The clip was designed and edited by Arjen Hoekstra.
iPick: Eye-based Interaction with an Augmented Reality Video
Conferencing System (2005)
We have implemented an augmented reality videoconferencing system
that inserts virtual graphics overlays into the live video stream of
remote conference participants. The virtual objects are manipulated using
a novel interaction technique cascading fiducial marker-based bimanual
tangible interaction and eye tracking. User studies show that our user
interface enriches remote collaboration by offering hitherto unexplored
ways for collaborative object manipulation, such as gaze-controlled
raypicking of remote physical and virtual mobile objects.
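Gaze-controlled raypicking reduces to casting a ray from the estimated eye position along the gaze direction and testing it against the scene. A minimal geometric sketch follows, with objects approximated by bounding spheres as a simplification for the example; it is not the system's actual picking code.

    # Minimal ray-picking sketch: cast a ray along the gaze direction and report
    # the closest object it hits. Objects are approximated by bounding spheres,
    # which is a simplification made for this illustration.

    import math

    def pick(origin, direction, objects):
        """origin, direction: 3-tuples (direction normalized);
        objects: list of (name, center, radius) with center a 3-tuple."""
        best = None
        for name, center, radius in objects:
            oc = tuple(o - c for o, c in zip(origin, center))
            b = 2.0 * sum(d * v for d, v in zip(direction, oc))
            c = sum(v * v for v in oc) - radius * radius
            disc = b * b - 4.0 * c           # quadratic for |origin + t*direction - center| = radius
            if disc < 0:
                continue                     # ray misses this sphere
            t = (-b - math.sqrt(disc)) / 2.0
            if t > 0 and (best is None or t < best[0]):
                best = (t, name)
        return None if best is None else best[1]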
Demo: István Barakonyi produced this amazing video clip (139MB, XviD codec needed).
Affective Gaming (2005)
This work advocates a novel method for evaluating the impact of
animated interface agents with affective and empathic behavior.
While previous studies relied on questionnaires in order to assess
the user's overall experience with the interface agent, we
analyze users' physiological response (skin conductance and
electromyography), which allows us to estimate affect-related user
experiences on a moment-by-moment basis without interfering with
the primary interaction task. As an interaction scenario, a card
game has been implemented where the user plays against a virtual
opponent. We used the Max agent developed at the University of Bielefeld.
The findings of our study indicate that within a
competitive gaming scenario, (i) the absence of the agent's
display of negative emotions is perceived as arousing or
stress-inducing, and (ii) the valence of users' emotional response
is congruent with the valence of the emotion expressed by the
agent. Our results for skin conductance could also be reproduced
by assuming a local rather than a global baseline.
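The moment-by-moment estimation can be sketched roughly as follows. The signal mapping and the sliding-window baseline are illustrative assumptions; the study's actual signal processing is described in the corresponding publications.

    # Sketch of moment-by-moment affect estimation from physiological signals.
    # Skin conductance is read as arousal relative to a baseline, and EMG of the
    # frowning muscle as negative valence; the sliding window mirrors the idea of
    # a local rather than a global baseline. All constants are illustrative.

    from collections import deque

    class AffectEstimator:
        def __init__(self, window=50):
            self.sc_history = deque(maxlen=window)    # recent skin conductance samples

        def update(self, skin_conductance, emg):
            self.sc_history.append(skin_conductance)
            local_baseline = sum(self.sc_history) / len(self.sc_history)
            arousal = skin_conductance - local_baseline   # above local baseline -> aroused
            valence = -emg                                # higher frowning EMG -> more negative
            return arousal, valence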
Demo: A video clip (16MB) showing the game interaction has been prepared by
Christian Becker.
Please click here to visit Helmut Prendinger's homepage.
last modified: March 2008