Interactive Augmented Reality Storytelling
Guided by Scene Semantics

Abstract

We present a novel interactive augmented reality (AR) storytelling approach guided by indoor scene semantics. Our approach automatically populates virtual content in real-world environments to deliver AR stories that match both the story plot and the scene semantics. During the storytelling process, a player can participate in the story as a character, while the behaviors of the virtual characters and the placement of the virtual items adapt to the player's actions. A raw input story is represented as a sequence of events, which contain high-level descriptions of the characters' states, and is converted into a graph representation with automatically supplemented low-level spatial details. Our hierarchical story sampling approach samples realistic character behaviors that fit the story context through optimization, and an animator, which estimates and prioritizes the player's actions, animates the virtual characters to tell the story in AR. Through experiments and a user study, we validate the effectiveness of our approach for AR storytelling in different environments.
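
To make the event-to-graph idea above concrete, the following is a minimal sketch in Python of how a story could be stored as a sequence of high-level events and converted into a graph whose edges hold supplemented low-level spatial details. The Event and StoryGraph classes, their field names, and the semantic labels are hypothetical illustrations under our own assumptions, not the authors' actual data structures; please refer to the [Code] link for the real implementation.

# Minimal sketch: events (high-level character states) collected into a
# story graph; edge payloads stand in for automatically supplemented
# low-level spatial details. All names here are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Event:
    """High-level description of a character's state at one story step."""
    character: str          # e.g. "guest"
    action: str             # e.g. "sit"
    semantic_target: str    # scene-semantic label, e.g. "sofa", "table"

@dataclass
class StoryGraph:
    """Events as nodes; edges carry low-level spatial details between steps."""
    nodes: list = field(default_factory=list)
    edges: list = field(default_factory=list)   # (i, j, spatial_details)

    def add_event(self, event, spatial_details=None):
        self.nodes.append(event)
        if len(self.nodes) > 1:
            i, j = len(self.nodes) - 2, len(self.nodes) - 1
            # In the full system, details such as walk paths or placement
            # positions would be derived from the scanned scene; here we
            # just attach whatever dictionary the caller provides.
            self.edges.append((i, j, spatial_details or {}))
        return self

# Example: a two-event story fragment grounded on scene semantics.
story = (StoryGraph()
         .add_event(Event("guest", "sit", "sofa"))
         .add_event(Event("guest", "pick_up", "cup"),
                    spatial_details={"path": "sampled from free space"}))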

Keywords: Augmented Reality, Storytelling, Character Animation

Publication


Interactive Augmented Reality Storytelling Guided by Scene Semantics
Changyang Li, Wanwan Li, Haikun Huang, Lap-Fai Yu
ACM Transactions on Graphics (Proceedings of SIGGRAPH 2022)
[Main paper] [Supplementary material] [Code]

BibTex

@article{arstorytelling,
    title     = {Interactive Augmented Reality Storytelling Guided by Scene Semantics},
    author    = {Changyang Li and Wanwan Li and Haikun Huang and Lap-Fai Yu},
    journal   = {ACM Transactions on Graphics (TOG)},
    volume    = {41},
    number    = {4},
    year      = {2022},
    publisher = {ACM New York, NY, USA}
}

Main Video

RESULTS

Experiment Results



AR storytelling in a real apartment scene. The blue avatar denotes the AR player.
The virtual character's behaviors adapt to the actions of the AR player (denoted by the blue avatar).
We extend our approach to automatically place virtual furniture and appliance objects for storytelling.
Our approach can retarget the same story to different scenes of a specific type. The virtual apartment scene is from the Matterport3D dataset (https://niessner.github.io/Matterport).

Additional Video Results

OUR TEAM

Changyang Li ¹

Wanwan Li ¹,²

Haikun Huang ¹

Lap-Fai Yu ¹

¹ George Mason University

² University of South Florida