EMMIE - Environment Management for Multi-User Information Environments

Andreas Butz, Tobias Höllerer, Tom Dickes, Blair MacIntyre, Steven Feiner

The Computer Graphics and User Interfaces Laboratory

Introduction

One important problem in multi-user virtual or augmented reality environments, as well as in ubiquitous computing environments, is the intelligent management of units of information, such as text, images, and animation clips. In a conventional WIMP GUI, these units of information are usually displayed as windows or icons on a desktop, and are managed by some kind of window manager. Window-management operations typically include moving, opening, closing, and iconifying windows with a mouse. The manager also provides means for starting applications, getting general or context-sensitive help, or finding things in the environment. Furthermore, some window managers provide basic functionality for the automated arrangement and layout of windows and icons on the screen as they are created (find a free place, tile, cascade). Once the windows are on the screen, the functions provided for their automatic rearrangement are either very simplistic (cascade/tile on request) or absent.
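The automated placement functions mentioned above (find a free place, tile, cascade) can be illustrated with a minimal sketch. The function names and parameters below are purely illustrative and are not taken from any particular window manager:

```python
# Illustrative sketch of "tile" and "cascade" placement, the kind of
# automatic layout a conventional window manager offers at window
# creation time. All names and parameters are hypothetical.

def tile(n, screen_w, screen_h, cols=2):
    """Return (x, y, w, h) rectangles that tile the screen for n windows."""
    rows = (n + cols - 1) // cols  # enough rows to hold all n windows
    w, h = screen_w // cols, screen_h // rows
    return [((i % cols) * w, (i // cols) * h, w, h) for i in range(n)]

def cascade(n, win_w, win_h, step=24):
    """Return (x, y, w, h) rectangles, each offset diagonally from the last."""
    return [(i * step, i * step, win_w, win_h) for i in range(n)]
```

Once windows are on the screen, most managers offer only these operations on request; dynamic rearrangement, as discussed below, goes beyond them.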

Objective

Our goal with the EMMIE system is twofold. The first part is to provide services similar to those of a conventional window manager in a Multi-User Augmented Environment, creating a simple and intuitive way of manually managing information units across different displays and between several users. Since the environment is inhabited by several users who can share certain information units but may want to keep others only in their private view of the environment, the additional issue of privacy vs. publicity has to be addressed. Private information should not appear on publicly visible displays, whereas public information has to be shown on public displays or shared among all of the private displays (hand-held or head-mounted). The second part is to actively assist the user through dynamic layout mechanisms. Virtual objects should, for example, not occlude other users or displays in the environment unless explicitly placed to do so. They can be attached to real-world objects or people, or to fixed locations within the field of view.
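The privacy vs. publicity rule described above can be sketched as a simple visibility predicate. The types and names below are hypothetical illustrations, not the EMMIE API:

```python
# Sketch of the privacy vs. publicity rule: a private information unit
# is visible only on its owner's private display (hand-held or
# head-mounted), while a public unit may appear on any display.
# All class and field names are hypothetical, not EMMIE code.

from dataclasses import dataclass
from typing import Optional

@dataclass
class InfoUnit:
    name: str
    owner: str       # user who owns this information unit
    private: bool    # True: visible only in the owner's private view

@dataclass
class Display:
    kind: str                   # "public" (wall-sized, desktop) or "private"
    user: Optional[str] = None  # owning user, for private displays

def visible_on(unit: InfoUnit, display: Display) -> bool:
    if unit.private:
        # private units appear only on their owner's private display
        return display.kind == "private" and display.user == unit.owner
    # public units appear on public and private displays alike
    return True
```

A renderer for each display could filter its scene with this predicate before drawing, so a private note never reaches a wall-sized screen.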

Motivation

Several augmented reality application prototypes have been built by the Computer Graphics and User Interfaces Lab at Columbia over the past decade (e.g., see KARMA, Windows on the World, Architectural Anatomy, and Augmented Reality for Construction). These prototypes have grown increasingly complex, and essentially form heavily distributed ubiquitous computing environments augmented by virtual objects. Our augmented environments combine a variety of displays (hand-held, head-worn, desktop, and wall-sized), including both opaque and see-through, and make use of many heterogeneous input devices. Our experience with them has revealed a strong need for a general environment management mechanism. The issues and notion of environment management were first discussed by Blair MacIntyre and Steven Feiner in their 1996 article "Future Multimedia User Interfaces" (see publications). Environment management involves mechanisms for dynamically changing the layout of virtual objects in the user's field of view. The need for rearrangement became clear to us in our work on Windows on the World and "Hybrid User Interfaces" (see publications). In these projects we approached the limits of statically placed virtual objects (either attached to real-world objects or fixed in space).

Scenario

One application scenario we imagine is the management of automatically generated multimedia presentations (see MAGIC) for augmented environments with different categories of displays. Another is a collaborative scenario in which users of an augmented environment hold a meeting with the help of this environment. A third application area is the scenario of the National Tele-Immersion Initiative.


Some Studies in VRML

In order to brainstorm, discuss, and try out ideas, we created some VRML97 models of the Tele-Immersion scenario. These models show a possible transfer of the window-desktop metaphor to 3D augmented reality environments. As in conventional window-desktop environments, our augmented environment contains documents and applications. Each document type, as well as each application, has a visual representation in the form of a 3D icon. In order to support existing 2D applications on computers residing in the 3D environment, these applications have a 2D visual representation in the form of a 2D icon. The operations on these icons are similar to those known from the window-desktop metaphor, such as drag and drop or click to open.

In addition to the operations and services known from conventional window managers, a Multi-User Augmented Environment requires dealing with the issue of privacy vs. publicity. Not every item in one user's personal view of the environment should necessarily be visible to every other user. On the other hand, we should be able to display some items publicly in order to discuss them with other users. We have developed and discussed several approaches to this, two of which are actually implemented in the VRML worlds.


Implementation

We are currently implementing a prototype of an information manager for Multi-User Augmented Information Environments. The implementation uses the lab's distributed graphics programming environment COTERIE and runs on different flavors of Unix as well as on Windows NT. It integrates wall-sized displays, conventional PCs, laptops, and workstations, together with see-through head-mounted displays, into one augmented environment in which information units can be presented in multiple ways and passed between machines. A snapshot of the current system was taken with a camera looking through a head-mounted display.



Publications

A. Butz, T. Höllerer, S. Feiner, B. MacIntyre, and C. Beshers. Enveloping Users and Computers in a Collaborative 3D Augmented Reality. In Proc. IWAR '99 (Int. Workshop on Augmented Reality), San Francisco, CA, October 20-21, 1999, pp. 35-44.


Acknowledgements

This research is supported by the German Academic Exchange Service; the National Tele-Immersion Initiative; the Office of Naval Research under Contract N00014-97-1-0838; hardware and software gifts from Intel, Mitsubishi Electric Research Labs, and Microsoft; the New York State Center for Advanced Technology in Computers and Information Systems under Contract NYSSTF-CAT-92-053; and NSF Grant CDA-92-23009. The working environment, hardware, infrastructure, and technical support are provided by the Computer Graphics and User Interfaces Lab at Columbia University, New York.


Please send comments to Andreas Butz at <butz@cs.columbia.edu>