Chapter 2

Multimedia User Interfaces

Authors: Dr. Rainer Götze, Dietrich Boles, Dr. Helmut Eirund

2.1 Abstract

Multimedia user interfaces integrate the processing of heterogeneous media, like text, graphics, video and audio, in order to enhance the effectiveness of human-computer interaction. An essential property of multimedia user interfaces is the support of both time-invariant and time-variant media types. Because all media are processed in a uniform digital form, multimedia user interfaces facilitate a flexible combination as well as a user-controlled selection and presentation of information. Interactive multimedia systems are predicted to have a growing application market, e.g. in entertainment electronics, point-of-sale, point-of-information and computer-based training applications ([24], [62]). The keywords of the expanding multimedia technology are therefore "media integration" and "interactivity".

The increasing performance and simultaneously decreasing prices of multimedia hardware have led to a steadily growing dissemination of multimedia applications. However, besides the hardware equipment, multimedia applications need an appropriate software environment. Therefore research efforts are oriented towards the investigation of multimedia extensions of operating systems, database management systems, communication systems, document architectures, synchronization mechanisms and user interfaces [70]. An important requirement for a software environment for multimedia systems is the integration of the new media types into the human-computer interaction. Therefore, multimedia extensions of user interfaces and their development tools have gained special attention.

This chapter describes the intermediate results of the XFantasy project, which is concerned with the development of software tools for the design and implementation of multimedia user interfaces and interactive multimedia applications. The resulting software environment consists of an object-oriented user interface management system (UIMS) for multimedia user interfaces (XFantasy-UIMS) and an authoring system (FMAD: "XFantasy-based Multimedia Application Developer") for the interactive development of multimedia applications.

For the implementation of the inherent parallelism in multimedia user interfaces the parallel object-oriented programming language QPC++ has been developed. QPC++ is an extension of the object-oriented programming language C++. It integrates mechanisms for specifying parallelism, communication, and synchronization into the base language. QPC++ merges concepts of object-oriented and parallel programming by creating processes as instances of specialized classes, called process classes.

2.2 Setting

2.2.1 Multimedia User Interfaces

Software tools for the development of multimedia user interfaces have to support discrete media as well as continuous media. Discrete media imply a transactional model of dialog control: each event caused by a user input or internal message is processed by a particular action that terminates before the next event is processed. However, the integration of continuous media renders this model inadequate because actions like presenting a video or playing an audio clip have a duration. In order to facilitate the dynamic manipulation of continuous output actions by user interactions, a dialog model has to incorporate the concurrent execution and synchronization of output actions and interactions.

In recent years, several approaches to the management of continuous media in user interfaces have been published. Nearly all of them encompass concepts for defining the temporal relations of time-variant output actions.

Gibbs [34] presents a framework for the composition of multimedia objects. Composite multimedia objects manage a collection of component multimedia objects and define their temporal relations by composite timeline diagrams. For this purpose they offer temporal transformations that manipulate the temporal translation, scale and orientation of their component multimedia objects.

A technique for the formal specification and modeling of multimedia composition with respect to intermedia timing and synchronization is presented in [59]. This technique is based on a modification of Timed Petri Nets and temporal intervals. The augmented Petri Net model defines two additional mappings which assign duration and resource usage to the places and thereby supports the synchronization of concurrently presented temporally related multimedia objects.

Other approaches have enhanced formal document structures with concepts for the definition of temporal relations. Hoepner [50] proposes a synchronization model for ODA ("Open Document Architecture") [53] based on events and actions. The synchronization of the presentation of multimedia objects is specified by path expressions whose path operators define the temporal relations of output actions and control the corresponding synchronization. In [50] these path expressions are integrated into the composite layout objects of ODA.

Another approach to the standardized representation of multimedia, hypertext, hypermedia, and time- and state-based documents is HyTime ("Hypermedia/Time-based Document Structuring Language") ([64], [54]). In HyTime the temporal scheduling of multimedia objects is expressed in terms of "finite coordinate systems" which define a specific measurement domain and a reference time unit.

A further standard currently under development by ISO is MHEG ("Multimedia and Hypermedia Information Coding Experts Group") [9], which focuses on multimedia synchronization, hypermedia navigation and object-orientation. In [61] a hypermedia object model and presentation environment based on MHEG is proposed. This model uses composite objects for the definition of temporal relations between subordinate objects and additionally supports synchronization at internal reference points, which are modeled as timestone events and propagated to other objects. Timestone events may coincide with video frames, audio samples or animation scenes and facilitate the implementation of more sophisticated synchronization mechanisms than parallel or sequential start and end.

The Ttoolkit [46] represents an approach to the integration of time management into an existing user interface toolkit for the development of graphical user interfaces. It has been implemented as an extension of the Xt toolkit [65] by defining a new branch of the Xt class hierarchy. In order to achieve an isomorphic treatment of time and space, the new classes conform to the Xt model for spatial composition, i.e. they are subdivided into simple classes encapsulating the media-dependent operations and composite classes whose instances define the temporal relations of their subordinate objects.

Timelines represent another approach to the implicit specification of temporal relations by specifying the starting time and duration of continuous output actions in physical or logical time units. They are used in the multimedia construction tools Muse [49], MAEstro [25] and QuickTime [8]. Because of their user-friendliness they have also been adopted by many commercial multimedia design tools, e.g. Director [79] and MediaMaker [80].

All these approaches use temporal relations only for the synchronization and composition of multimedia presentations and either do not consider user interactions at all or only incorporate predefined standard user interactions like in [9] and [61]. The dialog model developed in the XFantasy project not only supports the temporal composition of multimedia output actions but also their synchronization with interactions.

The description of interactions has already been considered in other approaches. In particular, the interaction model [51], the EDGE model [57] and the AIT model [20] address this problem and have incorporated ideas from Anson's device model [6], which uses hierarchically arranged logical devices for the description of complex user inputs. These models apply the paradigm of interaction hierarchies [20] by modeling user interfaces in terms of complex interactions. However, in contrast to our dialog model, actions in the interaction, EDGE and AIT models are tightly coupled to events since they are executed before or after an event occurs. Thus, they do not facilitate a synchronization of continuous actions and interactions, which is the main feature of the dialog model developed in the XFantasy project.

2.2.2 Parallel Object-Oriented Programming

Investigations into merging concepts of parallel and object-oriented programming have been made for several years. Parallel object-oriented programming languages (POOPLs) combine the advantages of object-oriented programming - especially reusability and extensibility of software - with the characteristic feature of parallel programming, namely the concurrent execution of certain operations. On the one hand, they can be exploited for the implementation of applications for distributed systems. On the other hand, they simplify the development of software in certain areas of application (e.g. simulations, interactive graphical user interfaces), not only on multi-processor systems but also on uni-processors. One way to classify POOPLs is described in [66], where several languages are compared with respect to what objects stand for: objects may be considered as processes, as shared passive abstract data types, or as encapsulations of multiple processes and data. A good overview of existing POOPLs is given in [82]. In this chapter, however, we focus on corresponding extensions of the object-oriented programming language C++ [71].

DROL [73] is an extension of C++ with the capability of describing distributed real-time systems. It supports the definition of sequential active objects. Timing constraints can be associated with the member functions of active objects. When an object violates a specified protocol at run-time, user-defined exception handling is initiated.

C_NET [1] is a POOPL which adds concepts of the parallel programming language OCCAM to C++. The mechanisms for expressing parallelism are orthogonal to the class concept of C++: classes and objects keep the same properties as in C++, and processes are created in OCCAM-like par-statements using objects as shared passive abstract data types.

C++ is an upward-compatible extension of the C programming language providing data abstraction facilities. Concurrent C [31] is an upward-compatible extension of C providing parallel programming facilities. By merging C++ and Concurrent C, the POOPL Concurrent C++ [32] has been defined. It offers both data abstraction and parallel programming facilities, in an orthogonal way: classes and objects can be defined as in C++, and processes can be defined as in Concurrent C. Thus, objects do not have any parallel properties and processes do not have any object-oriented properties.

C&& [52] adds mechanisms for expressing parallelism by means of cobegin-coend-blocks, well known from Concurrent Pascal [47], to C++. C&& offers the possibility to define special classes (parclasses). Instances of parclasses are passive objects (parobjects). Calls of member functions of parobjects can be placed in cobegin-coend-blocks, whose statements are executed in parallel. It is not allowed to execute two or more member functions of the same parobject simultaneously; member functions of parobjects can only be used exclusively.

ACT++ [55] is a parallel extension of C++ which supports the actor model of concurrent computation [2]. An actor is a self-contained active object. Actors are defined as instances of classes derived from a predefined class ACTOR. Interaction among actors can occur only through message passing. Each actor is associated with a unique mail queue whose address serves as the identifier of the actor. Member functions of actors are used for message passing; however, they are not called directly. Instead, their addresses and the actual arguments are packed into special send constructs. When an actor receives a message, it decodes the information and calls the corresponding member function. Message passing is always done asynchronously, but the addressed actor can pass results back by means of so-called Cbox-objects.

KAROS [45] is an exploratory language based on C++ which has been designed for reliable distributed applications. It has been implemented as a C++ class library along the lines of ACT++. KAROS provides two kinds of objects: ActiveObjects, which are global logical units of distribution, and DataObjects, which are passive objects always local to an ActiveObject. ActiveObjects communicate similarly to actors in ACT++ by asynchronously passing values, using future variables to store reply values.

mC++ [21] is an extension of C++ which supports parallelism on several levels of abstraction. Besides the definition of C++-like objects, it is possible to define coroutines, monitors, coroutine-monitors and tasks as instances of special kinds of classes. A task possesses a body which is implicitly activated after its creation and initialization. Tasks communicate via their member functions. Calls of member functions have to be accepted explicitly; afterwards, they are executed just like member functions of processes in QPC++. Member functions of tasks can only be called synchronously, and at any time only one member function per task can be active. In mC++ a member function of a task is executed by the client, not by the server. During execution of a member function, the client can be postponed until some later time. A postponed client is blocked and added to a condition queue. While the postponed client is deactivated, the server can accept and deal with other requests.

A different approach to the integration of mechanisms for expressing parallelism into object-oriented programming languages is offered by special libraries, e.g. in the form of class libraries. Such libraries are often called task libraries. Predefined classes offer routines for the creation of processes and for the interaction among processes. Programmers can derive new classes from the predefined ones and use the inherited functions. Such task libraries are described, for example, in [37], [44] and [72].

2.2.3 Multimedia Authoring Systems

Multimedia authoring systems are interactive software tools which enable their users (called authors) to develop multimedia applications without any knowledge of textual programming languages. Instead, they support the visual programming of multimedia applications [29].

In the last few years many authoring systems have been implemented (see [18]). Based on the way they support the specification of temporal relationships between the media objects of a multimedia application, authoring systems can be divided into three categories: screen-based, timeline-based and flowchart-based authoring systems.

In screen-based authoring systems like HyperCard [36] or ToolBook [11] multimedia applications are represented as stacks, books, or slide presentations which consist of a set of cards, pages, or slides with each of them representing a screen. The authoring system enables the author to lay out each screen with the help of interface builders and to create hyperlinks between the screens.

In timeline-based authoring systems like Macromind Director [60] or Muse [49] the film metaphor is used to characterize multimedia applications. A multimedia application is compared with a set of scenes which can be placed on a timeline. The author can specify actions which lead to a time jump in response to certain user interactions.

Flowchart-based authoring systems like Authorware Professional [68], Apple Media Kit [76], and Eventor [30] use flowcharts to describe the temporal relationships between the media objects of a multimedia application. Icons represent the objects and are connected by edges which specify the flow of control of the application.

The disadvantage of most existing authoring systems is that only the definition of temporal relationships between the media objects is supported by visual programming techniques. Some systems additionally integrate graphical techniques which allow the author to describe certain spatial relationships and relationships based on certain interactions (for example, the action to be performed as the result of the selection of a menu item by the user). To enable the author to define other types of relationships, special script languages are often integrated into the system. But this is inconsistent with the intention of authoring systems to support the development of multimedia applications by non-programmers. Another disadvantage of existing authoring systems is that in general only buttons and text entry fields are available to the author for integration into a multimedia application to enable the user to manipulate the flow of control or the layout of the application. Most of the systems are not extensible: they do not support the integration of new media types, interaction types, and other applications like databases into the system.

2.3 Aim of the Project

The aim of the XFantasy project [7] is the development of a software environment for the design and implementation of interactive multimedia user interfaces, consisting of an object-oriented user interface management system (XFantasy-UIMS) and an interactive authoring system (FMAD: "XFantasy-based Multimedia Application Developer"). Furthermore, the parallel object-oriented programming language QPC++ has been developed for the implementation of the inherent parallelism in multimedia user interfaces.

Fig.1 Software tools of XFantasy project

The XFantasy-UIMS comprises an object-oriented user interface toolkit (XFantasy-UIT) offering a class library for the implementation of multimedia user interfaces, and the dialog specification language ODIS ("Object-Oriented Dialog Specification") supporting an abstract description of dialogs in multimedia user interfaces. The interactive tool FMAD has been developed with the XFantasy-UIMS. It facilitates the visual development of multimedia applications without special knowledge of programming languages. Thus, the software tools of the XFantasy project support three levels of abstraction for the development of multimedia applications: object-oriented programming with the XFantasy-UIT, abstract specification of dialogs in ODIS, and visual programming of multimedia applications with FMAD.

2.3.1 XFantasy-UIMS

Multimedia user interfaces pose new requirements on models and software tools for their development since they include discrete (time-invariant) media and continuous (time-variant) media. Multimedia presentations often require the simultaneous output of multiple continuous media. Furthermore, for their dynamic manipulation, interactions have to be processed simultaneously with continuous output actions. Thus, multimedia user interfaces possess an inherent parallelism. Therefore, we have developed a dialog model which is especially adapted to these new requirements and supports the recognition and processing of user-defined interactions, the definition of complex multimedia presentations, and temporal relations between interactions and continuous output actions. It supports the development of common graphical user interfaces as well as multimedia user interfaces, relieving the dialog developer of programming difficult synchronization operations. It has been implemented as part of an object-oriented User Interface Toolkit (UIT).

The dialog specification language ODIS applies this dialog model to the abstract specification of dialogs in multimedia user interfaces. ODIS is based on an object-oriented user interface model and facilitates the modeling of user interfaces as systems of communicating interaction objects.

2.3.2 QPC++

The use of object-oriented programming languages has increased strongly in the last few years. In particular, the programming language C [56] has been replaced more and more by its object-oriented extension C++ [71]. The main reason for this development is the increased reusability and easy extensibility of object-oriented software. Additionally, the modeling of large systems is well supported by an object-oriented design: real-world problems can be modeled more naturally by objects than, for example, in a functional approach.

However, in a few areas of application, like simulations and the development of interactive graphical user interfaces, applications are often characterized by activities taking place in parallel. Processes in the form of active objects are better suited to modeling and implementing such applications than the passive objects of sequential object-oriented programming languages. In contrast to a passive object, which always waits until it receives a message (in C++ in the form of a call of a member function), an active object has an ongoing activity of its own. After being created, it executes a special function. The execution takes place in parallel with the execution of other functions by other active objects in the system. Only at certain explicitly indicated points can an active object communicate with other objects.

QPC++ supports such applications by offering the definition of processes in the form of active objects besides the definition of passive C++-like objects. Processes are defined as instances of specialized classes, called process classes. Because processes are not added to the base language in an orthogonal way, the concepts of object-oriented programming, like inheritance and polymorphism, are also applicable to processes. The main goal of the design of QPC++ was the integration of known, clear and easy-to-handle mechanisms for expressing parallelism into C++. The modeling and implementation of the applications mentioned above is well supported by QPC++. It is not intended to support the definition of very fine-grained processes. At the moment, the language is only implemented on uni-processors, where processes run quasi-parallel.

QPC++ is upward-compatible with C++. It is possible to use existing classes without any restrictions. Moreover, it is possible to define process classes by deriving them from ordinary C++ classes. Only a few new keywords and syntactic constructs are added to the base language. Thus, it is easy for a C++ programmer to switch from C++ to QPC++.

2.3.3 FMAD

Designers and advertising professionals, who in general are non-programmers, often want to develop multimedia applications. Therefore, authoring systems should enable an author to develop multimedia applications without any knowledge of traditional programming languages. Unlike most existing authoring systems, FMAD enables the development of highly interactive multimedia applications without resorting to a script language. It enables an author to define any sort of relationship between any sort of media objects by exclusively using concepts and techniques of visual programming languages [67].

FMAD has been implemented in the object-oriented programming language C++ [71], with the XFantasy-UIMS as development environment and run-time system. This supports the extensibility of FMAD: it is very easy to integrate new media types, interaction types, and applications like databases into FMAD. Therefore, FMAD can easily be adapted to new technologies in the field of human-computer communication.

2.4 Results

2.4.1 XFantasy-UIMS

2.4.1.1 Basic Concepts of the COMMAND Model

For the development of the XFantasy-UIMS a new dialog model has been developed, called the COMMAND model ("Control of Multimedia output and multi-threaded Dialogs") ([23], [42], [74]), which defines concepts for event handling and output synchronization in multimedia user interfaces. It is based on the interaction model [51] that facilitates the recognition and processing of user-defined complex events (interactions) by object hierarchies containing basic events as leaves. The model integrates dialog control into the event handling process and supports a strong connection between input and output processing. Complex multimedia output actions are defined as compositions of discrete and continuous output actions.

In the COMMAND model events are recognized by so-called event-sensitive objects which supervise their occurrence and inform other interested objects. Complex events composed of basic and other complex events are recognized bottom-up by hierarchies of event-sensitive objects. Each event-sensitive object announces interest in the events recognized by its subordinate objects and will be informed about their occurrence. An event-sensitive object will only control the recognition of an event if there is at least one other object interested in this event. Basic events are received from the underlying window system and the application and propagated by an object-oriented device interface (see fig. 3) consisting of so-called device objects.

Besides the events of the window system the COMMAND model also supports temporal and application-specific events. Fig. 2 shows the main tasks of the event-sensitive objects, event recognition and propagation.

The basis of event handling in the COMMAND model is an object-oriented interface to the input devices which consists of four device objects, as shown in fig. 3. The class of device objects is derived from the class of event-sensitive objects and hides the interface to the underlying window system and the application. Device objects receive events and propagate them to all objects which have announced interest in them. In contrast to complex events, these events are called basic events.

Fig. 2 Event recognition and propagation

Fig. 3 Object-oriented device interface

Besides the device classes FMouseDevice and FKeyboardDevice, the device interface includes two additional device classes called FTimerDevice and FApplicationDevice. The instance of the class FTimerDevice is responsible for two kinds of temporal events, the reaching of a point in time and the passing of a span of time. The class FApplicationDevice supports mixed control between the user interface and the application because its only instance receives events that have been generated by application objects. The distribution of events to the appropriate device objects is done by a dispatcher object which contains the main event loop and therefore controls the user interface.

2.4.1.2 Complex Events and Dialog Objects

One of the main features of the COMMAND model is the ability to recognize and process complex events. Fig. 2 already illustrates that event-sensitive objects can be informed about events recognized by multiple other objects and can thereby recognize complex events, which are propagated to all interested objects. Complex events are supervised by so-called dialog objects which support the definition of event hierarchies.

Complex events can be assigned a temporal extension (duration) defined by the interval between the occurrence of the first and the last subordinate event. For defining different types of dialog objects we assign internal START- and END-events to the start and end times of these intervals:

- The START-event is generated immediately after the first subordinate event has sent its START-event.

- The END-event is generated immediately after the last subordinate event has sent its END-event.

For the basic events START- and END-events are sent immediately after the event has occurred, i.e. basic events have no duration. If a subordinate event of a dialog object is in turn a complex event, the above definition is applied recursively. Thereby START- and END-events are propagated bottom-up through the hierarchies of basic and complex events.

Dialog objects form another subclass of the event-sensitive objects. A dialog object defines the recognition order of subordinate events and the conditions for the occurrence of the corresponding complex event. Complex events are defined by the class of a dialog object and its subordinate events. The COMMAND model defines four classes of dialog objects (SEQUENCE, AND, OR, REPEAT) and thereby partly employs concepts from the interaction model [51].

- SEQUENCE:

The subordinate events of the SEQUENCE-object are supervised sequentially. After an event has been recognized the following one is supervised. Thus, the SEQUENCE-object terminates supervision after all subordinate events have occurred in the specified order. The START-event of the complex event is generated after the first subordinate event has sent its START-event whereas the END-event is generated after the last subordinate event has sent its END-event.

- AND:

The subordinate events of the AND-object are supervised simultaneously. The AND-object makes no requirements on the order of event occurrence and terminates supervision after all subordinate events have occurred. Thus, the AND-object supports a concurrent supervision of several events until all of them have occurred. The START-event of the complex event is generated after one of the subordinate events has sent its START-event whereas the END-event is generated after all subordinate events have sent their END-event.

- OR:

The subordinate events of the OR-object are supervised simultaneously until one of them occurs. Immediately after the first subordinate event has occurred, supervision of the other subordinate events is stopped. Thus, the OR-object supports a concurrent supervision of several events until one of them has occurred. The START-event of the complex event is generated after one of the subordinate events has sent its START-event, whereas the END-event is generated after one of the subordinate events has sent its END-event.

- REPEAT:

Subordinate events of the REPEAT-object are divided into a repeat event and an abort event. The repeat event is supervised repeatedly until the abort event occurs. At the start of each iteration both events are supervised simultaneously until one of them sends its START-event. Depending on which subordinate event first sends its START-event, the iteration is continued or aborted. After the START-event has been sent, the supervision of the other subordinate event is aborted, so that partially processed repeat events cannot be interrupted. Supervision of the complex event terminates after the abort event has occurred. The START-event of the complex event is generated when one of the subordinate events sends its START-event for the first time, whereas the END-event is generated after the abort event has sent its END-event.

Integration of Event Processing and Recognition

Since dialog control also incorporates the processing of basic and complex events, the COMMAND model allows actions and conditions to be integrated into the event recognition process. Conditions can check necessary properties of events, whereas actions can output prompts or process events, e.g. by sending messages to presentation and application objects (call-backs). In contrast to conditions, actions do not return any value. Actions and conditions can be incorporated at any level of the event hierarchy by using them as subordinate objects of SEQUENCE- and REPEAT-objects. Thereby, the COMMAND model supports event processing at different levels of abstraction and a strong connection between input and output processing, which is suited to the implementation of direct feedback. Interactions are considered as the recognition and processing of complex events and are modeled as hierarchies of dialog, basic event, action and condition objects. Since dialog objects have to consider the temporal extension of subordinate complex events, the COMMAND model facilitates the direct integration of continuous output actions by treating them as a special kind of action object. After having started a continuous output action, dialog objects return control and thereby implement a passive waiting for the termination of continuous output actions.

Fig. 4 presents a control panel for the simultaneous presentation of video and audio clips consisting of two buttons and two sliders. The object hierarchy for implementing the dialog control with the COMMAND model is shown in fig. 5.

Fig. 4 Video/audio presentation with control panel

Fig. 5 Object hierarchy for controlling video/audio presentation

Composition of Complex Multimedia Output Actions

Generally, multimedia output actions are composed of several actions according to the different types of media involved. Since the included continuous output actions possess a duration, their composition has to consider their temporal relations, i.e. the synchronization of output operations. In order to facilitate the synchronization of continuous output actions, we introduce START- and END-events of output actions which are sent at the start and end time of their presentation, respectively. Based on these events, output synchronization objects for the definition of temporal relations of continuous output actions are defined. The COMMAND model distinguishes two different types of continuous action objects:

- Timer actions:

A timer action lasts exactly the number of time units specified as a parameter and is used to restrict the duration of concurrent continuous actions.

- Media actions:

A media action controls the output of time-invariant or time-variant media by communicating with a so called media object. Media objects encapsulate output actions of discrete media and output processes of continuous media; media actions control them by sending start- and stop-messages. Media objects indicate the start and end of their output action or process by sending START- and END-events to media actions. Discrete output actions are treated as continuous output actions with infinite duration and must be stopped explicitly.

The COMMAND model supports four types of output synchronization objects. An output synchronization object and its operands define a complex continuous action and also generate START- and END-events. Thus, they can be used in turn as operands of other output synchronization objects and support a hierarchical definition of multimedia presentations.

- SEQUENTIAL: sequential execution of actions

The START-event of the complex output action is generated when the first operand has sent its START-event whereas the END-event is generated when the last operand has sent its END-event.

- PARSTART: parallel start of all operands

The START- and END-events of the complex output action are generated when all operands have sent their START- or END-events, respectively.

- PARSTARTEND: parallel start and end of all operands

The START-event of the complex output action is generated when all operands have sent their START-event. After one operand has sent its END-event the other operands are stopped and the END-event is generated.

- ITERATE: iterated execution of an action

The START-event of the complex output action is generated when the iterated action sends its START-event for the first time while the END-event is generated when the iteration stops.

Consider the following example and the object hierarchy in fig. 6: A sequence of three video clips (V1,V2,V3) has to be presented concurrently with an audio sequence A. Each video clip is annotated with a subtitle (T1,T2,T3) and may be presented no longer than 30 time units.

However, this approach to modeling multimedia output actions does not directly support synchronization at internal synchronization points (see [61], [69]) because synchronization events are only generated at the start or end time of continuous output or timer actions. Yet internal synchronization points can be modeled by timer actions started simultaneously with output actions. The START- and END-events of these timer actions can be used for synchronizing them with other continuous output actions. Hence, the COMMAND model supports the functionality of internal synchronization points without dealing with the quanta of continuous media like video frames, audio samples or animation scenes and thereby ensures media independence.

Fig. 6 Composition hierarchy of a complex output action

Other approaches to the synchronization of multimedia output actions, like timed Petri nets [59], path expressions [50] and reference points ([10], [48]), can model more complex relations than those that can be modeled by object hierarchies, e.g. directed graphs. In order to increase the modeling power of the COMMAND model we introduce synchronization conditions that describe disjunctive (ORSync-object) and conjunctive (ANDSync-object) combinations of START- and END-events (see fig. 7). They define temporal relations between the start and end times of output actions and thereby employ concepts of the temporal composition by reference points. These conditions control the starting and stopping of output actions and are evaluated event-triggered. Synchronization conditions can be applied to simple output actions as well as complex output actions. Thus, the COMMAND model supports the definition of complex multimedia output actions by an orthogonal combination of the hierarchical composition and the composition by reference points.

Fig. 7 Object graph of synchronization conditions

The underlying run-time system ("MediaManager", [77]) manages the continuous synchronization [69] of timer and continuous output actions, i.e. their precise temporal alignment, and copes with synchronization problems caused by different presentation speeds.

2.4.1.3 Concurrency of Interactions and Actions

In graphical user interfaces, actions are mainly executed in response to events caused by user inputs. However, with the integration of multimedia, continuous actions have to be executed while event handling is still going on. The dialog model presented in the previous chapter incorporates concurrency in the definition of interactions (AND- and OR-object) and complex output actions (PARSTART- and PARSTARTEND-object). In this chapter the COMMAND model is enhanced by concepts for modeling the concurrency between interactions and continuous actions. Since the duration of interactions and continuous actions is described by intervals delimited by their START- and END-events, the definition of their concurrency also has to be based on intervals. Generally, we do not only want to define concurrency but the temporal relations of interactions and continuous output actions. Therefore, we have considered Allen's interval calculus ([3], [4]), a well-known approach to temporal reasoning with time intervals which has also been considered for the definition of the output synchronization objects. The interval calculus defines the thirteen possible mutually exclusive relations of two time intervals. Considering that the duration of interactions and continuous actions is described by START- and END-events, we choose only those relations that can be mapped to equality relations of their start and end times. Thus, we introduce the synchronization objects MEETS, STARTS, ENDS and EQUALS.

Synchronization of Actions with Interactions

Synchronization objects facilitate the definition of actions (timer and output actions) to be executed concurrently with interactions.

- STARTS: simultaneous start of interaction and action

The action operand will be started immediately after the interaction operand has sent its START-event.

- EQUALS: simultaneous start and end of interaction and action

The action operand will be started immediately after the interaction operand has sent its START-event and will be stopped immediately after the interaction operand has sent its END-event.

- ENDS: simultaneous end of interaction and action

The action operand will be stopped immediately after the interaction operand has sent its END-event.

- MEETS: sequential execution of interaction and action

The action operand will be started immediately after the interaction operand has sent its END-event.

Synchronization objects control the duration of one operand depending on the duration of the other one, i.e. the start and end time of one operand are synchronized with the start and end time of the other one by evaluating START- and END-events. Yet they are passive objects because they start or stop operands only after having received appropriate START- or END-events. Thus, they neither conform to the bottom-up modeling of event recognition nor to the top-down modeling of complex output actions. However, they offer a flexible approach to the temporal composition of these two paradigms.

In order to point out the advantages of synchronization objects we once more consider the example of fig. 3. Fig. 8 presents the improved implementation which separates the output operation from the interaction and synchronizes them by an EQUALS- and an ENDS-object. Thereby, it is possible to apply the PARSTARTEND-object that guarantees the simultaneous presentation of the video and audio clip. When the control panel interaction sends its START-event, the EQUALS-object will immediately start the complex output action. After one of the operands has sent an END-event, the other one is stopped immediately.

Fig. 8 Synchronization of actions with interactions

Synchronization of Interactions with Actions

The synchronization objects for actions and interactions are polymorphic, i.e. they can also be applied the other way around, for the synchronization of interactions with actions, with corresponding semantics.

Thereby, the synchronization objects support both directions of synchronization: the synchronization of actions with interactions as well as the synchronization of interactions with actions. The first direction is suitable for starting actions at certain states in the dialog, whereas the second supports the starting of dialogs at certain times of complex actions. In the second case the START-event of the action operand causes the start of the interaction operand. This kind of synchronization makes sense if it is the interaction whose start has to be synchronized with the start of the output action. It is especially suited to combining interactions with continuous output actions included in complex output actions.

Suppose that the video/audio action of the control panel example is included in a complex output action but that we still want to guarantee access to the control panel when the video/audio action is executed. The dialog control will start the root object of the complex output action and will have no direct influence on the start of single component output actions. Therefore, it cannot decide when the control panel interaction has to be started. By combining the video/audio action and the control panel interaction with an EQUALS- and ENDS-object we ensure their synchronization while still being able to use the video/audio action as component output action of a complex output action.

Other convincing applications of the synchronization objects are actions with an a priori infinite duration, e.g. live video. One possibility to delimit their duration is the synchronization with another action having a finite duration, e.g. a timer action, by the PARSTARTEND-object. The synchronization objects discussed in this chapter additionally offer the possibility to delimit their duration by the end of an interaction. Suppose that the combined audio/video action has an infinite duration. Since it is synchronized with the control panel interaction by an ENDS-object, it will be stopped when the interaction sends its END-event.

In the previous chapter we described how timer actions support the functionality of internal reference points. Since synchronization objects can also be applied to the synchronization of timer actions and interactions, the COMMAND model supports the integration of interactions in complex multimedia output actions at arbitrary reference points, i.e. at the start or end time of continuous output actions as well as at internal reference points.

2.4.1.4 Implementation of the XFantasy Dialog Model

The COMMAND model has been implemented as a class hierarchy (see fig. 9) as part of the object-oriented XFantasy-UIT ([40], [23], [77]) which is based on the X Window System. The class hierarchy can be subdivided into classes for the implementation of interactions (abstract base class FEventGuard), classes for the implementation of complex output actions (abstract base class FContinuousAction) and classes for the synchronization of interactions and actions (abstract base classes FSyncOperator and FSyncCondition). The interface between dialog objects and their subordinate objects is defined in the abstract base class FComponent. Hierarchies of dialog objects defining complex interactions are supported by allowing instances of the class FEvent to reference subordinate dialog objects.

The object-oriented XFantasy-UIT consists of the five components presented in fig. 10. Each of them has been designed and implemented as a class hierarchy:

- a hierarchy of media classes for discrete and continuous media types,

- classes for the recognition and propagation of basic events as well as for the scheduling of dialog and output processes,

- a hierarchy of classes for dialog control,

- a hierarchy of application independent user interface classes,

- input/output classes abstracting from the underlying window system.

Fig. 9 Class hierarchy of the COMMAND model

Fig. 10 Software architecture of the XFantasy-UIT

Input/output classes encapsulate the access to the underlying X Window System ([65], [33]) and thereby ensure the portability of the XFantasy-UIT. Media classes define abstract interfaces for processing discrete and continuous media as well as for the spatial and configurational composition [75] in multimedia presentations. The media classes of the XFantasy-UIT encompass a hierarchy of graphics classes based on a hierarchical graphics model. Classes for dialog control facilitate the integrated implementation of interactions and multimedia presentations as well as their temporal relations. Dialog and continuous output processes are implemented as lightweight processes [5]. User interface classes define the look-and-feel of application independent user interface components, like buttons, scroll bars, menus and dialog boxes. Their implementation is based on the graphics model as well as on the dialog model of the XFantasy-UIT.

2.4.1.5 The Dialog Specification Language ODIS

The integration of the COMMAND model into an object-oriented user interface model results in the dialog specification language ODIS ([38], [39], [41], [43]). In ODIS user interfaces are modeled by systems of communicating interaction objects. The dynamics of a user interface is determined by the dynamics of its interaction objects and their co-operation, i.e. their communication and their hierarchical composition.

ODIS is based on an object model that is derived from the MVC model ([35], [58]). An interaction object is modeled as an aggregation of a presentation object and a dynamics object (see fig. 11). To each interaction object one application object can be assigned.

Presentation objects define the graphical representation of interaction objects whereas dynamics objects define their dialog control based on the COMMAND model. Furthermore, dynamics objects manage the communication and data exchange between the presentation and application object. Presentation objects are generated as instances of classes offered by the XFantasy-UIT.

Fig. 11 ODIS object model

In ODIS interaction objects are generated as instances of interaction classes. An interaction class defines the dynamics of its instances by textual descriptions of interactions, multimedia presentations and temporal relations. Communication between interaction objects is mainly executed by asynchronous sending and receiving of events. The concepts for event based communication are especially adapted to the hierarchical composition of interaction objects. ODIS supports the inheritance of dialog specifications and thereby facilitates the definition of new interaction classes as enhancements or specializations of other ones.

Composition of interaction objects is one of the most important concepts in the object-oriented modeling of user interfaces and leads to a differentiation between simple and complex interaction objects. Superordinate interaction objects control the dynamics of subordinate interaction objects (subobjects) by disabling and enabling their interactions. Thus, composition of interaction objects is exploited for modeling complex dialogs with participation of multiple interaction objects. ODIS distinguishes between exclusive subobjects, which are subordinated to exactly one interaction object, and shared subobjects, which may be subordinated to multiple interaction objects. Thereby, strict object hierarchies as well as directed object graphs can be modeled in ODIS.

2.4.2 QPC++

2.4.2.1 Objects and Processes

In the object-oriented programming paradigm a system can be regarded as a collection of objects. An object is a unit consisting of data and functions, called methods, acting on these data. The data are stored in so called instance variables. Usually, access to instance variables is restricted to the object itself, i.e. objects have no access to the instance variables of other objects unless they are explicitly allowed to do so. In C++, methods are called member functions and form the interface of objects.

Objects of the same type can be grouped in classes; that is to say, classes can be regarded as blueprints for the definition of objects. During execution of a program, objects are created as instances of classes. Objects of the same class execute the same code for their methods. Instance variables of objects of the same class have the same names and types, but each object has its own set of variables. Fig. 12 illustrates the form of objects.

Objects of sequential object-oriented programming languages are characterized by their passivity. In C++, a program usually starts with the execution of a function called main. During execution of the program, objects are created and initialized with the help of special member functions, called constructors. After being initialized, an object waits for a call of one of its member functions. When called, it becomes active and executes the function; in doing so, it possibly calls member functions of other objects. After having finished the execution of the member function, the object becomes passive again.

Fig. 12 Objects and processes

An object which does not wait quietly until one of its member functions is called but has an activity of its own is called an active object. Only at certain explicitly indicated points can its activities be interrupted in order to execute a member function. Active objects represent autonomous processes. QPC++ offers the possibility to define so called process classes. They are similar to classes in C++, but additionally they can contain a special member function, called body, in which such activities can be described. An instance of a process class is an active object. Syntactically, it can be created like an object in C++. After its initialization by executing a constructor, the body is activated. It is executed like a normal function, and its execution takes place in parallel with the execution of the statements following the creation (definition) of the object. Fig. 12 illustrates the form of a process in QPC++.

A QPC++-program is a program consisting of active objects each executing its body. Besides the definition of process classes and processes, the definition of object classes and passive objects is still allowed. Program 1 demonstrates the definition and creation of processes in QPC++. The program starts with the execution of the function main (called main process in the following). The main process creates two processes w1 and *w2. They are instances of the process class Writer. After the initialization of w1 by executing the constructor of process class Writer, the body of the process (the member function $Writer()) is started automatically. Meanwhile, the main process creates process *w2 in a similar way. After the execution of the second statement of function main, there exist three active processes: the main process executing the for-loop and the two writer processes each executing the while-loop of its body. By calling the delete-operator (for *w2) or by leaving the scope of variable w1, the main process terminates the two writer processes. In both cases the main process has to wait until the writer process has finished the execution of its body. Having left the scope of function main, the main process terminates as well. The termination of all processes causes the termination of the whole program.

#include <stream.h>

process class Writer { // process class

int n, loops;

public:

Writer(int lo, int i) { // constructor

loops = lo; n = i; // initialization

}

$Writer() { // body

// after the creation and initialization of a writer-process, this member function

// will be called automatically

while (loops--) cout << n++ << endl;

// now the process is ready to terminate

} };

// the program starts with the main-function

main() {

Writer w1(100, 1);

// process w1 is created and initialized; its body is called implicitly

Writer *w2 = new Writer(200, -24);

// process *w2 is created and initialized; its body is called implicitly

// now there exist three active processes: the main-process executing this function

// and the writer-processes w1 and *w2 each executing its body

for (int loops=0; loops<10; loops++) cout << loops*loops << endl;

delete w2; // the main-process wants to terminate process *w2

// the main-process will terminate process w1 because of the end of its scope

}

// the whole program terminates because all processes are terminated

Program 1 Definition and Creation of Processes

2.4.2.2 Interprocess Communication

Usually, processes do not deal with their tasks in isolation from other processes; they have to exchange data in order to solve their tasks. In general, the exchange of data between two or more processes can be achieved by offering shared variables and/or mechanisms for message passing.

Shared Variables

QPC++-processes can use shared variables in order to interact with other processes. Shared variables of two or more QPC++-processes are all those variables which belong to the scope of each of those processes. Processes can read values out of a shared variable and write data into it. The access to a shared variable by two or more processes has to be synchronized in order to avoid interference. In QPC++, synchronization can be achieved by means of semaphores (see program 3).

Message Passing

An additional possibility for a QPC++-process to interact with another process is to call one of its member functions. However, unlike the call of a member function of a C++-object, a call of a member function of a process does not lead to the immediate execution of the function. Usually, the called process is executing its body. Only at certain explicitly indicated points may the body be interrupted in order to execute a member function. Thus, the calling process (client) has to wait until the called process (server) reaches such a point. QPC++ adds a new syntactical construct to C++ in order to be able to mark such a point of synchronization. It is called accept statement and includes two sorts of information: the name of the member function which may be called and the names of the processes which may call that member function.

Supposing a client calls a member function of a process (it makes a request) and the server accepts that call (it accepts the request), the two processes enter into a so called rendezvous: The client and the server synchronize, the actual parameters are passed to the member function, the function is executed in the thread of the server, and perhaps a result is passed back to the client by means of a return statement. After the execution of the member function the rendezvous is finished. Each of the two processes can execute its next statement independently. A rendezvous is a special form of interprocess communication via message passing.

Usually, if a client makes a request, it delays until the server accepts the request, executes the corresponding member function and finishes the rendezvous. Such a request is called a synchronous request.

An accept statement is blocking as well. That means, reaching an accept statement, the server delays until a suitable request is made. After the completion of the execution of the corresponding member function, the server continues with the execution of the statement following the accept statement.

A rendezvous is finished implicitly by reaching the end of a called member function or explicitly by executing a return statement. The semantics of a return statement in QPC++ is the same as in C++. Additionally, QPC++ offers another way to finish a rendezvous by means of a so called reply statement. A reply statement looks like a return statement; in contrast to it, however, it does not finish the execution of the member function. By means of a reply statement a server can pass back a result to the client as soon as possible without finishing the execution of the function. Thus, the client is only delayed as long as necessary.

QPC++-processes are sequential processes (see [78]). Member functions are executed in the thread of the server. Therefore, it is not possible to execute two or more member functions of the same process in parallel. At any time there exists at most one active member function per process. Intra-object concurrency is not supported. However, it is allowed to define accept statements not only in the body but in member functions of process classes as well. In the latter case, the execution of the outer request is interrupted until the execution of the inner request is completed.

Program 2 illustrates the interaction of processes by virtue of a rendezvous. After the creation of process b (an instance of process class Buffer) and the activation of its body, b reaches an accept statement expressing that b will accept a call of its member function put being made by any process (indicated by the key word all). Process b delays until a put-request is made. When a put-request occurs, it starts the execution of its member function put after having passed the actual argument to it. By virtue of the reply, b finishes the rendezvous and allows the client to continue immediately. The client does not have to wait, until the actual parameter is assigned to the instance variable buffer of b. Afterwards, the buffer process waits for a get-request. A get-request is performed by passing the actual value of the instance variable buffer back to the client. After the creation of process b, the main process makes a put-request to b. A rendezvous takes place. The member function put of process b is executed. Afterwards, the main process performs some actions. Finally, by means of a get-request, it asks b to pass the buffered value back and assigns that value to the local variable i.

process class Buffer {

int buffer;

public:

void put(int value) {

reply; // reply-statement

// the reply finishes the rendezvous; the client can continue immediately; the

// buffer-process executes the assignment before it returns to the body

buffer = value;

}

int get() { return buffer; }

$Buffer() { // body

all<.put; // accept-statement

// waiting for a put-request; when a process makes a put-request, the member

// function put is executed

all<.get; // accept-statement

// waiting for a get-request; when a process makes a get-request, the member

// function get is executed

} };


main() {

Buffer b;

// a buffer-process b is created and initialized; its body is called automatically;

// b is waiting for a put-request

b.put(47); // synchronous request

// the main process makes a put-request to b; because b is waiting for a

// put-request, a rendezvous can take place; the member function put of b is executed

// ...

int i = b.get(); // synchronous request

// the main process makes a get-request to b; having finished the put-request, b is

// waiting for a get-request; thus, a rendezvous can take place;

// the member function get of b is executed; it returns a value which is stored in i

}

Program 2 Interprocess Communication

Often, it is desirable for a server to be able to accept two or more requests of different types alternatively at a certain time. For that reason a new syntactical construct is added, the select statement. The select statement consists of one, two, or more accept statements and an optional otherwise statement. It is possible to attach an additional condition to the accept statements within a select statement (guarded accept statement). When a process reaches a select statement during run-time, the alternative accept statements are checked in a non-deterministic order to determine whether they hold. If an alternative holds, the corresponding member function is executed and afterwards the select statement is finished. If no alternative holds, the process is delayed until a process makes a request that causes an alternative to hold. In the latter case, the delay of the process can be avoided by means of an otherwise statement: the process executes the otherwise statement if no alternative holds. By defining a select statement consisting of only one accept statement and an empty otherwise statement, a non-blocking accept statement can be achieved.

On the other hand, a non-blocking call of a member function of a process, a so called asynchronous request, can be defined, too. If the key word asynchronous is put in front of a request, the request together with its actual parameters is stored in an internal mailbox of the addressed server. The client does not have to wait until the server executes the member function but can continue immediately.

The use of select statements and asynchronous requests is demonstrated in program 3. A process class Semaphore is defined. Semaphores can be used to synchronize processes. Using semaphores, it is easy to protect critical sections, e.g. a statement accessing a shared variable. A semaphore is created and initialized with the number of processes that are allowed to stay in the critical section simultaneously. The condition counter>0 in the second alternative of the select statement within the body of process class Semaphore indicates whether the critical section which is guarded by the semaphore is occupied. If the condition is not fulfilled, processes making a wait-request are delayed; they have to wait until a process leaves the critical section, indicating this by a signal-request. A signal-request may be made asynchronously by the client, because the execution of the member function signal does not affect its activities.

#include <stream.h>

process class Semaphore {

int counter;

public:

Semaphore(int value) { counter = (value < 0) ? 0 : value; }

void wait() { counter--; }

void signal() { counter++; }

$Semaphore() {

while (1)

// waiting for a signal-request or for a wait-request if the condition 'counter>0'

// holds

select { // select-statement

when (all<.signal);

or when ((counter>0) => all<.wait);

} } };

Semaphore sem(1);

// creation of a semaphore-process which allows only one process to enter the critical

// section; at any one time only one process is allowed to write to the

// global variable cout (declared in <stream.h>)

process class Writer {

char *text;

public:

Writer(char *t) { text = t; }

$Writer() {

while (1) {

sem.wait();

// after a wait-request sem can only handle a signal-request because the condition

// ´counter > 0´ does not hold; so it is not possible for other writer-processes

// to pass the above statement; at least they are blocked, until a signal-request

// has been executed by sem

cout << text << endl;

// cout is a global variable; assignments to it must be synchronized

asynchronous sem.signal();

// a signal-request is put into the mailbox of sem; the process can continue

// immediately, it has not to wait until the request is accepted and executed

} } };

main() {

Writer w1("hello america"), w2("hello germany");

// two writer-processes are created; each of them executes its body

}

Program 3 Select-Statement, Asynchronous Requests

Nevertheless, a client can receive a result from an asynchronously called member function; QPC++ adapts the wait-by-necessity principle [22]: the client of an asynchronous request has to wait only when it attempts to use the result of the corresponding member function. Variables which are defined to receive the result of an asynchronous request are called future variables [81]. In QPC++ normal variables can be used as future variables.

A process is able to accept all those requests for which there exists a corresponding member function in its protocol. Additionally, a special accept statement is available to any process: all<.terminate. It is used like a normal accept statement and signals the willingness of the process to be terminated. A terminate-request is made implicitly by another process by means of the delete-operator or by leaving the scope of the process. A terminate-request is always synchronous. The destructor of a process class can be regarded as the member function assigned to the termination accept statement. After a process has accepted a terminate-request and has executed its destructor, it is dead and cannot perform further actions. Note that in program 3 this causes a deadlock of the main process: leaving the scope of function main, the main process makes an implicit terminate-request to process w2. However, w2 will never accept that request. Thus, the main process is blocked forever.

Program 4 illustrates the use of future variables and the termination accept statement. The main process creates process *c and makes the asynchronous request f to *c. The variable value1 is used as a future variable. While the Calculator process executes member function f, the main process can perform some other operations in parallel. It has to wait for the result of f only when it evaluates the expression value2*value1. Note that the return statement in member function f does not force the calculator process to wait until the main process uses the result; the return value is automatically assigned to the future variable. The delete-operation of the main process causes an implicit terminate-request to *c. The main process has to wait until *c has accepted that request and has executed the destructor as usual.

process class Calculator {
public:
  Calculator() {...}   // constructor
  ~Calculator() {...}  // destructor
  float f(float x) {
    float y = ...
    return y;
    // the return value is assigned to the corresponding future variable
    // if f is called asynchronously
  }
  $Calculator() {
    while (1)
      // at any time a calculator process accepts either an f-request
      // or a terminate-request
      select {
        when (all<.f);
        or when (all<.terminate);
      }
  }
};

main() {
  Calculator *c = new Calculator();
  float value1 = asynchronous c->f(2.3);
  // value1 is used as the future variable of the asynchronous request f;
  // the main process does not wait for the result of f but executes the
  // following statement immediately
  float value2 = sin(30.7) * cos(93.0);
  value2 = value2 * value1;
  // the evaluation of this expression uses the value of the future
  // variable; the main process has to wait if f has not returned a
  // result yet
  delete c;
  // by calling the delete-operator the main process makes an implicit
  // terminate-request to *c and waits until *c has accepted that request
  // and has executed its destructor
}

Program 4 Future Variables, Termination

If no body is defined within a process class and no body is inherited (see the following section), a process of that class behaves just like a passive object: it is always waiting for any request; on receiving one, it executes the corresponding member function and then waits again. That means that in program 4 the definition of the body of class Calculator is unnecessary and could have been omitted.

2.4.2.3 Inheritance and Polymorphism

The advantages of object-oriented programming over other programming paradigms are the increased reusability and the easy extensibility of existing software. These advantages are facilitated by the concepts of inheritance and polymorphism in conjunction with dynamic binding.

A class possessing some similarities with an already existing class can be derived from the existing one. The new class then automatically inherits the specification and implementation of the existing class; parts of it can be extended or modified. The concept of polymorphism in conjunction with dynamic binding allows the definition of variables referring to objects that have the same interface but different implementations of their methods. The referred object and the executed method are determined at run-time, not at compile-time. C++ supports both concepts. It allows the definition of a class derived from one or more existing classes (multiple inheritance). Polymorphism is facilitated by means of so-called virtual member functions.

In QPC++ both concepts are applied to process classes as well. Process classes can be derived from already defined process classes; they can even be derived from object classes. Member functions and variables are inherited just as for object classes. Furthermore, a body is inherited. Member functions of process classes can be declared virtual, so the concept of polymorphism is valid for processes as well. It is even applied to accept statements with corresponding virtual member functions. A call of a virtual member function of a process is said to be a virtual request; receiving a virtual request is called virtual acceptance.

Program 5 illustrates the merging of the concepts of object-oriented and parallel programming in QPC++. Process class Calculator is an abstract process class with the pure virtual member function calculate. Process classes Sin and Cos are derived from Calculator and therefore inherit its body. Each of them defines the virtual member function calculate in a different way. The main process makes two virtual calculate-requests. They produce different results depending on the actual type of the process referred to by c.

#include <math.h>
#include <stream.h>

process class Calculator {
  // abstract process class
public:
  $Calculator() { all<.calculate; }  // virtual acceptance
  virtual float calculate(float) = 0;
};

process class Sin : public Calculator {
public:
  // the body is inherited
  float calculate(float x) { return sin(x); }
};

process class Cos : public Calculator {
public:
  // the body is inherited
  float calculate(float x) { return cos(x); }
};

main() {
  Calculator *c;
  c = new Sin();
  cout << c->calculate(90.0) << endl;  // virtual request to *c; sin(90.0) is calculated
  c = new Cos();
  cout << c->calculate(90.0) << endl;  // virtual request to *c; cos(90.0) is calculated
}

Program 5 Inheritance and Polymorphism.

2.4.2.4 Implementation of QPC++

The language QPC++ is described in detail in [12]. A prototype is implemented on SUN-SPARC workstations. A compiler [13] and a run-time system [14] have been developed. The QPC++-compiler checks the correctness of a given QPC++-program. Further, it generates C++-code which is afterwards compiled by an existing C++-compiler. The generation of code includes the transformation of the new constructs of QPC++ into calls of routines which are defined by the run-time system of QPC++. Fig. 13 illustrates the use of the compiler and the run-time system of QPC++.

Fig. 13 Compiler and run-time system of QPC++

Run-Time System

The existing run-time system of QPC++ has been designed for uni-processors. It has been implemented in C++ on SUN-SPARC workstations on top of the UNIX operating system. The run-time system mainly deals with three tasks: it provides mechanisms which facilitate the creation of processes, it handles the scheduling of processes, and it implements the interprocess communication.

Processes are implemented as so-called lightweight processes. They work quasi-parallel in a common virtual address space of a (heavyweight) UNIX process. Each lightweight process has its own stack. The scheduler is implemented as a lightweight process with special tasks. When it is called, it deactivates the running process by storing the current contents of the hardware registers. Afterwards, it activates another process by either starting its execution or loading its earlier stored registers. Thus, a process switch is not much more expensive than a usual function call. The scheduler is either called explicitly by some routines of the run-time system or implicitly after a certain period of time has elapsed (time slicing).

Because lightweight processes work in a common virtual address space, the interprocess communication can be implemented very efficiently. The call of a member function of a process is compiled into the definition of a special class. An object of that class is created which includes the member function and the actual arguments. The address of the object is appended to the mailbox of the server. Mailboxes are implemented as linked lists. When a server wants to accept a particular request, it looks for it in its mailbox, fetches the object out of the mailbox, and calls a special member function of it. Within the execution of that member function the corresponding member function of the server is called.

The run-time system of QPC++ is implemented in C++ in the form of a hierarchy of 29 classes; the main part of it is shown in figure 13. Class QP_Stackframe offers routines for storing and loading hardware registers. Class QP_Task uses the inherited functions to implement lightweight processes. Class QP_Communication adds member functions which facilitate the interprocess communication. Finally, QPC++-processes are defined as instances of classes that are directly derived from class QP_Process. QP_Process redefines some inherited member functions to implement the special semantics of QPC++ processes. The main process of a QPC++-program is created as an instance of class QP_Main_Process. Internally, the run-time system defines an object of class QP_Scheduler which handles the scheduling. The scheduler is implemented as a lightweight process, too; in contrast to other lightweight processes, however, it performs special tasks. The basic scheduling mechanisms are implemented in class QP_Simple_Scheduler. By deriving classes from QP_Simple_Scheduler and redefining a virtual member function of it, special scheduling algorithms can be implemented. Thus, it is very easy to compare different scheduling strategies.

Compiler

The QPC++-compiler checks the correctness of a given program. Using the routines of the run-time system of QPC++, it further generates C++-code which afterwards is compiled by an existing C++-compiler. Currently, the C++-compilers of AT&T and GNU are supported.

2.4.3 FMAD

2.4.3.1 The IMRA-Model

FMAD can be regarded as the implementation of the IMRA-Model (Inter Media Relationships via Attributes) [16], [17]. The IMRA-Model consists of three parts:

- the IMRA-Formalism, an abstract object-oriented formalism which supports the definition of the syntax of interactive multimedia applications,

- the IMRA-Presentation Algorithm, which defines the semantics of a multimedia application being specified by the IMRA-Formalism, and

- the Media-Relationship Diagrams, a graphical notation which enables the visual modeling of interactive multimedia applications.

Characterization of interactive multimedia applications

The intention of a multimedia application is to inform a user about certain facts. To achieve this intention, independent heterogeneous information units are connected. Multimedia applications can be characterized as graphs (multimedia graphs) consisting of nodes and edges. The nodes represent certain units of information; they are called information objects. The edges describe certain connections or relationships between the objects.

Information objects can be instances of certain types. They can be divided into three categories:

- media objects like texts, graphics, animations, videos, or audios,

- interaction objects like buttons, text entry fields, or sliders and

- application objects like databases.

Sometimes it is useful to regard a part of a multimedia graph as a whole, for example a video and its setting. Those composed information units are called complex information objects or multimedia objects. They consist of other information objects - their component objects - and relationships between the component objects.

Relationships between information objects can be characterized in the sense that certain events (triggers) imply certain actions (effects). In a multimedia graph the relationships are represented as edges. Triggers and effects can be of different types. Temporal triggers and effects are, for example, the start of a video or the end or abortion of an audio. Spatial triggers and effects are the displacement of a text-object on the screen or the enlargement of a graphic-object. Other triggers and effects are the change of the volume of an audio or the change of the colour of a graphic-object. User interactions can be regarded as triggers, too, for example the selection of a menu item, the clicking of a button, the input of a string into a text entry field, or the manipulation of a slider or scrollbar. Furthermore, it is possible that the trigger and the effect of one relationship are of different types. The displacement of a graphic-object on the screen (effect) as the result of the start of a video (trigger) is an example of a temporal-spatial relationship.

The IMRA-Formalism

Within the IMRA-Model interactive multimedia applications are regarded as collections of information objects and relationships. Information objects can be elementary or complex. Elementary objects are instances of a certain type. Complex objects consist of other objects and relationships between these objects. They are used to structure multimedia applications in a hierarchical manner.

The state of an information object is formed by some type-specific data (like audio samples) and a set of type-specific attributes (like the volume or speed of an audio). The data are unchangeable, but the attributes can be manipulated by the author. All objects possess a so-called activity attribute which has the value active for an active object and passive for a passive object.

The IMRA-Model is an object-oriented model and therefore extendable. A new information type can be integrated by specifying its attributes. A so-called system function can be assigned to each attribute; it is a function or procedure of the underlying run-time system. While the IMRA-Model itself only enables the specification of the structure and the dynamics of a multimedia application, the system functions implement the perceptible behaviour of the application. The system function of an activity attribute is called its activity function. Activity functions are not called like the other functions; they are started and then act concurrently to the other activities of the application. In general, activity functions handle the data of an information object, whereas the other system functions care about the attributes. The activity function of an audio-object, for example, reads the audio samples and transfers them to the speaker. Typical system functions implement, for example, the movement of a graphic-object on the screen, the manipulation of the volume of an audio-object, or the storing of data into a database.

The attributes serve as a basis for the definition of relationships between the information objects. Relationships of the IMRA-Model are so-called event-condition-action triplets. Each relationship consists of a set of events, a condition and an action. A condition is a logical relation between attributes, an action is an assignment of a calculated value to an attribute, and an event is the assignment of a value to an attribute which occurs in the condition.

The IMRA-Presentation Algorithm

The semantics of an interactive multimedia application specified with the IMRA-Formalism can be described, in simplified form, in the following way: an event (an assignment to an attribute) implies the examination of all conditions of relationships in which the attribute occurs. If a condition holds, the accompanying action is executed (which in general triggers further events). Afterwards, the system function of the manipulated attribute is called.

Note that an assignment of a value to an attribute during the execution of a system function triggers an event as well, provided the attribute is passed to the function as a reference parameter.

Media-Relationship Diagrams

To enable a clear handling of the abstract IMRA-Formalism, the Media-Relationship Diagrams (MR-Diagrams) have been developed on the basis of the formalism. In the notation of MR-Diagrams, information objects are represented by rectangles. Type- and object-specific information is put textually into the rectangles. Relationships are represented by arrows; the conditions and actions of relationships are written in a C++-like notation. Arrows are uni-directional and connect exactly two information objects, strictly speaking two so-called ports which are assigned to the objects. Ports are used to distinguish between the different kinds of relationships. Therefore three different types of ports are introduced: start-ports, end-ports, and lay-out-ports. Arrows which start at a start- or end-port characterize temporal triggers (start, end, or abortion of the object). Arrows which finish at a start- or end-port characterize temporal effects (activation or passivation of the object). Non-temporal triggers and effects are characterized by arrows starting or ending at a lay-out-port. Ports are represented as small rectangles at the border of the rectangle representing an object: start-ports lie at the left, end-ports at the right, and lay-out-ports at the upper or lower border of the rectangle.

Examples

Two small examples illustrate the use of MR-Diagrams for the visual specification of interactive multimedia applications. In the first example (figure 14) a circle with changing colour (red or green) is displayed on the screen. The task of the user is to click the corresponding button as quickly as possible. The elapsed time is sent to a statistic-object representing an application which can evaluate the value. Clicking the quit-button terminates the whole application.

The whole application is represented by a complex information object. The start of the application triggers relationship (1), which causes the activation of all component objects and in particular leads to the display of the circle and the buttons on the screen, performed by their activity functions. The virtual object is used for the definition of relationships between the circle, the buttons, and the timer-object. Relationship (3) is used to store the current colour of the circle in the attribute col of the virtual object. If the user clicks a button, the corresponding value is assigned to the attribute but of the virtual object (relationships (4) and (5)). When the values of the attributes but and col of the virtual object are identical, the timer-object is aborted (relationship (6)). Within its activity function the timer-object calculates the elapsed time and sends it to the statistic-object (relationship (7)). Via relationship (8) the colour of the circle is set and via relationship (9) the timer-object is started again.

Fig. 14 Example 1

If the user clicks the quit-button, relationship (10) is triggered, which leads to the passivation of the complex object; this implicitly causes the abortion of all component objects and with that the termination of the whole application.

In the second example (figure 15) the synchronous presentation of a video and an audio is modeled. Additionally, the user can manipulate the speed of the presentation by using a slider. In the upper left part of figure 15 the audio- and video-objects are combined in a complex object named sync. They are synchronized via the relationships (1) and (2). The complex object sync is reused in the upper right part of figure 15 and connected to the slider. The relationships (3) and (4) express the manipulation of the speed. If the user moves the button of the slider, the value of the attribute value is changed correspondingly by the activity function of the slider. That triggers relationship (4). Via relationship (3) the change is forwarded to the audio- and video-objects. The scale-attributes of the objects are adapted correspondingly and the system functions Scale of the attributes are called. Within the system functions the perceptible manipulation of the speed is performed.

Fig. 15 Example 2

2.4.3.2 Conceptional Structure of FMAD

The IMRA-Model serves as a conceptual basis of FMAD [16], [19]. The IMRA-Formalism has been transformed into an extendable hierarchy of C++-classes, the IMRA-Presentation Algorithm has been used to implement the interpreter of FMAD and the MR-Diagrams have influenced the lay-out of the user interface of FMAD.

Conceptually FMAD consists of several tools:

- graphical editors to form and to manage sets of information objects,

- graphical editors to specify relationships between information objects,

- graphical editors to define the initial values of the attributes of the information objects,

- an interpreter, and

- a code generator.

Figure 16 illustrates the relationships between the tools.

Fig. 16 Conceptual structure of FMAD

FMAD contains no tools supporting the creation of information objects, like editors to build graphics or to manipulate audio samples. Instead, FMAD enables the coupling of arbitrary external tools. Sets of information objects (multimedia databases) can be built by using certain graphical editors which contain browsing and filtering tools. The objects are represented as icons. To define relationships between the objects, the author has to drag the icons into the relationship editors. These editors enable the definition of relationships using concepts and techniques of visual programming languages like programming-by-demonstration [63]. Via these editors complex information objects can be defined. Graphical attribute editors (like interface builders) support the definition of the initial values of the attributes of the objects; the corresponding initializing relationships are generated automatically. FMAD uses an internal format to manage the objects, their attributes, and the multimedia graphs. This internal representation can be saved into and re-loaded from files; thus, persistent reusable objects can be generated. In addition, the internal representation can be transformed into XFantasy-objects, which enables the debugging of applications by using the interpreter of FMAD. Furthermore, it is possible to generate C++-code out of the internal representation. If necessary, a programmer can manipulate that code. Finally, the code can be compiled with a standard C++-compiler, and an executable stand-alone multimedia application can be generated.

2.4.3.3 Implementation of FMAD

FMAD has been implemented with the object-oriented programming language C++ [71]. The XFantasy-UIMS has been used for the development of FMAD and it is used as its run-time system as well.

Figure 16 illustrates only the conceptual structure of FMAD, whereas its real internal structure is shown in figure 17. FMAD has been implemented using object-oriented concepts. In reality, FMAD does not consist of a set of editors which can handle several information objects. Instead, FMAD consists of a set of information objects, each possessing several components to handle and to manipulate the object independently.

Fig. 17 Internal structure of FMAD

The kernel of FMAD is formed by some abstract classes (the base classes of the components) as well as the code generator and the interpreter, which both access the interfaces of the abstract classes. Information types are not part of the kernel; they can be added according to the wishes of the authors. Information types can be implemented as composite classes consisting of classes derived from the abstract classes of the kernel. General features are inherited; only type-specific features have to be added. The cooperation of the components has already been implemented within the kernel on the basis of the interfaces of the abstract classes. This makes it very easy to integrate new information types into FMAD. Currently 23 types are integrated. Figure 18 shows a part of the actual hierarchy of information types of FMAD.

Fig. 18 Information types of FMAD

2.5 Cooperations

For the exchange of research ideas and results we have had several meetings and discussions with other groups of scientists:

- Dr. W. Hübner, Dr. P. Wiedling, ZGDV Darmstadt (Theseus++ project),

- Prof. Dr. Six, Dr. J. Voss, Fernuniversität-Gesamthochschule Hagen (DIWA project),

- Prof. Dr. R. Gunzenhäuser, Dipl.-Inf. J. Herczeg, University of Stuttgart (XIT project).

2.6 Conclusion and Future Work

2.6.1 XFantasy-UIMS

The XFantasy-UIMS is especially adapted to the requirements of multimedia user interfaces. It consists of an object-oriented user interface toolkit (XFantasy-UIT) and a dialog specification language (ODIS). The underlying dialog model (COMMAND model) facilitates the definition of interactions and complex multimedia output actions. Interactions are defined by hierarchies of objects based on a bottom-up event recognition and processing. Complex output actions are also modeled hierarchically with inner nodes (output synchronization operators) defining temporal relations between subordinate output actions. For modeling complex output actions which cannot be represented as hierarchies the XFantasy dialog model additionally offers synchronization conditions for starting and stopping of output actions.

In order to model the concurrency of interactions and continuous output actions in multimedia user interfaces the COMMAND model offers special synchronization objects describing temporal relations. Thereby, the XFantasy-UIMS facilitates an integrated modeling of interactions and multimedia presentations and goes beyond the scope of other multimedia development tools which have mainly focused on the concepts of temporal composition and synchronization.

The implementation of the XFantasy-UIMS has been concluded. The user interfaces of several applications have been implemented with the XFantasy-UIMS. The XFantasy-UIMS has also been used to implement the authoring system FMAD, and it forms the run-time system of FMAD, too. Thus the development of FMAD can be regarded as an evaluation of the XFantasy-UIT and its dialog model.

2.6.2 QPC++

QPC++ represents an approach to the homogeneous integration of object-oriented and parallel programming concepts based on the programming language C++. Based on a comparison with other approaches to the integration of parallelism into C++ (see [15]), QPC++ can be considered a language well suited for the modeling of large systems with inherent parallelism, like graphical user interfaces. QPC++ is a language with minimal syntactical overhead. The added concepts for expressing parallelism are as simple as possible but as powerful as necessary. They are fully based on the class/object model of C++. Therefore, QPC++ is easy to learn and easy to handle. For a C++-programmer it is very easy to switch to QPC++, even without any experience in parallel programming. The availability of autonomous active objects in conjunction with mechanisms facilitating synchronous and asynchronous communication and multicasting of messages makes the development of graphical user interfaces easier than with the passive objects of sequential object-oriented programming languages (see [27]). The definition of QPC++ as well as the implementation of its run-time system and its compiler have been concluded.

2.6.3 FMAD

FMAD is a flowchart-based authoring system which supports the development of interactive multimedia applications using visual programming concepts and techniques. FMAD does not presuppose any knowledge of traditional programming languages. Hence it can be used by non-programmers to develop interactive multimedia applications like presentations of products or companies as well as CBT applications and computer games.

In FMAD an interactive multimedia application is represented by a network consisting of information objects as nodes and relationships as edges. Information objects can be media objects like graphics, audios, and videos, interaction objects like buttons, menus, or sliders, and application objects which represent certain applications like databases. Relationships can be defined by a developer in order to express arbitrary dependencies between objects. They can be used to determine the flow of control and the design of an application as well as user interactions and their effects. FMAD has been designed and implemented using object-oriented concepts. Hence it is extendable and can easily be adapted to future input and output technologies.

The implementation of FMAD has been concluded. Several small multimedia applications have already been developed with it. FMAD was presented at CeBIT 1995 in Hannover.

2.7 References

1. Adamo, J.-M.: Extending C++ with Communicating Sequential Processes. - in Welch, P. et al. (eds.): Transputing '91, IOS Press, 1991.

2. Agha, G.: ACTORS: A Model of Concurrent Computation in Distributed Systems. - MIT Press, Cambridge, Mass., 1986.

3. Allen, J.F.: Maintaining knowledge about temporal intervals. - Communications of the ACM, Vol. 26, No. 11, pp. 832-843, 1983.

4. Allen, J.F.: Time and Time Again: The Many Ways to Represent Time. - International Journal of Intelligent Systems, Vol. 6, pp. 341-355, 1991.

5. Andrews, G.R.: Concurrent Programming - Principles and Practice. - The Benjamin /Cummings Publishing Company, Inc., Redwood City (CA), 1991.

6. Anson, E.: The Device Model of Interaction. - Computer Graphics 16 (3), pp. 107-114, 1982.

7. Appelrath, H.-J., Götze, R.: XFantasy - ein objektorientiertes UIMS für multimediale Informationssysteme. - Internal Paper (in German), University of Oldenburg, OFFIS 1991/1993.

8. Apple Computer, Incorporation: QuickTime 1.5 Developer Kit. - Developer Technical Publications, 1992.

9. Bertrand, F., Price, R.: Coded Representation of Multimedia and Hypermedia Information Objects. - MHEG Working Document, ISO/IEC JTC1/SC/WG12, May 1992.

10. Blakowski, G., Hübel, J., Langrehr, U.: Tools for Specifying and Executing Synchronized Multimedia Presentations. - Proceedings of Second Workshop on Network and Operating System Support for Digital Audio and Video, Heidelberg, pp. 271-282, 1991.

11. Bogaschewsky, R.: Hypertext-/Hypermedia-Systeme: Ein Überblick. - (in German) Informatik-Spektrum, 15(3), pp. 127-143, 1992.

12. Boles, D.: QPC++ - eine parallele objektorientierte Programmiersprache, Syntax and Semantics, Version 4.0. - Internal Paper (in German), University of Oldenburg, Department of Computer Science, 1992.

13. Boles, D.: QPC++ - eine parallele objektorientierte Programmiersprache, Compiler, Version 4.0.2. - Internal Paper (in German), University of Oldenburg, Department of Computer Science, 1992.

14. Boles, D.: QPC++ - eine parallele objektorientierte Programmiersprache, Run-Time System (Uniprocessor), Version 4.0.2. - Internal Paper (in German), University of Oldenburg, Department of Computer Science, 1992.

15. Boles, D.: Parallel Object-Oriented Programming with QPC++. - Structured Programming, Springer-Verlag, Vol. 14, pp. 157-172, 1993.

16. Boles, D.: Das IMRA-Modell - Integration von Interaktionen in das Autorenwerkzeug FMAD. - Internal Report IS-20 (in German), University of Oldenburg, Department of Computer Science, diploma thesis, 1994.

17. Boles, D.: Das IMRA-Modell - Modellierung interaktiver multimedialer Präsentationen. - (in German), in: Hypertext - Information Retrieval - Multimedia: Synergieeffekte elektronischer Informationssysteme, Proceedings GI-ÖCG-Si-HI-Fachtagung HIM '95, eds.: R. Kuhlen, M. Rittberger, Universitätsverlag Konstanz, pp. 61-75, 1995.

18. Boles, D.: Elektronisches Publizieren - Autorenwerkzeuge und Arbeitsumgebungen für Autoren. - (in German), in: Proceedings Deutscher Dokumentartag 1995, Potsdam, Deutsche Gesellschaft für Dokumentation, ed.: W. Neubauer, pp. 393-411, September 1995.

19. Boles, D.: FMAD - ein objektorientiertes Autorensystem für interaktive multimediale Anwendungen. - (in German), in: Proceedings GI-Fachtagung Softwaretechnik '95, Braunschweig, pp. 24-34, October 1995.

20. van den Bos, J.: Abstract Interaction Tools: A Language for User Interface Management Systems. - ACM Transactions on Programming Languages and Systems 10 (2), pp. 215-247, 1988.

21. Buhr, P.A. et al.: µC++: Concurrency in the Object-oriented Language C++. - Software-Practice and Experience, Vol. 22, No. 2, 1992.

22. Caromel, D.: Concurrency And Reusability: From Sequential To Parallel. - Journal of Object-Oriented Programming, September / October 1990.

23. Claaßen, R.: Synchronisation in der Dialogkontrolle von multimedialen Benutzerschnittstellen. - Diploma thesis (in German), University of Oldenburg, Department of Computer Science, 1993.

24. Davies, N.A., Nicol, J.R.: Technological perspective on multimedia computing. - Computer Communications, Vol. 14, No. 5, pp. 260-272, 1991.

25. Drapeau, G.D., Greenfield, H.: MAEstro - A Distributed Multimedia Authoring Environment. - Proceedings of the 1991 Summer USENIX Conference, 1991.

26. Eirund, H., Götze, R.: Modeling Interactive Multimedia Applications. - Proceedings of Eurographics Workshop on Object-Oriented Graphics, Champery, pp. 229-245, 1992.

27. Eirund, H., Götze, R.: Realisierung eines Entwicklungswerkzeugs für multimediale Anwendungen. - (in German), Proceedings GI-Fachtagung Softwaretechnik'93, Dortmund, pp. 48-61, 1993.

28. Eirund, H., Hofmann, M.: Designing Multimedia Presentations. - Proceedings Hypermedia'93, Zürich, pp. 183-194, 1993.

29. Encarnacao, J.L., Hübner, W., Väänänen, K.: Autorenwerkzeuge für multimediale Informationssysteme. - (in German) Informationstechnik und Technische Informatik, 35(2), pp. 31-38, 1993.

30. Eun, S., No, E.S., Kim, H.C., Yoon, H., Maeng, S.R.: Eventor: an authoring system for interactive multimedia applications. - Multimedia Systems, 2(2), pp. 129-140, 1994.

31. Gehani, N.H., Roome, W.D.: Concurrent C. - Software-Practice and Experience, Vol. 16, pp. 821-844, 1986.

32. Gehani, N.H., Roome, W.D.: Concurrent C. - Software-Practice and Experience, Vol. 16, pp. 821-844, 1986.

33. Gettys, J., Karlton, P.L., McGregor, S.: The X Window System, Version 11. - Software-Practice and Experience, Vol. 20, No. S2, pp. 35-67, 1990.

34. Gibbs, S.: Composite Multimedia and Active Objects. - Proceedings OOPSLA '91, pp. 97-112, 1991.

35. Goldberg, A.: Smalltalk-80: The Interactive Programming Environment. - Addison-Wesley, Reading, Mass., 1984.

36. Goodman, D. (editor): The Complete HyperCard 2.0 Handbook. - Bantam Books, Inc., 1990.

37. Gorlen, K.E., Orlow, S.M., Plexico, P.S.: Data Abstraction and Object-Oriented Programming in C++. - Teubner Verlag, Stuttgart, Germany, 1990.

38. Götze, R.: Objektorientierte Dialogspezifikation. - (in German) Proceedings GI-Jahrestagung 1991, Springer-Verlag, IFB 293, pp. 519-528, 1991.

39. Götze, R.: Object-oriented User Interface Specification. - Proceedings of Eurographics Workshop on Formal Methods in Computer Graphics, Marina di Carrara, 1991.

40. Götze, R., (project group), H.-J. Appelrath: Endbericht der Projektgruppe "Objektorientierte Benutzerschnittstellenentwicklung". - Internal Report IS-13 (in German), University of Oldenburg, Department of Computer Science, 1992.

41. Götze, R.: Object-Oriented Specification of Complex Dialogues. - Proceedings of Eurographics Workshop on Object-Oriented Graphics, Champery, pp. 301-319, 1992.

42. Götze, R., Eirund, H., Claaßen, R.: Object-Oriented Dialog Control for Multimedia User Interfaces. - Proceedings VCHCI'93 Vienna Conference on Human-Computer Interaction'93, Wien, Springer-Verlag, pp. 63-75, 1993.

43. Götze, R.: Dialogmodellierung für multimediale Benutzerschnittstellen. - (in German), Teubner Verlag (Teubner-Texte zur Informatik; Bd. 14), PhD thesis, 1995.

44. Grunwald, D.: A Users Guide to AWESIME: An Object-Oriented Parallel Programming and Simulation System. - University of Colorado at Boulder, Technical Report CU-CS-552-91, 1991.

45. Guerraoui, R., Capobianchi, R., Lanusse, A., Roux, P.: Une vue générale de KAROS: un langage à objets concurrents destiné à des applications distribuées. - Technical Report CEA, CE Saclay DEIN/SIR (in French), 1992.

46. Guimarães, N.: Programming Time in Multimedia User Interfaces. - Proceedings UIST '92, ACM Press, pp. 125-134, 1992.

47. Hansen, P.B.: The Programming Language Concurrent Pascal. - IEEE Transactions on Software Engineering Vol. 1, No. 2, pp. 199-207, 1975.

48. Herzner, W., Kummer, M.: MMV - Synchronizing Multimedia Documents. - Proceedings of the Second Eurographics Workshop on Multimedia, Darmstadt, Eurographics Technical Report, pp. 107-126, 1992.

49. Hodges, M.E. et al: A Construction Set for Multimedia Applications. - IEEE Software 1 (1989), pp. 37-43, 1989.

50. Hoepner, P.: Synchronizing the Presentation of Multimedia Objects - ODA Extensions. - Kjelldahl, L. (Ed.): Multimedia - Systems, Interaction and Application, Springer-Verlag, pp. 87-100, 1991.

51. Hübner, W.: Ein objektorientiertes Interaktionsmodell für die Spezifikation graphischer Dialoge. - PhD thesis (in German), Zentrum für Graphische Datenverarbeitung, Darmstadt, 1990.

52. Hüsken, V.: Objektorientierung und Parallelität in Betriebssystemen und Programmiersprachen. - PhD thesis (in German), RWTH Aachen, Germany, 1990.

53. ISO/IEC: Information Processing - Text and Office Systems - Office Document Architecture (ODA) and Interchange Format (ODIF). - International Standard 8613, 1988.

54. ISO/IEC: Information Technology - Hypermedia/Time-based Structuring Language (HyTime). - International Standard Draft, JTC1/SC18 N3190, 1991.

55. Kafura, D., Lee, K.H.: ACT++: Building a Concurrent C++ with Actors. - JOOP, May/June 1990.

56. Kernighan, B.W., Ritchie, D.M.: The C Programming Language. - Prentice Hall, 1978.

57. Kleyn, M.F., Chakravarty, I.: EDGE - A Graph Based Tool for Specifying Interaction. - Proceedings of the ACM Symposium on User Interface Software, Banff (Alberta), ACM Press, pp. 1-14, 1988.

58. Krasner, G.E., Pope, T.S.: A Cookbook for Using the Model-View-Controller User Interface Paradigm in Smalltalk-80. - Journal of Object-Oriented Programming, Vol. 1, No. 3, pp. 26-49, 1988.

59. Little, T.D., Ghafoor, A.: Synchronization and Storage Models for Multimedia Objects. - IEEE Journal on Selected Areas in Communications, Vol. 8, No. 3, pp. 413-427, 1990.

60. Lütjens, O.: Neue Schale, neuer Kern. - (in German) Screen Multimedia, pp. 60-63, June 1994.

61. Marchiso, P., Panicciari, P., Rodi, P.: A Hypermedia Object Model and its Presentation Environment. - Proceedings Third Eurographics Workshop on Object-Oriented Graphics, Champery, pp. 335-353, 1992.

62. Mountford, S.J.: Multimedia: Trends and Issues. - in Bullinger, H.-J. (Ed.): Human Aspects in Computing, Elsevier Science Publishers, pp. 40-54, 1991.

63. Myers, B.A.: Visual Programming, Programming by Example, and Program Visualization: A Taxonomy. - in Conference Proceedings CHI '86, Human Factors in Computing Systems, pp. 59-66, ACM, Inc., 1986.

64. Newcomb, S.R., Kipp, N.A., Newcomb, V.T.: The "HyTime" Hypermedia/Time-based Document Structuring Language. - SGML SIGhyper Newsletter, Vol. 1, No. 1, pp. 10-44, 1991.

65. Nye, A., O'Reilly, T.: X Toolkit Intrinsics Programming Manual. - O'Reilly & Associates, Sebastopol, CA, 1990.

66. Papathomas, M.: Concurrency Issues in Object-Oriented Programming Languages. - Centre Universitaire d'Informatique, Genève, 1989.

67. Shu, N.C.: Visual Programming Languages: A Perspective and a Dimensional Analysis. - in: Visual Languages, Plenum Press, New York and London, Chang, S.-K., Ichikawa, T., Ligomenides, P.A. (editors), 1986.

68. Steinbrink, B.: Multimedia-Regisseure: Autorensystem und -sprachen im Vergleich. - (in German) c't, pp. 168-179, October 1993.

69. Steinmetz, R.: Synchronization Properties in Multimedia Systems. - Technical Report, IBM European Network Center, 1989.

70. Steinmetz, R.: Multimedia-Technologie: Einführung und Grundlagen. - (in German) Springer-Verlag, Berlin, 1993.

71. Stroustrup, B.: The C++ Programming Language, Second Edition. - Addison-Wesley, 1991.

72. Sun Microsystems, Inc.: The Task Library. - Sun 2.1 C++ Manual Set, AT&T C++ Language System, Library Manual, 1992.

73. Takashio, K., Tokoro, M.: DROL: An Object-Oriented Programming Language for Distributed Real-Time Systems. - Proceedings OOPSLA '92, ACM Sigplan Notices, Vol. 27, No. 10, 1992.

74. Tiemann, D.: Die Modellierung von Interaktionen und Objektbeziehungen in einem objektorientierten System. - Diploma thesis (in German), University of Oldenburg, Department of Computer Science, 1991.

75. Vazirgiannis, M., Mourlas, C.: An Object-Model for Interactive Multimedia Presentations. - The Computer Journal, Vol. 36, No. 1, pp. 78-87, 1993.

76. Volgger, T.J.: Trendwende. - (in German) Screen Multimedia, pp. 50-53, March 1994.

77. Voßberg, L.: MediaManager - eine Erweiterung von XFantasy um Multimedia-Klassen und Synchronisationsmechanismen. - Diploma thesis (in German), University of Oldenburg, Department of Computer Science, 1993.

78. Wegner, P.: Dimensions of Object-Oriented Language Design. - Proceedings OOPSLA '87, ACM Press, pp. 168-182, 1987.

79. West, N.: Multimedia Design Tools. - Macworld, pp. 194-201, November 1991.

80. Yager, T.: Build Multimedia Presentations with MacroMind's MediaMaker. - Byte, pp. 302-304, September 1991.

81. Yonezawa, A., Shibayama, E., Takada, T., Honda, Y.: Modeling and Programming in an Object-Oriented Concurrent Language ABCL/1. - Yonezawa, A., Tokoro, M.: Object-Oriented Concurrent Programming. - MIT Press, Cambridge, Massachusetts, pp. 55-89, 1987.

82. Yonezawa, A., Tokoro, M.: Object-Oriented Concurrent Programming. - MIT Press, Cambridge, Massachusetts, 1987.

List of Publications

Appelrath, H.-J., Götze, R.: XFantasy - ein objektorientiertes UIMS für multimediale Informationssysteme. - Internal Paper (in German), University of Oldenburg, OFFIS, 1991/1993.

Boles, D.: QPC++ - eine parallele objektorientierte Programmiersprache, Run-Time System (Uniprocessor), Version 4.0.2. - Internal Paper (in German), University of Oldenburg, Department of Computer Science, 1992.

Boles, D.: QPC++ - eine parallele objektorientierte Programmiersprache, Compiler, Version 4.0.2. - Internal Paper (in German), University of Oldenburg, Department of Computer Science, 1992.

Boles, D.: QPC++ - eine parallele objektorientierte Programmiersprache, Syntax and Semantics, Version 4.0. - Internal Paper (in German), University of Oldenburg, Department of Computer Science, 1992.

Boles, D.: Parallel Object-Oriented Programming with QPC++. - Structured Programming, Vol. 14, pp. 158-172, 1993.

Boles, D.: Das IMRA-Modell - Integration von Interaktionen in das Autorenwerkzeug FMAD. - Internal Report IS-20 (in German), University of Oldenburg, Department of Computer Science, diploma thesis, 1994.

Boles, D.: Das IMRA-Modell - Modellierung interaktiver multimedialer Präsentationen. - (in German), in: Hypertext - Information Retrieval - Multimedia: Synergieeffekte elektronischer Informationssysteme, Proceedings GI-ÖCG-Si-HI-Fachtagung HIM '95, eds.: R. Kuhlen, M. Rittberger, Universitätsverlag Konstanz, pp. 61-75, April 1995.

Boles, D.: Elektronisches Publizieren - Autorenwerkzeuge und Arbeitsumgebungen für Autoren. - (in German), in: Proceedings Deutscher Dokumentartag 1995, Potsdam, Deutsche Gesellschaft für Dokumentation, ed.: W. Neubauer, pp. 393-411, September 1995.

Boles, D.: Autorenwerkzeuge und Arbeitsumgebungen für Autoren. - (in German), in: nfd (Nachrichten für Dokumentation), Zeitschrift für Informationswissenschaft und -praxis, published by the Deutsche Gesellschaft für Dokumentation, Vol. 46, No. 5, pp. 273-282, 1995.

Boles, D.: FMAD - ein objektorientiertes Autorensystem für interaktive multimediale Anwendungen. - (in German), in: Proceedings GI-Fachtagung Softwaretechnik ´95, Braunschweig, pp. 24-34, October 1995.

Claaßen, R.: Synchronisation in der Dialogkontrolle von multimedialen Benutzerschnittstellen. - (in German), University of Oldenburg, Department of Computer Science, diploma thesis, 1993.

Eirund, H., Götze, R.: Modeling Interactive Multimedia Applications. - Proceedings of Eurographics Workshop on Object-Oriented Graphics, Champery, pp. 229-245, 1992.

Eirund, H., Götze, R.: Realisierung eines Entwicklungswerkzeugs für multimediale Anwendungen. - (in German), Proceedings GI-Fachtagung Softwaretechnik'93, Dortmund, pp. 48-61, 1993.

Eirund, H., Hofmann, M.: Designing Multimedia Presentations. - Proceedings Hypermedia'93, Zürich, pp. 183-194, 1993.

Götze, R.: Object-oriented User Interface Specification. - Proceedings of Eurographics Workshop on Formal Methods in Computer Graphics, Marina di Carrara, 1991.

Götze, R.: Objektorientierte Dialogspezifikation. - (in German), Proceedings GI-Jahrestagung 1991, Springer-Verlag, IFB 293, pp. 519-528, 1991.

Götze, R., (project group), H.-J. Appelrath: Zwischenbericht der Projektgruppe "Objektorientierte Benutzerschnittstellenentwicklung". - Internal Report IS-11 (in German), University of Oldenburg, Department of Computer Science, 1992.

Götze, R., (project group), H.-J. Appelrath: Endbericht der Projektgruppe "Objektorientierte Benutzerschnittstellenentwicklung". - Internal Report IS-13 (in German), University of Oldenburg, Department of Computer Science, 1992.

Götze, R.: Object-Oriented Specification of Complex Dialogues. - Proceedings of Eurographics Workshop on Object-Oriented Graphics, Champery, pp. 301-319, 1992.

Götze, R., Eirund, H., Claaßen, R.: Object-Oriented Dialog Control for Multimedia User Interfaces. - Proceedings VCHCI'93 Vienna Conference on Human-Computer Interaction'93, Wien, Springer-Verlag, pp. 63-75, 1993.

Götze, R.: Dialogmodellierung für multimediale Benutzerschnittstellen. - (in German), University of Oldenburg, Department of Computer Science, Ph. D. thesis, 1994.

Götze, R.: Dialogmodellierung für multimediale Benutzerschnittstellen. - (in German), Teubner Verlag (Teubner-Texte zur Informatik; Bd. 14), 1995.

Ihmels, R.: Ein Graphik- und Animationseditor für das Autorensystem FMAD. - (in German), University of Oldenburg, Department of Computer Science, diploma thesis, 1995.

Tiemann, D.: Die Modellierung von Interaktionen und Objektbeziehungen in einem objektorientierten System. - (in German), University of Oldenburg, Department of Computer Science, diploma thesis, 1991.

Voßberg, L.: MediaManager - eine Erweiterung von XFantasy um Multimedia-Klassen und Synchronisationsmechanismen. - (in German), University of Oldenburg, Department of Computer Science, diploma thesis, 1993.