Re: TECH: non-geometry things and vrml.

David Cake (davidc@cs.uwa.edu.au)
Fri, 24 Jun 1994 10:36:57 +0800


>>We have only vaguely discussed non-geometrical elements of VR. We should bring
>> up ways to incorporate things like audio into VRML. Either through embedding
>
> These might not be the most important things in the first version of
>VRML, but surely they will become part of it sooner or later..
>
I think that VRML in the first version is basically going to be
limited to 3D graphics and text, because of bandwidth. We are building a
protocol for the information highway, but we are going to be driving it
along the footpaths for a while. We should put in hooks for multimedia,
but incorporating it in the first version is not a priority.
My idea for a sensible way to implement the protocol is that a
browser should specify to what degree it supports various features, and
then only the appropriate information will be sent. This way we can
slowly add more features, and more importantly people can disable features
in their browser depending on their bandwidth. A person on a small machine
over a modem line will get just coloured polygons, or maybe bit-mapped
ones; a person with a full Internet connection can have fully bit-mapped
polys and sound; and a person on the MBONE can get real-time video.
The same applies to features that cannot be supported for lack of
processing power - it is not desirable for every browser to implement
every feature, because many browsers will be running on small machines.
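	As a rough illustration of how that negotiation might look (the
handshake format, feature names and functions below are invented for this
sketch, not taken from any draft), in C: the browser advertises the feature
levels it supports, and the server simply never sends anything that was not
asked for.

/*
 * Minimal sketch of browser-side capability declaration, assuming a
 * hypothetical text handshake.  None of these names come from VRML;
 * they are illustrative only.
 */
#include <stdio.h>
#include <string.h>

/* Feature levels a browser might advertise, roughly in order of cost. */
enum feature {
    FEAT_FLAT_POLYS,     /* coloured polygons only         */
    FEAT_TEXTURED_POLYS, /* bit-mapped (textured) polygons */
    FEAT_AUDIO,          /* sampled sound                  */
    FEAT_VIDEO,          /* real-time video (MBONE class)  */
    FEAT_COUNT
};

static const char *feature_names[FEAT_COUNT] = {
    "flat-polys", "textured-polys", "audio", "video"
};

/* Browser side: print one capability line for the server to parse. */
static void send_capabilities(FILE *out, const int supported[FEAT_COUNT])
{
    int i, first = 1;
    fprintf(out, "CAPABILITIES:");
    for (i = 0; i < FEAT_COUNT; i++) {
        if (supported[i]) {
            fprintf(out, "%s %s", first ? "" : ",", feature_names[i]);
            first = 0;
        }
    }
    fprintf(out, "\n");
}

/* Server side: decide whether a scene element should be sent at all. */
static int should_send(const char *caps_line, enum feature needed)
{
    return strstr(caps_line, feature_names[needed]) != NULL;
}

int main(void)
{
    /* A modem user: polygons only, no textures, sound or video. */
    int modem_user[FEAT_COUNT] = { 1, 0, 0, 0 };
    send_capabilities(stdout, modem_user);

    /* The server would then skip, say, an audio node entirely. */
    printf("send audio node? %s\n",
           should_send("CAPABILITIES: flat-polys", FEAT_AUDIO) ? "yes" : "no");
    return 0;
}

A modem user here never even receives the audio data, rather than
receiving it and throwing it away, which is the whole point for bandwidth.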
When we do move towards multimedia, Apple's QuickTime is probably the
first thing we should look at. It is already on Windows, Macs, and SGIs,
and it is the closest thing to a standard at the moment (MPEG and the like
are lower-level protocols for specific types of multimedia). I do not know
what the licensing situation is, though.
>
>[audio]
> Again, some standard sounds like <beep> could be available. The question
>is, should the audio "direction" and "spread" be user-definable, or should
>audio sources transmit with equal intensity in all directions - all the
>rest depends on the desired laws of physics.
>
We start with a standard sound or two (like beep), and then some standard
sound-encoding method (and people can compute volume based on position
themselves??). Fancy stuff later.
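	The "volume based on position" part is a calculation each browser
could do locally; here is a sketch in C that scales gain by inverse-square
distance from the listener (the one-metre reference distance and the
clamping are arbitrary choices for illustration, not anything agreed on).

/*
 * Scale a sound's gain by inverse-square distance from the listener,
 * clamped so a source right on top of you does not blow the gain up.
 */
#include <math.h>
#include <stdio.h>

struct vec3 { double x, y, z; };

static double distance(struct vec3 a, struct vec3 b)
{
    double dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return sqrt(dx * dx + dy * dy + dz * dz);
}

/* Gain in [0,1]: full volume within 1 m, falling off as 1/d^2 beyond. */
static double gain_for(struct vec3 listener, struct vec3 source)
{
    double d = distance(listener, source);
    if (d < 1.0)
        return 1.0;
    return 1.0 / (d * d);
}

int main(void)
{
    struct vec3 listener = { 0.0, 0.0, 0.0 };
    struct vec3 beep     = { 3.0, 0.0, 4.0 };          /* 5 m away     */
    printf("gain = %.3f\n", gain_for(listener, beep)); /* prints 0.040 */
    return 0;
}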
>
[audio coding system discussion deleted]
> The latter requires complex audio-tracing calculations. Enough work with
>normal (pseudo) ray-tracing, I'd say. However, both could be supported.
>
Then the vector information for such schemes is included in the syntax, and
some browsers (presumably running on very expensive machines, for the next
few years) will actually implement them. Fancy schemes like this should
have hooks left in the code for those who want them, but we need to make
sure that failing to implement them is not a problem.
Basically this is how I feel most fancy ideas should work - we have
a base subset of features that all browsers support, and all other features
are alternatives to the simpler features, not replacements.
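	To make that concrete, here is a small C sketch of what
"alternatives, not replacements" could mean in a browser: a fancy element
also carries a base-level representation, and a browser lacking the fancy
feature quietly falls back to it (the node layout here is invented for
illustration).

/*
 * Every fancy representation travels with a simple fallback that any
 * browser in the base feature set can handle.
 */
#include <stdio.h>

struct sound_node {
    const char *traced_audio;  /* fancy: full audio-tracing data, or NULL */
    const char *simple_sample; /* base: a plain sample every browser has  */
};

/* Pick whichever representation this browser can actually handle. */
static const char *select_audio(const struct sound_node *n,
                                int supports_audio_tracing)
{
    if (supports_audio_tracing && n->traced_audio != NULL)
        return n->traced_audio;
    return n->simple_sample;   /* never worse than the base feature set */
}

int main(void)
{
    struct sound_node door = { "traced-slam.dat", "beep" };
    printf("cheap browser plays: %s\n", select_audio(&door, 0));
    printf("fancy browser plays: %s\n", select_audio(&door, 1));
    return 0;
}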
Cheers
David Cake
>
>-Bad Grammar, Panu.
>