Sound Sources (was Re: WWWInline; include non-VRML data?)

Linas Vepstas (linas@innerdoor.austin.ibm.com)
Tue, 18 Oct 1994 11:38:08 -0500


>To: linas@innerdoor.austin.ibm.com (Linas Vepstas)
>Subject: Re: WWWInline; include non-VRML data?
>Date: Mon, 17 Oct 94 22:04:44 EDT
>From: Brygg Ullmer <ullmer@bigcheese.math.scarolina.edu>
>
>
>Hi, Linas! Did you want to cc this to the list? (And if you did, maybe you
>can append my response to the list as well... or otherwise, I'll post my
>response if you post yours! ;-)

Whoops

>> < Sure -- I don't expect all this [inlined HTML, images, video, spatialized
>> < audio sources, scripted geometry-synthesizers, etc.] to be possible in a
>>
>> When I jumped & said 1), I forgot about sound. It's not unreasonable to
>> ask that sound be inlined. Comments?
>
>Actually, while I'm very interested in spatialized sound inclusion,
>I think sound brings along many more worms than innocent things
>like texture-mapped HTML, etc. With sound, one needs to decide whether
>the delivery model is a "canned" one-shot sound, "canned" rhythm, or
>a socket to a live sound source.

If you have a library/collection of audio objects, this is not a problem.
For instance, I believe our Ultimedia/6000 for the RS/6000 supports this function.
I'm sure HP & Sun & SGI all have comparable APIs; sadly, they are not
standardized.

For VRML, I'd treat a sound source much like a spot light:

Sound {
    filename <URL>         # The URL must point to a .wav or .audio or whatever.
                           # Can URLs be used to specify MBone broadcasts ???
    volume <flt>           # Measured in decibels (dB).
    loop <int>             # If 0, the sound file is played once; if 1, the
                           # sound is looped over & over.
    location <flt> <flt> <flt>
    direction <flt> <flt> <flt>
    attenuationRate <flt>  # Exponential attenuation as a function of distance.
    cutoffDistance <flt>   # Max distance, after which the sound level drops to zero.
    lobeExponent <flt>     # "Radiation Lobe Directionality."  The dot product of the
                           # sound direction and the listener direction is raised to
                           # this power; the result multiplies the sound intensity.
                           # Part of specifying directional sound.  Default value 0.0.
    lobeCutoffAngle <flt>  # If the angle between the listener and the sound direction
                           # is greater than this angle, the sound intensity drops to
                           # zero.  Another part of specifying directional sound.
                           # Default value 180.0.
}
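
To pin those formulas down, here's a rough C sketch of how a browser might
reduce one Sound node to a linear gain at the listener's position.  The curve
exp(-attenuationRate * distance) is just my reading of "exponential
attenuation", and sound_gain() and the Vec3 helpers are invented for
illustration, not part of any proposal:

    #include <math.h>

    #ifndef M_PI
    #define M_PI 3.14159265358979323846
    #endif

    typedef struct { float x, y, z; } Vec3;

    static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
    static Vec3  sub(Vec3 a, Vec3 b) { Vec3 r = { a.x-b.x, a.y-b.y, a.z-b.z }; return r; }
    static float len(Vec3 a)         { return (float)sqrt(dot(a, a)); }
    static Vec3  unit(Vec3 a)        { float l = len(a); Vec3 r = { a.x/l, a.y/l, a.z/l }; return r; }

    /* Spatial gain in [0,1] for one Sound node, heard at "listener". */
    float sound_gain(Vec3 location, Vec3 direction,
                     float attenuationRate, float cutoffDistance,
                     float lobeExponent, float lobeCutoffAngle,  /* degrees */
                     Vec3 listener)
    {
        Vec3  toListener = sub(listener, location);
        float dist = len(toListener);
        float gain, c;

        if (dist >= cutoffDistance)
            return 0.0f;                    /* past cutoffDistance: silence */
        if (dist < 1e-6f)
            return 1.0f;                    /* listener is at the source */

        /* "Exponential attenuation as a function of distance" */
        gain = (float)exp(-attenuationRate * dist);

        /* Directional lobe: dot of the sound direction with the
           source->listener direction, raised to lobeExponent.  The default
           exponent 0.0 makes the source omnidirectional (factor of 1). */
        c = dot(unit(direction), unit(toListener));
        if (c < (float)cos(lobeCutoffAngle * M_PI / 180.0))
            return 0.0f;                    /* outside lobeCutoffAngle */
        if (c < 0.0f)
            c = 0.0f;
        gain *= (float)pow(c, lobeExponent);

        return gain;
    }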

Each time the viewer moves, the volume is computed (based on the viewer's
position, the sound's location, and the formulas above) and the audio player
is told to mix in the specified sound source. (If the audio player does NOT
support mixing, then the loudest sound wins -- i.e. only the loudest sound is
played.)
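
If the player can't mix, "loudest wins" is just an arg-max over the computed
gains.  Another sketch, reusing sound_gain() from above (the SoundNode struct
and play_sample() are made up for illustration):

    /* Hypothetical in-memory form of the Sound node's fields. */
    typedef struct {
        const char *filename;
        float volume;            /* in dB */
        int   loop;
        Vec3  location, direction;
        float attenuationRate, cutoffDistance;
        float lobeExponent, lobeCutoffAngle;
    } SoundNode;

    /* No-mixing fallback: pick and play only the loudest source. */
    void play_loudest(const SoundNode *sounds, int nSounds, Vec3 listener)
    {
        int   loudest = -1;
        float best = 0.0f;
        int   i;

        for (i = 0; i < nSounds; i++) {
            /* Fold the node's dB volume into the spatial gain:
               amplitude = 10^(dB/20). */
            float g = (float)pow(10.0, sounds[i].volume / 20.0)
                    * sound_gain(sounds[i].location, sounds[i].direction,
                                 sounds[i].attenuationRate, sounds[i].cutoffDistance,
                                 sounds[i].lobeExponent, sounds[i].lobeCutoffAngle,
                                 listener);
            if (g > best) { best = g; loudest = i; }
        }
        if (loudest >= 0)
            play_sample(sounds[loudest].filename, best);  /* hypothetical player call */
    }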

I think the above could work really well ... comments from sound experts?

>In time, I'm confident we'll want
>all of these, though I wouldn't necessarily expect any on the first pass;
>similar to the remote-inlined textures, it would be nice to agree on
>a model general enough to support inlined sound sources without having
>to laboriously hash through all the intricacies at this stage in the game.
>I think this means we shouldn't rigorously restrict WWWInline to
>VRML inputs.
>
>In the spatialized case, there are the attendant questions about how to
>specify orientation, etc. I guess it would be sort of neat if the
>scaleFactor (e.g. volume), translate, rotation, and other Transform nodes
>could be used with spatialized sound elements...

In the above, location & direction are treated as real 3D coordinates,
just as they would be for a spot light (or camera) -- they are transformed
by the scale, rotate, etc. transforms just like a spot light's would be.
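
Concretely: the location transforms as a point (translation applies) and the
direction as a vector (translation is ignored), exactly like a spot light's.
A sketch, assuming a column-vector 4x4 matrix convention (p' = M * p):

    /* Apply the current modeling matrix M to the Sound node's fields. */
    Vec3 xform_point(const float M[4][4], Vec3 p)   /* location */
    {
        Vec3 r;
        r.x = M[0][0]*p.x + M[0][1]*p.y + M[0][2]*p.z + M[0][3];
        r.y = M[1][0]*p.x + M[1][1]*p.y + M[1][2]*p.z + M[1][3];
        r.z = M[2][0]*p.x + M[2][1]*p.y + M[2][2]*p.z + M[2][3];
        return r;
    }

    Vec3 xform_vector(const float M[4][4], Vec3 v)  /* direction */
    {
        Vec3 r;
        r.x = M[0][0]*v.x + M[0][1]*v.y + M[0][2]*v.z;
        r.y = M[1][0]*v.x + M[1][1]*v.y + M[1][2]*v.z;
        r.z = M[2][0]*v.x + M[2][1]*v.y + M[2][2]*v.z;
        return r;
    }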

>Brygg

--linas