Re: TECH Survey comments

Mark Waks (justin@dsd.camb.inmet.com)
Mon, 1 Aug 94 15:23:42 EDT


>Caching things you've seen before is easy; you just add an entry to a
>dictionary that maps from URL to Object * whenever you grab a URL from
>the net, and check the dictionary before grabbing a URL off the net.
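The URL-keyed cache described above is straightforward; a minimal Python sketch (where `fetch_from_net` and `parse_object` are hypothetical stand-ins for the network fetch and scene-object parser):

```python
# Minimal sketch of the URL-keyed object cache described above.
# fetch_from_net() and parse_object() are stand-ins, not real APIs.

cache = {}  # maps URL -> parsed object

def get_object(url, fetch_from_net, parse_object):
    """Return the object for url, hitting the net only on a cache miss."""
    if url not in cache:
        cache[url] = parse_object(fetch_from_net(url))
    return cache[url]
```

The second request for the same URL never touches the net, which is exactly the behavior being debated below.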

Well, first of all keywords have one clear advantage here: they work
*much* better in the presence of highly-distributed objects. If an
object is replicated a hundred times across the Net (and I expect to
see this happening a lot), using current URLs to identify it works
rather poorly. If you identify it by some common URL, then you put
extra burden on that common site; if you use a local URL, the user
might go and fetch it from you even though they already have a copy
from another site.

With a keyword-based scheme (or something analogous -- there are
probably a bunch of possible variants), you are identifying *what*
the object is, rather than where it comes from. I don't believe
this is possible with current URLs...
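One way to make "identify *what* it is, not where it lives" concrete (just one sketch of the idea, not anything in the current URL spec): key the cache on a digest of the object's bytes, so a copy fetched from any mirror satisfies later requests for the same object.

```python
# Sketch: identify an object by its content, not its location.
# A replica fetched from any of a hundred sites gets the same id.
import hashlib

cache_by_id = {}  # maps content digest -> object data

def object_id(data):
    """The object's identity is a digest of what it is (its bytes)."""
    return hashlib.sha256(data).hexdigest()

def store(data):
    """Cache an object under its content id; returns the id."""
    oid = object_id(data)
    cache_by_id[oid] = data
    return oid

def lookup(oid):
    """Hit the cache regardless of which site the copy came from."""
    return cache_by_id.get(oid)
```

Under this scheme, fetching the same object from two different mirrors fills the cache only once, which is the bandwidth win argued for above.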

> Unless we assume that most VRML scenes look about the same, with about
>the same set of objects, the caching by keywords won't work much better
>than caching by URL.
>
>If we do assume that most VRML scenes look about the same, then count
>me out; I have a short attention span, and will get bored.

"Most" is much too strong a word; "some" is what I had in mind. I
mean, we're talking about a world where *everybody* is putting stuff
onto the Net. Some people are artists, and want to produce cool
customized rooms. Some aren't -- all they care about is the
underlying data, and they aren't going to put a lot of effort into
making their rooms unusual in style. I'm pretty confident about
this. I want to have a mechanism that optimizes bandwidth in these
cases where the designer doesn't care as much about the details...

>If there was some sort of database that indexed into objects stored on
>the local disk, that database could be loaded at startup time and the
>local filesystem could be used as the object cache. I'm going to claim
>there's no way that is going to work, either-- I'm not going to
>dedicate space on my disk to VRML objects, and I'm not going to
>dedicate the effort required to keep the keyword index database
>up-to-date.

Why not? Seems easy enough to me. It's not a heckuva lot harder than
the URLs-I've-seen list that Mosaic always keeps. Maybe not every
browser would bother, but some surely would...
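For what it's worth, the on-disk index needn't be fancy; a minimal sketch of the load-at-startup database (the file name and JSON format are made up for illustration):

```python
# Sketch of a keyword -> local-file index, loaded once at startup.
# The index file path and its JSON format are assumptions.
import json
import os

def load_index(path):
    """Load the keyword index at browser startup (empty if absent)."""
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)
    return {}

def save_index(index, path):
    """Rewrite the index whenever an object is added; cheap at this scale."""
    with open(path, "w") as f:
        json.dump(index, f)
```

Keeping it up-to-date is then just a `save_index` call whenever an object lands on disk, about the effort of Mosaic's history list.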

(Although I'm *much* more interested in the case where there's a
CD full of Standard Objects, which get used as appropriate...)

>I see all of this discussion as completely orthogonal to VRML. I
>believe it is already being hashed to death by the folks talking about
>URIs (uniform resource identifiers), where the idea is to distribute
>commonly used information to many places across the web and refer to it
>in such a way that the nearest server could be used to fetch it.

Possible -- I'm not closely plugged into the cutting-edge URL
discussions, and it is certainly possible that they are coming up with
a mechanism that is general enough for our purposes. Can anyone give a
brief summary of where the URL discussion currently stands in this
direction? If it's good enough, I'm willing to withdraw the
argument...

(Essentially, I don't think it's *necessarily* orthogonal to VRML --
it might be if the URL mechanism will be robust enough to subsume the
problem...)

-- Justin

Random Quote du Jour:

Re: The History of Usenet
"More seriously, the cabal worked for a while. Then it stopped working, and
we tried something else. At the time this seemed like the end of the world
as we knew it, in reality it was evolution in action (the dinosaurs, as they
saw the comet streaking down towards impact, probably reacted similarly.
"Shit. Now we'll never get talk.bizarre rmgrouped before those damn mammals
take over.")"
-- Chuq