Re: two ideas...

touch@ISI.EDU
Mon, 11 Dec 1995 14:13:58 -0800


Jeff Mogul said:

> there are clearly large areas of high-bandwidth
> connectivity in the Internet.
>
> > - rendering speed
>
> I think this is usually not the main bottleneck.

> It
> takes about 3 seconds to re-render the SGI home page (www.sgi.com),
> which is quite graphics-rich. It takes many times that long to
> download over a modem.

The modem is clearly the bottleneck, but that also means that buying
anything faster than ISDN is a loss. My point is that, although there
are (as you observe) areas of high BW in the Internet, nothing we do
to speed up the transfer will be visible if rendering stays this slow.
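
Back of the envelope, with an assumed 50 KB of page data (the page size
is just an illustration, not a measurement):

    # Rough transfer times for an assumed 50 KB page (assumption, not data).
    page_bits = 50 * 1024 * 8
    for name, bps in [("28.8k modem", 28800), ("ISDN 128k", 128000)]:
        print(name, page_bits / bps, "sec")
    # ~14 sec over the modem, ~3 sec over ISDN -- so at ISDN speed the
    # transfer is already down around the ~3 sec render time, and anything
    # faster mostly buys more waiting on the renderer.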

>
> > Not if I hide the additional ports between proxies I provide, which
> > is what I plan to do. It's invisible to the client and server, and
> > isn't an HTTP extension.
>
> In other words, you are mostly (only?) interested in optimizing
> a world where you can control most or all of the components.
> I'd like to be able to optimize a more chaotic world; in particular,
> one where I cannot assume anything about the intermediate systems
> (such as proxies).

I am indeed interested in optimizing only that which I can control.
Since the proxies are what I am optimizing (rough sketch below):
- the client proxy is optimized to maximize hit rate,
  including first-hit
- the server proxy is optimized to burn BW to reduce latency
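
Roughly, the server-proxy half of that looks something like the sketch
below (Python, with invented names -- the predictor, the origin fetcher,
and the proxy-proxy channel are all stand-ins, not existing code):

    def handle_request(url, channel, origin, predictor):
        # Serve the direct request first; this path's latency matters most.
        channel.send_direct(url, origin.fetch(url))
        # Then burn spare BW on anticipated requests, pushing them to the
        # client proxy unrequested, so that even a first hit can be a
        # cache hit there.
        for next_url in predictor.predict_related(url):
            channel.send_anticipative(next_url, origin.fetch(next_url))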

> Note that HTTP allows 100%-transparent extensions by the addition
> of new headers, which MUST be ignored by implementations that don't
> understand them. Therefore, adding a "Prefetch prediction" header
> to HTTP does not break compatibility, and because existing proxies
> must convey such headers, does not require any new support in the
> proxies.

Point well taken. But, as we know from IP, assuming 100%-compliant
implementations, especially where transparent extensions are concerned,
isn't always valid either.
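
For concreteness, I take it such a response would look something like
this (the header name and value format are made up for illustration;
per the point above, implementations that don't understand the header
ignore it, and proxies convey it unchanged):

    HTTP/1.0 200 OK
    Content-Type: text/html
    Prefetch-Prediction: /images/masthead.gif, /products/index.html

    <HTML> ...

A proxy that knows the header can start prefetching the listed URLs; one
that doesn't simply forwards it untouched.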

The only other advantage to using multiple ports, rather than HTTP
extensions, is that the proxies don't have to sift through anticipative
messages to find the direct messages. This demuxing can occur at the
port level in IP (as I have proposed), or at the HTTP parsing level,
as you are proposing here.
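
A rough sketch of the port-level version (the port numbers and handler
bodies are placeholders, not anything we have actually built):

    import socket
    import threading

    DIRECT_PORT = 8080        # ordinary requests (placeholder number)
    ANTICIPATIVE_PORT = 8081  # presend/prefetch traffic (placeholder number)

    def serve(port, handler):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        s.bind(('', port))
        s.listen(5)
        while True:
            conn, _ = s.accept()
            handler(conn)  # demuxing already happened, by port, below HTTP

    def handle_direct(conn):
        ...  # answer from the cache, or forward to the server proxy

    def handle_anticipative(conn):
        ...  # just file the pushed object in the cache

    threading.Thread(target=serve, args=(DIRECT_PORT, handle_direct)).start()
    threading.Thread(target=serve, args=(ANTICIPATIVE_PORT, handle_anticipative)).start()

With a single port, handle_direct would instead have to parse each message
and check for the extension header to tell the two kinds of traffic apart.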

> As several other people have noted (Neil Smith in the UK, and Steve
> Davies in South Africa), the situation outside the US is somewhat different
> from the one that Joe and I face. Several people have suggested
> geographical caching schemes, to avoid some of the latency of the
> really narrow international links.
>
> I think there is a good case to make that neither prefetching on demand
> nor presending is entirely appropriate over such links, because in
> these cases the available bandwidth is already overutilized. Instead,
> it seems to make more sense to put very large caches at either end
> of the slow link, in the hope that they will reduce the frequency
> at which requests will have to be sent over the link.

We are looking at some of these issues as well, in a project called
"Large-Scale Active Middleware" (LSAM), in a task called
"Intelligent BW" (IB).

The idea of IB is to optimize cache loading using multicast
as well as environment awareness (i.e., BW, latency, and, often
just as important, the access method [shared, point-to-point, etc.]).

The pointer to that is
http://www.isi.edu/lsam
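
Very roughly, the multicast side of IB amounts to something like this
(the group, port, and wire format are placeholders, not the actual LSAM
design):

    import socket
    import struct

    CACHE_GROUP = '239.255.0.1'   # placeholder multicast group
    CACHE_PORT = 4446             # placeholder port

    def load_cache_from_multicast(store, link_is_shared=True):
        # Crude environment awareness: multicast loading only pays off
        # when many caches share the underlying link.
        if not link_is_shared:
            return
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        s.bind(('', CACHE_PORT))
        mreq = struct.pack('4sl', socket.inet_aton(CACHE_GROUP), socket.INADDR_ANY)
        s.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
        while True:
            datagram, _ = s.recvfrom(65535)
            url, body = datagram.split(b'\n', 1)  # placeholder wire format
            store[url] = body                     # cache loaded unrequested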

Joe
----------------------------------------------------------------------
Joe Touch touch@isi.edu
ISI / Project Leader, ATOMIC-2 http://www.isi.edu/~touch
USC / Research Assistant Prof. http://www.isi.edu/atomic2