Re: two ideas...

Jeffrey Mogul (mogul@pa.dec.com)
Mon, 11 Dec 95 11:34:02 PST


> > My intuition is that, at the moment, the primary contributor to
> > delay for the average web user (on a 14.4 or 28.8 modem) is the
> > long transmission time over the "tail circuit" dialup link.

> Actually, there are two major contributors to delay for the
> "average" user -
>     - bandwidth to the server
>       this has less to do with the modem speed, and more to
>       do with shared access to an often limited and highly
>       contended resource; i.e., even over a 14.4k modem, we
>       often see 4-6 kbps transfer rates
That may be the result of slow-start and poor proxy implementations
more than anything else. It could be due to congestion in the
larger Internet, but there are clearly large regions of
high-bandwidth connectivity as well.
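
To put a rough number on the slow-start effect, here is a
back-of-the-envelope sketch (mine, with invented numbers: a 536-byte
MSS, a 500 ms round trip, and a fresh TCP connection per object, as
in typical HTTP/1.0 use):

    LINE_RATE = 14400      # bits/s: modem line rate
    RTT       = 0.5        # seconds: assumed dialup + backbone round trip
    MSS       = 536        # bytes: a common default segment size

    def fetch_time(size):
        # Seconds to fetch `size` bytes on a fresh TCP connection,
        # paying connection setup plus slow-start.
        t = RTT                     # SYN exchange before any data moves
        cwnd, sent = 1, 0
        while sent < size:
            burst = min(cwnd * MSS, size - sent)
            t += RTT + burst * 8.0 / LINE_RATE  # wait an RTT, drain burst
            cwnd *= 2               # slow-start doubles the window per RTT
            sent += burst
        return t

    for size in (2048, 4096):
        t = fetch_time(size)
        print("%5d bytes: %.1f s -> %.1f kbit/s effective"
              % (size, t, size * 8.0 / t / 1000))

Under those assumptions a 4 KB object comes out near 7 kbit/s and a
2 KB object near 5 kbit/s, even though the line itself runs at 14.4;
longer round trips push the numbers lower still.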

> - rendering speed
>   consider how much time it takes to display a page,
>   *after* it has been received, which is a function of
>   the client's processing power

I think this is usually not the main bottleneck. My office computer
is an Alpha workstation, so I can't claim to be representative there.
However, from home I use a 75 MHz 486-based notebook, which apparently
is considered wimpy by most people these days. On that machine, it
takes about 3 seconds to re-render the SGI home page (www.sgi.com),
which is quite graphics-rich. It takes many times that long to
download over a modem.
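
To make the comparison concrete: if the page plus its inline images
comes to, say, 100 KB (an assumed figure; I haven't measured it), a
14.4k modem needs 100 * 1024 * 8 / 14400, or about 57 seconds, even
at full line rate, and more like two to three minutes at the 4-6 kbps
rates mentioned above. That dwarfs the 3 seconds of rendering.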

> > Use of a different port and
> > requiring network-level priority settings definitely means
> > changes to the HTTP spec, and almost certainly would require
> > major changes to proxies, routers, etc.

> Not if I hide the additional ports between proxies I provide, which
> is what I plan to do. It's invisible to the client and server, and
> isn't an HTTP extension.

In other words, you are mostly (only?) interested in optimizing
a world where you can control most or all of the components.
I'd like to be able to optimize a more chaotic world; in particular,
one where I cannot assume anything about the intermediate systems
(such as proxies).

Note that HTTP allows 100%-transparent extensions by the addition
of new headers, which MUST be ignored by implementations that don't
understand them. Therefore, adding a "Prefetch prediction" header
to HTTP does not break compatibility, and because existing proxies
must convey such headers, does not require any new support in the
proxies.
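
For illustration only (the header name and syntax below are invented,
not anything that has been agreed on), a response carrying such a
prediction might look like:

    HTTP/1.0 200 OK
    Content-Type: text/html
    Prefetch-Prediction: /images/logo.gif, /next.html

A proxy that has never heard of Prefetch-Prediction simply relays
that line along with the rest of the headers, so the client and
server can use it end-to-end without any proxy being upgraded.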

Which of these is more complicated to implement, and whether they are
really two sides of the same coin, are questions I'd be interested in
discussing.

As several other people (Neil Smith in the UK, and Steve Davies
in South Africa) have pointed out, the situation outside the US is
somewhat different from the one that Joe and I face. Several people
have suggested geographical caching schemes, to avoid some of the
latency of the really narrow international links.

I think there is a good case to be made that neither prefetching on
demand nor presending is entirely appropriate over such links,
because in these cases the available bandwidth is already
overutilized. Instead, it seems to make more sense to put very large
caches at either end of the slow link, in the hope that they will
reduce how often requests have to cross the link.
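
A minimal sketch of that idea (mine, not any particular
implementation): a cache at each end of the narrow circuit answers
locally when it can, and crosses the link only on a miss.

    cache = {}   # URL -> body; a real cache would persist this and
                 # expire entries, but the shape is the same

    def fetch(url, fetch_over_slow_link):
        if url not in cache:      # miss: pay for one trip across the
                                  # contended international circuit
            cache[url] = fetch_over_slow_link(url)
        return cache[url]         # hit: served at local-network speed

Every hit removes an entire transfer from the contended circuit, so
even a modest hit rate frees a significant fraction of the link for
the requests that really do have to cross it.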

The Harvest project (which has a placeholder on Joe's summary page, but
no link yet; try http://harvest.cs.colorado.edu/) includes some work on
geographical caching. I think this is actually being done by
Hans-Werner Braun. There has also been some work by Gwertzman &
Seltzer at Harvard on "geographical push-caching" (see the paper in
HotOS-V or http://das-www.harvard.edu:80/cs/research/push.cache/).

-Jeff