Re: holding connections open: a modest proposal

HALLAM-BAKER Phillip (hallam@dxal18.cern.ch)
Wed, 14 Sep 1994 21:27:17 +0200


In article <865E@cernvm.cern.ch> you write:

|> I've been putting off getting into this thread because it has
|>fundamental implications and requirements that go beyond HTTP.
|>
|>On Sep 12, 9:14pm, Dave Kristol wrote:
|>>
|>> What's New:
|>> 1) A server keeps the connection to the client live after a transaction
|>> (e.g., GET).
|>> 2) A client sends a QUIT command (like FTP clients) when it wants to
|>> close a connection. Otherwise it keeps its connection to the server
|>> open at the end of a transaction.
|>
|> It's trivial to add a QUIT command to a server.
|>
|> It's non-trivial to indicate end-of-file without closing the
|>TCP connection. This is probably why FTP uses a second TCP connection
|>(but I'm not Postel; you'll have to ask him)

We have already had to solve this problem for POST. Originally the POST spec
read that to signal the end of the stream you closed the connection. This
was not a success :-)

We should try to stick with a single connection because if the user is
not on TCP/IP two connections may not be possible. Consider, for example,
a user on a dialup modem line with a local client talking continuous-mode
HTTP to a proxy that does all the IP. For such a scheme running IP is not
what is wanted at all; it's only overhead and administration. IP is in any
case a poor protocol for a portable machine that moves about regularly :-)

The requirement to send Content-Length is pretty simple to satisfy except
when you have an item being synthesized on the fly. Here I would prefer to
use a multipart and send a content length on each block; the overhead is not
too great:

Content-Type: multipart/mixed; boundary=end

Content-Type: text/html
Content-Length: 10

1234567890

Content-Length: 10

1234567890

Content-Length: 10

1234567890

--end--
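
To make the reading side concrete, here is a minimal sketch of such a
client loop (in Python; the code is mine, not part of the proposal). It
assumes the outer HTTP headers have already been read and the boundary
extracted, and it only tracks Content-Length; any other header lines are
skipped, since they default as described just below:

def read_line(sock):
    # Read one CRLF-terminated line from a connected socket, a byte
    # at a time (inefficient, but it keeps the sketch simple).
    line = b""
    while not line.endswith(b"\r\n"):
        ch = sock.recv(1)
        if not ch:
            raise EOFError("connection closed mid-stream")
        line += ch
    return line[:-2].decode("latin-1")

def read_blocks(sock, boundary="end"):
    # Yield the data of each length-prefixed block until the closing
    # boundary line is seen.  Content-Length must be restated on every
    # block; a real client would also remember the other headers, since
    # omitted ones default to the previous block's values.
    length = None
    while True:
        line = read_line(sock)
        if line == "--" + boundary + "--":
            return                      # end of the multipart stream
        if line.lower().startswith("content-length:"):
            length = int(line.split(":", 1)[1])
        elif line == "" and length is not None:
            # A blank line closes the block header: read exactly
            # 'length' bytes of data.
            data = b""
            while len(data) < length:
                chunk = sock.recv(length - len(data))
                if not chunk:
                    raise EOFError("connection closed mid-block")
                data += chunk
            yield data
            length = None               # next block must restate it

Note that the loop never scans the data for a boundary string; the lengths
alone delimit the blocks, which is the whole point of the scheme.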

The headers for each segment default to those of the previous one, so we
don't need to keep sending the content type. So the blocking penalty is

strlen ("content-length: ") + 6 + ceil(log10 (length))

Say 20 bytes/block? Sound expensive? Not as expensive as having to search
for an end-of-buffer pattern.
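
To put a number on that (my arithmetic, assuming the 6 covers the CRLFs
around the header line and the block): strlen ("content-length: ") is 16,
so even a 64K block costs only 16 + 6 + 5 = 27 bytes of framing, well
under 0.05% of the payload.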

Plus, if the thing is really big, i.e. too big to buffer at the other end,
a segmented reply would have a lot of advantages; we could do interleaving
and other cool things.

You can send a QUIT command under the blocking scheme and still keep a useful
connection. The server only needs to send the bytes that it has contracted for
under the last segment.
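
For example, a session might go something like this (the QUIT syntax,
timing and path are illustrative only):

C: GET /report HTTP/1.0
S: Content-Type: multipart/mixed; boundary=end
S: Content-Length: 65536
S: <65536 bytes of block 1>
S: Content-Length: 65536
C: QUIT
S: <65536 bytes of block 2, already contracted for>
S: --end--

The server finishes the block it has announced and sends the closing
boundary, so the connection can be shut down (or reused) cleanly rather
than being torn down mid-stream.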

IMHO block sizes of 64K would be appropriate on the Internet; on other
networks mileage might differ. There could be a header tag to suggest a
block size:

Permit-Chunking: 65536

If no chunking tag is given, no chunking is allowed. The size should only be
a suggestion though, and would not bind either party.
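
So a transaction might open like this (assuming it is the client that makes
the offer, which seems the natural direction; the path is made up):

C: GET /live-data HTTP/1.0
C: Permit-Chunking: 65536

S: HTTP/1.0 200 OK
S: Content-Type: multipart/mixed; boundary=end

with the server free to pick any block sizes it likes, or to ignore the
offer and send an ordinary single Content-Length reply.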

|> MIME is a Good Thing. Multipart/mixed is a Good Thing.
|>Given the lack of out-of-band EOF indicators aside from shutting down,
|>multipart/mixed with just >one< object may help a bit. The problem
|>then becomes keeping the multipart boundary unique or (better) out of band.
|>This implies Base64 encoding for too many things.

HTTP is 8-bit clean. We do not want people spreading social diseases
like Base64 encoding around here, thank you very much :-)

Now that the HTML proposal is getting pretty much nailed down and there is a
clear idea of at least the principles of HTML 2.0 and HTML+ 3.0, we should
start a discussion on what should go into HTTP 2.0. Should there be an HTTP+
that cleans out some of the methods that we never implemented (CHECKOUT)
and provides the conferencing and transaction support? It can't be called
HTTP+ though; that would break most of the servers.

We also need to derive a version of HTTP that uses binary tags (e.g. ASN.1)
for speed. We know how we can move the library in that direction, but it is
a lot of work and there are other more pressing concerns.

--
Phillip M. Hallam-Baker

Not Speaking for anyone else.