Re: HTTP-NG: status report

Phillip M. Hallam-Baker (hallam@dxal18.cern.ch)
Wed, 23 Nov 1994 05:07:55 +0900


In article <9601@cernvm.cern.ch> you write:

|>> From www-talk@www0.cern.ch Tue Nov 22 07:47:51 1994
|>>
|>> Looked great!
|>
|>It's a good time to lift up the rock and ask a few what, when, where,
|>how and why questions. (And I liked it, too).
|>
|>What are the key areas that need to be addressed in the next rev of the
|>HTTP protocol? (performance, security, extensibility, xxx)

Security is being handled separately. The question of how we get in sync
with the binary mode stuff is a good one. The security extensions are
compatible with the RFC-822 header stuff of HTTP/1.0 and should also be
applicable to the binary version. The question is whether to introduce the
security enhancements and the upgrade to binary separately, or do it all in one step.

|>Why does the protocol need to be enhanced? (or why have we left it the way it
|>is for such a long time?)

Mainly it's for interactive stuff that does not work with the single request/
response model of HTTP/1.0.

|>When could a new version be specified/deployed? (summer 95?)
|>
|>Who's ready willing and able to make it happen? (W3O)

W3O and W3C.

|>How do we make a smooth transition to the new capabilities?

Good question. The clients would initially make a request using HTTP/1.0; if
the reply says that HTTP/2.0 is going to be well received, then the next request
uses that. The start of the request can be adapted so that HTTP/1.0 servers
just barf immediately and safely, so that later on, when there are more HTTP/2.0
servers than 1.0 ones, the default can become the new protocol.
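
Purely as an illustration of that handshake, here is a rough client-side sketch
in Python. The "Upgrade: HTTP/2.0" advertisement header and the binary-mode
request function are placeholders of mine; how a server would actually signal
2.0 support is exactly the open question above.

import socket

known_20_servers = set()   # hosts we have already seen advertise HTTP/2.0

def fetch(host, path, port=80):
    if host in known_20_servers:
        # Hypothetical binary-mode (HTTP/2.0) request; framing not sketched here.
        return fetch_http20(host, path, port)

    # Otherwise fall back to a plain HTTP/1.0 request.
    s = socket.create_connection((host, port))
    s.sendall(("GET %s HTTP/1.0\r\nHost: %s\r\n\r\n" % (path, host)).encode("latin-1"))
    reply = b""
    while True:
        chunk = s.recv(4096)
        if not chunk:
            break
        reply += chunk
    s.close()

    # If the reply says HTTP/2.0 would be well received, use it next time.
    headers = reply.split(b"\r\n\r\n", 1)[0].decode("latin-1", "replace")
    if "upgrade: http/2.0" in headers.lower():
        known_20_servers.add(host)
    return reply

def fetch_http20(host, path, port):
    raise NotImplementedError("binary-mode request not specified yet")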

|>In other words, a document header is accessible with a minimal amount of
|>processing and may contain equally valuable processing instructions at
|>an application level. If this information is in the HTML HEAD, it could be
|>possible to achieve similar savings over other transport schemes (ftp, gopher,
|>wais, etc.)

This sort of thing really needs a server with a proper
`add new entry' command. If UNIX weren't 95% lossage then we could add a
method to the write-file operation that extracted the meta-information
and stored it somewhere useful. Doing this well needs a server backed
by a database or persistent object store of some sort.

Perhaps what we could do is mount a database volume onto a UNIX filestore
in such a way that it looked like a directory to UNIX, but when a file was
written the meta-information would be extracted and cached.
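
To make the idea concrete, a minimal sketch in Python of the "cache the
meta-information on write" half of it, assuming the persistent store is just a
dbm file next to the documents and that the interesting meta-information is the
TITLE and META elements in the HTML HEAD. The names and layout are invented for
illustration, not a proposal.

import dbm
import os
import re

HEAD_RE  = re.compile(r"<head[^>]*>(.*?)</head>", re.I | re.S)
TITLE_RE = re.compile(r"<title[^>]*>(.*?)</title>", re.I | re.S)
META_RE  = re.compile(r'<meta\s+name="([^"]+)"\s+content="([^"]+)"', re.I)

def write_document(docroot, relpath, body):
    """Write an HTML file and squirrel away its HEAD meta-information."""
    with open(os.path.join(docroot, relpath), "w") as f:
        f.write(body)

    head = HEAD_RE.search(body)
    if head is None:
        return

    # Cache the extracted meta-information in a simple persistent store.
    db = dbm.open(os.path.join(docroot, "meta.db"), "c")
    try:
        title = TITLE_RE.search(head.group(1))
        if title:
            db[relpath + ":title"] = title.group(1).strip()
        for name, content in META_RE.findall(head.group(1)):
            db[relpath + ":" + name.lower()] = content
    finally:
        db.close()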

--
Phillip M. Hallam-Baker

Not Speaking for anyone else.