Re: holding connections open: a modest proposal

Rick Troth (TROTH@UA1VM.UA.EDU)
Sun, 18 Sep 94 15:16:08 CDT


>Perhaps not. Perhaps it looks at the actual size of the file it will be
>sending, allocates a buffer that large, sucks it in, writes it out, and
>away you go. I use programs that work this way all the time.

There is another way, and some who code this other way have
become fiends about it. Whether the "file" is the output of a program
or not, the server could be coded as a pipeline, moving the data a
chunk at a time. Not everyone wants to suck the whole thing into memory.
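
For concreteness, here is a rough sketch (in C, with names of my
own choosing) of what I mean by a pipeline: a copy loop that never
holds more than one buffer's worth, whatever descriptors "in" and
"out" happen to be -- a disk file, a program's output, the client
socket.

    #include <unistd.h>

    /* Copy everything from "in" to "out" in fixed-size chunks.
     * Returns 0 on clean EOF, -1 on a read or write error.      */
    int copy_stream(int in, int out)
    {
        char    buf[8192];
        ssize_t n, w, off;

        while ((n = read(in, buf, sizeof buf)) > 0) {
            for (off = 0; off < n; off += w) {
                w = write(out, buf + off, n - off);
                if (w < 0)
                    return -1;   /* client went away, disk full, ... */
            }
        }
        return (n < 0) ? -1 : 0;
    }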

>> program that reads a 4 megabyte file into memory and writes
>> it back out with only two system calls is going to have
>> problems
>
>Why? I do it all the time. Why do you think OS's are using the VM manager
>to manage persistent-file I/O also? Why not map the file into memory and
>then squirt it back out a chunk at a time?

Not all OS's have file-to-memory mapping. Even when it's
available, it might not give you the win you're after. If it's
there, and you want to use it, and the situation allows it, great!
But when suggesting protocol changes, consider that not everyone is
going to use the same methods.
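
If you do have it and want it, the shape is roughly this -- again
just a sketch, on a system that has <sys/mman.h> at all, using the
copy_stream() routine sketched above as the fallback when mmap()
isn't there or refuses the file:

    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    /* Send an open file "fd" down the client socket "sock".
     * Map it if we can; otherwise fall back to the chunked copy. */
    int send_file(int fd, int sock)
    {
        struct stat st;
        char   *p, *cur, *end;
        ssize_t w;

        if (fstat(fd, &st) < 0)
            return -1;

        p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (p == MAP_FAILED)
            return copy_stream(fd, sock);   /* the old-fashioned way */

        /* squirt the mapped file out a chunk at a time */
        for (cur = p, end = p + st.st_size; cur < end; cur += w) {
            w = write(sock, cur, (end - cur > 8192) ? 8192 : end - cur);
            if (w < 0) {
                munmap(p, st.st_size);
                return -1;
            }
        }
        munmap(p, st.st_size);
        return 0;
    }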

> -- Darren

--
Rick Troth, <rmtroth@aol.com>, <troth@ua1vm.ua.edu>, Houston, Texas, USA