North American Network Operators Group


Re: MTU of the Internet?

  • From: Phil Howard
  • Date: Mon Feb 09 10:39:52 1998

Patrick McManus writes:

> At the risk of introducing meaningful background literature:
>      ftp://ds.internic.net/rfc/rfc2068.txt
> 
> I direct folks to 14.36.1 "Byte Ranges" which when interleaved with
> pipelined requests comes very close to achieving client-driven
> multiplexing that I'd suggest from a UI pov will behave much better
> than the multiple connections method (eliminating the cost of tcp
> congestion control but at the cost of some application protocol
> overhead). 
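The mechanism Patrick is pointing at can be sketched concretely. This is a hypothetical illustration only (paths, host, and ranges are invented): two objects fetched in interleaved byte ranges over one persistent connection, with the requests written back-to-back in the pipelined style RFC 2068 allows.

```python
# Hypothetical sketch of client-driven multiplexing via pipelined
# byte-range requests (RFC 2068 section 14.36.1). Paths and ranges
# here are illustrative, not from any real client.

def range_request(path, start, end, host="example.com"):
    """Build one HTTP/1.1 GET asking for a single byte range."""
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        f"Range: bytes={start}-{end}\r\n"
        f"\r\n"
    )

# Interleave ranges of two objects on a single connection, so neither
# object's transfer has to wait for the other to finish:
pipeline = (
    range_request("/a.gif", 0, 2047)
    + range_request("/b.gif", 0, 2047)
    + range_request("/a.gif", 2048, 4095)
)
```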

More than application overhead, I suspect the biggest problem with this
otherwise good idea is that it won't be implemented correctly by the
browsers or the servers.

For example, on the server end, it would see multiple requests for the
same object at different byte ranges.  If that object is being created
on the fly by a program process (e.g. CGI), the browser won't have a
clue of the size.

What is the correct behaviour of the server if the request is made for
bytes 0-2047 of an object which invokes a CGI program to create that
object?  Obviously it can send the first 2048 bytes, but then what?
Should it leave the process pipe blocked until the next request comes
in?  One httpd listener might well have to have dozens of these stalled
processes.  Should they all remain there until the persistent connection
is broken?
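The dilemma can be made concrete with a small sketch. Everything here is invented for illustration (there is no real httpd that works this way): the CGI output exists only as a pipe, so after serving bytes 0-2047 the server must either hold the still-blocked process in a table in hope of a follow-up range request, or kill it and lose the rest.

```python
import io

# Hypothetical sketch of the server-side dilemma described above.
# A BytesIO stands in for a CGI process's stdout pipe; the `stalled`
# table stands in for the dozens of blocked processes the listener
# would have to keep around.

stalled = {}  # key -> still-open CGI pipe, held pending the next range

def serve_range(key, pipe, length):
    """Serve the next `length` bytes of a CGI pipe, stalling the rest."""
    chunk = pipe.read(length)
    if len(chunk) == length:
        stalled[key] = pipe   # process stays blocked until... when?
    else:
        pipe.close()          # output exhausted; safe to reap the process
        stalled.pop(key, None)
    return chunk

cgi_pipe = io.BytesIO(b"x" * 5000)   # stand-in for CGI-generated output
first = serve_range(("conn1", "/cgi-bin/gen"), cgi_pipe, 2048)
```

Note the open question baked into the first branch: nothing in the protocol tells the server when an entry in `stalled` may be evicted.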

Of course with multiple connections, you have all these processes, anyway.
But at least you know when the process should go away (when the connection
is dropped).

If the persistent connection gets dropped before all the objects get loaded,
then loading _must_ start from the beginning, since objects may now become
inconsistent (a different GIF image can be created by a new instance of the
program that generates it).

Of course, all of this can be done.  But can you trust the developers of
every browser and every server to get it right?  What I am saying is that
if this is to be pursued, it needs to be pursued with a lot of details
addressed that even the RFC doesn't seem to touch on.

Consider CGI.  Should the server start a new instance of CGI for each range
request, passing that request via the CGI environment?  Or should the server
keep each CGI persistent as long as each range request is sequential to the
previous one?  What if there are two different requests for the same path,
which in the ordinary case can indeed generate distinctly different objects
(not cacheable)?  How would the server know which of them to continue when
the next range request comes in (previously the distinction was managed by
the connection)?

While I can see that persistent connections with range requests can solve
many things, I believe the implementations will botch it up in most cases
to the extreme that it won't get used.  A subchannelized method of doing
request/response transactions over a single persistent connection would
handle more (if not all) of these cases better (IMHO).
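For what it's worth, the subchannelized alternative can be sketched in a few lines. The frame layout here is invented purely for illustration (a real protocol would need flow control, error frames, and more): each chunk of a response is tagged with a channel id and a length, so many request/response transactions can share one TCP connection without the server ever guessing which stalled process a byte range belongs to.

```python
import struct

# Hypothetical sketch of subchannelized transactions over one
# connection: 2-byte channel id, 4-byte length, then the payload.
# The channel id, not the byte offset, identifies the transaction.

def frame(channel, payload):
    """Wrap one payload chunk in a (channel, length) header."""
    return struct.pack("!HI", channel, len(payload)) + payload

def deframe(stream):
    """Split a byte stream back into (channel, payload) pairs."""
    frames, off = [], 0
    while off < len(stream):
        channel, length = struct.unpack_from("!HI", stream, off)
        off += 6
        frames.append((channel, stream[off:off + length]))
        off += length
    return frames

# Two responses interleaved on the wire, demultiplexed by channel id:
wire = frame(1, b"<html>") + frame(2, b"GIF89a") + frame(1, b"</html>")
```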

-- 
Phil Howard | [email protected] [email protected] [email protected]
  phil      | [email protected] [email protected] [email protected]
    at      | [email protected] [email protected] [email protected]
  milepost  | [email protected] [email protected] [email protected]
    dot     | [email protected] [email protected] [email protected]
  com       | [email protected] [email protected] [email protected]