North American Network Operators Group


Re: topological closeness....

From: Vadim Antonov
Date: Mon May 13 21:32:35 1996

Well, I simply pointed out that a solution based on
*hosts* choosing the best paths in global networks
is not going to work.  It is in the same category as
source-based routing.

The network is simply too large for end-hosts to be
able to make any useful decisions.  I thought that was
understood a long time ago, which is why resolvers nowadays
do not even attempt to guess which address is best in the
case of multihomed hosts (beyond the obvious directly-connected
network trick).

Pinging all addresses may be worse than just talking to
a random server.  To make any meaningful measurement
you need to send many dozens of probe packets.  Incidentally,
that is about as many packets as the average WWW exchange takes.
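
To put rough numbers on that, here is a quick Python sketch
of the packet arithmetic; every figure in it is an assumption
for illustration, not a measurement.

    # Back-of-the-envelope comparison: packets spent probing vs. packets
    # spent on the transfer itself.  All figures are illustrative
    # assumptions, not measurements.

    ADDRESSES       = 3    # a multihomed server with three addresses
    PROBES_PER_ADDR = 20   # "many dozens" of pings for a stable RTT estimate
    probe_packets   = ADDRESSES * PROBES_PER_ADDR * 2   # echo + reply

    # A typical mid-90s WWW exchange: a few KB over TCP, so a few dozen
    # packets counting the handshake and ACKs (again, an assumed figure).
    www_exchange_packets = 60

    print("probe packets:   ", probe_packets)          # 120
    print("transfer packets:", www_exchange_packets)   # 60
    # The probing costs more packets than the exchange it was
    # meant to optimize.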

So the real solution is not to bother, and to concentrate
on demand-side caching.

--vadim

PS	I would also be sceptical about attempts to
	"try" such things as a perpetuum mobile or
	palm reading just on the chance that they may
	work.

PPS	The funny thing is that there is hope that a
	good heuristic can be found -- since the network
	is rather hierarchical, a good choice of topologically
	significant addressing is likely to produce a situation
	in which the bit distance between addresses is a good
	approximation of some "goodput" metric!  (A minimal
	sketch of the idea follows below.)

	This is very much the same effect as that of
	well-placed aggregation, which produces routes close
	to optimal with a lot less information.
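
To make the PPS concrete, here is a minimal Python sketch of
the heuristic -- purely my illustration, with made-up addresses;
nothing like this was deployed.  It treats the length of the
shared address prefix as a proxy for topological closeness.

    import socket, struct

    def bit_distance(addr_a, addr_b):
        # 32 minus the common-prefix length of two IPv4 addresses.
        # Under topologically significant (well-aggregated) addressing,
        # a smaller value suggests the hosts are "closer" in the hierarchy.
        a = struct.unpack("!I", socket.inet_aton(addr_a))[0]
        b = struct.unpack("!I", socket.inet_aton(addr_b))[0]
        return (a ^ b).bit_length()   # position of the highest differing bit

    # Pick whichever candidate shares the longest prefix with us.
    ours = "10.1.5.5"
    candidates = ["10.1.2.3", "10.9.8.7", "192.168.1.1"]
    print(min(candidates, key=lambda c: bit_distance(ours, c)))
    # -> 10.1.2.3, which shares the longest prefix with 10.1.5.5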



Date: Mon, 13 May 1996 20:30:38 -0400
From: "Mike O'Dell" <[email protected]>

Vadim,

Yes, caching is a good idea (technically).

And who said "AS path length" was a wonderful global metric???
Just because the "usual suspects" BGP implementation generally does
route selection based on that attribute doesn't mean that's the
ONLY thing one can do.

Depending on how much additional information
one wishes to supply about ASes, their general level of connectivity,
and their geographic location, one can conceivably produce hybrid
metrics, probably heuristic, which reflect some sense of "performance".
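
Purely as an illustration of what such a blend might look like
(the attributes and weights below are invented; nothing like this
is specified anywhere), a Python sketch:

    # A hypothetical hybrid metric per AS.  Lower is "better".
    def hybrid_metric(as_info):
        return (2.0 * as_info["as_path_len"]           # fewer AS hops preferred
                - 0.5 * as_info["peer_count"] / 10.0   # well-connected ASes preferred
                + 1.0 * as_info["geo_dist_mm"])        # rough distance, in megameters

    candidates = [
        {"name": "AS-A", "as_path_len": 3, "peer_count": 40, "geo_dist_mm": 1.2},
        {"name": "AS-B", "as_path_len": 2, "peer_count": 5,  "geo_dist_mm": 8.0},
    ]
    print(min(candidates, key=hybrid_metric)["name"])  # AS-A, under these weights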

For example, the server could do round-trip measurements to "sufficiently
interesting" ASes so that it could base its behavior on observations.
The goal wouldn't be to do fine-grained decision-making, but if the
"Bruce Springsteen Ticket Server" caused some reasonably long-lived
congestion, it might be worthwhile to redirect some responses.
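
A toy Python sketch of that redirection logic -- the replica names,
RTT figures, and congestion threshold are all assumptions made for
illustration:

    # Pretend RTT observations from the server to the client's AS,
    # per replica.  All values are made up.
    measured_rtt_ms = {
        "ticket1.example.net": 450,   # long-lived congestion on the usual path
        "ticket2.example.net": 80,
    }

    CONGESTED_MS = 300   # "reasonably long-lived congestion", by fiat

    def pick_replica(default, alternates):
        # Hand back the default unless the observations say it is congested.
        if measured_rtt_ms.get(default, 0) < CONGESTED_MS:
            return default
        healthy = [a for a in alternates
                   if measured_rtt_ms.get(a, CONGESTED_MS) < CONGESTED_MS]
        return min(healthy, key=measured_rtt_ms.get) if healthy else default

    print(pick_replica("ticket1.example.net", ["ticket2.example.net"]))
    # -> ticket2.example.net: coarse-grained, but it sheds the hot spot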

The assumption is that this special DNS server might concentrate
on fielding responses for a special set of servers, like special Web
servers (which could be caches), and not on general connectivity.

So while few things are perfectly universal solutions, the prospect
of implementing the heuristics we all use today to get a sense of how
things are going, and of what to try when the first guess is hosed,
seems like a worthwhile attempt.

Some will work better than others.

That is neither news nor a reason not to try it.

	-mo
