On Thu, 20 Aug 1998, Carlos Horowicz wrote:
> At the client level, the so-called "Autoproxy Configuration" provided by a
> self-written FindProxyForURL JS function should provide transparent failover.
> Even at the server level, I think that having more than one server for each
> squid functionality (at the intranet or at the gateway level) ensures higher
> availability.
> Conclusion: am I missing the point?
Not at all. Quite sound.
NOW, what we need is an auto-proxy configuration facility for the relatives
in a cache hierarchy, one that subordinate (client) Squids can load and use
(kinda like the JavaScript stuff for browsers) ... in preference to the
hard-configured peer entries in squid.conf.
Is this viable or in the pipeline?
If you read Peter Danzig's paper on the NetCache architecture and
deployment:
http://www.netapp.com/technology/level3/3029.html
... you get a scheme where FindProxyForURL hashes URLs to particular
proxies, so if every client uses the same hash, up goes the hit rate (with
redundant steering to alternate proxies for failover, to boot).
Now, if we had a scheme for hashing between Squids in the hierarchy, this
_could_ have the same effect (and perhaps address the braindead fashion in
which we currently spew out ICP transactions across WANs). Come to think
of it, it should probably be the same scheme as the one the browsers are
using, shouldn't it?
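On the Squid side, a hedged sketch of what that could look like in
squid.conf, assuming a CARP-style hash option on the parent entries
(roughly what the "carp" cache_peer option does in later Squid releases;
the peer names are invented):

    # Illustrative only -- peer names invented.  ICP is switched off
    # (icp port 0, no-query) and each child hashes the URL over the
    # same parent list, so every child agrees on which parent should
    # hold a given object without any ICP chatter across the WAN.
    cache_peer parent1.example.net parent 3128 0 no-query carp
    cache_peer parent2.example.net parent 3128 0 no-query carp
    cache_peer parent3.example.net parent 3128 0 no-query carp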
Please note: this is NOT a new idea. It's one that's currently being
exploited by Cisco, NetApp (and probably others) in their clustering
solutions.
My 5 cents' worth (since that's the smallest unit of currency available
here in Oz 8-).
J.
----------------------------------------------------
John V Cougar | Voice: 1800 065 744
Cache Manager |-----------------------------
Telstra Internet | E-Mail: cougar@telstra.net
----------------------------------------------------