Mark Pace Balzan wrote:
> At what point do I need to consider clustering ? (Note the above is at one
> single physical location.)
Probably from the start. You already know that you need to grow quite a bit
in request rate, and clustering is also good for reliability.
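As a minimal sketch of what sibling clustering can look like in squid.conf
(the hostnames cache1/cache2 and the ports here are just illustrative
assumptions, not your setup):

    # on cache1, treat cache2 as a sibling (HTTP on 3128, ICP on 3130)
    cache_peer cache2.example.com sibling 3128 3130 proxy-only
    # on cache2, the mirror image
    cache_peer cache1.example.com sibling 3128 3130 proxy-only

Each box then asks its sibling before going to the origin, and either box
can carry the load alone if the other one is down.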
> Practically speaking what is the max load seen in the field/production ?
> Does squid break under such heavy load, if running on appropriate
> hardware/memory ?
Squid is quite a big piece of software, and occasionally it crashes (and
restarts shortly thereafter). Also, many people find that the hardware
quite often locks up or crashes when running Squid under high load, and
Squid has a tendency to stress the VM system of the OS quite badly. Squid
is a huge process doing lots of random I/O in parallel, which causes quite
large disk service times and stress on filesystem buffers, and it uses many
networking kernel resources; combined, this stresses kernel memory
consumption more than usual.
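One common way to spread that random I/O is to split the cache over several
disks with multiple cache_dir lines. A sketch, assuming three hypothetical
spindles mounted as /cache1../cache3 and an arbitrary 8 GB per disk:

    cache_dir ufs /cache1 8000 16 256
    cache_dir ufs /cache2 8000 16 256
    cache_dir ufs /cache3 8000 16 256

The exact sizes and directory counts depend on your hardware; the point is
simply that independent spindles see fewer seeks each.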
> Is anyone aware of any hardware, kernel o/s or squid issues with
> memory addressing over 1GB physical RAM
Plenty in various OSes, but you should be fine if you run a reasonably
modern OS version.
Squid itself should not care, as long as you have less RAM than a signed
pointer on your processor can address.
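To put a rough number on that (a back-of-envelope figure from the
signed-pointer remark above, not a tested Squid limit): on a 32-bit
processor the ceiling is

    2^31 bytes = 2147483648 bytes = 2 GiB

of address space for the Squid process, regardless of how much physical
RAM is in the box.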
--
Henrik Nordstrom
Squid Hacker