Currently I have a testing server running Squid as a reverse proxy in
front of several web servers. The sites contain a lot of user-uploaded
images stored on NFS, which are now being accelerated by Squid.
The current hardware of the Squid server is as below:
Intel E5310 Quad Core 1.6GHz CPU x 2
6GB RAM (4GB assigned to Squid)
400GB SAS 15K hard disk (RAID 5, 300GB assigned to Squid)
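For reference, the memory and disk allocation above would correspond to squid.conf directives roughly like the following sketch. The cache path and L1/L2 directory counts are assumptions, and note that cache_mem covers only the in-memory object cache, not total process memory, so "4GB assigned to Squid" may map to a smaller cache_mem value in practice:

```
# Illustrative squid.conf fragment matching the allocation above;
# path and L1/L2 directory counts are assumed, not from the post.

# in-memory object cache (part of the 4GB assigned to Squid)
cache_mem 3072 MB

# aufs disk store: 300GB = 307200 MB, 16 first-level and
# 256 second-level subdirectories
cache_dir aufs /var/spool/squid 307200 16 256
```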
It has been running Squid 3.0 STABLE7 for around 2 weeks, with the following stats:
Avg HTTP reqs per minute since start: 3926.2
Cache Hit: 5min : 82.3%, 60min: 79.4%
Memory hits as % of hit : 5min: 40.9%, 60min: 47.5%
Mean Object Size : 49.02 KB
CPU Usage : 5min: 4.91%, 60min: 4.17%
From the MRTG report, the bandwidth throughput is:
Avg In (4M), Max In (6M)
Avg Out (10M), Max Out (19M)
Now the memory is 100% used while the disk is only around 10% used; in
the long term I would expect all of the disk cache to be used as well,
given our data size.
As you can see from the CPU usage, the server seems too powerful for
our needs; I guess even if I saturated 100M of bandwidth, the server
would handle it very easily.
We are planning to replace this testing server with two or three
cheaper 1U servers (for some redundancy!):
Intel Dual Core or Quad Core CPU x1 (no SMP)
4GB DDR2 800 RAM
500GB or 750GB SATA (Raid 0)
Any comments?
Received on Thu Jul 03 2008 - 04:04:29 MDT
This archive was generated by hypermail 2.2.0 : Mon Jul 07 2008 - 12:00:03 MDT