Re: More on Bake-Off results ...

From: Duane Wessels <wessels@dont-contact.us>
Date: Fri, 15 Oct 1999 11:36:28 -0600

On Fri, 15 Oct 1999, Marc G. Fournier wrote:

>
> Just going through the Appendices, and am really curious about something.
>
> Squid ran on a PII 333MHz machine and did 96 req/sec ($3,960)
>
> InfoLibria-L was a cluster of *4* DynaCache IL-100-7 boxes and did
> 1600 req/sec ($202,880)
>
> Now, if I had $202,880 to spend and set up a cluster of 51 Squid
> servers, I could conceivably see 4896 req/sec (51 * 96 req/sec)...no?
> So, for the same money, I'd be doing 3x better? :)
>
> Or...if I set up ~17 Squid servers as above, I could get equivalent
> performance for ~1/3 the cost?
>
> The question, in a roundabout way, is more or less: would 2 x PII
> 333MHz Squid servers give me 2 x 96 req/sec, etc.?

Marc,

The first bake-off (polymix-1 workload) runs for only one hour.
The Squid cache we tested could sustain 96 req/sec for that hour,
but it would not sustain that rate for much longer. To see how
quickly the throughput drops off, look at the first graph at
http://www.squid-cache.org/Benchmarking/std1/2.2.stable3-sixdisks/

In other words, the polymix-1 result should not be used for
capacity planning (unfortunately). However, it can be used
to compare two products, as you have done above.

Also, note that a benchmark tries to find the "peak" throughput.
You would never want to run a production cache at that peak level
for very long. I think average production throughput should be
about half of the peak throughput.
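
To put that rule of thumb into numbers, here is a rough
capacity-planning sketch in Python. The 96 req/sec peak is the
polymix-1 result quoted above, and the 50% figure is just my
estimate, so treat the output as back-of-the-envelope only:

    import math

    PEAK_REQ_PER_SEC = 96        # polymix-1 peak for the tested Squid box
    SUSTAINABLE_FRACTION = 0.5   # run production at roughly half of peak

    def servers_needed(target_req_per_sec):
        """Boxes to provision so each runs at ~half its benchmarked peak."""
        sustainable = PEAK_REQ_PER_SEC * SUSTAINABLE_FRACTION
        return math.ceil(target_req_per_sec / sustainable)

    # Carrying a steady 1600 req/sec would want ~34 boxes, not 17.
    print(servers_needed(1600))   # -> 34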

In general, and up to some point, your aggregate throughput should
scale linearly with the number of servers you have. Don't forget
to include the cost of an L4 switch, or whatever scheme you will
use to distribute the load.
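
As a sketch of the cost side, assuming perfectly linear scaling
and a made-up $15,000 for the L4 switch (substitute your vendor's
real quote):

    import math

    PER_SERVER_COST = 3960    # PII 333MHz Squid box, from the appendix
    PER_SERVER_PEAK = 96      # req/sec at peak, polymix-1
    SWITCH_COST = 15000       # assumed figure, not a real price

    def cluster_cost(target_req_per_sec):
        """Hardware cost to match a given peak throughput."""
        servers = math.ceil(target_req_per_sec / PER_SERVER_PEAK)
        return servers * PER_SERVER_COST + SWITCH_COST

    # Matching the 1600 req/sec InfoLibria figure at peak:
    # 17 * $3,960 + $15,000 = $82,320, still well under $202,880.
    print(cluster_cost(1600))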

Duane W.
Received on Fri Oct 15 1999 - 11:46:24 MDT
