Andres Kroonmaa wrote:
> Yes, but that's like what our 200kb/sec case showed. We rely on other traffic.
> If it isn't there, we lag. Suppose a spike of, say, 100 requests, after which clients
> just sit there and wait for data, and new poll events don't relate to aufs.
> If we have spun off 1-2 threads, they'll wait on each other. Some slow
> open might block already opened fast file I/O. A new request from one client slows
> down a fast client's stream.
Yes, it would in such a case rely on network I/O to kick us out of the
"single thread" condition.
Yes, if we have a huge surge of requests and then all traffic suddenly
stops, so that Squid is only processing already received cache hits,
none of the threads is currently executing, and every request is
processing the last block of its data (no new I/O requests to be
queued), then we might end up with only one thread running to serve the
queue of I/O requests until something breaks this condition.
Yes, in such a case the performance will be limited to approximately
the I/O speed of one drive spindle plus the memory cache.
However, in almost all cases at least one of the following holds:
a) There are additional requests, which will trigger another I/O event,
which will in turn signal another thread.
b) Replies are large, eventually causing blocking on the network, which
will also introduce variance in the thread signalling, allowing new
threads to be started if needed.
c) There is some randomness in when/how requests are received, making
the above conditions extremely unlikely to happen.
So no, it is nowhere near the 200kb/sec case.
Note: for most purposes not involving a high degree of concurrency,
performance will be mostly equal for 1 and N I/O threads.
If we have a high degree of concurrency, then we want new threads to
get scheduled when the ones currently processing requests are all
blocked on I/O.
In all imaginable conditions where what you describe might happen, once
the I/O thread signals an I/O completion there will be at least one
pass of the comm loop and a new I/O event to read the next block of
data, triggering another signal and kicking another thread alive. So in
the worst imaginable case we are talking about a delay of one I/O
operation before additional threads get signalled in the current
scheme.
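To make this concrete, here is a minimal sketch of the enqueue/signal
side of such a scheme (a simplification for illustration only; the
names io_request and queue_request are hypothetical, not the actual
aufs code):

#include <pthread.h>

struct io_request {
    struct io_request *next;
    /* ... operation, fd, buffer, callback ... */
};

static pthread_mutex_t queue_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t queue_cond = PTHREAD_COND_INITIALIZER;
static struct io_request *queue_head = NULL;

/* Called from the main (comm loop) thread for each new I/O request.
 * Every enqueue signals once, so each new request is one more chance
 * for an additional idle worker thread to be kicked alive. */
void
queue_request(struct io_request *r)
{
    pthread_mutex_lock(&queue_mutex);
    r->next = queue_head;
    queue_head = r;
    pthread_mutex_unlock(&queue_mutex);
    pthread_cond_signal(&queue_cond);
}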
Only in the case where all queued I/O requests are for the last block
of data of each file, and no new requests are being received, will it
get stuck at a lower than intended number of threads (at least 1).
And all of this is only of relevance if it can be shown that
cond_signal may lose signals. Nothing in the variants of the
cond_signal documentation presented in this discussion indicates that
this may happen (only the reverse, more than one thread being
signalled). But then, perhaps it is simply not specified what will
happen in the case of multiple cond_signal calls before the signal is
received..
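For what it is worth, the usual worker-side pattern makes even
coalesced signals a matter of lost parallelism rather than lost
requests, because a thread only blocks after verifying, under the
mutex, that the queue is empty. A sketch of that loop, matching the
hypothetical queue above:

/* Worker loop: the thread re-checks the queue before every wait, so a
 * signal sent while no thread was waiting does not lose the request;
 * at worst fewer threads are awake than there are queued requests,
 * until another signal arrives. */
void *
io_worker(void *arg)
{
    (void)arg;
    for (;;) {
        struct io_request *r;

        pthread_mutex_lock(&queue_mutex);
        while (queue_head == NULL)
            pthread_cond_wait(&queue_cond, &queue_mutex);
        r = queue_head;
        queue_head = r->next;
        pthread_mutex_unlock(&queue_mutex);

        execute_request(r); /* hypothetical helper doing the actual I/O */
    }
    return NULL;
}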
On thread implementations such as linuxthreads, where cond_signal
signals at least one thread per call, the above is completely a
non-issue as it cannot happen.
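And if one wanted to sidestep the question entirely, a POSIX counting
semaphore never coalesces wakeups: every sem_post is remembered as a
count, so N posts guarantee that N sem_wait calls return. A sketch of
that variant (again purely illustrative, not what Squid does):

#include <semaphore.h>

static sem_t queue_sem; /* sem_init(&queue_sem, 0, 0) at startup */

/* Producer: post once per queued request. */
void
queue_request_sem(struct io_request *r)
{
    pthread_mutex_lock(&queue_mutex);
    r->next = queue_head;
    queue_head = r;
    pthread_mutex_unlock(&queue_mutex);
    sem_post(&queue_sem);
}

/* Worker: the semaphore count guarantees one return from sem_wait per
 * queued request, so the queue is never empty at this point. */
void *
io_worker_sem(void *arg)
{
    (void)arg;
    for (;;) {
        struct io_request *r;

        sem_wait(&queue_sem);
        pthread_mutex_lock(&queue_mutex);
        r = queue_head;
        queue_head = r->next;
        pthread_mutex_unlock(&queue_mutex);
        execute_request(r);
    }
    return NULL;
}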
Regards
Henrik