Andres Kroonmaa wrote:
>
> On 2 Jan 2003, at 19:45, Henrik Nordstrom <hno@squid-cache.org> wrote:
>
> > The likelihood for the I/O thread to get scheduled "immediately" is
> > rather high on a UP machine. Same thing on a SMP machine, but on a SMP
>
> hmm. likely - why?
Why I do not know, but that is how it behaves on UP systems, and I
assume that on SMP systems the second CPU is put to use: the scheduler
has no reason to delay activation of threads which have been running on
the second CPU (or not at all) if that CPU is available.
> er, my fault. I expressed myself badly. My line of thinking was:
> m_lock(mutex) - mutex locked
> signal(mutex,var) - mutex locked, marked to-be-relocked, var set true
> m_unlock(mutex) - mutex unlocked, but marked to-be-relocked
> ...
> m_lock(mutex)
>   if (mutex & to-be-relocked) then force_call scheduler
>
> I'm very likely wrong here. But I can't get rid of the belief that a
> thread switch after unlock is inefficient. And if it doesn't happen,
> and the main thread reaches m_lock for the second time, there is a bad
> case: threads blocked on the cond mutex haven't had a chance to run and
> a second signal is about to be raised.
Exactly, and my testing indicates you are right. A mutex unlock does not
cause a thread switch, not even an unlock+lock sequence while there are
other threads waiting for the lock.
As you said, if you want to force a switch you must use two
ping-pong mutexes, forcing the "main" thread to block.
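Something along these lines demonstrates the behaviour (a minimal sketch
with illustrative names, not the actual test referred to above): a worker
competes for the same mutex that the main thread unlocks and immediately
re-locks in a loop. If unlock forced a thread switch the worker would win
the mutex once per iteration; in practice it wins far less often.

    /*
     * Sketch only. Compile with -lpthread.
     */
    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
    static int worker_acquired = 0;   /* times the worker won the mutex */
    static int done = 0;

    static void *worker(void *arg)
    {
        (void)arg;
        for (;;) {
            pthread_mutex_lock(&m);
            if (done) {
                pthread_mutex_unlock(&m);
                return NULL;
            }
            worker_acquired++;
            pthread_mutex_unlock(&m);
        }
    }

    int main(void)
    {
        pthread_t tid;
        int i, iterations = 100000;

        pthread_create(&tid, NULL, worker, NULL);
        for (i = 0; i < iterations; i++) {
            pthread_mutex_lock(&m);
            /* "main" thread critical section */
            pthread_mutex_unlock(&m);  /* no thread switch forced here */
        }
        pthread_mutex_lock(&m);
        done = 1;
        pthread_mutex_unlock(&m);
        pthread_join(tid, NULL);
        printf("main locked %d times, worker got the mutex %d times\n",
               iterations, worker_acquired);
        return 0;
    }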
> If it's allowed to happen, then the previous
> signal is lost, but the man pages do not document signal loss, therefore
> such a case must be detected and resolved. One way is to force a
> thread switch on every single m_unlock, which is a CPU-buster, or to
> somehow mark the mutex into an intermediate state, denying new locks
> until the scheduler has been run.
It is resolved, at least in glibc. The signal is not just a boolean. For
each thread it is a boolean, but each signal signals exactly one thread,
which is then put on a queue to be run by the scheduler. The next signal
goes to the next thread in the cond queue.
I assume there is some optimization involved in detecting mutex locks etc,
avoiding rescheduling of the signalled threads while the mutex is
locked, possibly by not finishing the last steps of the delivery of
cond_signal until mutex_unlock, but cond_signal is not mutex dependent
so I am a bit confused on this point.
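Either way, the standard usage pattern makes the internals a non-issue
(this is a sketch of the usual pattern, not a quote of the glibc
internals or of the Squid code): the real state lives in a counter
protected by the mutex, and the waiters loop on that counter rather than
on the signal itself, so a signal raised while no waiter is blocked loses
nothing.

    #include <pthread.h>

    static pthread_mutex_t queue_mtx  = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  queue_cond = PTHREAD_COND_INITIALIZER;
    static int pending = 0;             /* number of queued requests */

    /* producer side: the "main" thread queues a request */
    static void queue_request(void)
    {
        pthread_mutex_lock(&queue_mtx);
        pending++;                      /* the counter, not the signal,
                                           carries the information */
        pthread_cond_signal(&queue_cond);
        pthread_mutex_unlock(&queue_mtx);
    }

    /* consumer side: an I/O thread */
    static void *io_thread(void *arg)
    {
        (void)arg;
        for (;;) {
            pthread_mutex_lock(&queue_mtx);
            while (pending == 0)        /* re-check the predicate; a wakeup
                                           without work is harmless */
                pthread_cond_wait(&queue_cond, &queue_mtx);
            pending--;
            pthread_mutex_unlock(&queue_mtx);
            /* ... perform the dequeued I/O request here ... */
        }
        return NULL;
    }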
> does it work? ;) I'm only worried about pipe buffering. Does the
> 1-byte write into the pipe immediately propagate to the other end?
> Or does it poll as ready only when 8Kb of data is in the pipe?
Yes, it works.
Pipes are immediate. If you want to buffer you must do so in userspace.
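A minimal sketch of the notification-pipe idea (illustrative only, not
the actual Squid code): the I/O thread writes a single byte when a
request completes, and the read end sits in the main thread's poll()
set, so poll() wakes up immediately.

    #include <poll.h>
    #include <unistd.h>

    static int notify_pipe[2];  /* [0] read end (main), [1] write end (I/O threads) */

    /* called once at startup */
    static int notify_init(void)
    {
        return pipe(notify_pipe);
    }

    /* called by an I/O thread when a request has completed */
    static void notify_main_thread(void)
    {
        char c = 0;
        (void)write(notify_pipe[1], &c, 1); /* one byte is enough to wake poll() */
    }

    /* main thread event loop (real socket fds omitted for brevity) */
    static void main_loop(void)
    {
        struct pollfd fds[1];

        fds[0].fd = notify_pipe[0];
        fds[0].events = POLLIN;
        for (;;) {
            if (poll(fds, 1, 10 /* ms */) > 0 && (fds[0].revents & POLLIN)) {
                char buf[256];
                (void)read(notify_pipe[0], buf, sizeof(buf)); /* drain */
                /* ... reap completed requests from the done queue ... */
            }
            /* ... handle socket I/O, timeouts, etc. ... */
        }
    }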
> As to priority, iothreads have their priority explicitly set higher than
> the main thread's. But some scheduler might assume that a thread eating
> up its quanta is doing something useful, and raise its priority on the
> contrary. But that is all rather unimportant I guess. The worst case
> is 10ms latency when poll() spins like nuts, i.e. under high load.
Which is fully acceptable.
And a poll timeout is very unlikely on a loaded Squid as there will most
likely be some I/O pending.
Regards
Henrik