kernel lock contention and scalability

Post by Jonathan Lahr » Sat, 17 Feb 2001 03:46:56



To discover possible locking limitations to scalability, I have collected
locking statistics on 2-way, 4-way, and 8-way systems performing as networked
database servers.  I patched the [48]-way kernels with Kravetz's multiqueue
patch in the hope that mitigating runqueue_lock contention might better
reveal other lock contention.

In the attached document, I describe my test environment and excerpt
lockstat output to show the more contentious locks for a typical run on
each of my server configurations.  I'm interested in comparing these data
to other lock contention data, so information regarding previous or ongoing
lock contention work would be appreciated.  I'm aware of timer scalability
work ongoing at people.redhat.com/mingo/scalable-timers, but is anyone
working on reducing sem_ids contention?

--
Jonathan Lahr
IBM Linux Technology Center
Beaverton, Oregon

503-578-3385

[ note.att 10K ]

kernel lock contention and scalability

Post by Manfred Spraul » Mon, 26 Feb 2001 19:10:02



> To discover possible locking limitations to scalability, I have collected
> locking statistics on 2-way, 4-way, and 8-way systems performing as networked
> database servers.  I patched the [48]-way kernels with Kravetz's multiqueue
> patch in the hope that mitigating runqueue_lock contention might better
> reveal other lock contention.

The dual cpu numbers are really odd: extremely high counts of
add_timer(), del_timer_sync(), schedule() and process_timeout().

That could be a kernel bug: perhaps someone uses

for (;;) {
        set_current_state(TASK_INTERRUPTIBLE);
        schedule_timeout(100);
}

without checking signal_pending()?
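
A signal-safe version of that idiom would check for a pending signal and
bail out, something like this (untested sketch):

for (;;) {
        set_current_state(TASK_INTERRUPTIBLE);
        if (signal_pending(current))
                break;                  /* let the caller handle the signal */
        schedule_timeout(100);
}
set_current_state(TASK_RUNNING);

Without the check, a pending signal makes schedule_timeout() return almost
immediately, so the loop degenerates into a busy spin through
add_timer()/schedule()/del_timer_sync() - which would fit the counts above.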

> In the attached document, I describe my test environment and excerpt
> lockstat output to show the more contentious locks for a typical run on
> each of my server configurations.  I'm interested in comparing these data
> to other lock contention data, so information regarding previous or ongoing
> lock contention work would be appreciated.  I'm aware of timer scalability
> work ongoing at people.redhat.com/mingo/scalable-timers, but is anyone
> working on reducing sem_ids contention?

Is that really a problem?
The contention is high, but the actual lost time is quite small.

The 8-way test ran for ~ 129 seconds wall clock time (total cpu time
1030 seconds), and around 0.7 seconds were lost due to spinning.
The high contention is caused by the wakeups: cpu0 scans the list of
waiting processes and wakes any whose operations can now proceed. If a woken
thread runs before cpu0 can release the spinlock, the second cpu will spin.

I've attached 2 changes that might reduce the contention, but it's just
an idea, completely untested.

* slightly more efficient try_atomic_semop().
* don't acquire the spinlock if q->alter was 0. It could slightly
improve performance, but I assume that q->alter will always be 1.

Btw, I found a small bug in try_atomic_semop():
If a semaphore operation with sem_op==0 blocks, then the pid is
corrupted. The bug also exists in 2.2.

--
        Manfred

[ patch-sem 1K ]
--- sem.c.old   Sun Feb 25 10:50:55 2001

                curr = sma->sem_base + sop->sem_num;
                sem_op = sop->sem_op;

-               if (!sem_op && curr->semval)
+               result = curr->semval;
+               if (!sem_op && result)
                        goto would_block;
+               result += sem_op;
+               if (result < 0)
+                       goto would_block;
+               if (result > SEMVMX)
+                       goto out_of_range;

                curr->sempid = (curr->sempid << 16) | pid;
-               curr->semval += sem_op;
+               curr->semval = result;
                if (sop->sem_flg & SEM_UNDO)
                        un->semadj[sop->sem_num] -= sem_op;
-
-               if (curr->semval < 0)
-                       goto would_block;
-               if (curr->semval > SEMVMX)
-                       goto out_of_range;
        }

        if (do_undo)
        {
-               sop--;
                result = 0;
                goto undo;

                result = 1;

 undo:
+       sop--;
        while (sop >= sops) {
                curr = sma->sem_base + sop->sem_num;

 {
        int error;
        struct sem_queue * q;
+       int do_retry = 0;

+retry:
        for (q = sma->sem_pending; q; q = q->next) {


                                q->status = 1;
                                return;
                        }
-                       q->status = error;
                        remove_from_queue(sma,q);
+                       wmb();
+                       q->status = error;
+                       /* FIXME: retry only required if an increase was
+                        * executed
+                        */
+                       do_retry = 1;
                }
        }
+       if (do_retry)
+               goto retry;
 }


                sem_unlock(semid);

                schedule();
-
+               if (queue.status == 0) {
+                       error = 0;
+                       if (queue.prev)
+                               BUG();
+                       current->semsleeping = NULL;
+                       goto out_free;
+               }
                tmp = sem_lock(semid);
                if(tmp==NULL) {
                        if(queue.prev != NULL)


kernel lock contention and scalability

Post by Anton Blanchard » Tue, 06 Mar 2001 09:50:03


Hi,

> To discover possible locking limitations to scalability, I have collected
> locking statistics on 2-way, 4-way, and 8-way systems performing as networked
> database servers.  I patched the [48]-way kernels with Kravetz's multiqueue
> patch in the hope that mitigating runqueue_lock contention might better
> reveal other lock contention.

...

>       24.38%  23.93%    15us(   218us)   4.3us(   111us)     744475     566289     178186      0  runqueue_lock
>       23.15%  38.78%    28us(   218us)   6.2us(   108us)     376292     230381     145911      0    schedule+0xe0

Tridge and I tried out the postgresql benchmark you used here and this
contention is due to a bug in postgres. From a quick strace, we found
the threads do a load of select(0, NULL, NULL, NULL, {0,0}). Basically all
threads are pounding on schedule().

Our guess is that the app has some form of userspace synchronisation
(semaphores/spinlocks). I'd argue that the app needs to be fixed, not the
kernel, or that a more valid test case be put forward. :)

PS: I just looked at the postgresql source and the spinlocks (s_lock() etc.)
are in a tight loop doing select(0, NULL, NULL, NULL, {0,0}). In samba
we have userspace spinlocks, but they cover small amounts of code and
offer an advantage over ipc semaphores. When you have to synchronise
large sections of code, ipc semaphores are reasonably fast on linux and
would be a better fit.
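
For comparison, taking an ipc semaphore is a single blocking semop() call
rather than a select() loop. A minimal sketch (my names, no error handling;
semid is one semaphore created with semget() and initialised to 1):

#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/sem.h>

static int semid;               /* semget(IPC_PRIVATE, 1, 0600) */

void db_lock(void)
{
        struct sembuf op = { 0, -1, SEM_UNDO };
        semop(semid, &op, 1);   /* P: sleep in-kernel until available */
}

void db_unlock(void)
{
        struct sembuf op = { 0, 1, SEM_UNDO };
        semop(semid, &op, 1);   /* V: wake a sleeping waiter */
}

The waiters sleep instead of pounding on schedule(), which is what you want
when the critical section is long.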

Cheers,
Anton

kernel lock contention and scalability

Post by Jonathan Lahr » Wed, 07 Mar 2001 03:50:04



> > lock contention work would be appreciated.  I'm aware of timer scalability
> > work ongoing at people.redhat.com/mingo/scalable-timers, but is anyone
> > working on reducing sem_ids contention?

> Is that really a problem?
> The contention is high, but the actual lost time is quite small.

I agree it isn't a major performance problem under that workload.  But since
the contention was high, I thought other workloads that exercise the lock
more heavily might show it to be a significant problem.

> I've attached 2 changes that might reduce the contention, but it's just
> an idea, completely untested.

Thanks for the insight into the semaphore subsystem and the suggested fixes.

--
Jonathan Lahr
IBM Linux Technology Center
Beaverton, Oregon

503-578-3385


kernel lock contention and scalability

Post by Jonathan Lahr » Thu, 08 Mar 2001 08:00:04


> Tridge and I tried out the postgresql benchmark you used here and this
> contention is due to a bug in postgres. From a quick strace, we found
> the threads do a load of select(0, NULL, NULL, NULL, {0,0}). Basically all
> threads are pounding on schedule().
...
> Our guess is that the app has some form of userspace synchronisation
> (semaphores/spinlocks). I'd argue that the app needs to be fixed not the
> kernel, or a more valid test case is put forwards. :)
...
> PS: I just looked at the postgresql source and the spinlocks (s_lock() etc)
> are in a tight loop doing select(0, NULL, NULL, NULL, {0,0}).

Anton,

Thanks for looking into postgresql/pgbench related locking.  Yes,
apparently postgresql uses a synchronization scheme that relies on select()
to effect back-off delays while attempting to acquire a lock.
However, it seems to me that runqueue lock contention was not entirely due
to postgresql code, since it was largely alleviated by the multiqueue
scheduler patch.

In using postgresql/pgbench to measure lock contention, I was attempting
to apply a typical server workload to measure scalability using only open
software.  My goal is to load and measure the kernel for server performance,
so I need to ensure that the software I use represents likely real world
server configurations.  I did not use mysql, because it cannot perform
transactions, which I considered important.  Any pointers to other open
database software or benchmarks that might be suitable for this effort
would be appreciated.

Jonathan


kernel lock contention and scalability

Post by Matthew Kirkwood » Thu, 08 Mar 2001 08:50:03


[ sorry to reply over another reply, but I don't have
  the original of this ]

> > Tridge and I tried out the postgresql benchmark you used here and this
> > contention is due to a bug in postgres. From a quick strace, we found
> > the threads do a load of select(0, NULL, NULL, NULL, {0,0}).

I can shed some light on this (though I'm far from a PG hacker).

Postgres can use either of two locking methods -- SysV semaphores
(which it tries to avoid, assuming that they'll be too heavy) or
userspace spinlocks (via inline assembler on platforms which support
it).

In the slow path of a spinlock_acquire they busy wait for a few
cycles, and then call schedule with a zero timeout assuming that
it'll basically do the same as a sched_yield() but more portably.
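
That slow path boils down to roughly this (my paraphrase, not the actual
postgres source; SPINS_PER_DELAY is a stand-in name and a gcc builtin
stands in for their inline-asm test-and-set):

#include <sys/time.h>
#include <sys/types.h>
#include <unistd.h>

#define SPINS_PER_DELAY 100                     /* arbitrary spin count */
#define TAS(lock)       __sync_lock_test_and_set((lock), 1)

void spinlock_acquire(volatile int *lock)
{
        struct timeval tv;
        int i;

        for (;;) {
                for (i = 0; i < SPINS_PER_DELAY; i++)   /* short busy wait */
                        if (!TAS(lock))
                                return;                 /* got the lock */
                tv.tv_sec = 0;
                tv.tv_usec = 0;
                select(0, NULL, NULL, NULL, &tv);       /* "portable yield" */
        }
}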

Matthew.


kernel lock contention and scalability

Post by Jeff Dike » Thu, 08 Mar 2001 11:10:03



> If you're a UP system, it never makes sense to spin in userland, since
> you'll just burn up a timeslice and prevent the lock holder from
> running. I haven't looked, but assume that their code only uses
> spinlocks on SMP. If you're an SMP system, then you shouldn't be using
> a spinlock unless the critical section is "short", in which case the
> waiters should simply spin in userland rather than making system calls
> which is simply overhead.

This is a problem that UML is going to have when I turn SMP back on.  
Emulating a multiprocessor box on a UP host with the existing locking
primitives is going to result in exactly this problem.

> Actually, what's really needed here is an efficient form of
> dynamically marking a process as non-preemptible so that when
> acquiring a spinlock the process can ensure that it exits the critical
> section as fast as possible, when it would relinquish its
> non-preemptible privilege.

That sounds like a pretty fundamental (and abusable) mechanism.

I had a suggestion from an IBM guy at ALS last year to make UML "spin"-locks
actually sleep in the host (this doesn't make them sleep locks in userspace
because they don't call schedule), which sounds reasonable.  This gives the
lock-holder an opportunity to run immediately.  It's unclear to me what the
wake-up mechanism would be, though.

Another thought I had was to raise the priority of a thread holding a
spinlock.  This would reduce the chance that it would be preempted by a thread
that will waste a timeslice spinning on that lock.  I don't know whether this
is a good idea either.
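
In host terms that might look something like this (sketch only; raising
priority, i.e. lowering the nice value, needs privilege on Linux, which is
part of what makes me doubt it):

#include <sys/time.h>
#include <sys/resource.h>

static int saved_nice;                  /* not thread-safe; sketch only */

void spin_lock_boosted(volatile int *lock)
{
        while (__sync_lock_test_and_set(lock, 1))
                ;                                       /* spin at normal priority */
        saved_nice = getpriority(PRIO_PROCESS, 0);
        setpriority(PRIO_PROCESS, 0, saved_nice - 5);   /* boost while holding */
}

void spin_unlock_boosted(volatile int *lock)
{
        __sync_lock_release(lock);                      /* clear the lock */
        setpriority(PRIO_PROCESS, 0, saved_nice);       /* drop the boost */
}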

> Another synchronization method popular with database peeps is "post/
> wait" for which SGI have a patch available for Linux. I understand
> that this is relatively "light weight" and might be a better choice
> for PG.

URL?

                                Jeff


kernel lock contention and scalability

Post by Rajagopal Ananthanarayanan » Thu, 08 Mar 2001 12:10:03


        [ ... ]

> > Another synchronization method popular with database peeps is "post/
> > wait" for which SGI have a patch available for Linux. I understand
> > that this is relatively "light weight" and might be a better choice
> > for PG.

> URL?

>                                 Jeff

Here it is:

        http://oss.sgi.com/projects/postwait/

Check out the download section for a 2.4.0 patch.

cheers,

ananth.

--------------------------------------------------------------------------
Rajagopal Ananthanarayanan ("ananth")
Member Technical Staff, SGI.
--------------------------------------------------------------------------

kernel lock contention and scalability

Post by Jeff Dike » Thu, 08 Mar 2001 13:50:02



> Here it is:
>    http://oss.sgi.com/projects/postwait/
> Check out the download section for a 2.4.0 patch.

After having thought about this a bit more, I don't see why pw_post and
pw_wait can't be implemented in userspace as:

int pw_post(uid_t uid)
{
        return(kill(uid, SIGHUP)); /* Or signal of the waiter's choice */
}

int pw_wait(struct timespec *t)
{
        return(nanosleep(t, t));
}

In the case of UML, there would be a uid field in its lock structure and the
spin code would look like:

        lock->uid = getpid();
        pw_wait(NULL);

and the lock release code would be:

        pw_post(lock->uid);

Obviously, sending signals to processes from the outside could massively
confuse matters, but I don't see that being a big problem, since I think you
can do that now, and no one is complaining about it.

Is there anything that I'm missing?

                                Jeff


kernel lock contention and scalability

Post by Tim Wright » Fri, 09 Mar 2001 07:20:04




> > If you're a UP system, it never makes sense to spin in userland, since
> > you'll just burn up a timeslice and prevent the lock holder from
> > running. I haven't looked, but assume that their code only uses
> > spinlocks on SMP. If you're an SMP system, then you shouldn't be using
> > a spinlock unless the critical section is "short", in which case the
> > waiters should simply spin in userland rather than making system calls
> > which is simply overhead.

> This is a problem that UML is going to have when I turn SMP back on.  
> Emulating a multiprocessor box on a UP host with the existing locking
> primitives is going to result in exactly this problem.

Yes. On a uniprocessor system, a simple fallback is to just use a semaphore
instead of a spinlock, since you can guarantee that there's no point in
scheduling the current task until the holder of the "lock" releases it.
Otherwise, the spin calling sched_yield() each iteration isn't too horrible.
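
Something like this, that is (a sketch, with a gcc builtin standing in for
whatever atomic test-and-set the platform provides):

#include <sched.h>

#define TAS(lock)       __sync_lock_test_and_set((lock), 1)

void yielding_acquire(volatile int *lock)
{
        while (TAS(lock))       /* lock busy: yield so the holder */
                sched_yield();  /* gets a chance to run and release it */
}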

> > Actually, what's really needed here is an efficient form of
> > dynamically marking a process as non-preemptible so that when
> > acquiring a spinlock the process can ensure that it exits the critical
> > section as fast as possible, when it would relinquish its
> > non-preemptible privilege.

> That sounds like a pretty fundamental (and abusable) mechanism.

It would be if it were generally available. The implementation on DYNIX/ptx
requires a privilege (PRIV_SCHED IIRC) to be able to use it. It was added
for a database to prevent preemption during critical sections.

> I had a suggestion from an IBM guy at ALS last year to make UML "spin"-locks
> actually sleep in the host (this doesn't make them sleep locks in userspace
> because they don't call schedule), which sounds reasonable.  This gives the
> lock-holder an opportunity to run immediately.  It's unclear to me what the
> wake-up mechanism would be, though.

Hmmm... depends on what you mean by sleep, i.e. sleep(3) vs. making a system
call that sleeps. I would have thought the latter, and would use semaphores
again.

> Another thought I had was to raise the priority of a thread holding a
> spinlock.  This would reduce the chance that it would be preempted by a thread
> that will waste a timeslice spinning on that lock.  I don't know whether this
> is a good idea either.

That's basically a weaker version of the no-preempt. Not a bad idea, but
less than optimal :-)

Regards,

Tim

--

IBM Linux Technology Center, Beaverton, Oregon
Interested in Linux scalability ? Look at http://lse.sourceforge.net/
"Nobody ever said I was charming, they said "Rimmer, you're a git!"" RD VI

kernel lock contention and scalability

Post by Jeff Dike » Sat, 10 Mar 2001 07:30:05



> On a uniprocessor system, a simple fallback is to just use a semaphore
> instead of a spinlock, since you can guarantee that there's no point
> in scheduling the current task until the holder of the "lock" releases
> it.

Yeah, that works.  But I'm not all that interested in compiling UML
differently for UP and SMP hosts.

> Otherwise, the spin calling sched_yield() each iteration isn't too
> horrible.

This looks a lot better.  For UML, if there's a thread spinning on a lock,
there has to be a runnable thread holding it, and that thread will get a
timeslice before the spinning one (assuming that the thread holding the lock
hasn't called a blocking system call, which is something that I intend to make
sure can't happen).

> > That sounds like a pretty fundamental (and abusable) mechanism.

> It would be if it were generally available. The implementation on
> DYNIX/ptx requires a privilege (PRIV_SCHED IIRC), to be able to use
> it.

OK, that makes sense.

                                Jeff


kernel lock contention and scalability

Post by Anton Blanchard » Mon, 12 Mar 2001 15:40:02


Hi,

> Thanks for looking into postgresql/pgbench related locking.  Yes,
> apparently postgresql uses a synchronization scheme that relies on select()
> to effect back-off delays while attempting to acquire a lock.
> However, it seems to me that runqueue lock contention was not entirely due
> to postgresql code, since it was largely alleviated by the multiqueue
> scheduler patch.

I'm not saying that the multiqueue scheduler patch isn't needed, just that
this test case is caused by a bug in postgres. We shouldn't run around
fixing symptoms: dropping the contention on the runqueue lock might not
change the overall performance of the benchmark, whereas fixing the
spinlocks in postgres probably will.

On the other hand, if postgres still pounds on the runqueue lock after
the bug has been fixed then we need to look at the multiqueue patch.

Cheers,
Anton

kernel lock contention and scalability

Post by Anton Blanchard » Mon, 12 Mar 2001 16:10:03


Hi,

> In the slow path of a spinlock_acquire they busy wait for a few
> cycles, and then call schedule with a zero timeout assuming that
> it'll basically do the same as a sched_yield() but more portably.

The obvious problem with this is that we bounce in and out of schedule()
a few times before moving on to the next task. I see this also with
sched_yield().

I had this patch lying around; I think it came about when I was playing
with pthreads (whose spinlock implementation does sched_yield() for a while
before sleeping).

--- linux/kernel/sched.c        Fri Mar  9 10:26:56 2001

                goto out_unlock;
        }
 #else
+       if (prev->policy & SCHED_YIELD)
+               prev->counter = (prev->counter >> 4);
+
        prev->policy &= ~SCHED_YIELD;
 #endif /* CONFIG_SMP */
 }

Anton

/* test sched_yield */

#include <stdio.h>
#include <sched.h>
#include <sys/time.h>
#include <sys/types.h>
#include <unistd.h>

#undef USE_SELECT

void waste_time()
{
        int i;
        for(i = 0; i < 10000; i++)
                ;

}

void do_stuff(int i)
{
#ifdef USE_SELECT
        struct timeval tv;
#endif

        while(1) {
                fprintf(stderr, "%d\n", i);
                waste_time();
#ifdef USE_SELECT
                tv.tv_sec = 0;
                tv.tv_usec = 0;
                select(0, NULL, NULL, NULL, &tv);
#else
                sched_yield();
#endif
        }

}

int main()
{
        int i, pid;

        for(i = 0; i < 10; i++) {
                pid = fork();

                if (!pid)
                        do_stuff(i);
        }

        do_stuff(i+1);

        return 0;

}

