repeated failed open()'s results in lots of used memory [Was: [Fwd: memory consumption]]

Post by Andreas Dilger » Fri, 03 Aug 2001 07:10:07



You write:

> We've been experiencing a problem where an errant process would run in a
> tight loop trying to create files in a directory where it did not have
> access. While this errant process was running, we'd notice all of the
> available memory shift from buffers/cache (or free) to used and stay
> that way while the process was running. vmstat also reports heavy in/out
> traffic on the swap, but swap consumption does not grow past a few dozen
> megabytes. The memory used by the process itself does not grow.

> Note that we increase the default values for certain FS parameters:

> echo '16384' >/proc/sys/fs/super-max
> echo '32768' >/proc/sys/fs/file-max
> echo '65535' > /proc/sys/fs/inode-max

You are probably creating negative dentries.  Check the dentry counts
in /proc/slabinfo to confirm this.  I'm not sure why
that would cause swapping, but then again I haven't checked the policy
for shrinking the dentry cache recently, and there have been a number
of changes in that area lately.
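
For reference, a minimal reproducer of the reported loop might look like
the sketch below (hypothetical: /noaccess stands in for the unwritable
directory, and the varying names assume the errant process tried a new
name each time - a single fixed name would just reuse one negative
dentry):

        #include <fcntl.h>
        #include <stdio.h>

        int main(void)
        {
                char name[64];
                unsigned long i;

                for (i = 0;; i++) {
                        /* each failed lookup of a new name can leave
                         * a negative dentry behind in the dcache */
                        snprintf(name, sizeof(name), "/noaccess/f%lu", i);
                        if (open(name, O_CREAT | O_WRONLY, 0644) >= 0)
                                return 1;       /* unexpectedly succeeded */
                }
        }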

Cheers, Andreas
--
Andreas Dilger  \ "If a man ate a pound of pasta and a pound of antipasto,
                 \  would they cancel out, leaving him still hungry?"
http://www-mddsp.enel.ucalgary.ca/People/adilger/               -- Dogbert


repeated failed open()'s results in lots of used memory [Was: [Fwd: memory consumption]]

Post by Brian Ristuccia » Fri, 03 Aug 2001 07:40:10



> You write:

>>We've been experiencing a problem where an errant process would run in a
>>tight loop trying to create files in a directory where it did not have
>>access. While this errant process was running, we'd notice all of the
>>available memory shift from buffers/cache (or free) to used and stay
>>that way while the process was running. vmstat also reports heavy in/out
>>traffic on the swap, but swap consumption does not grow past a few dozen
>>megabytes. The memory used by the process itself does not grow.

>>Note that we increase the default values for certain FS parameters:

>>echo '16384' >/proc/sys/fs/super-max
>>echo '32768' >/proc/sys/fs/file-max
>>echo '65535' > /proc/sys/fs/inode-max

> You are probably creating negative dentries.  Check /proc/slabinfo for
> the number of dentries, and it will confirm this.  I'm not sure why
> that would cause swapping, but then again I haven't checked the policy
> for shrinking the dentry cache recently, and there have been a number
> of changes in that area lately.

Yow! Right on. On 2.2.19 and 2.4.7, the line for dentry_cache in
/proc/slabinfo skyrockets while the test program is running. Also, on
2.2.19 but not 2.4.7 the line for size-32 climbs steadily at around the
same pace as dentry_cache when the test program is running. After I stop
the test program, the number slowly declines as other processes allocate
memory.
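
A quick way to watch that line is something like the following (a
minimal sketch - it just picks the dentry_cache entry out of
/proc/slabinfo, to be run repeatedly while the test program loops):

        #include <stdio.h>
        #include <string.h>

        int main(void)
        {
                char line[256];
                FILE *f = fopen("/proc/slabinfo", "r");

                if (!f) {
                        perror("/proc/slabinfo");
                        return 1;
                }
                /* print only the dentry_cache line */
                while (fgets(line, sizeof(line), f))
                        if (!strncmp(line, "dentry_cache", 12))
                                fputs(line, stdout);
                fclose(f);
                return 0;
        }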

--
Brian Ristuccia


repeated failed open()'s results in lots of used memory [Was: [Fwd: memory consumption]]

Post by Linus Torvalds » Fri, 03 Aug 2001 08:00:10



>Note that we increase the default values for certain FS parameters:

>echo '16384' >/proc/sys/fs/super-max
>echo '32768' >/proc/sys/fs/file-max
>echo '65535' > /proc/sys/fs/inode-max

Doesn't matter.

The thing that you end up hitting looks very much like just tons of
negative dentries lying around.

If you are even moderately confident about playing around with the
kernel, I would suggest testing the following approach (if you aren't
comfortable with playing with the kernel, I hope somebody else is willing
to try this out, or I could try to cook up a patch to test later this
week):

 - add a new dentry list in addition to the current 'dentry_unused' list
   in fs/dcache.c:

        static LIST_HEAD(dentry_unused_negative);

 - when 'dput()' adds a dentry to the old 'dentry_unused' list it would
   instead check whether the dentry is negative, and if so add it to the
   negative list instead:

        list = &dentry_unused;
        if (!dentry->d_inode)
                list = &dentry_unused_negative;
        list_add(&dentry->d_lru, list);

 - add a new "shrink_negative_dentries()" function that just does
   something like

        struct list_head *tmp;
        struct dentry *dentry;

        spin_lock(&dcache_lock);
        for (;;) {
                tmp = dentry_unused_negative.next;
                if (tmp == &dentry_unused_negative)
                        break;
                dentry = list_entry(tmp, struct dentry, d_lru);
                /* only unreferenced, negative dentries belong here */
                if (atomic_read(&dentry->d_count))
                        BUG();
                if (dentry->d_inode)
                        BUG();
                /* take it off the list before it is freed */
                list_del_init(&dentry->d_lru);
                prune_one_dentry(dentry);
                /* prune_one_dentry() releases the lock */
                spin_lock(&dcache_lock);
        }
        spin_unlock(&dcache_lock);

 - make all the things that shrink dentries (notably the
   shrink_dcache_memory() function) call the above function first.
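
   Assembled, the hook-up might look roughly like this (an untested
   sketch against the 2.4-era shrink_dcache_memory(); the exact shape
   of that function varies between versions):

        void shrink_dcache_memory(int priority, unsigned int gfp_mask)
        {
                int count = 0;

                /* throw away negative dentries first - they are cheap
                 * to discard and are what the failing-open workload
                 * piles up */
                shrink_negative_dentries();

                if (priority)
                        count = dentry_stat.nr_unused / priority;
                prune_dcache(count);
        }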

Does that fix the behaviour for you?

                Linus

repeated failed open()'s results in lots of used memory [Was: [Fwd: memory consumption]]

Post by Andreas Dilger » Fri, 03 Aug 2001 08:00:11


Brian Ristuccia writes:

> > You are probably creating negative dentries.  Check /proc/slabinfo for
> > the number of dentries, and it will confirm this.  I'm not sure why
> > that would cause swapping, but then again I haven't checked the policy
> > for shrinking the dentry cache recently, and there have been a number
> > of changes in that area lately.

> Yow! Right on. On 2.2.19 and 2.4.7, the line for dentry_cache in
> /proc/slabinfo skyrockets while the test program is running. Also, on
> 2.2.19 but not 2.4.7 the line for size-32 climbs steadily at around the
> same pace as dentry_cache when the test program is running. After I stop
> the test program, the number slowly declines as other processes allocate
> memory.

So the cause is identified, but it still probably needs to be fixed in
some way.  Otherwise you could potentially have a DoS from someone
repeatedly trying to read or create files, even if they don't have
write permission _anywhere_ on the system.

Looking at the code, it appears that if we call shrink_dcache_memory()
while trying to allocate memory for a filesystem, it returns without
doing anything, to avoid a deadlock.  Al Viro and/or Marcelo Tosatti
probably know how to fix this.
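
For reference, the early return being described looks roughly like this
in the 2.4 tree (quoted from memory - the exact gfp mask that is
checked has varied between versions):

        void shrink_dcache_memory(int priority, unsigned int gfp_mask)
        {
                /* Deadlock avoidance: an allocation made on behalf of
                 * a filesystem must not prune dentries, since that can
                 * call iput() and re-enter the same filesystem. */
                if (!(gfp_mask & __GFP_FS))
                        return;

                prune_dcache(dentry_stat.nr_unused / priority);
        }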

Cheers, Andreas
--
Andreas Dilger  \ "If a man ate a pound of pasta and a pound of antipasto,
                 \  would they cancel out, leaving him still hungry?"
http://www-mddsp.enel.ucalgary.ca/People/adilger/               -- Dogbert


repeated failed open()'s results in lots of used memory [Was: [Fwd: memory consumption]]

Post by Alexander Viro » Fri, 03 Aug 2001 08:20:04



>  - make all the things that shrink dentries (notably the
>    shrink_dcache_memory() function) call the above function first.

> Does that fix the behaviour for you?

That will kill _all_ negative dentries whenever we get any amount of
memory pressure. For stuff a la $PATH it will get very ugly - currently
we have a lot of negative dentries in /usr/local/bin that prevent tons
of bogus lookups there.
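
To see the benefit he means, consider a hypothetical userspace loop
like the one below - after the first miss, repeated lookups of the
nonexistent name (no-such-tool is a made-up example) are answered out
of the dcache without ever reaching the filesystem:

        #include <stdio.h>
        #include <sys/stat.h>
        #include <sys/time.h>

        int main(void)
        {
                struct stat st;
                struct timeval t0, t1;
                long i, us;

                gettimeofday(&t0, NULL);
                for (i = 0; i < 1000000; i++)
                        stat("/usr/local/bin/no-such-tool", &st); /* ENOENT */
                gettimeofday(&t1, NULL);
                us = (t1.tv_sec - t0.tv_sec) * 1000000L
                     + (t1.tv_usec - t0.tv_usec);
                printf("%ld us for 1M failed lookups\n", us);
                return 0;
        }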


repeated failed open()'s results in lots of used memory [Was: [Fwd: memory consumption]]

Post by Linus Torvalds » Fri, 03 Aug 2001 08:20:07




> >  - make all the things that shrink dentries (notably the
> >    shrink_dcache_memory() function) call the above function first.

> > Does that fix the behaviour for you?

> That will kill _all_ negative dentries whenever we get any amount of
> memory pressure. For stuff a la $PATH it will get very ugly - currently
> we have a lot of negative dentries in /usr/local/bin that prevent tons
> of bogus lookups there.

I agree. I'm a big fan of negative dentries myself.

However, I'd like to see what the patch does for the bad case first, and
then we can see whether there are less drastic methods (like only killing
half of the negative dentries or something).

The current behaviour obviously has some problems, and negative dentries
_are_ different from positive ones. Trond (or somebody - maybe it was
Neil) also mentioned that negative dentries tend to hurt some NFS
benchmarks by virtue of filling up memory a bit too much.

Btw, my fix description wasn't perfect - you need to get things like the
unused dentry counts right too, etc. The basic approach stands, though.

                Linus


repeated failed open()'s results in lots of used memory [Was: [Fwd: memory consumption]]

Post by Linus Torvalds » Fri, 03 Aug 2001 09:10:05



>Looking at the code, it appears that if we call shrink_dcache_memory()
>while trying to allocate memory for a filesystem, it returns without
>doing anything, to avoid a deadlock.  Al Viro and/or Marcelo Tosatti
>probably know how to fix this.

Note that negative dentries are different, and can be happily thrown
away even from a filesystem context - so the suggested fix with special
handling for negative dentries (another email) should work fine.
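
The reason this is safe: a negative dentry has no inode attached, so
tearing one down never calls iput() and thus never re-enters the
filesystem. Roughly, from the 2.4-era dentry_iput() (simplified from
memory; the real code also honours d_op->d_iput):

        static inline void dentry_iput(struct dentry * dentry)
        {
                struct inode *inode = dentry->d_inode;
                if (inode) {
                        dentry->d_inode = NULL;
                        list_del_init(&dentry->d_alias);
                        spin_unlock(&dcache_lock);
                        iput(inode);    /* may re-enter the filesystem */
                } else {
                        /* negative dentry: nothing to put */
                        spin_unlock(&dcache_lock);
                }
        }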

                Linus

repeated failed open()'s results in lots of used memory [Was: [Fwd: memory consumption]]

Post by Alexander Viro » Fri, 03 Aug 2001 10:30:08



> However, I'd like to see what the patch does for the bad case first, and
> then we can see whether there are less drastic methods (like only killing
> half of the negative dentries or something).

Removing the "second chance" logic for negative dentries might be a good
start...
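
That change might look something like the following inside the LRU scan
in prune_dcache() (a sketch from memory of the 2.4 code; the field and
flag names are assumptions about the exact version in use):

                /* recently-referenced dentries normally get a second
                 * pass through the LRU instead of being freed */
                if (dentry->d_vfs_flags & DCACHE_REFERENCED) {
                        dentry->d_vfs_flags &= ~DCACHE_REFERENCED;
                        /* proposed: no second chance for negative
                         * dentries - reclaim them on first sight */
                        if (dentry->d_inode) {
                                list_add(&dentry->d_lru, &dentry_unused);
                                continue;
                        }
                }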


repeated failed open()'s results in lots of used memory [Was: [Fwd: memory consumption]]

Post by Michael Ganzhorn » Mon, 17 Sep 2001 19:13:20


Hi there,

I'm a sparc linux newbie. I tried to compile the 2.4.9 kernel with
patch 2.4.10pre9.
The compilation was ok, but I get an error from the btfixupprep program:

###

Wrong use of 'disable_irq' in '.text.exit' section. It can be only used
in .text, .text.init, .fixup and __ksymtab

###

What can I do to solve this problem? Does anyone have experience with
this kernel release on a sparc32 machine, or an idea how it can be fixed?

Thanks

Micha
--
------------------------------------------------------------------------
On the requirements it said: Windows 98 or better - so I installed Linux

Michael Ganzhorn
webhome: http://www.ganzhorn.de


Memory loss with lots of memory?

Howdy,
        I have a question regarding some strangeness that was noted on
a Linux machine that is being used as a compute server.  It's a
Pentium Pro 200, 512M of RAM, 256K cache.  The processes being run on
this machine take up approximately 445M of RAM.  These include C
programs that malloc blocks of anywhere between 512K and 200M at a
time, so a relatively small number of mallocs grabs a large amount of
space.  However, once usage reaches 445M or so, processes start
swapping out, and the remaining RAM, about 70M, is tied up.  This
happens while running under X, which initially consumes less than
10M of RAM.  There is only 1 user on the system, and it is not
networked to anything else that would allow any other user in.
        The owner of this machine wants to purchase another 512M for
it, but would be hesitant if 10%-15% of the RAM will be tied up doing
something else.  Does anybody know what that 70M of RAM is being used
for, and whether it would stay at 70M or jump to about 150M if total
RAM were increased to 1G?

Thanks,
Tom
