RM: maximum number of files?

Post by evdwa » Thu, 15 Jul 1999 04:00:00



In mail from oracle:

<quote>

Subject: cron

sh: /usr/bin/rm: The parameter list is too long.

*************************************************
Cron: The previous message is the standard output
      and standard error of one of your crontab commands:

rm -f /ora_arch/oradata/PROD/archive/*

<end quote>

This command is a crontab command made by me to delete archive entries when
the database is down.

I was thinking: maybe rm can't take a really long list of files?
Solution:

ls -1 /ora_arch/oradata/PROD/archive/* | head -1000 | rm -f

Executing this 10x in a row should do the trick.
But what is the maximum number of parameters rm takes? Or does the error
come from something else?

(it is an HP-UX 11 system)

I am not sure if this is off-topic, but I guess shell scripters have
encountered these problems as well.
If there is a better group to direct me to, feel free..

 
 
 

RM: maximum number of files?

Post by Dan Mercer » Thu, 15 Jul 1999 04:00:00




> In mail from oracle:

> <quote>

> Subject: cron

> sh: /usr/bin/rm: The parameter list is too long.

> *************************************************
> Cron: The previous message is the standard output
>       and standard error of one of your crontab commands:

> rm -f /ora_arch/oradata/PROD/archive/*

> <end quote>

> This command is a crontab command made by me to delete archive entries when
> the database is down.

> I was thinking: maybe rm can't take a really long list of files?
> Solution:

> ls -1 /ora_arch/oradata/PROD/archive/* | head -1000 | rm -f

> Executing this 10x in a row should do the trick.
> But what is the maximum number of parameters rm takes? Or does the error
> come from something else?

> (it is an HP-UX 11 system)

> I am not sure if this is off-topic, but I guess shell scripters have
> encountered these problems as well.
> If there is a better group to direct me to, feel free..

The problem is with the exec call,  not with any of the programs,  so your
solution will blow up just as badly as the original.  By default,  your
combined environment and argument stack is limited to 20478 bytes
(ARG_MAX in limits.h).  On HP-UX this can be extended to up to 100 times
that size,  but that may require a patch and certainly requires a kernel
parameter change.  However,  it's likely your main problem is simply
poor organization of your command.  If you change:

   rm -f /ora_arch/oradata/PROD/archive/*

to

   cd /ora_arch/oradata/PROD/archive && rm -f -- *

it will likely work.  However,  if you use:

   cd /ora_arch/oradata/PROD/archive && ls | xargs rm -f --

it will work no matter how many files accumulate,  and if you use:

   cd /ora_arch/oradata/PROD/archive && ls | xargs -i rm -f -- {}

it will work in all cases,  since rm is run once per file.
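Since the limit lives in the exec call rather than in rm itself,  the
batching that xargs performs is easy to verify.  A minimal sketch (the
scratch directory and file names are made up for illustration;  `getconf
ARG_MAX` reports the limit on most current systems):

```shell
# Report the combined argv + environment limit, in bytes (ARG_MAX).
getconf ARG_MAX

# Demonstrate xargs batching in a throwaway directory.
dir=$(mktemp -d)
cd "$dir" || exit 1

# Create a few hundred empty files.
i=0
while [ "$i" -lt 500 ]; do
    : > "file$i"
    i=$((i + 1))
done

# xargs chops the name list into chunks small enough for exec,
# running rm once per chunk instead of once with every name at once.
ls | xargs rm -f --

ls | wc -l        # should print 0: everything was removed
cd / && rmdir "$dir"
```

With `xargs -i` (or the more portable `-I {}`),  rm is run once per file,
which is slower but sidesteps the batching question entirely.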

--
Dan Mercer

Opinions expressed herein are my own and may not represent those of my employer.

 
 
 

RM: maximum number of files?

Post by evdwa » Thu, 15 Jul 1999 04:00:00


Quote:>   cd /ora_arch/oradata/PROD/archive && ls | xargs -i rm -f -- {}

I put this in oracle's crontab and we will see tomorrow whether it works.
Thanks.
 
 
 

1. Maximum number of files in a directory

Hi,

We are running Linux as an SMTP server with about 6000 users.
Sometimes procmail complains about permission to create a file in
/var/spool/mail, and I solve this by erasing the zero-size files.
Can I solve this problem for good by increasing the number of inodes?

        TIA,
                Gustavo
--
"Woman shares our sorrows,
 doubles our joys, and
 triples our expenses."
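If the spool filesystem has run out of inodes rather than disk blocks,
creating new files fails even though free space remains. A quick check,
sketched for a GNU/Linux box (the spool path is the one from the post,
and `df -i` is the GNU form of the option):

```shell
# Inode usage rather than block usage: the IFree/IUse% columns show
# whether the filesystem is out of inodes instead of space.
df -i /var/spool/mail

# Every zero-length mailbox still costs one inode; list them before
# deciding to remove them ('-size 0' matches empty regular files).
find /var/spool/mail -type f -size 0 -print
```

Growing the inode table itself generally means recreating the filesystem
(on ext2, mke2fs's -i option sets bytes-per-inode at creation time), so
pruning the empty files is the cheaper fix.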
