Qs on attack by high rate process creation

Qs on attack by high rate process creation

Post by Farsh » Tue, 04 Dec 2001 15:01:29



Hi,

It is well known that high-rate process creation, even from a
restricted account, can make a system stop responding. The code can be
as simple as: for(;;) fork();
There are also some known ways to protect against such attacks.
But I cannot analyze the problem. Why does a high rate of process
creation cause such a serious problem? Parameters such as the maximum
number of processes a user may create can be set, but they are of no
use against this attack.
If your answer is that "the kernel gets too involved with itself",
then what does that mean exactly? I'm looking for a clear analysis of
the problem, and a discussion of why this happens in terms of the
scheduler and the kernel's internal mechanisms. If a user is allowed
to have 512 processes, for example, and the system will run with that
many processes created by one user, why does creating them at a high
rate stop the kernel from responding before the specified maximum is
reached, given that each process may only run for a specified time
slice measured in jiffies?
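
To spell out why the one-liner is so destructive: fork() returns in
both the parent and the child, so every process ever created keeps
looping and forking, and the population can double on every pass. A
defused sketch of the same structure (the generation cap and the
printout are additions for safe experimentation, not part of the
original one-liner):

    #include <stdio.h>
    #include <sys/types.h>
    #include <unistd.h>

    /*
     * The classic bomb is just: for (;;) fork();
     * Both the parent AND the child return from fork() and keep
     * looping, so after k iterations up to 2^k processes exist,
     * all forking as fast as they can.  This version stops after
     * a few generations so the growth can be observed safely.
     */
    #define GENERATIONS 4        /* at most 2^4 = 16 processes */

    int main(void)
    {
        int i;
        for (i = 0; i < GENERATIONS; i++) {
            pid_t pid = fork();
            if (pid < 0) {
                perror("fork"); /* EAGAIN once a process limit is hit */
                break;
            }
            /* parent and child both fall through and fork again */
        }
        printf("pid %d done\n", (int)getpid());
        return 0;
    }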

     Thank you,

 
 
 

Qs on attack by high rate process creation

Post by Jem Berke » Tue, 04 Dec 2001 22:17:43


Quote:> It is well known that high-rate process creation, even from a
> restricted account, can make a system stop responding. The code can be
> as simple as: for(;;) fork();
> There are also some known ways to protect against such attacks.
> But I cannot analyze the problem. Why does a high rate of process
> creation cause such a serious problem? Parameters such as the maximum
> number of processes a user may create can be set, but they are of no
> use against this attack.

I get the feeling (and I could be wrong) that Linux currently doesn't
handle this sort of attack adequately. After I set up ulimits, I found
that they weren't applied to SSH sessions. Obviously you can enable
them for SSH as well, but the point is that, as far as I know, there is
no global system-wide default. What happens if a process gets run
through an unprotected CGI script, for example? You see what I mean.
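
For the unprotected-CGI case, one option (a sketch, not a tested
recipe) is for whatever spawns the untrusted program to install the
limit itself with setrlimit() before exec(), since resource limits are
inherited across exec. The value 64 and the wrapper usage below are
illustrative choices, not anything from this thread:

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/resource.h>

    /*
     * Wrapper sketch: impose a per-user process limit on whatever
     * we exec, independent of shell or SSH ulimit settings.
     * Usage: ./limitwrap program [args...]
     * The limit value 64 is an illustrative placeholder.
     */
    int main(int argc, char *argv[])
    {
        struct rlimit rl;

        if (argc < 2) {
            fprintf(stderr, "usage: %s program [args...]\n", argv[0]);
            return 1;
        }
        rl.rlim_cur = 64;              /* soft limit */
        rl.rlim_max = 64;              /* hard limit */
        if (setrlimit(RLIMIT_NPROC, &rl) != 0) {
            perror("setrlimit");
            return 1;
        }
        execvp(argv[1], &argv[1]);     /* the limit survives the exec */
        perror("execvp");
        return 1;
    }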

--
http://www.pc-tools.net/
Windows, Linux & UNIX software

 
 
 

Qs on attack by high rate process creation

Post by Ian Jone » Wed, 05 Dec 2001 04:59:23


-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1


> It is well known that high-rate process creation, even from a
> restricted account, can make a system stop responding. The code can
> be as simple as: for(;;) fork();
> There are also some known ways to protect against such attacks.
> But I cannot analyze the problem. Why does a high rate of process
> creation cause such a serious problem? Parameters such as the maximum
> number of processes a user may create can be set, but they are of no
> use against this attack.
> If your answer is that "the kernel gets too involved with itself",
> then what does that mean exactly? I'm looking for a clear analysis of
> the problem, and a discussion of why this happens in terms of the
> scheduler and the kernel's internal mechanisms. If a user is allowed
> to have 512 processes, for example, and the system will run with that
> many processes created by one user, why does creating them at a high
> rate stop the kernel from responding before the specified maximum is
> reached, given that each process may only run for a specified time
> slice measured in jiffies?

It would be worth looking into some of the many "real time" branches of
development to see if they offer some help here. I also seem to recall
a preemption patch against the current stable kernel line which may help.

I am only guessing, though. You might want to pose your question to
one of the kernel development lists if it hasn't been asked before (I
bet it has).

-----BEGIN PGP SIGNATURE-----
Comment: Keeping the world safe for geeks.

iD8DBQE8C9mbwBVKl/Nci0oRAhWmAKDbsbwLEpgOopnbJhAWV4L8Z/ApmQCfQJxb
bQ5dnvxg4V2srX6ODHhg6f8=
=3aJe
-----END PGP SIGNATURE-----

 
 
 

Qs on attack by high rate process creation

Post by sverr » Wed, 05 Dec 2001 08:18:19



Quote:> Hi,

> It is well known that high-rate process creation, even from a
> restricted account, can make a system stop responding. The code can
> be as simple as: for(;;) fork();
> There are also some known ways to protect against such attacks.
> But I cannot analyze the problem. Why does a high rate of process
> creation cause such a serious problem? Parameters such as the maximum
> number of processes a user may create can be set, but they are of no
> use against this attack.
> If your answer is that "the kernel gets too involved with itself",
> then what does that mean exactly? I'm looking for a clear analysis of
> the problem, and a discussion of why this happens in terms of the
> scheduler and the kernel's internal mechanisms. If a user is allowed
> to have 512 processes, for example, and the system will run with that
> many processes created by one user, why does creating them at a high
> rate stop the kernel from responding before the specified maximum is
> reached, given that each process may only run for a specified time
> slice measured in jiffies?

;) Because 512 processes are too much for your CPU to handle!
The fork() syscall consumes quite a lot of CPU time (about 5000-6000
clock cycles on a PIII), and during this time the CPU cannot handle any
interrupts. So you will have to lower the limit on processes to
some sane number (64-128). It's really more a question of the CPU than
of the kernel.
For example, I have ulimit -c 0 -u 84 -m 65536 -t 360 in my /etc/profile
(not for root, of course). When I (as a non-root user) run something like
the famous `:(){ :|:&};:` in bash, or any other fork bomb, I won't be
able to fork any more processes, because I get
"bash: fork: Resource temporarily unavailable" in bash,
or "No more processes." in tcsh. All you have to do is impose some
*sane* limit on the number of processes for every user according to
your CPU power.
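
The bash message quoted above is just fork() failing with EAGAIN once
the per-user limit is reached. A small sketch that triggers the same
condition deliberately (assuming a low `ulimit -u` is already in
effect, as in the /etc/profile example above):

    #include <errno.h>
    #include <signal.h>
    #include <stdio.h>
    #include <sys/types.h>
    #include <unistd.h>

    /*
     * Fork idle children until the per-user limit (ulimit -u,
     * i.e. RLIMIT_NPROC) is reached.  fork() then returns -1 with
     * errno == EAGAIN -- the condition bash reports as
     * "Resource temporarily unavailable".
     */
    int main(void)
    {
        int count = 0;

        for (;;) {
            pid_t pid = fork();
            if (pid < 0) {
                if (errno == EAGAIN)
                    printf("limit hit after %d forks\n", count);
                else
                    perror("fork");
                break;
            }
            if (pid == 0) {
                pause();         /* child: sit idle, still counted */
                _exit(0);
            }
            count++;
        }
        kill(0, SIGTERM);        /* clean up the whole process group,
                                    including ourselves */
        return 0;
    }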

--

                        sverre  <sverreQgmx.net>

 
 
 

Qs on attack by high rate process creation

Post by Farsh » Wed, 05 Dec 2001 19:14:53




> > [original question snipped]

> ;) Because 512 processes are too much for your CPU to handle!
> The fork() syscall consumes quite a lot of CPU time (about 5000-6000
> clock cycles on a PIII), and during this time the CPU cannot handle
> any interrupts. So you will have to lower the limit on processes to
> some sane number (64-128). It's really more a question of the CPU
> than of the kernel.
> For example, I have ulimit -c 0 -u 84 -m 65536 -t 360 in my
> /etc/profile (not for root, of course). When I (as a non-root user)
> run something like the famous `:(){ :|:&};:` in bash, or any other
> fork bomb, I won't be able to fork any more processes, because I get
> "bash: fork: Resource temporarily unavailable" in bash,
> or "No more processes." in tcsh. All you have to do is impose some
> *sane* limit on the number of processes for every user according to
> your CPU power.

I think that a Linux machine that stops responding under a fork-bomb
attack with the maximum number of user processes set to 256 can still
easily serve 25 connected users, who together may reasonably create
more than 256 processes simultaneously.
I mean that the *sane* limit you suggest will protect the system
against a fork attack, but the reason may not be that the system cannot
handle 512 processes in its queue, since a system can run with more
than 512 processes if they are not created at a high rate.

    ThanX,
    Farshad

 
 
 

Qs on attack by high rate process creation

Post by sverr » Thu, 06 Dec 2001 06:25:12



Quote:

> I think that a Linux machine that stops responding under a fork-bomb
> attack with the maximum number of user processes set to 256 can still
> easily serve 25 connected users, who together may reasonably create
> more than 256 processes simultaneously.
> I mean that the *sane* limit you suggest will protect the system
> against a fork attack, but the reason may not be that the system
> cannot handle 512 processes in its queue, since a system can run with
> more than 512 processes if they are not created at a high rate.

>     ThanX,
>     Farshad

Sure, but AFAIK there is no other way to limit the *rate* at which
processes are forked. All non-privileged users should have some limit
on the number of processes available to them, and if this limit is low
enough, it can protect the machine against fork bombs as well.
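
To verify which limit is actually in effect for a given session
(relevant to the earlier point about ulimits not reaching SSH logins),
the current value can be read back with getrlimit(). A minimal sketch:

    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/time.h>
    #include <sys/resource.h>

    /* Print the per-user process limit that `ulimit -u` reports.
       RLIM_INFINITY will show up as -1 here. */
    int main(void)
    {
        struct rlimit rl;

        if (getrlimit(RLIMIT_NPROC, &rl) != 0) {
            perror("getrlimit");
            return 1;
        }
        printf("soft: %ld, hard: %ld\n",
               (long)rl.rlim_cur, (long)rl.rlim_max);
        return 0;
    }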

--

                        sverre  <sverreQgmx.net>

 
 
 

Qs on attack by high rate process creation

Post by Farsh » Thu, 06 Dec 2001 11:43:18


I just tested the system by creating 256 processes at a very slow pace,
and although the OS becomes slow, everything is OK. But when generating
even a small number of processes at a high rate, the system becomes
unstable. I think the source of this abnormality is the scheduler. When
many processes are created at a very high rate, the distribution of
processes by priority is such that the scheduler keeps selecting from
those freshly spawned processes, and it takes a long time to get around
to other processes; the kernel stops providing services because vital
processes do not get the time to run regularly, as expected.
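
A sketch of the kind of experiment described above, with the creation
rate as the only variable; the child count and the delay are arbitrary
illustrative defaults, not values from the thread. With a generous
delay the machine should stay merely slow; with no delay, the burst of
forks is what hurts:

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <unistd.h>

    /*
     * Spawn N busy children with an adjustable delay between forks.
     * Usage: ./spawn [count] [microseconds-between-forks]
     * Defaults (256 children, 100 ms apart) are illustrative only.
     * WARNING: run this under a tight ulimit -u, on a test machine.
     */
    int main(int argc, char *argv[])
    {
        int n     = (argc > 1) ? atoi(argv[1]) : 256;
        long usec = (argc > 2) ? atol(argv[2]) : 100000;
        int i;

        for (i = 0; i < n; i++) {
            pid_t pid = fork();
            if (pid < 0) {
                perror("fork");
                break;
            }
            if (pid == 0) {
                for (;;)
                    ;            /* child: pure CPU load, no forking */
            }
            if (usec > 0)
                usleep(usec);    /* throttle the creation rate */
        }
        pause();                 /* parent waits; kill the process
                                    group to stop the test */
        return 0;
    }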
Any Ideas? Comments? - Thank you...

     Farshad





 
 
 

Qs on attack by high rate process creation

Post by Kasper Dupon » Thu, 06 Dec 2001 14:40:19



> I just tested the system by creating 256 processes at a very slow
> pace, and although the OS becomes slow, everything is OK. But when
> generating even a small number of processes at a high rate, the
> system becomes unstable. I think the source of this abnormality is
> the scheduler. When many processes are created at a very high rate,
> the distribution of processes by priority is such that the scheduler
> keeps selecting from those freshly spawned processes, and it takes a
> long time to get around to other processes; the kernel stops
> providing services because vital processes do not get the time to
> run regularly, as expected.
> Any Ideas? Comments? - Thank you...

I guess you are right that there is some problem with the scheduler. I
recently tried running the shell script $0&$0& and my system was
completely dead; the only way out was a hard reset. AFAIR, running the
maximum number of processes, each consuming as much CPU time as it can
get but not forking, would still allow me to log in as root on another
VC and kill the processes. It is slow, but possible.

--
Kasper Dupont

 
 
 

