why parent close socket fd before child finish?

Post by stephe » Mon, 27 Mar 2000 04:00:00



Hello:

I 've got this server example code where the parent closes
the socket before the child finishes. But why?

==================================================
<snip>

server_sockfd = socket(AF_INET, SOCK_STREAM, 0);
// took out some socket setup code
bind(server_sockfd, (struct sockaddr *)&server_address, server_len);
listen(server_sockfd,5);

signal (SIGCHLD, SIG_IGN);

while(1)
{
    char ch;

    printf("Server waiting\n");

    client_sockfd = accept(server_sockfd, (struct sockaddr *)&client_address, &client_len);

    if(fork()==0)
    {
        read(client_sockfd, &ch, 1);
        sleep(5);
        ch++;
        write(client_sockfd, &ch, 1);
        close(client_sockfd);
        exit(0);
    }
    else
    {
        close(client_sockfd);
    }

}

<snip>
==============================================

It's my understanding that "server_sockfd" is maintained for the
life of the server, but a new "client_sockfd" is generated for every
"accept". It's logical that the child should close its "client_sockfd", but

why does the parent close the "client_sockfd" as well, and before
the child has had a chance to finish?

Thanks
Stephen

 
 
 

why parent close socket fd before child finish?

Post by Nicholas Drone » Mon, 27 Mar 2000 04:00:00



> Hello:
> I 've got this server example code where the parent closes
> the socket before the child finishes. But why?
> ==================================================
> <snip>
> server_sockfd = socket(AF_INET, SOCK_STREAM, 0);
> // took out some socket setup code
> bind(server_sockfd, (struct sockaddr *)&server_address, server_len);
> listen(server_sockfd,5);
> signal (SIGCHLD, SIG_IGN);
> while(1)
> {
>     char ch;
>     printf("Server waiting\n");
>     client_sockfd = accept(server_sockfd, (struct sockaddr *)&client_address, &client_len);
>     if(fork()==0)
>     {
>         read(client_sockfd, &ch, 1);
>         sleep(5);
>         ch++;
>         write(client_sockfd, &ch, 1);
>         close(client_sockfd);
>         exit(0);
>     }
>     else
>     {
>         close(client_sockfd);
>     }
> }
> <snip>
> ==============================================
> It's my understanding that "server_sockfd" is maintained for the
> life of the server, but a new "client_sockfd" is generated for every
> "accept". It's logical that the child should close its "client_sockfd", but
> why does the parent close the "client_sockfd" as well, and before
> the child has had a chance to finish?

If a process forks, all of its file descriptors are duplicated,
so you then have two descriptors referring to the same open file
description in the kernel's open file table.  When either the
parent or the child closes its descriptor (or exits), the file
or socket isn't really closed -- the other process retains its
descriptor, and the reference count on the open file description
is simply decremented by one to reflect the fact that one less
descriptor refers to it.  The file or socket is only truly closed
when that reference count drops to 0.

Since your process forks, there are two copies of the process
running, each with its own descriptor, which means that the
child's copy is not closed when the parent closes its copy of
client_sockfd.  Similarly, if you were to close(server_sockfd)
from the child process, the parent process (the one that did the
accept) would still have server_sockfd open.  In fact, it's
conventional to do just that: in the child process, close the
descriptor being used for accept(); in the parent process, close
the descriptor returned from accept().

So -- to answer your question -- the parent process closes
client_sockfd because, having forked, it no longer needs to
keep it open.  The upshot of the preceding two paragraphs
is that this does not affect whether the child process can
finish doing its job or not.  The child should be able to
continue as if the parent process didn't close client_sockfd.

It's good practice to close the client_sockfd in the parent, as
on a busy server it can be a problem to leave them lying around
while the child process does its thing.  If you don't close
the client_sockfd, it's possible that accept() will begin
to fail with EMFILE, meaning your process has hit the limit
on the maximum number of file descriptors available per
process.  This is generally considered a bad thing. :)

+---------------------------------------------------------------+
| When you really look for me, you will see me instantly --     |
| you will find me in the tiniest house of time.                |
|                                               - Kabir         |
+---------------------------------------------------------------+
| nick dronen      (unsigned char *) "ndronen at frii dot com"  |
+---------------------------------------------------------------+

 
 
 

why parent close socket fd before child finish?

Post by stephe » Mon, 27 Mar 2000 04:00:00



> If a process forks, all of its file descriptors are duplicated,
> so you then have two indexes into the global file descriptor
> table.  When either the parent or the child exits, the file
> descriptor (or socket) isn't really closed -- the other process
> retains it's copy of the index and a variable in the global
> file descriptor table is simply decremented by one to reflect
> the fact that one less process has an index into it.  The
> file or socket is only truly closed when the value of that
> variable becomes 0.

<snip>

Thanks Nick for your help. Upon reading your post I had
a second look at fork() and you're right, the child <quote>
inherits a *copy* of open file descriptors ... <end quote>
That has me thinking. What about a malloc() before a
fork()? Both the parent's and the child's pointers point to the
same address, right? So either the parent should free() or the
child, but not both?

Stephen

 
 
 

why parent close socket fd before child finish?

Post by Casper H.S. Dik - Network Security Engine » Tue, 28 Mar 2000 04:00:00


[[ PLEASE DON'T SEND ME EMAIL COPIES OF POSTINGS ]]


>Thanks Nick for your help. Upon reading your post I had
>a second look at fork() and you're right the child <quote>
>inherits a *copy* of open file descriptors ... <end quote>
>That has me thinking. What about a malloc() before a
>fork()? Both parent and child pointer points to the same
>address right? So either the parent free() or the child,
>but not both?

Just like file descriptors: both child and parent must
close (or exit), and both child and parent must free (or exit).

Casper
--
Expressed in this posting are my opinions.  They are in no way related
to opinions held by my employer, Sun Microsystems.
Statements on Sun products included here are not gospel and may
be fiction rather than truth.

 
 
 

1. Parent wait for child to finish or kill it after 1 minute

Ok, I figured out how to use signal so that ^C kills the child process (using
http://www.ecst.csuchico.edu/~beej/guide/ipc/signals.html). Now, here is where
things get hairy. I want the parent process to wait until either the child
process finishes, 1 minute passes and the parent blows the child's brains
out, or a human decides to pull the trigger by pressing ^C. How can I
accomplish the 1 minute wait, or wait(-1), or ^C?

Thank you,
Chad

P.S. This is done in C++.
