Writing TCP server.

Post by johnv.. » Fri, 05 May 2000 04:00:00



Greetings,

I'm writing a server that echoes the uptime back to the client, to
learn more about writing fast multi-client servers.  My client and
server use port 4000.  My server is a "simple concurrent server" -- it
just fork()s to handle each client.

In order to test the reliability and quickness of the server, I made a
program called tester, which calls the client once the server has been
started in the background.  Tester has two versions: one calls the
client using system() in an infinite loop but calls sleep(1) after each
call, so the client is only run once per second; the other omits the
sleep and fires the client as fast as it can.
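
The first version looks essentially like this (./client stands in for
the client binary):

#include <stdlib.h>
#include <unistd.h>

int main(void)
{
        for (;;) {
                system("./client");     /* run the client once */
                sleep(1);               /* the second version omits this */
        }
}

I called both versions of tester, and let each run for 15 seconds.
Here's an example of the results I got with the first version: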

% ./foo_server &
% ./tester
7:25pm  up  2:04,  1 user,  load average: 0.24, 0.23, 0.10
# ^ the uptime the server sent back after 15 seconds

And here are the results I got with the second version of tester,
which doesn't wait a second between runs of the client:

% ./foo_server &
% ./tester
7:24pm  up  2:03,  1 user,  load average: 0.73, 0.29, 0.11
# ^ the uptime the server sent back after 15 seconds

As you can see, the second version started eating up my resources.  I
was wondering about two things:

1.  How can I stop a server that handles incoming clients very quickly
from overloading the system?  I think this has to do with my use of
fork() -- too many children -- but I have no idea how to take care of
it (my source is later in this message).

2.  Aside from this, my server prints the port the client is connecting
from when it receives a connection.  It takes the client's port from
the structure filled in by accept().  When I ran the tester program, I
noticed that the port number, starting from around 1200, is incremented
by one each time... I don't understand why it's incrementing, or
changing at all.  My programs use port 4000, so shouldn't it always
print 4000?

Anyways, here's the essential part of my server:

        for (;;) {
                addrlen = sizeof(client);
                if ((connfd = accept(listenfd, (struct sockaddr *)&client,
                                     &addrlen)) < 0) {
                        if (errno == EINTR)
                                continue;  /* interrupted by a signal; retry */
                        printf("call to accept failed.\n");
                        exit(EXIT_FAILURE);
                }
                printf("client address: %s\n", inet_ntoa(client.sin_addr));
                printf("port: %hu\n", ntohs(client.sin_port));  /* %hu: port is unsigned */

                switch ((fpid = fork())) {
                case 0:         /* child: serve this client, then exit */
                        printf("in child.\n");
                        report_uptime(connfd);
                        close(connfd);
                        close(listenfd);
                        exit(EXIT_SUCCESS);
                case -1:
                        printf("error forking.\n");
                        exit(EXIT_FAILURE);
                default:        /* parent: the child owns connfd now */
                        printf("in parent.\n");
                        close(connfd);
                        break;
                }
                /* reap any children that have already exited, without blocking */
                while (waitpid(-1, NULL, WNOHANG) > 0)
                        ;
        }
        close(listenfd);

report_uptime() is just a function that calls uptime(1) using popen()
and sends the result to the given socket descriptor.
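
It's essentially this (a simplified sketch, minus error handling):

#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Run uptime(1) via popen() and write its output line to the socket. */
void report_uptime(int fd)
{
        char buf[256];
        FILE *p = popen("uptime", "r");

        if (p == NULL)
                return;
        if (fgets(buf, sizeof(buf), p) != NULL)
                write(fd, buf, strlen(buf));
        pclose(p);
}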

Any suggestions or comments on solving my problem or making the code
more efficient are welcome.  I don't want to use threads or anything
like that -- at least not yet.  I'm mainly looking for advice on
efficient use of fork(), and on the weird behavior I'm seeing with the
port numbers.

Thanks a lot!
 -- John


Writing TCP server.

Post by Nicholas Dronen » Sat, 06 May 2000 04:00:00



> As you can see, the second version started eating up my resources.  I
> was wondering about two things:
> 1.  How can I stop a server that handles incoming clients very quickly
> from overloading the system?  I think this has to do with my use of
> fork() -- too many children -- but I have no idea how to take care of
> it (my source is later in this message).
> 2.  Aside from this, my server prints the port the client is connecting
> from when it receives a connection.  It takes the client's port from
> the structure filled in by accept().  When I ran the tester program, I
> noticed that the port number, starting from around 1200, is incremented
> by one each time... I don't understand why it's incrementing, or
> changing at all.  My programs use port 4000, so shouldn't it always
> print 4000?
> Any suggestions or comments on solving my problem or making the code
> more efficient are welcome.  I don't want to use threads or anything
> like that -- at least not yet.  I'm mainly looking for advice on
> efficient use of fork(), and on the weird behavior I'm seeing with the
> port numbers.

A client doesn't typically request a particular local port.  If it
does, it has to use the bind(2) system call to request that port.  On
my Linux system, a long series of socket, connect, close calls results
in the client being assigned ports ranging from 1024 to 4999, at which
point the sequence starts again at 1024.  On an AIX box the
kernel-assigned client port numbers range from 32769 to 33768.  If you
were to rewrite your client to bind to a particular port and you used
the SO_REUSEADDR setsockopt option, you could use the same local port
number across multiple calls to socket(); see the sketch below.
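
An untested sketch of such a client (the fixed local port 5000 and the
server address 127.0.0.1 are placeholders):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int connect_from_fixed_port(void)
{
        struct sockaddr_in local, server;
        int on = 1;
        int fd = socket(AF_INET, SOCK_STREAM, 0);

        /* Let bind() reuse the local port right away. */
        setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &on, sizeof(on));

        memset(&local, 0, sizeof(local));
        local.sin_family = AF_INET;
        local.sin_addr.s_addr = htonl(INADDR_ANY);
        local.sin_port = htons(5000);   /* fixed local port (placeholder) */
        if (bind(fd, (struct sockaddr *)&local, sizeof(local)) < 0) {
                perror("bind");
                close(fd);
                return -1;
        }

        memset(&server, 0, sizeof(server));
        server.sin_family = AF_INET;
        server.sin_port = htons(4000);  /* your server's port */
        inet_pton(AF_INET, "127.0.0.1", &server.sin_addr);
        if (connect(fd, (struct sockaddr *)&server, sizeof(server)) < 0) {
                perror("connect");
                close(fd);
                return -1;
        }
        return fd;
}

Note that even with SO_REUSEADDR, a connect() that reuses the exact
same four-tuple while the previous connection is still in TIME_WAIT
can fail with EADDRINUSE.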

See especially W. Richard Stevens' _Unix Network Programming, Vol. 1_.

+---------------------------------------------------------------+
| When you really look for me, you will see me instantly --     |
| you will find me in the tiniest house of time.                |
|                                               - Kabir         |
+---------------------------------------------------------------+
| nick dronen      (unsigned char *) "ndronen at frii dot com"  |
+---------------------------------------------------------------+

Writing TCP server.

Post by Barry Margolin » Sat, 06 May 2000 04:00:00



>1.  How can I stop a server that handles incoming clients very quickly
>from overloading the system?  I think this has to do with my use of
>fork() -- too many children -- but I have no idea how to take care of
>it (my source is later in this message).

For simple servers like yours, you could use threads instead of forking
separate processes, as they put less load on the system.

If you want to ensure that you never process more than one request per
second, you could put a 1-second sleep in your main server loop; see
the sketch below.
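
One placement, assuming the accept loop from the original post
(connections arriving during the sleep just wait in the listen()
backlog until the next accept()):

        for (;;) {
                addrlen = sizeof(client);
                connfd = accept(listenfd, (struct sockaddr *)&client,
                                &addrlen);
                /* ... fork and handle the client as before ... */
                sleep(1);       /* at most one accept() per second */
        }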

--

Genuity, Burlington, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
Please DON'T copy followups to me -- I'll assume it wasn't posted to the group.