GLX & X server resources problem on SGI

GLX & X server resources problem on SGI

Post by Andreas Raab » Tue, 15 Apr 1997 04:00:00



Hi,

I have a serious problem with the GLX and X server resources.
In my program I have to allocate _lots_ of GLX pixmaps to render into,
and it seems that the resources are not freed by the X server upon
deletion of the GLX pixmap and the X pixmap. See the following program:

--------------------------- begin source ----------------------------

#include <stdio.h>
#include <X11/X.h>
#include <X11/Intrinsic.h>
#include <GL/gl.h>
#include <GL/glx.h>

int attrs[] = { GLX_LEVEL, 0, GLX_RED_SIZE, 4, GLX_BLUE_SIZE, 4,
                GLX_GREEN_SIZE, 4, GLX_DEPTH_SIZE, 8, GLX_RGBA, None};  

int
main(void)
{
  Pixmap pixmap, dummyPixmap;
  GLXPixmap glxPixmap, dummyGLX;
  GLXContext ctx;
  Display *display;
  XVisualInfo *vInfo;

  display = XOpenDisplay(0);
  vInfo = glXChooseVisual(display,0,attrs);

  dummyPixmap = XCreatePixmap(display, XRootWindow(display,0), 10, 10,
                              vInfo->depth);
  dummyGLX = glXCreateGLXPixmap(display, vInfo, dummyPixmap);

  for(;;)
    {
      printf(".");fflush(stdout);
      /* create the X pixmap */
      pixmap = XCreatePixmap(display, XRootWindow(display,0), 768, 576,
                             vInfo->depth);
      /* create the associated GLX pixmap */
      glxPixmap = glXCreateGLXPixmap(display, vInfo, pixmap);
      /* create a new context */
      ctx = glXCreateContext(display,vInfo,NULL,0);

      /* X1: make it current */
      glXMakeCurrent(display, glxPixmap, ctx);

      /* insert your drawing code here */

      /* X2: Try to destroy any references to the GLXPixmap */
      glXMakeCurrent(display,dummyGLX, ctx);
      glXMakeCurrent(display,None, NULL);

      /* free up the resources */
      glXDestroyGLXPixmap(display, glxPixmap);
      XFreePixmap(display, pixmap);
    }

}

------------------------- end source ----------------------------

The above program should, in principle, run forever. However,
when running it on SGI (IRIX 5.3 and 6.2) it eventually fails,
reporting that no more X server resources are available.
When examining the server I found this to be true: the server's
memory consumption grows enormously during execution.

However, everything works fine if I take out the lines marked with
X1 and X2. It seems that there are still some references to the
allocated GLX/X pixmap and I don't know where they could come from.
The man page for glXDestroyGLXPixmap says that "if GLX pixmap pix is not
current to any client, glXDestroyGLXPixmap destroys it immediately".
This is what I tried to achieve in the lines marked with X2.
So, are there still references to it? If yes, where are they coming
from? Or is it a bug on SGI only?!

Any help would be greatly appreciated,
  Andreas

--
Linear algebra is your friend - Trigonometry is your enemy.

I Department of Simulation and Graphics      Phone: +49 391 671 8065  I
I University of Magdeburg, Germany           Fax:   +49 391 671 1164  I
+=============< http://simsrv.cs.uni-magdeburg.de/~raab >=============+

 
 
 

GLX & X server resources problem on SGI

Post by Rex Barzee » Tue, 15 Apr 1997 04:00:00



> Hi,

> I have a serious problem with the GLX and X server resources.
> In my program I have to allocate _lots_ of GLX pixmaps to render into,
> and it seems that the resources are not freed by the X server upon
> deletion of the GLX pixmap and the X pixmap. See the following program:

> --------------------------- begin source ----------------------------

> #include <stdio.h>
> #include <X11/X.h>
> #include <X11/Intrinsic.h>
> #include <GL/gl.h>
> #include <GL/glx.h>

> int attrs[] = { GLX_LEVEL, 0, GLX_RED_SIZE, 4, GLX_BLUE_SIZE, 4,
>                 GLX_GREEN_SIZE, 4, GLX_DEPTH_SIZE, 8, GLX_RGBA, None};

> int
> main(void)
> {
>   Pixmap pixmap, dummyPixmap;
>   GLXPixmap glxPixmap, dummyGLX;
>   GLXContext ctx;
>   Display *display;
>   XVisualInfo *vInfo;

>   display = XOpenDisplay(0);
>   vInfo = glXChooseVisual(display,0,attrs);

>   dummyPixmap = XCreatePixmap(display, XRootWindow(display,0), 10, 10,
>                               vInfo->depth);
>   dummyGLX = glXCreateGLXPixmap(display, vInfo, dummyPixmap);

>   for(;;)
>     {
>       printf("."); fflush(stdout);
>       /* create the X pixmap */
>       pixmap = XCreatePixmap(display, XRootWindow(display,0), 768, 576,
>                              vInfo->depth);
>       /* create the associated GLX pixmap */
>       glxPixmap = glXCreateGLXPixmap(display, vInfo, pixmap);
>       /* create a new context */
>       ctx = glXCreateContext(display,vInfo,NULL,0);

>       /* X1: make it current */
>       glXMakeCurrent(display, glxPixmap, ctx);

>       /* insert your drawing code here */

>       /* X2: Try to destroy any references to the GLXPixmap */
>       glXMakeCurrent(display,dummyGLX, ctx);
>       glXMakeCurrent(display,None, NULL);

>       /* free up the resources */
>       glXDestroyGLXPixmap(display, glxPixmap);
>       XFreePixmap(display, pixmap);
>     }
> }

> ------------------------- end source ----------------------------

> The above program should, in principle, run forever.

The above program will not run forever on any platform.  The problem is
not with the pixmaps.  Instead the problem is you are never destroying
the GLX contexts.  Each time through the loop, you create a new context:

>       ctx = glXCreateContext(display,vInfo,NULL,0);

Each context requires memory.  Since you never call glXDestroyContext()
that memory is never freed up.  Eventually your process reaches the
maximum amount of memory allowed by the operating system.  You get an X
error because the client side of X is trying to allocate more memory,
and it can't.  The fix is to call

        glXDestroyContext(display, ctx);

after the call to XFreePixmap(display, pixmap);
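
In other words, the end of the loop body would become something like
this (a sketch only, reusing the variables from the original program):

        /* free up the resources */
        glXMakeCurrent(display, None, NULL);      /* unbind pixmap and context */
        glXDestroyGLXPixmap(display, glxPixmap);
        XFreePixmap(display, pixmap);
        glXDestroyContext(display, ctx);          /* the missing call */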

--
Rex Barzee
Hewlett-Packard Graphics Software Lab
http://hpweb-gsl.fc.hp.com/gsl/people/barzee/homepage.html

 
 
 

GLX & X server resources problem on SGI

Post by Michael I. Gold » Tue, 15 Apr 1997 04:00:00


| The above program will not run forever on any platform.  The problem is
| not with the pixmaps.  Instead the problem is you are never destroying
| the GLX contexts.  Each time through the loop, you create a new context:
|
| >       ctx = glXCreateContext(display,vInfo,NULL,0);

Nice catch.

| Each context requires memory.  Since you never call glXDestroyContext()
| that memory is never freed up.  Eventually your process reaches the
| maximum amount of memory allowed by the operating system.  You get an X
| error because the client side of X is trying to allocate more memory,
| and it can't.  The fix is to call
|
|         glXDestroyContext(display, ctx);
|
| after the call to XFreePixmap(display, pixmap);

An even better solution, if your application allows, is to create a
context once, and reuse it.  Your current strategy runs the risk of
fragmenting the heap.  The same applies (even more so) to pixmaps.
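
A minimal sketch of that reuse strategy (assuming the setup from the
original program, with int i and a made-up iteration count nFrames
assumed declared):

  ctx = glXCreateContext(display, vInfo, NULL, 0);   /* create once */
  for(i = 0; i < nFrames; i++)
    {
      pixmap = XCreatePixmap(display, XRootWindow(display,0), 768, 576,
                             vInfo->depth);
      glxPixmap = glXCreateGLXPixmap(display, vInfo, pixmap);

      glXMakeCurrent(display, glxPixmap, ctx);       /* rebind same context */

      /* insert your drawing code here */

      glXMakeCurrent(display, None, NULL);
      glXDestroyGLXPixmap(display, glxPixmap);
      XFreePixmap(display, pixmap);
    }
  glXDestroyContext(display, ctx);                   /* destroy once */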
--
Michael I. Gold     Silicon Graphics Inc.     http://www.veryComputer.com/
And my mama cried,  "Nanook a no no! Don't be a * eskimo! Save your
money, don't go to the show!"  Well I turned around and I said, "Ho! Ho!"

 
 
 

GLX & X server resources problem on SGI

Post by Andreas Raab » Wed, 16 Apr 1997 04:00:00


[code deleted]

: > The above program should, in principle, run forever.
:
: The above program will not run forever on any platform.  The problem is
: not with the pixmaps.  Instead the problem is you are never destroying
: the GLX contexts.  Each time through the loop, you create a new context:
:
: >       ctx = glXCreateContext(display,vInfo,NULL,0);

You're right. The code fragment was just meant to show what's basically
happening, since my app is somewhat more complicated. I had only verified
that the server's resources were being exhausted.

: The fix is to call
:
:         glXDestroyContext(display, ctx);
:
: after the call to XFreePixmap(display, pixmap);

And now it's getting _really_ strange. After adding the glXDestroyContext()
call to the code shown previously (since I do have it in my actual app),
the following happened:

* on SGI Indigo^2 Extreme (IRIX 5.3):
  Runs fine. The server's memory grows, but only slightly.

* on SGI Indy (IRIX 5.3):
  No change; the server still runs out of resources after approx. 100
  iterations.

* on SGI Indigo^2 MaxImpact w/ R10000 and on Onyx InfiniteReality2
  (both IRIX 6.2):
  The server grows, and after approx. 200 iterations
  THE SYSTEM COMPLETELY LOCKS UP!!!
  (i.e. no input response, no remote login anymore; all that's left is
  to press the magic button)

Ok, can anyone tell me what's going on here?!

Now, what I did after crashing two of our machines (fortunately nobody
was working when I tried it) was to move the glXCreateContext()
and the glXDestroyContext() completely out of the loop, creating
the context only once and assigning different pixmaps to it
(note that this won't work for my app since it may be using different
visuals), and everything worked fine. It appears to me that there is some
strange behaviour when lots of contexts and associated pixmaps are created.
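
One possible compromise for apps that need several visuals (an untested
sketch; MAX_VISUALS, visualCache, nCached and contextForVisual are
made-up names) would be to cache one context per visual and reuse those:

#include <X11/Xlib.h>
#include <X11/Xutil.h>
#include <GL/glx.h>

#define MAX_VISUALS 16

static struct { VisualID vid; GLXContext ctx; } visualCache[MAX_VISUALS];
static int nCached = 0;

/* Return a context for this visual, creating it on first use. */
static GLXContext
contextForVisual(Display *dpy, XVisualInfo *vi)
{
  int i;
  for(i = 0; i < nCached; i++)
    if(visualCache[i].vid == vi->visualid)
      return visualCache[i].ctx;
  if(nCached == MAX_VISUALS)
    return NULL;                       /* cache full; caller must handle */
  visualCache[nCached].vid = vi->visualid;
  visualCache[nCached].ctx = glXCreateContext(dpy, vi, NULL, 0);
  return visualCache[nCached++].ctx;
}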

  Andreas
--
Linear algebra is your friend - Trigonometry is your enemy.

I Department of Simulation and Graphics      Phone: +49 391 671 8065  I
I University of Magdeburg, Germany           Fax:   +49 391 671 1164  I
+=============< http://simsrv.cs.uni-magdeburg.de/~raab >=============+

 
 
 

GLX & X server resources problem on SGI

Post by Ralf Helbing » Wed, 16 Apr 1997 04:00:00



> * on SGI Indigo^2 MaxImpact w/ R10000 and on Onyx InfiniteReality2
>   (both IRIX 6.2):

Let me just add (in case it makes any difference) that the big box is
actually an Onyx2 IR running IRIX 6.4.

Ralf
--
"Do not induce vomiting." "The movie will do that for you."

Ralf Helbing,    University of Magdeburg,     Department of Computer Science
39106 Magdeburg, UniPlatz 2                         Phone: +49 0391 67-12189

 
 
 

GLX resource problem


|> This has come up twice in the last couple of weeks.
|>
|> Users trying to run a GLX application get the following error:
|> X Error of failed request:  BadAlloc (insufficient resources for
|> operation)

The message probably has a few more lines than that.  Does
it say which type of GLX request had the BadAlloc?

BadAlloc can happen for quite a number of reasons.  Perhaps
you have created a GLX pixmap that is too large, or run
the system out of kernel rendering contexts (i.e. you created
too many OpenGL rendering contexts).  Or you are trying
to allocate a pbuffer which doesn't fit in off-screen
memory.
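
If the error report doesn't name the request, a synchronous error
handler can help localize it. A rough sketch (errorHandler is a
made-up name):

#include <stdio.h>
#include <X11/Xlib.h>

/* Print the failing request's opcodes; for GLX errors the request
   code is the GLX extension's major opcode and the minor code
   identifies the specific GLX request. */
static int
errorHandler(Display *dpy, XErrorEvent *ev)
{
  char msg[128];
  XGetErrorText(dpy, ev->error_code, msg, sizeof(msg));
  fprintf(stderr, "X error: %s (request %d, minor %d)\n",
          msg, ev->request_code, ev->minor_code);
  return 0;
}

/* Early in main():
 *   XSetErrorHandler(errorHandler);
 *   XSynchronize(display, True);   -- report errors at the failing call
 */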

You should also look in /var/X11/xdm/xdm-errors and
/usr/adm/SYSLOG for more indications of what the problem
might be.

|> This often happens after repeatedly running an application that exits
|> abnormally (i.e. during debugging).  The only solution seems to be to reboot.
|> Can anyone explain what's going on and if there is any kind of fix?

What version of IRIX are you running?  What hardware is this on?

I would recommend installing the most recent OpenGL patch for
your OS and hardware.

I hope this helps.

- Mark
