reading large corefiles?

Post by Gran » Fri, 18 Apr 2003 09:32:43



Not sure of the best place to post this, a programming group or Tru64; ideally,
I guess, a programming group for Tru64... anyway.

We have an app that keeps dumping core at 300 megs. The core file written out is
almost exactly 300 megs. I can't read it with dbx or ladebug: dbx errors
out saying it's a bad core file, and ladebug segfaults while reading it.

A few questions on this.

Does anyone know if there is a limit to the size of core file that
either debugger can handle?

Or, what I think may be the problem: was the core file size limited
somehow, so it's really just a junk core file? When I type "limit" at my
prompt, it says core_file_size is unlimited, but is there any other factor
that could affect this?

Finally, is there some limit to process size that is causing this sucker
to core out at exactly 300 megs every time?
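For reference, here is a quick way to check the relevant limits from a POSIX shell (the csh "limit" output mentioned above covers the same ground; the flags below are generic ulimit options, not Tru64-specific):

```shell
# Sketch: checking the resource limits that can truncate a core file
# or cap a process's size (POSIX-shell ulimit; csh uses "limit").
ulimit -c   # max core file size; "unlimited" or a block count
ulimit -d   # max data segment size; a capped heap can make a process
            # die at the same size (e.g. ~300 megs) every time
ulimit -a   # the full list, including stack and open-file limits
```

A core_file_size of "unlimited" only rules out one cause; a data or stack segment limit can still kill the process itself at a fixed size.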

It's rather annoying, because I'd like to look at the core file to see
what the stack trace is, but no debugger will read it... is there a way to
get the stack trace without using a debugger?
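Not a substitute for a debugger, but a few generic Unix commands can at least tell you whether the core is sane (the file name `core` is an example; none of this is Tru64-specific):

```shell
# Sketch: sanity-checking a core file without a debugger.
file core             # should name the executable that dumped core;
                      # plain "data" suggests a truncated/corrupt core
ls -l core            # a suspiciously round size hints at a hit limit
strings core | tail   # printable strings near the end sometimes
                      # include the failing function or error message
```

If `file` does not recognize the core at all, the file was most likely truncated on the way out, which would also explain the debuggers choking on it.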

Thanks
-grant


1. corefiles

How can I check what went wrong to produce a core file?

I have sendmail; it worked fine, but I recompiled it and since then it
creates core files, and I have no idea what went wrong. I have no clear info
on what has changed on my machine in the meantime, but I guess there is a
corrupted library somewhere. I thought you could use adb, but I don't
know how.
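On the adb question: the classic invocation is `adb <executable> <corefile>`, after which the `$c` command prints the C stack backtrace and `$q` quits. A sketch of such a session (the sendmail path is an example; adb's availability and exact output vary by system):

```
$ adb /usr/sbin/sendmail core
$c          <- print the C stack backtrace from the core
...stack frames...
$q          <- quit adb
```

If adb is missing, `gdb /usr/sbin/sendmail core` followed by the `bt` command gives the equivalent backtrace where gdb is available.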

2. LINUX SUCKS,.. the ultimate solution

3. NEED 'corefile' on Linux 1.3.3

4. Video Display Card

5. Problem with "corefile truncation" on Solaris 8

6. Linux-friendly POS receipt printer

7. gdb corefile debug problem: not in exe format

8. Problem with syslogd: ??? instead of ip address...

9. dbx, corefile problem

10. gdb-4.15, ELF, corefiles

11. gdb corefile debug problem: not in exe format

12. how to get suid corefiles?

13. Cascading Corefiles