1. apache 1.3.3, solaris 2.6 and mmap'ing big files (~50MB)
OS: solaris 2.6, full patch set
HW: Ultra2, 2x300MHz CPUs, 1024MB RAM
Headache Factor: extensive
i'm having a weird problem with apache 1.3.3 and serving big files (20-50MB).
i've come across countless situations where the amount of memory used by
a single child httpd is around 50MB, which is of course unacceptable.
here's a top output:
PID USERNAME THR PRI NICE SIZE RES STATE TIME CPU COMMAND
3553 nobody 1 58 0 44M 2928K sleep 0:00 0.00% httpd
7779 nobody 1 58 0 17M 3408K sleep 0:00 0.00% httpd
5666 nobody 1 58 0 11M 5336K sleep 0:04 0.00% httpd
the large numbers are overall size, not resident memory. a
/usr/proc/bin/pmap -x of those processes dumps the address space map:
3553: /sso/httpd/httpd -d /sso/httpd
Address Kbytes Resident Shared Private Permissions Mapped File
00010000 360 352 336 16 read/exec httpd
00078000 24 24 8 16 read/write/exec httpd
0007E000 1000 1000 40 960 read/write/exec [ heap ]
ECC00000 42408 536 - 536 read dev:85,99 ino:501348
what i originally took for bad coding and perhaps a memory leak on the
part of the code turns out to be 42MB in use by this process while reading
a file. using sun's 'ncheck' command with the major & minor device numbers
and the inode shown above, i can get a filename for this "read" mapping that
seems to be taking up so much memory. in my case, it's a data file about
40MB in size. seem weird to anyone else?
well, this got me to thinking about the way apache handles file IO. is this
a result of memory mapping on the part of apache? if so, wouldn't it be
silly on apache's part to read the whole freaking file into memory?
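for illustration, here's a minimal sketch (not apache's actual code; send_file
is a made-up name) of the mmap-based path a server might take. note that the
whole file gets mapped into the address space, so the process SIZE grows by
the file size even though only the pages actually touched become resident:

```c
#include <sys/mman.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>

/* Hypothetical sketch of serving a file via mmap(2): map the whole
 * file read-only, write it out, unmap. The mapping inflates the
 * process's virtual SIZE by the file size for its duration. */
int send_file(int out_fd, const char *path)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return -1;

    struct stat st;
    if (fstat(fd, &st) < 0) {       /* fstat() gives us the file size */
        close(fd);
        return -1;
    }

    void *base = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    close(fd);                      /* mapping stays valid after close */
    if (base == MAP_FAILED)
        return -1;

    ssize_t sent = write(out_fd, base, st.st_size);
    munmap(base, st.st_size);       /* virtual SIZE drops back here */
    return sent == st.st_size ? 0 : -1;
}
```

the key point: mapping is not reading. the 42MB shows up as virtual size
immediately, but pages only become resident as they're faulted in.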
i don't think it's doing the latter. in fact, if i understand memory mapping
properly, it should be up to the virtual memory subsystem to 'clean out'
pages of the memory-mapped area and add them back to the freelist.
of course, this isn't happening for me. it seems as if the entire file is
read into memory and never flushed. should there be a high-water mark
for the amount of memory that can be allocated to an mmap()'d file?
it probably does an fstat() to get the file size, but come on,
does it really need the entire file in memory?
anyone have any suggestions, comments, lessons to teach me? :)
thanks in advance...
ps: my site gets a huge pounding as far as servers go. we normally
hit our MaxClients of 100, which sucks...
2. Where can I get FreeBSD
3. mmap'ing a large file
4. How to choose different .cf file when compiling X?
5. delete file after lp'ing it
6. su group like in bsd
7. deleting mmap'ed files cause problems?
9. mmap'ing vmalloc()'ed memory
10. mmap'ing 2 or more mem-area's to 1 mem-area?
11. 'delete' command in ftp delete files in local hard disk ???
12. "rm: can't unlink 'files'",can't delete files
13. mmap'ing C++ objects