We have been fortunate enough to benchmark a model's performance on
two identical servers, one running Windows NT Server and the other
Linux. Each server has 2 processors at 1.40 GHz, 1.26 GB of RAM, and
the same 36 GB of storage. The Unix machine is not identical, but it
is similar in most specs.
The same code is submitted on both systems (using -nojmv). Because I
want to test file I/O, it writes thousands of files, with the total
size reaching between 20 and 30 GB.
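For reference, the write pattern is roughly like the sketch below
(illustrative Python only, not the actual model code; the file count,
file size, and output directory are placeholder values I made up for
the example):

```python
import os
import time

# Illustrative sketch of the write pattern, NOT the real model code.
# NUM_FILES and FILE_SIZE_MB are placeholder values; the real run
# writes thousands of files totalling roughly 20-30 GB.
NUM_FILES = 1000
FILE_SIZE_MB = 25
OUT_DIR = "io_test"           # hypothetical output directory

os.makedirs(OUT_DIR, exist_ok=True)
chunk = b"x" * (1024 * 1024)  # 1 MB buffer reused for every write

start = time.time()
for i in range(NUM_FILES):
    path = os.path.join(OUT_DIR, f"file_{i:05d}.dat")
    with open(path, "wb") as f:
        for _ in range(FILE_SIZE_MB):
            f.write(chunk)

elapsed = time.time() - start
total_gb = NUM_FILES * FILE_SIZE_MB / 1024
print(f"Wrote {total_gb:.1f} GB in {elapsed:.1f} s "
      f"({total_gb / (elapsed / 3600):.1f} GB/h)")
```

A standalone loop like this, run on each machine, would at least show
whether the gap comes from raw file I/O or from the model itself.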
My guess was that Linux (and Unix) would outperform Windows, because
Windows automatically takes half of all available RAM, and I thought
Linux would have less overhead.
The model finished in about 15 hours in the Windows environment. It
has been running twice that long on Linux, and is not even half done.
Unix performance looks similar to Linux's.
Is this surprising to anyone?
Do you think my Linux system could be improved?
If you were forced to do this much file I/O, what would you do to
improve performance?
Any other insights?
Everyone's perspectives on this would be greatly appreciated.