No, this is not a continuation of the "All hail Bill..." thread.
I recently got a JPEG/GIF viewer running on Linux (zgv) and
it seemed like it uncompressed JPEGs much faster than I remember.
I have Linux running on an old 386 and Windows 95 running on
my 486 so I decided to race them [fun, eh?!]. I really expected
Linux to [somewhat] equalize the speed between the machines.
However, here's what I found...
[The Machines]
    Linux                      Windows 95
    386 DX 40                  486 SX 33
    128k cache                 256k cache
    8 Meg RAM                  24 Meg RAM
    ISA Video & Controller     VLB Video & Controller
    1 Meg Video RAM            2 Meg Video RAM
[The Race]
    63k JPEG = 8 seconds       63k JPEG = 14 seconds
    29k JPEG = 5 seconds       29k JPEG = 9 seconds
Every time I tried it, the 386 with Linux popped up the
picture in about half the time the Windows 95 machine took
on the 486. This really surprised me, since the 486 had
more cache, VLB, more video RAM and a whopping 24 Meg
of RAM.
Oh, I realize this isn't a very scientific benchmark.
I just did it for fun and thought someone might enjoy
reading about it.
I'm assuming from this that Linux is way more efficient
than Brand X. One possible flaw in my logic... the
386 DX has a math co-processor. Does the viewing of
JPEGs (or, more to the point, the uncompressing of JPEGs)
use floating point math? If so, the 386 would have an
unfair advantage against the 486 and negate my results
[however dubious].