>My question is slightly different -- I'm trying to figure out what is
>the most *maxed-out* system for Linux I can build that will work
>perfectly with available/supported drivers (I don't mind playing around
>with kernels as long as I know it's going to work). Therefore, I
>address this question to anyone that has tried to add fairly late model
>parts to their system and succeeded.
>I'd also like to know if the manufacturer has a good record of linux
>support and which version worked with the part in question. Even *with*
>slightly older hardware I know a linux system is still going to blow
>away some of the OSes bloating out of control out there...
>Motherboard: BX; Have you been able to get a 133Mhz bus clock board with
>heat alarms, sleep mode clock speed reduction, AGP (2x AGP?) to work?
Not at 133MHz. We don't overclock, because our systems are designed
for 24/7 operation (i.e. they're never turned off, ever).
Quote:>Hard disk: What is the largest overall size or partition that can be
You can't afford it :-). I think it's in the terabyte range. The biggest
setup I have personally done had ten 18gb Barracudas in it for a total
capacity of 180gb. This was attached to a 5-channel RAID controller.
Quote:>SCSI host adaptor: Looks like Mylex and Advansys might have the best
>linux support up to now, but does anyone have a multichannel 80
>Mbyte/sec card working? Or one that will allow adding a slower device
Cards based upon the Symbios chipsets are also a good bet, because
Symbios (now owned by LSI Logic) is a chip vendor, not a board
vendor. That means they release all the technical details so that
people can build boards with their chips, which means better
drivers. Intraserver
and DigitalScape are two companies that explicitly support Linux and
release cards based on these chipsets.
We have successfully used an Intraserver 6201 board with Linux. This
is a 2-channel board based on the Symbios 53c896 chipset. However, it
did require Gerard's latest 53c8xx driver. Under some BIOS's it may
require a slight hack to the 2.0 kernel's PCI bus scan to get both
functional devices reported. On the 2.2 kernel, the PCI access
method is a "make xconfig" option, and if your BIOS does not report
both functional devices, you can configure the kernel to do its own
direct bus scan instead of relying on the BIOS. This works better
with most modern motherboards.
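From memory, the relevant 2.2-era configuration looks roughly like this (a sketch of a .config fragment; the exact option names may differ between kernel revisions):

```
# "make xconfig" -> General setup -> PCI access mode.
# Choosing "Direct" makes the kernel probe the PCI bus itself
# instead of trusting the BIOS tables (names reconstructed
# from memory, 2.2.x vintage):
CONFIG_PCI=y
CONFIG_PCI_GODIRECT=y
CONFIG_PCI_DIRECT=y
```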
For more channels, the 5-channel RAID controller that I used is an
ICP-Vortex GDT controller. It works great and has excellent
performance.
Quote:>(CD) to it without throwing a wrench into the transfer rates? I've been
>told that it's better not to mix and match with IDE, is this true?
>Anyone have RAID working?
ICP-Vortex directly supports Linux. They wrote the GDT driver
themselves, and they provide Linux versions of all the support
tools for their GDT RAID controllers. As far as I can tell theirs was
the second RAID controller to be supported under Linux (DPT's was the
first).
Most Ultra2 cards based on the Symbios chipset have a "SCSI Buddy"
chip that creates a separate bus for the CD-ROM and tape drives. It is
still on the same channel, but a separate bus. Thus the SCSI chip
slows down only when it's talking to the CD-ROM, and runs full speed
when talking to the rest of the hard drives. The LVD ICP-Vortex boards
use this "SCSI Buddy" chip on one of the channels so that a SCSI CD-ROM
can be connected without affecting the rest of the bus.
There's no problem with running IDE CD-ROMs with SCSI hard drives. But
do not try to mix an IDE ZIP drive with SCSI hard drives. This confuses
many BIOS's (which will try to boot off the IDE ZIP rather than off the
SCSI hard drive), as well as confusing the Red Hat installer. Also, if
you have both SCSI and IDE hard drives, the installer and many BIOS's
will automatically default to the IDE drive.
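If you do end up with mixed IDE and SCSI disks, one workaround is to point the boot loader at the SCSI disk explicitly. A hypothetical /etc/lilo.conf fragment (the device names are assumptions; your SCSI disk may not be /dev/sda):

```
boot=/dev/sda        # write LILO to the SCSI disk's MBR
root=/dev/sda1       # root filesystem on the first SCSI partition
image=/boot/vmlinuz
    label=linux
    read-only
```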
Quote:>Memory: PC100 SRAM DIMMS ought to be transparent, but let me know if
>they're not. Max size is not the biggest concern, but if you've
>populated your board to 1G and can use all of it I wouldn't mind hearing
Boards based upon the BX chipset need *BUFFERED* 256mb SDRAM modules
to get to 1G. Many of the older ones will also require a BIOS update
to properly configure that much memory. I have successfully populated
an ASUS P2B-D board to 1G before (it was an older board that did
require a BIOS update). The default Linux kernel will only use around
900mb of that memory, but can be fairly easily patched to access up to
2gb (at the expense of some virtual memory space). The patches are
floating around on the Internet.
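Related: if the stock kernel sees less memory than is installed, the classic trick is to tell it the size explicitly at boot time. A sketch of the lilo.conf line (the 1024M figure is just this example's fully-populated case):

```
# In the image section of /etc/lilo.conf:
append="mem=1024M"   # override the BIOS memory report
```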
Quote:>Sound: Have the PCI sound cards reached "supported" status yet (they're
>supposed to be less interrupt intensive than the ISA cards and therefore
>more conducive to performance)?
Sort of. In practice there is no performance difference unless you use
bigger buffers than the default OSS drivers do.
Quote:>Network: 100Mbit cards are mainstream by now, right?
Yes, but be careful. With the purchase of Digital Semiconductor by Intel,
cards based upon the Digital Tulip chipsets are becoming few and far
between. Most former Tulip customers have switched to something called the
"Lite-On PNIC", which is a sort of cut-rate Tulip clone that does not work
well at all.
So far, the two most reliable 10/100 cards are also the two most
expensive -- the Intel EtherExpress Pro/100, and the Digital/Cabletron
DE-500. Both will set you back somewhere around $100-$120 retail (less
wholesale, but not much less). Some people swear by 3Com, and some
people swear at 3Com, so I avoid them. Basically, for most uses the
EEPRO100 is slightly cheaper and has slightly better performance, but
the DE-500 is preferable if you're going to be using gated (apparently
the eepro100 driver has some problems with that). We sell the DE-500
because we don't know what use our customers will make of the machine
and we don't want to have to stock two different cards (one for gated,
and one for everything else), but for most people the EEPRO100 is also
an excellent choice.
"Microsoft will compete ... by adding features" -- Ed Muth, Microsoft