Big (REALLY BIG) HD storage & Linux

Post by Steve Walk » Sat, 17 Feb 1996 04:00:00



Okie... allow me to apologize in advance, as this will be long and
contain many questions :)

I've read (and read and read) to determine whether there are limiting
factors on the amount of HD space mountable under Linux.
We are setting up a POP to support the student population of a small
campus, and a storage minimum of 5MB per student has been established.
My understanding from the SCSI-HOWTO is that Linux can handle drives
up to a terabyte. What we actually need is the ability to handle at
least 61Gig.
Questions:
1. Is there a memory limitation similar to Novell, in that Novell must
have enough memory to hold the tables for the volume size it is
mounting? I seem to remember reading somewhere (and I've tried to find
it) that Linux uses memory for the inodes on the drive... if so, how
much does it require?

2. If it does require memory, is it possible to boot Linux from a
smaller partition (obviously we have to boot from one with < 1024 cyl
anyway), then mount the extra partitions AFTER mounting the swap
partitions? (i.e., will the inode tables be swapped to the swapfile and
allow us to mount the huge partitions that way?)

3. If that doesn't work, would it be possible to spread things out a
bit by adding additional servers, mounting their volumes via NFS, and
then using them for the users' home directories? (i.e., what are the
memory requirements for mounting an NFS volume? Obviously less than
mounting a physical drive that large, but how much less?)

4. Has anyone made use of the NFS volume provided by IBM LAN File
Services/ESA with a Linux system? (For humor I should interject here
that I thought the Mainframers were gonna pass out when we told them
in a meeting we wanted to consider sucking up 61Gig of their shiny new
DASD... I thought the tape backup guy was gonna have a heart attack
when we started talking about ADSM :) and the IBM rep went all starry-
eyed like he was gonna retire after selling us another one, maybe :)

5. On a little different track, has anyone thought of a nifty way to
place user home directories under home but on another drive?
I've considered arranging them under home/home1/user...home/home2/user,
but maybe someone knows of a better way that would require a less
complicated script (we're importing a list), and less administration:)

6. Has anyone had any experience (read: problems) running Linux on a
Compaq Prosignia 5000 server?

I know this is long, and I apologize once again. While I've run Linux
for a while, I've never seen anyone talk about mounting more than
about 10Gig... so I thought I'd best check out the answers to this
stuff prior to committing myself to administrative hell :)

Thanks,
Steve Walker

 
 
 

Big (REALLY BIG) HD storage & Linux

Post by Rhys Thur » Mon, 19 Feb 1996 04:00:00


: 5. On a little different track, has anyone thought of a nifty way to
: place user home directories under home but on another drive?
: I've considered arranging them under home/home1/user...home/home2/user,
: but maybe someone knows of a better way that would require a less
: complicated script (we're importing a list), and less administration:)

A nifty little way that I did it was just to:

cd /home
find . | cpio -pd /usr/home    # copy the whole tree across
cd /
rm -rf /home                   # remove the old copies
ln -s /usr/home /home          # point /home at the new location

My /usr partition is on a separate drive, and I wanted to move the homes
there because my root partition is really small and they were taking up
too much room.  I didn't have to do anything else :).  You could put them
on an NFS-mountable drive, I suppose, too.
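I suppose for the NFS case the move would just end with a mount instead
(the server name and export path here are made up):

mount -t nfs fileserver:/export/home /usr/home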

The only restriction is that I have my root user's home directory on the
root partition still.  That may be a requirement, but I doubt it.  It
would probably work anyway.

Good luck with your 61GB drive.  I wish I could help you, but I don't
know enough about the memory requirements of mounting drives...  :(

--
Westheimer's Discovery:
        A couple of months in the laboratory can frequently save a couple of
hours in the library.

 
 
 

Big (REALLY BIG) HD storage & Linux

Post by Rob Janss » Tue, 20 Feb 1996 04:00:00



Quote:>1. Is there a memory limitation similar to Novell, in that Novell must
>have enough memory to hold the tables for the volume size it is
>mounting? I seem to remember reading somewhere (and I've tried to find
>it) that Linux uses memory for the inodes on the drive... if so, how
>much does it require?

Unlike s*cking Netware, Linux does not read the entire directory
structure (or the inodes, for that matter) into memory when mounting
a volume.  It just caches what it reads while finding the files that
are being opened.
When the memory for the cache runs out, it drops some not-recently-used
entries.
So, while a lot of memory may help you with performance, it is not a
requirement as it is with Netware.
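You can watch this happen yourself (just an illustration; the numbers
will differ on your system):

free                      # note the 'buffers' figure
ls -lR /usr > /dev/null   # walk a big tree, pulling inodes through the cache
free                      # 'buffers' has grown; it shrinks again when
                          # other programs need the memory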

Quote:>2. If it does require memory, is it possible to boot Linux from a
>smaller partition (obviously we have to boot from one with < 1024 cyl
>anyway), then mount the extra partitions AFTER mounting the swap
>partitions? (i.e., will the inode tables be swapped to the swapfile and
>allow us to mount the huge partitions that way?)

It is certainly a good idea to split the 60 GB of disk into several
partitions and mount them separately.  The swap will not be used for
inode info, as far as I know.  The inodes are just read back from the
disk when they are required again.
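The result could look something like this in /etc/fstab (the device
names and mount points here are only hypothetical):

/dev/sda1   /        ext2   defaults   1 1
/dev/sdb1   /home1   ext2   defaults   1 2
/dev/sdc1   /home2   ext2   defaults   1 2
/dev/sdd1   /home3   ext2   defaults   1 2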

Quote:>4. Has anyone made use of the NFS volume provided by IBM LAN File
>Services/ESA with a Linux system? (For humor I should interject here
>that I thought the Mainframers were gonna pass out when we told them
>in a meeting we wanted to consider sucking up 61Gig of their shiny new
>DASD... I thought the tape backup guy was gonna have a heart attack
>when we started talking about ADSM :) and the IBM rep went all starry-
>eyed like he was gonna retire after selling us another one, maybe :)

Did you expect that to happen, or did it really happen?
I would not think 61 GB is a real problem for a large system, nor a
real problem to back up.

Quote:>5. On a little different track, has anyone thought of a nifty way to
>place user home directories under home but on another drive?
>I've considered arranging them under home/home1/user...home/home2/user,
>but maybe someone knows of a better way that would require a less
>complicated script (we're importing a list), and less administration:)

Use symbolic links.
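For example (the user name and second-disk path are only illustrative):

mkdir -p /disk2/home/jdoe
chown jdoe /disk2/home/jdoe
ln -s /disk2/home/jdoe /home/jdoe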

Rob
--

 
 
 

Big (REALLY BIG) HD storage & Linux

Post by Steve Walk » Wed, 21 Feb 1996 04:00:00




>: 5. On a little different track, has anyone thought of a nifty way to
>: place user home directories under home but on another drive?
>: I've considered arranging them under home/home1/user...home/home2/user,
>: but maybe someone knows of a better way that would require a less
>: complicated script (we're importing a list), and less administration:)
>A nifty little way that I did it was just to:
>cd /home
>find . | cpio -pd /usr/home
>cd /
>rm -rf /home
>ln -s /usr/home /home
>My /usr partition is on a separate drive, and I wanted to move the homes
>there because my root partition is really small and they were taking up
>too much room.  I didn't have to do anything else :).  You could put them
>on an NFS-mountable drive, I suppose, too.
>The only restriction is that I have my root user's home directory on the
>root partition still.  That may be a requirement, but I doubt it.  It
>would probably work anyway.
>Good luck with your 61GB drive.  I wish I could help you, but I don't
>know enough about the memory requirements of mounting drives...  :(

Actually, that was the source of the question... 61Gig is the space
requirement, and that 61Gig will actually be spread out across about
12 drives.  But I did get a good suggestion for using symbolic links
that may solve my problem :)

Since you're at Columbia, I should ask whether you're a staff member;
we may be able to collaborate on some projects going on in my
department (Information Services/Client Services) and some going on in
the CS department here at CMSU (Warrensburg).

Steve Walker

 
 
 

Big (REALLY BIG) HD storage & Linux

Post by Kevin Coze » Fri, 23 Feb 1996 04:00:00



>5. On a little different track, has anyone thought of a nifty way to
>place user home directories under home but on another drive?
>I've considered arranging them under home/home1/user...home/home2/user,
>but maybe someone knows of a better way that would require a less
>complicated script (we're importing a list), and less administration:)

Nothing to it.  Make a directory on the primary drive called /home and
put no files in it.  Then type 'mount /dev/hdb1 /home'.  If drive two
(represented here by /dev/hdb1) contains the user files, that's all you
need to do.  If you need more disks to hold all the user files, you can
create /home and then /home/atol and /home/mtoz, then issue mount
commands to mount the various drives under the respective
subdirectories.  You get the idea.
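In other words, something like this (the device names are just
examples):

mkdir /home/atol /home/mtoz
mount /dev/hdb1 /home/atol        # users a through l
mount /dev/hdc1 /home/mtoz        # users m through z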

>Thanks,
>Steve Walker


Cheers!

Kevin.  (http://www.io.org/~kcozens/)


#include <disclaimer/favourite>

 
 
 

Big (REALLY BIG) HD storage & Linux

Post by Steve Walk » Fri, 23 Feb 1996 04:00:00



>>4. Has anyone made use of the NFS volume provided by IBM LAN File
>>Services/ESA with a Linux system? (For humor I should interject here
>>that I thought the Mainframers were gonna pass out when we told them
>>in a meeting we wanted to consider sucking up 61Gig of their shiny new
>>DASD... I thought the tape backup guy was gonna have a heart attack
>>when we started talking about ADSM :) and the IBM rep went all starry-
>>eyed like he was gonna retire after selling us another one, maybe :)
>Did you expect that to happen, or did it really happen?
>I would not think 61 GB is a real problem for a large system, also
>not to backup.

Did I expect to use the IBM LAN File Services?
Not really, but since the IS Director thought the idea up, we felt we
should give it due consideration :)

Did I expect the Mainframers to pass out? Nope... but it was fun to
watch their mouths fall open :)
61Gig isn't a lot really, but we're operating a System/370, which
isn't really a large system as Mainframes go... it only has about 80GB.
For backup they are still using a manual tape system :)

Quote:>>5. On a little different track, has anyone thought of a nifty way to
>>place user home directories under home but on another drive?
>>I've considered arranging them under home/home1/user...home/home2/user,
>>but maybe someone knows of a better way that would require a less
>>complicated script (we're importing a list), and less administration:)
>Use symbolic links.

I had rather planned on that... I should have made my question more
clear.
I was wondering if anyone had a system already set up which spread
users' home directories across multiple partitions and yet kept
administration relatively easy.
The symbolic links will of course work; I was really looking for
tools, etc., or any advice anyone has found useful up to this point
for administering such a system.
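Something along these lines is what I have in mind, as a purely
hypothetical sketch (it assumes one username per line in users.txt and
partitions already mounted at /home1 and /home2):

#!/bin/sh
# Spread new home directories across two partitions, round-robin,
# and symlink each one under /home so the paths stay uniform.
i=0
for u in `cat users.txt`; do
        if [ $i -eq 0 ]; then part=/home1; i=1
        else part=/home2; i=0
        fi
        mkdir -p $part/$u
        chown $u $part/$u        # assumes the account already exists
        ln -s $part/$u /home/$u
done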

Steve Walker

 
 
 
