Increase MAXPATH problem and telnet problem

Post by Siddarth Ra » Sat, 14 Feb 1998 04:00:00



We are using SCO Unix 3.2 on a Pentium.
One of our users wrote a shell program which creates directories and subdirectories:
i.e. he wrote a shell program which creates a directory new, then a second
subdirectory new inside new, and so on, effectively without limit.
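For illustration, here is a bounded sketch of the kind of loop that can do this (the cap of 50 levels and the scratch directory are additions so the example terminates; the original script evidently had no cap):

```shell
#!/bin/sh
# Sketch of the runaway script: each pass creates a directory "new"
# and descends into it, so "new" ends up nested inside "new".
# Capped at 50 levels here so the example terminates.
cd "`mktemp -d`" || exit 1    # scratch area, so we don't litter $HOME
i=0
while [ "$i" -lt 50 ]
do
    mkdir new || exit 1
    cd new || exit 1
    i=`expr $i + 1`
done
```

Left uncapped, the path eventually exceeds the 1024-byte limit that rm and fsck complain about below.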

When I went to single-user mode and tried to delete it using rm -r new,
it gave the message: rm: path too long (1023/1024)

Fsck on the filesystem fails in the second phase, i.e. while checking pathnames,
with the message: Dir pathname too deep Increase MAXPATH & recompile
Can anyone suggest how to delete the new directories?
I have already tried increasing the NINODE and NFILE kernel
parameters, with no luck.

We also face a telnet problem: suddenly users are no longer able to
telnet to the server, although ping to the server works fine.
Even telnet from the server to itself fails.
That is, suppose the server's name is archie: users suddenly cannot
telnet to archie, and telnet from archie to archie itself also fails.
In short, the telnet daemon dies.  The problem gets rectified when I do a
tcp stop and tcp start.
Is there any permanent solution to the above problems?

Thanks in advance.

Awaiting your reply at the earliest.

Best Regards

siddarth



Increase MAXPATH problem and telnet problem

Post by Bela Lubki » Sat, 14 Feb 1998 04:00:00



> we are using sco unix 3.2  on pentium

Run `uname -X` to get a useful release number ("3.2" isn't useful).

Quote:> One of our users wrote a shell program which creates directories and subdirectories.
> ie he wrote a shell program which creates a directory new and a 2nd
> subdirectory new inside new.  This goes on infinitely.

> When I went to single-user mode and tried to delete it using rm -r new
> it gives the message rm: path too long (1023/1024)

It can't be infinitely deep.  cd down into the path until you can't cd
any farther.  Then clean up.  e.g.:

  while [ -d new ]; do cd new; done
  while [ -d ../new ]; do cd ..; rmdir new; done
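The same idea as a self-contained demonstration script (the build loop at the top and the `|| exit`/`|| break` guards are additions for illustration, not part of the original two-liner):

```shell
#!/bin/sh
# Build a deeply nested chain of "new" directories, descend to the
# bottom, then unwind, removing each level on the way back up.
top=`mktemp -d` || exit 1
cd "$top"
i=0
while [ "$i" -lt 50 ]            # build a 50-deep test chain
do
    mkdir new && cd new || exit 1
    i=`expr $i + 1`
done
cd "$top"
while [ -d new ]                 # descend until there is no deeper "new"
do
    cd new || break
done
while [ -d ../new ]              # unwind, removing as we go
do
    cd ..
    rmdir new || break           # stop if a level is unexpectedly non-empty
done
```

Because every step uses a relative path, the loops never build a pathname anywhere near the 1024-byte limit, which is exactly why this works where `rm -r` fails.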

Quote:> Fsck on the filesystem fails in the second phase ie while checking pathnames
> giving the message Dir pathname too deep Increase MAXPATH & recompile

Well, that's a message for the Unix porting house bringing Unix up on
a new architecture, not for the end user.  You can't change MAXPATH.

Quote:> We also face a telnet problem ie suddenly users are no longer able to do
> a telnet to the server but ping to the server works fine.
> even telnet from the same server to itself fails.
> ie suppose the server name is archie. then users suddenly are not able to
> do telnet to the archie server.  also telnet from archie to archie itself fails
> in short the telnet daemon dies.  The problem gets rectified when i do a
> tcp stop and tcp start.

In the future please post separate articles for unrelated questions.

This one is a lot more version-dependent than the other one.  I can only
give a rough guess and some suggestions.  Guess: you're running out of
STREAMS resources.  Run `netstat -m` and if there are failures, increase
something.  I can't say exactly what since that depends on the version.
Other than that, look for weird failures in `netstat -i`, `llistat -l`.
Look at the output from those commands while the system is working
normally, then again when it's clogged, see if you can see a pattern.
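One way to organize that before-and-after comparison (a generic sketch; the `-m` and `-i` flags are the SCO ones named above, and the label names are arbitrary):

```shell
#!/bin/sh
# Snapshot network statistics under a given label, so a "healthy"
# run can later be diffed against a "clogged" one.  The netstat
# flags are SCO's; adjust for your platform.
snap()
{
    netstat -m > "$1.netstat-m" 2>&1
    netstat -i > "$1.netstat-i" 2>&1
}
```

Run `snap healthy` while logins still work and `snap clogged` once telnet stops answering, then `diff healthy.netstat-m clogged.netstat-m` to see which counters moved.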

Quote:>Bela<

--
Back a bit early from our world travels.  Vietnam in a few months (we hope).
Travelogue http://www.armory.com/~alexia/trip/trip.html still being updated.


Increase MAXPATH problem and telnet problem

Post by Tony Lawrenc » Sun, 15 Feb 1998 04:00:00




> > ie he wrote a shell program which creates a directory new and a 2nd
> > subdirectory new inside new. This goes on infinitely
> It can't be infinitely deep.  cd down into the path until you can't cd
> any farther.  Then clean up.  e.g.:

It can be infinitely deep if he has created a link to the directory
itself.  

I've certainly seen recursively linked directories (though not
in many years now) from system crashes, and I'm somewhat sure
that at one time, on some Unice, somewhere, I could do a hard
directory link as root from the shell or certainly from
writing a three line C program.  I can't seem to
do that in OSR5, not even from a simple C program, and I should
be able to (as root, of course).  The man pages agree with me,
but the commands themselves don't seem to understand how
certain I am that I should be able to do this :-)

Is this a change in link() or just in HTFS?  Either one raises
the obvious question that the mkdir command needs to create
hard directory links, so if it doesn't do it through link(),
just what is going on?

Anyway, even if it is just some incredibly deep but not recursive
directory, wouldn't an unlink followed by an fsck (in single-user
mode, of course) be simpler than trying to burrow down?  I've
done this for recursive links, but of course there is less
for fsck to clean up in that case...

--

SCO ACE
Microsoft MCSE
http://www.aplawrence.com


Increase MAXPATH problem and telnet problem

Post by Bela Lubki » Mon, 16 Feb 1998 04:00:00





> > > ie he wrote a shell program which creates a directory new and a 2nd
> > > subdirectory new inside new. This goes on infinitely

> > It can't be infinitely deep.  cd down into the path until you can't cd
> > any farther.  Then clean up.  e.g.:

> It can be infinitely deep if he has created a link to the directory
> itself.  

I was assuming that Siddarth had analyzed the local situation correctly
-- that his user really had created a long series of individual nested
directories, not a single recursive directory...

Quote:> I've certainly seen recursively linked directories (though not
> in many years now) from system crashes, and I'm somewhat sure
> that at one time, on some Unice, somewhere, I could do a hard
> directory link as root from the shell or certainly from
> writing a three line C program.  I can't seem to
> do that in OSR5, not even from a simple C program, and I should
> be able to (as root, of course).  The man pages agree with me,
> but the commands themselves don't seem to understand how
> certain I am that I should be able to do this :-)

> Is this a change in link() or just in HTFS?  Either one raises
> the obvious question that the mkdir command needs to create
> hard directory links, so if it doesn't do it through link(),
> just what is going on?

It is a change in HTFS, I believe.  Allowing hard directory links was
going to severely complicate fsck's ability to either roll back or
complete partially committed directory changes.  Apparently the
applicable POSIX or XPG standard made it optional whether a filesystem
had to support directory links, so it was within permitted specs to
disallow them.

mkdir(C) uses the mkdir(S) system call, which does the work atomically.
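On a current system the refusal is easy to see from the shell (a behavior sketch, not SCO-specific; `testdir` and `testlink` are arbitrary names, and the exact error text varies by OS and filesystem):

```shell
#!/bin/sh
# mkdir(S) creates a directory together with its "." and ".." links
# in one atomic operation; an explicit hard link to a directory,
# by contrast, is refused on most modern systems even for root.
cd "`mktemp -d`" || exit 1
mkdir testdir
if ln testdir testlink 2>/dev/null
then
    echo "hard directory link allowed (unusual these days)"
else
    echo "hard directory link refused"
fi
```

So the only directory "links" that get created are the `.` and `..` entries mkdir(S) writes itself, which is what keeps fsck's job tractable.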

Quote:> Anyway, even if it is just some incredibly deep but not recursive
> directory, wouldn't an unlink followed by an fsck (in single user
> mode, of course) be simpler than trying to burrow down?  I've
> done this for recursive links, but of course there is less
> for fsck to clean up in that case..

An unlink followed by an fsck?  That would probably cause the unlinked
directory's child to be moved into lost+found, with the entire rest of
the chain depending from it, still too long for `rm -r` to handle.  I
like my two-step shell ditty a lot better.

Quote:>Bela<

--
Travelogue of our recent trip, http://www.armory.com/~alexia/trip/trip.html

Increase MAXPATH problem and telnet problem

Post by Bill Vermilli » Tue, 17 Feb 1998 04:00:00




Quote:>I've certainly seen recursively linked directories (though not
>in many years now) from system crashes, and I'm somewhat sure
>that at one time, on some Unice, somewhere, I could do a hard
>directory link as root from the shell or certainly from
>writing a three line C program.  I can't seem to
>do that in OSR5, not even from a simple C program, and I should
>be able to (as root, of course).

That used to be a problem in the Xenix 1.x for the Radio Shack 16.

I've seen it more than once, and it was a bug in the lp mechanism
as I recall.  One of the lp directories would recurse up to root
and then back down through the whole chain, into lp, and start over
again.  

I don't remember the cause but it was a real PITA.

--

(Remove the anti-spam section from the address on a mail reply)