Rather than scp'ing all 20,000 files individually, and entering a password
each time, have you considered tar'ing up the directory and scp'ing the
whole thing over in one shot? Then all you have to do is untar it once it's
on the other side. If you really do want all the files, and I assume that's
why you used *, this may be the way to go.
Good luck.
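For example, something along these lines; "remotehost" and the names here
are placeholders, so substitute your own (the commented lines are the
remote half, and the lines below them just demonstrate the tar round trip
locally):

```shell
# One archive, one scp, one password prompt -- instead of 20,000:
#
#   tar cf bigcopy.tar somedir
#   scp bigcopy.tar user@remotehost:/tmp
#   ssh user@remotehost 'cd /tmp && tar xf bigcopy.tar'
#
# Local round trip showing the tar half works as described:
mkdir -p /tmp/demo_src /tmp/demo_dst
echo hello > /tmp/demo_src/file1
( cd /tmp && tar cf demo.tar demo_src )       # pack
( cd /tmp/demo_dst && tar xf /tmp/demo.tar )  # unpack "on the far side"
cat /tmp/demo_dst/demo_src/file1              # prints: hello
```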
> > >>I need to copy about 20,000 files from one Solaris box to another.
> > >>The following command gives 'arg list too long':
> > >>I once used xargs to change the ownership of a large number of files
> > >>with the following command:
> > >>find . -name test-\* -print | xargs chown user:group
> > Won't these run into the same "arg list too long" problem? If the shell
> > can't handle that many arguments when it runs scp, why would it be any
> > happier when the command is ls? I think you meant:
> 'ls *' does run into 'arg list too long'.
> > Without the wildcard, the shell never tries to insert all the filenames
> > into the command line.
> Although 'ls' doesn't, what if I just want to copy over only the files
> that match some specific pattern?
> In addition, I cannot bypass the password checking. I don't mind
> entering the password once, but the command runs as a loop. It asks
> for the password before transferring each file. That's not desirable.
> Bing
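P.S. On the pattern question: you can let find do the matching and hand
the resulting name list to tar, so the shell never has to expand the
wildcard and the arg-list limit never comes up. A rough sketch, with
"remotehost" as a placeholder; note that -T (read names from a file) is
the GNU tar spelling, and Solaris tar spells it -I:

```shell
# Remote half (placeholders, one password prompt total):
#
#   find . -name 'test-*' -print > /tmp/filelist
#   tar cf /tmp/subset.tar -T /tmp/filelist   # Solaris tar: -I /tmp/filelist
#   scp /tmp/subset.tar user@remotehost:/tmp
#
# Local demonstration that only the matching files get archived:
mkdir -p /tmp/pat_src
cd /tmp/pat_src
touch test-1 test-2 other
find . -name 'test-*' -print > /tmp/filelist
tar cf /tmp/subset.tar -T /tmp/filelist
tar tf /tmp/subset.tar   # lists ./test-1 and ./test-2, not ./other
```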