Hi,
How can I use the head command recursively through a directory tree?
Douglas
[...]
> How can I use the head command recursively through a directory tree?
find . \! -type d -exec head '{}' +
(find . \! -type d -print0 | xargs -r0 head
with GNU find).
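A minimal runnable sketch of the find-based approach (the demo tree and file
contents here are made up for illustration; assumes GNU find and coreutils):

```shell
# Build a small demo tree; names and contents are arbitrary.
mkdir -p demo/sub
printf 'first\nsecond\n' > demo/a.txt
printf 'x\ny\n' > demo/sub/b.txt
# When head is given more than one file it prints a "==> name <=="
# header before each, so the output stays readable:
find demo \! -type d -exec head -n 1 '{}' +
```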
With zsh:
head ./**/*(D^/)
--
Stephane
> How can I use the head command recursively through a directory tree?
> Douglas
ff -re 'head "$f"' path_to_directory
> <http://freshmeat.net/projects/ff/>
> ff -re 'head "$f"' path_to_directory
Note that "ff" is a bash script, so comes with all the
limitations of bash programming (especially in that case where
the most obvious rules for correct shell programming seem to have
been ignored). In other words, it is not reliable. Most
statements in the FAQ shipped with it about its advantages are
wrong (especially the performance one once you've read such an
awful script like: ff.capitalize).
Note that you need GNU coreutils and GNU bash (as /bin/bash), so
it is not much relevant in a non-GNU/Linux newsgroup.
Note that there's a tree walker developed by AT&T Labs
(http://www.research.att.com/~gsf/tw/tw.html
http://www.research.att.com/sw/tools/uwin/man/man1/tw.html)
intended as a replacement for find, which is probably more
reliable as it is not written in shell.
In zsh, walking a tree is as simple as:
for f (./**/<pattern>(<qualifiers>)) cmd $f
No need for any external command.
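For readers without zsh, bash 4+ has a rough analogue via its globstar
option (a sketch only; bash globs have no zsh-style qualifiers, so type
tests go in the loop body, and the demo file name is made up):

```shell
#!/bin/bash
# globstar makes ** match recursively, like zsh's **/ pattern.
shopt -s globstar nullglob
mkdir -p walk-demo/sub
echo hello > walk-demo/sub/f.txt
for f in ./walk-demo/**/*; do
  # No glob qualifiers in bash: filter to regular files by hand.
  [ -f "$f" ] && head -n 1 -- "$f"
done
```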
--
Stephane
>> ff -re 'head "$f"' path_to_directory
> Note that "ff" is a bash script, so comes with all the
> limitations of bash programming (especially in that case where
> the most obvious rules for correct shell programming seem to have
> been ignored). In other words, it is not reliable.
I'm curious as to what obvious rules were violated. Could you give some
examples?
> Most statements in the FAQ shipped with it about its advantages are
> wrong (especially the performance one once you've read such an
> awful script like: ff.capitalize).
I wasn't too sure about the advantages either. Would you mind correcting or
clarifying the incorrect statements? That script really is quite horrible,
but I think it's only meant to serve as an example. Perhaps the author
should consider rewriting it or remove such poorly written examples from
the package.
> Note that you need GNU coreutils and GNU bash (as /bin/bash), so
> it is not much relevant in a non-GNU/Linux newsgroup.
> Note that there's a tree walker developed by AT&T Labs
> (http://www.research.att.com/~gsf/tw/tw.html
> http://www.research.att.com/sw/tools/uwin/man/man1/tw.html)
> intended as a replacement for find, which is probably more
> reliable as it is not written in shell.
"tw" seems to be much more robust than "ff"; thanks for pointing it out.
> In zsh, walking a tree is as simple as:
> for f (./**/<pattern>(<qualifiers>)) cmd $f
> No need for any external command.
I suppose it's back to the drawing board for the author :-)
[...]
>> Note that "ff" is a bash script, so comes with all the
>> limitations of bash programming (especially in that case where
>> the most obvious rules for correct shell programming seem to have
>> been ignored). In other words, it is not reliable. Most
> I'm curious as to what obvious rules were violated. Could you give some
> examples?
I was being a bit unfair as it seems that some attention was
paid to avoid some common traps that are often forgotten (even
in most system scripts).
A quick look through the code indicates that it is likely to
fail with filenames containing wildcard characters, newlines,
starting with '[<n>]=', starting with "-", ending in blanks...
I have no GNU system at hand, so I can't run tests.
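Those awkward names are easy to construct, and a NUL-terminated pipeline
survives all of them (a sketch, assuming GNU find and xargs; the directory
and file names are made up):

```shell
# Names with a leading dash, a glob character, a trailing blank,
# and an embedded newline -- all legal on POSIX filesystems.
mkdir -p tricky
touch -- 'tricky/-dash' 'tricky/star*' 'tricky/trailing '
touch -- "tricky/$(printf 'new\nline')"
# NUL separators are the only delimiter no file name can contain:
find tricky -type f -print0 | xargs -r0 ls --
```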
--
Stephane
>> statements in the FAQ shipped with it about its advantages are
>> wrong (especially the performance one once you've read such an
> I wasn't too sure about the advantages either. Would you mind correcting or
> clarifying the incorrect statements?
| Nevertheless, in comparison to using find -exec, key
How come? In the recursive way, ff even uses find (in a
non-reliable way).
| * Flexibility -- the user can perform complex tasks without
| having to write a separate script.
???
What's the problem with running a shell or perl inline script
with find -exec? That's even more flexible, as you can choose
the interpreter you like.
| * Simplicity -- saves the user time and effort while
| increasing their productivity.
What's so difficult with "find -exec"?
| Performance of execution.
|
| For every file it encounters, find -exec forks a separate
| child-process to execute the user's commands. On a large input
| size, this behavior yields poor performance. In contrast, this
| tool employs only one process (analogous to xargs) to execute
| the user's commands. Thus it uses less system resources and
| yields better performance.
No, the shell forks a process for every command it runs unless
it is builtin. xargs saves forks (and overall execs) by
processing several files at the same time with one single
command ("rm file1 file2" instead of "rm file1" then "rm
file2"), ff doesn't seem to do that.
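The batching difference is easy to see with find itself (a small sketch;
the demo directory is made up, and GNU find is assumed):

```shell
# Three files to act on.
mkdir -p batch-demo
touch batch-demo/a batch-demo/b batch-demo/c
# "-exec cmd {} \;" runs one process per file: three echo invocations,
# three output lines.
find batch-demo -type f -exec echo {} \;
# "-exec cmd {} +" batches arguments like xargs: one echo invocation
# receives all three names, producing a single line.
find batch-demo -type f -exec echo {} +
```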
>> awful script like: ff.capitalize).
> That script really is quite horrible, but I think it's only meant to serve
> as an example. Perhaps the author should consider rewriting it or remove
> such poorly written examples from the package.
The problem is that rewriting it would reveal the need for awk.
With zsh, to capitalise files recursively:
zmv '(**/)(*)' '$1/${(C)2}'
or:
for f (./**/*(NDod)) mv $f $f:h/${(C)f:t}
(with no limitation on the filenames).
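A rough bash 4+ analogue of the zsh one-liner, for comparison (a sketch
only: bash's `${var^}` upper-cases just the first character, whereas zsh's
`${(C)...}` capitalises every word; the demo names are made up):

```shell
#!/bin/bash
# Recursively rename files so their base names start with a capital.
shopt -s globstar nullglob
mkdir -p cap-demo/sub
touch cap-demo/sub/hello.txt
for f in cap-demo/**/*; do
  [ -f "$f" ] || continue          # rename regular files only
  dir=${f%/*} base=${f##*/}
  mv -- "$f" "$dir/${base^}"       # ${base^}: first character upper-cased
done
```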
--
Stephane
[...]
> A quick look through the code indicates that it is likely to
> fail with filenames containing wildcard characters, newlines,
> starting with '[<n>]=', starting with "-", ending in blanks...
I too was under the impression that this tool and other BASH scripts cannot
handle new-line and un-printable characters. However this notion has been
gallantly disproved in a recent post (Message-ID: ...).
Also, for file names that do begin with '[<n>]=' and '-' that cause the tool
to crash, a solution for dealing with them is written in the tool's FAQ
<http://ff-bash.sourceforge.net/docs/faq.html>:
"Why are some targets given on the command-line handled improperly?
The names of these targets may contain characters that are being
misinterpreted by the tool or the shell. A solution is to feed a list of
the target names to the tool whilst invoking it with the --pipe option.
This ensures that target names bypass the scrutiny of both the shell and
tool, thus avoiding misinterpretation."
> [...]
>> A quick look through the code indicates that it is likely to
>> fail with filenames containing wildcard characters, newlines,
>> starting with '[<n>]=', starting with "-", ending in blanks...
> I too was under the impression that this tool and other BASH scripts cannot
> handle new-line and un-printable characters. However this notion has been
> gallantly disproved in a recent post (Message-ID: ...).
> Also, for file names that do begin with '[<n>]=' and '-' that cause the tool
> to crash, a solution for dealing with them is written in the tool's FAQ
> <http://ff-bash.sourceforge.net/docs/faq.html>:
> "Why are some targets given on the command-line handled improperly?
> The names of these targets may contain characters that are being
> misinterpreted by the tool or the shell. A solution is to feed a list of
> the target names to the tool whilst invoking it with the --pipe option.
> This ensures that target names bypass the scrutiny of both the shell and
> tool, thus avoiding misinterpretation."
That avoids some of the problems, not all, and adds some of its own.
I can see the author is working to make it better. That
discussion will have been useful.
--
Stephane