Batch file question - average size of file in directory
Ian Smith
smithi at nimnet.asn.au
Wed Jan 3 22:18:07 PST 2007
On Wed, 3 Jan 2007, Kurt Buff wrote:
> On 1/3/07, Ian Smith <smithi at nimnet.asn.au> wrote:
> > > From: James Long <list at museum.rain.com>
> > > > From: "Kurt Buff" <kurt.buff at gmail.com>
[..]
> > > > I've got a directory with a large number of gzipped files in it (over
> > > > 110k) along with a few thousand uncompressed files.
> >
> > If it were me I'd mv those into a bunch of subdirectories; things get
> > really slow with more than 500 or so files per directory .. anyway ..
>
> I just store them for a while - delete them after two weeks if they're
> not needed again. The overhead isn't enough to worry about at this
> point.
Fair enough. We once had a security webcam gadget ftp'ing images into a
single directory every minute, 1440/day, and a php script listing the files
for display was timing out just on the 'ls' once it got past ~2000 files on
a 2.4GHz P4, which prompted better organisation (in that case, a directory
per day).
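The fix amounted to having whatever receives the uploads pick a per-day
target directory, something along these lines (paths invented, sketch only):

    # drop each incoming image into a directory named for today
    DIR=/data/webcam/$(date +%Y%m%d)
    mkdir -p "$DIR"
    mv /data/webcam/incoming/*.jpg "$DIR"/

so no single directory ever holds more than one day's worth of images.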
[..]
> > > while read fname; do
> > - > if file $fname | grep -q "compressed"
> > + if file $fname | grep -q "gzip compressed"
> > > then
> > - > echo -n "$(zcat $fname | wc -c)+"
> > + echo -n "$(gunzip -l $fname | grep -v comp | awk '{print $2}')+"
That was off the top of my (then tired) head, and will of course barf if
'comp' appears anywhere in a filename; it should be 'grep -v ^comp'.
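Put together, the whole thing might look something like the below. The find
wrapper and the awk averaging at the end are only my guess at how James's
script totalled things up (the echo ..+ lines suggest it built an expression
for bc), and the stat flag is FreeBSD's, so treat it as a sketch rather than
gospel:

    #!/bin/sh
    # average uncompressed size of the files in the current directory
    # (sketch; assumes no spaces or newlines in filenames)
    find . -maxdepth 1 -type f | while read -r fname; do
        if file "$fname" | grep -q "gzip compressed"; then
            # gzip -l: second line, second column is the uncompressed size,
            # which sidesteps grepping the header text at all
            gzip -l "$fname" | awk 'NR == 2 { print $2 }'
        else
            stat -f %z "$fname"   # FreeBSD stat; GNU stat wants -c %s
        fi
    done | awk '{ sum += $1; n++ } END { if (n) printf "%.0f\n", sum / n }'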
> Ah - yes, I think that's much better. I should have thought of awk.
That's the extent of my awk-foo; see Giorgos' post for fancier stuff :)
And thanks to James for the base script to bother playing with ..
Cheers, Ian