shell/awk scripting help - parsing directories to gain user/file information for commands

Adam Maloney adamm at sihope.com
Tue May 27 06:10:21 PDT 2003


You could pull the data for the FOR loop out of the Apache config,
something like:

# Assuming your config is something like:
 <VirtualHost 192.168.1.1>
 DocumentRoot /www/user1
 ServerName www.blah.com
 CustomLog .../blah.com.access_log
 ...
 </VirtualHost>

for DOMAIN in `grep ServerName httpd.conf | awk '{ print $2 }'`; do
  # DocumentRoot is one line above ServerName
  OUTPUT_DIR=`grep -B 1 $DOMAIN httpd.conf | grep DocumentRoot | awk '{ print $2 }'`
  OUTPUT_DIR="$OUTPUT_DIR/stats"

  # CustomLog is one line after ServerName
  ACCESS_LOG=`grep -A 1 $DOMAIN httpd.conf | grep CustomLog | awk '{ print $2 }'`

  webalizer -o "$OUTPUT_DIR" "$ACCESS_LOG"
done

Obviously needs some tweaking, and there are TONS of other ways of doing
this.  I have perl code somewhere that actually does this right - it
treats everything between the VirtualHost lines as a record, and can pull
all of the per-site config from it.
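
Untested, but you can get the same record-style treatment in plain awk -
collect the directives inside each <VirtualHost> block and emit them when
the block closes:

# Prints one "servername docroot logfile" line per virtual host.
awk '/<VirtualHost/       { name = root = logfile = "" }
     $1 == "ServerName"   { name = $2 }
     $1 == "DocumentRoot" { root = $2 }
     $1 == "CustomLog"    { logfile = $2 }
     /<\/VirtualHost>/    { print name, root, logfile }' httpd.conf

Pipe that into a while-read loop and you get all three values for each
site in one pass over the file.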

For most things like this in Awk, you need to know 3 things:

awk '{ print $3 }'    will print field 3 (whitespace-separated)
awk '{ printf("Hello %s\n", $3) }'  you can use C-style printf for formatting
echo "1X2X3X4" | awk -F X '{ print $2 }'     field separator is "X"

You can get more fancy:

Input is:

"print this line"
"not this line"

# Only print the line with the word "print" in it:
awk '/print/ { print $0 }'

$0 represents the entire line; the /print/ pattern makes awk search each
line for "print", and the action prints every line that matches.  (In fact
{ print $0 } is the default action, so awk '/print/' alone does the same
thing.)
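
Putting those pieces together against the layout you posted (untested, and
assuming the ".logs" spelling from your ls output):

for LOG in /www/*/.logs/*.access_log; do
  # /www/OWNER/.logs/domain.access_log - the username is field 3
  OWNER=`echo $LOG | awk -F/ '{ print $3 }'`
  # basename strips the directory part and the .access_log suffix
  DOMAIN=`basename $LOG .access_log`
  webalizer -n "$DOMAIN" -o "/www/$OWNER/stats" "$LOG"
done

Drop that in a script, point cron at it nightly, and there's no per-site
file to maintain.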

Get much more complicated than this and just "use Perl;" :)

Good luck.

> # ls /www/*/logs/*.access_log
> 
> generates...
> 
> /www/user1/.logs/user1domain1.com.access_log
> /www/user1/.logs/user1domain2.com.access_log
> /www/user2/.logs/user2domain1.com.access_log
> /www/user2/.logs/user2domain2.com.access_log
> /www/user3/.logs/user3domain1.com.access_log
> /www/user3/.logs/user3domain2.com.access_log
> 
> what I am trying to script is something that does;
> 
> <pseudo code>
> for i in /www/*/logs/*.access_log;
> 	ereg (user)(domain_name) from $i;
> 	do webalizer -n $domain_name -o /www/$user/stats/;
> done
> </pseudo code>
> 
> ...as this would eliminate human error in maintaining a file which contains the
> appropriate lines to handle this via cron or something every night.
> 
> This is one example, there are a slew of other similar applications that I would
> use this for. I have played with awk as well, but just can't wrap my head around this
> (not a shell scripting person by trade).
> 
> Any guidance or insight would be appreciated.
> 
> Dave
> 
> 

Adam Maloney
Systems Administrator
Sihope Communications


