On setting up a new home network recently, I went looking for log file
processors and was surprised when I couldn't find one that was
sufficiently flexible to generate a number of different report types
(e.g. event counters, most frequent events, exception reports,
etc.). So I wrote proclog. The version here represents the second
implementation of the same basic idea.
System Requirements:
Python 2.4. It may work with earlier versions of Python, but hasn't
been tested with them.
Installation:
Put the extracted tarball wherever you're comfortable with it. Symlink
proclog.py into a directory on your PATH as "proclog".
Copy proclog.conf to /usr/local/etc/proclog.conf, and edit as
appropriate. Invoke "proclog --doc" for information on what's in the
proclog.conf file. At a minimum, you'll need to change the
command/file entries to reference your log files for the previous day,
and the network addresses to reference your network. You probably want
to change all the "except" values to be None or 0 initially. Finally,
change the "proclog" entry to include the reports you want to run.
At this time there's a good chance that there are no processors for
one or more systems you are running. Sorry - you'll have to write
those. Note that the running time is very sensitive to the regular
expressions you write. The more explicit you can make them, the
quicker they will reject a given line, and the faster proclog will
run. If you write your own rule sets, sending them back to me will
get them considered for inclusion in the next release.
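As an illustration of this point (the patterns and log lines below are
hypothetical, not taken from proclog's own rule sets): a pattern that is
anchored with explicit literal text in fixed positions fails on a
non-matching line after examining only a few characters, while a pattern
led by ".*" forces the regex engine to scan and backtrack across the
whole line before giving up.

```python
# Hypothetical patterns and log lines -- not from proclog's rule sets --
# illustrating why explicit regexes reject non-matching lines faster.
import re
import timeit

# A line the mail rules should reject quickly:
kernel_line = "Jan 12 03:14:07 gw kernel: eth0: link up"
# A line they should accept:
mail_line = "Jan 12 03:14:08 gw sendmail[1234]: q0BKxyz: stat=Sent"

# Vague: the leading ".*" makes the engine try "sendmail" at every
# position in the line before it can fail.
vague = re.compile(r".*sendmail.*: (\S+): stat=(.*)")

# Explicit: literal structure from the start of the line, so a
# non-matching line fails as soon as the structure diverges.
explicit = re.compile(r"\w{3} +\d+ [\d:]+ \S+ sendmail\[\d+\]: (\S+): stat=(.*)")

# Both classify the two lines identically...
assert vague.match(kernel_line) is None and explicit.match(kernel_line) is None
assert vague.match(mail_line) and explicit.match(mail_line)

# ...but the explicit pattern rejects the non-matching line with far
# less work, which adds up across thousands of log lines.
for name, pat in (("vague", vague), ("explicit", explicit)):
    t = timeit.timeit(lambda: pat.match(kernel_line), number=100000)
    print("%-8s %.4fs per 100k rejected lines" % (name, t))
```

The same idea applies to any rule you add: prefer literal prefixes and
fixed field layouts over catch-all wildcards wherever the log format
allows it.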
One last note: I log most things multiple times: all mail events go
into a log for mail, all events from a specific machine go into a log
for that machine, all network events go into a network log file,
everything goes into a central log file, and so on. This makes it easy
for me to check on exactly what I want to check on (e.g. - if I want
to watch mail flowing, I can watch the mail log; if I want to see how
a specific machine is doing, I can check its log). You'll see that in
the example proclog.conf: a section may specify its own command file,
while the same data also reaches processors running on the broader log
files that aggregate it with data from other sources. Running a
processor set on the narrowest set of data makes proclog faster;
running a large processor set on a wide set of data provides more
flexibility in ordering the output.