syslog-ng for collecting remote logs
Syslog-ng Configuration
This is a breakdown of what the syslog-ng.conf file looks like and what it does. I never found a guide like this, so I thought I would post one.
Syslog-ng takes system logging to the next level. The "normal" syslog daemon leaves a lot to be desired, especially when collecting logs from remote servers.
Syslogd (the original one) does not separate logs into different files; it just collects the logs and dumps them all together, which is sometimes good, but mostly bad.
So here we go:
As with all good config files, this one begins with comments (starting with #) describing the file's purpose:
(NOTE: the actual conf file is in code tags, so the indentation gets preserved -- bwkaz ;))
Code:
# Begin /etc/syslog-ng/syslog-ng.conf
# Taken from Syslog-ng configuration for Linux from Scratch
# Re-authored by MLeo
# Date: February 2006
# Version 2 (my own version number)
Now, as with most Linux apps, there are global options to define:
Code:
options { sync (0);
          time_reopen (10);
          log_fifo_size (1000);
          long_hostnames (off);
# this next one is important, as it will build folders based on the remote server's name:
          use_dns (yes);
          use_fqdn (no);
# this will automatically build the folder structure for incoming logs:
          create_dirs (yes);
          keep_hostname (yes);
};
There are 4 major sections (besides the above) that make up this file:
Source: basically tells syslog-ng what to listen to.
Destination: defines the files to send the logs to.
Filter: defines how to break the incoming logs into individual reporting services.
Log: brings the source, destination, and filters together to actually begin the logging.
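Tied together, a minimal syslog-ng.conf sketch looks like this (all of the names here are made up for illustration):
Code:
# minimal sketch -- s_example, d_example, and f_example are hypothetical names:
source s_example { internal(); };
destination d_example { file("/var/log/example.log"); };
filter f_example { level(info..emerg); };
log { source(s_example); filter(f_example); destination(d_example); };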
The first source is defined as "src", but from what I have found it can be named anything:
Code:
# the unix-stream seems to be just the regular mechanism
# linux uses to log from services running locally
source src { unix-stream("/dev/log");
internal();
pipe("/proc/kmsg");
};
# here we define 2 additional sources (remotetcp and remoteudp)
# to receive logs from remote machines. By default, syslogd sends logs via UDP
# and syslog-ng uses TCP, so we define both so we can receive logs from
# either type of client. Again, the names are whatever you define
# them as:
source remotetcp { tcp(ip(0.0.0.0) port(514)); };
source remoteudp { udp(); };
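The udp() driver accepts the same ip() and port() options if you want to be explicit; with no arguments it listens on the default syslog UDP port, 514. Spelled out, it would look something like this:
Code:
# equivalent to plain udp() with the defaults written out:
source remoteudp { udp(ip(0.0.0.0) port(514)); };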
Now that we have determined what to listen to, let's target where we want the logs to go.
As you will see, syslog-ng can use macros ($HOST, $DATE, etc.), which makes it incredibly customizable.
One thing to consider, however: if you parse all the logs into their own directories and then have to troubleshoot an issue across many systems, you will have to look at several different logs. If you dump all logs into one giant file, it is easy to sort by time and spot a common thread across all your systems; however, that file can become huge and unwieldy after a while.
The format should be obvious, and you can put the destinations wherever you want.
In our example we only separate the auth and authpriv logs into their own per-host directories under /var/log/; all other logs just dump into their respective paths.
You do not have to create these folders beforehand, as the create_dirs global setting allows this to happen on the fly.
Code:
# The names (authpriv, auth, syslog, etc) are up to you. They do not have to match the
# facility you are logging from (or to), but it just makes sense to name them the
# same.
destination authpriv { file("/var/log/$HOST/authorize.log"); };
destination auth { file("/var/log/$HOST/authorize.log"); };
destination syslog { file("/var/log/syslog.log"); };
destination cron { file("/var/log/cron.log"); };
destination daemon { file("/var/log/daemon.log"); };
destination kernel { file("/var/log/kernel.log"); };
destination lpr { file("/var/log/lpr.log"); };
destination user { file("/var/log/user.log"); };
destination uucp { file("/var/log/uucp.log"); };
destination mail { file("/var/log/mail.log"); };
destination news { file("/var/log/news.log"); };
destination debug { file("/var/log/debug.log"); };
destination messages { file("/var/log/$HOST/messages.log"); };
destination everything { file("/var/log/everything.log"); };
destination console { usertty("root"); };
destination console_all { file("/dev/tty12"); };
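Since syslog-ng expands macros inside file paths, you can also split logs by date; this layout is just a hypothetical example:
Code:
# one messages file per host per day ($YEAR/$MONTH/$DAY are standard macros):
destination bydate { file("/var/log/$HOST/$YEAR-$MONTH-$DAY/messages.log"); };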
The filter section is where we define which daemon/process/facility we want to log from.
The names (f_auth, f_cron, f_kernel) can be whatever you like. The facility, however, must be the actual facility the logs come from.
Code:
# Nothing special here, just like source and destination, name them what you want.
filter f_auth { facility(auth); };
filter f_authpriv { facility(auth, authpriv); };
filter f_syslog { not facility(authpriv, mail); };
filter f_cron { facility(cron); };
filter f_daemon { facility(daemon); };
filter f_kernel { facility(kern); };
filter f_lpr { facility(lpr); };
filter f_mail { facility(mail); };
filter f_news { facility(news); };
filter f_user { facility(user); };
filter f_uucp { facility(uucp); };
filter f_debug { not facility(auth, authpriv, news, mail); };
filter f_messages { level(info..warn) and not facility(auth, authpriv, mail, news); };
filter f_everything { level(debug..emerg) and not facility(auth, authpriv); };
filter f_emergency { level(emerg); };
filter f_info { level(info); };
filter f_notice { level(notice); };
filter f_warn { level(warn); };
filter f_crit { level(crit); };
filter f_err { level(err); };
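Filters can also be combined with and/or/not, or match on the message text itself; for instance (the sshd pattern is just an illustration):
Code:
# only auth/authpriv messages whose text matches "sshd":
filter f_ssh { facility(auth, authpriv) and match("sshd"); };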
Now, onto the actual execution of the syslog-ng process.
In the above sections, we have simply defined many variables, and in this section we will execute on them:
The format is simple:
log { source(yoursourcehere); filter(yourfilterhere); destination(yourdestinationhere); };
Simple, huh?
The log process then parses out and pieces together the different variables and starts logging the information.
As clients connect, syslog-ng will build the directory structure on the fly for you if you have defined $HOST above.
Code:
log { source(src); filter(f_authpriv); destination(authpriv); };
log { source(remotetcp); filter(f_authpriv); destination(authpriv); };
log { source(remotetcp); filter(f_auth); destination(auth); };
log { source(remoteudp); filter(f_authpriv); destination(authpriv); };
log { source(remoteudp); filter(f_auth); destination(auth); };
log { source(remoteudp); filter(f_messages); destination(messages); };
log { source(src); filter(f_syslog); destination(syslog); };
log { source(src); filter(f_cron); destination(cron); };
log { source(src); filter(f_daemon); destination(daemon); };
log { source(src); filter(f_kernel); destination(kernel); };
log { source(src); filter(f_lpr); destination(lpr); };
log { source(src); filter(f_mail); destination(mail); };
log { source(src); filter(f_news); destination(news); };
log { source(src); filter(f_user); destination(user); };
log { source(src); filter(f_uucp); destination(uucp); };
log { source(src); filter(f_debug); destination(debug); };
log { source(src); filter(f_messages); destination(messages); };
log { source(src); filter(f_emergency); destination(console); };
log { source(src); filter(f_everything); destination(everything); };
log { source(src); destination(console_all); };
# END /etc/syslog-ng/syslog-ng.conf
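Note that a single log statement can take several sources and destinations, which saves repeating lines; a sketch using the names defined above:
Code:
# local and remote UDP messages, written to both destinations at once:
log { source(src); source(remoteudp); filter(f_messages);
      destination(messages); destination(everything); };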
A few things to remember:
It's a good idea to shut off syslogd permanently, since we don't want two syslog daemons running and getting confused later on.
Also, make sure you set syslog-ng to start on boot for your particular distro.
Finally, lets look at a simple client config to send its log files to our central log server:
To give any syslog client the ability to send its logs to a remote server, simply add a line with what to log, and instead of listing a local file, list @remotelogserver.
Of course, it is a good idea to also log locally, but syslogd can log the same message to multiple places.
This is just the modified piece of the client config file:
Code:
authpriv.* /var/log/secure
auth.*;authpriv.* @yourremote.log.server
As you see, authpriv logs to /var/log/secure, but it also sends its info to yourremote.log.server.
When the message arrives at yourremote.log.server, it follows that server's own logging paths. In our case:
log { source(remoteudp); filter(f_authpriv); destination(authpriv); };
which will show up in /var/log/$HOST/authorize.log on our log server.
get it?
Disclaimer: I am no expert on this. I just happened to get this to work after a few hours of tweaks. Some things might work if configured differently. For instance:
source remotetcp { tcp(ip() port(514)); };
Might work just as well as:
source remotetcp { tcp(ip(0.0.0.0) port(514)); };
I just haven't gone back to test each line of changes I had to make to get this to work.
I was just at a loss for a break down of the file so I thought I'd share what I had found.
Increasing the power of syslog-ng
SEC:
http://www.estpak.ee/%7Eristo/sec/
But now let's go one step further and send all the incoming logs through a SEC filter, and react to them as they happen, rather than parsing the files after they've been written.
The code for syslog-ng.conf to use SEC is simple.
Like the previous setup, we need a destination (d_sec) and a source, and then to start logging. (The log statement below uses a source named "net"; substitute whichever source names you defined earlier, such as remotetcp or remoteudp.) There is no filter in this instance, since we want EVERYTHING to run through sec.pl.
Code:
# SEC destination for emailing alerts to unix_adm
destination d_sec {
program("/usr/bin/sec.pl -input=\"-\" -conf=/etc/sec.d/sec.ignores -conf=/etc/sec.d/sec.rules");
};
# send all logs through sec to filter on rules and email if needed
log {
source(net);
destination(d_sec);
};
As you can see, syslog-ng calls sec.pl and sec's conf files located in /etc/sec.d/, which we will get to next.
Installing SEC is really just a matter of extracting the Perl script into a path; in our example, /usr/bin/sec.pl, while the configuration files live in /etc/sec.d/.
You can either choose a single large conf file or several small ones, and you can either point to each file manually or just tell SEC to look in a directory for anything it can interpret.
For the example, we will have 2 rules files: one for things we want to react to, and one full of things to ignore. As you get more and more logs coming in, your ignore rules will grow accordingly. The sec.rules file is mostly things you already know you want to hear about, so it stays pretty static, but it can also grow as you learn.
Here is a sample sec.ignores file (which can be named anything). If you point to a whole directory full of rules files, number them so that the ignores are read BEFORE the regular rules. If you don't, SEC will react to something before it knows to ignore it.
(See http://kodu.neti.ee/~risto/sec/ for details on each option)
Note: all the "patterns" are pulled directly from the logs that I know I want to ignore.
Code:
# This is the ignores file for SEC
# This is used via syslog-ng
# Company name
# Author: Me
# July 3rd, 2007
#
#
##########
# This file is used to set ignore rules
# i.e. dba file systems we don't care about, erroneous errors, etc
#
# ignore /u01 filesystem on server1
type=Suppress
ptype=RegExp
pattern=server1 .*/u01
# patches caused this to start showing up, but mail still sends - need to investigate
type=Suppress
ptype=RegExp
pattern=NOQUEUE: SYSERR\(sys\): can not chdir\(/var/spool/clientmqueue/\): Permission denied
# these guys are broken, apparently - all MX boxes are not responding
type=Suppress
ptype=RegExp
pattern=timeout writing message to .*\.timesgroup.com.: Broken pipe
# erroneous errors from sendmail
type=Suppress
ptype=RegExp
pattern=collect: I/O error on connection from
# ignore putbody errors
type=Suppress
ptype=RegExp
pattern=putbody
# dserver2 is dying...
type=Suppress
ptype=SubStr
pattern=dserver2 kernel:
This is a sec.rules file, full of things we want to react to:
Note: again, all the "patterns" are things that appear in my log files I want to know about.
Code:
# This is the rules file for SEC
# This is used via syslog-ng
# Company
# Author: Me
# July 3rd, 2007
#
#
##########
# This section was moved to sec.ignore
########## End of ignores
# Watch for mail forwarding loops
type=Single
ptype=RegExp
pattern=mail forwarding loop
desc=$0
action=pipe 'Mail forwarding loop errors detected $0' /usr/bin/mail -s "Mail Forwarding Loop errors found" "unix-admins@company1.com"
# Watch for sendmail system problems - possible out of memory errors
type=Single
ptype=RegExp
pattern=SYSERR
desc=$0
# now add this error to SYSERR in memory, collect them for 30 seconds, then
# email them all together as 1, instead of many many emails
action=add SYSERR; set SYSERR 30 (report SYSERR /usr/bin/mail -s "SEC warnings" "unix-admins@company1.com")
# Watch for memory errors
# then collect ^CPU errors and email unix-admins
type=Pair
ptype=RegExp
pattern=machine check error
desc=$0
action=pipe '$0' /usr/bin/mail -s "CPU/Memory errors" "unix-admins@company1.com"
ptype2=RegExp
pattern2=^CPU
desc2=$0
action2=add CPU; set CPU 30 (report CPU /usr/bin/mail -s "CPU warnings" "mleo@company1.com")
# Check for nic duplex errors (usually from boots)
type=Single
ptype=RegExp
pattern=(?i)Half.Duplex
desc=$0
action=pipe '$0' /usr/bin/mail -s "SEC warnings: Duplex issues" "unix-admins@company1.com"
As you can see, if you can write a RegExp, you can filter for it and react to it.
Additionally, you can do combinations of things, like, if you see an NFS warning, but then see an NFS OK message within 10 seconds, simply ignore it and chalk it up to network issues.
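That warn-then-recover scenario can be sketched with SEC's PairWithWindow rule type; the patterns and addresses below are hypothetical, so pull the real strings from your own logs:
Code:
# alert only if no "OK" follows the warning within 10 seconds:
type=PairWithWindow
ptype=RegExp
pattern=nfs: server (\S+) not responding
desc=NFS server $1 not responding
action=pipe '$0' /usr/bin/mail -s "NFS warnings" "unix-admins@company1.com"
ptype2=RegExp
pattern2=nfs: server $1 OK
desc2=NFS server $1 recovered
action2=none
window=10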
Or, if you see kernel: errors, collect them together for 30 seconds and send one email, rather than flooding unix-admins with important but very redundant spam.
You can even call external scripts. For instance, if you see ftp login failures, run a script to restart ftpd, or similar.
Again, check out the official docs for specifics on how to get that granular, and that robust.
If you are a system admin responsible for any hosts or hardware, you'd be remiss if you didn't take an hour or two to set up this simple configuration.
A cheap server with free software that can make your life so much easier is within any IT budget.