

This blog post is about two things: one, how you can monitor who is bringing your database up and down (there is a twist at the end!), and two, how you can very conveniently do that with aggregated logs in a browser with a tool called 'Kibana', which is the K in ELK. The ELK stack gets its name from Elasticsearch, Logstash and Kibana.

– Elasticsearch is an open source search engine based on Apache Lucene, which provides a distributed, multitenant-capable full-text search engine with an HTTP web interface and schema-free JSON documents.
– Logstash is a fully configurable open source data processing pipeline that can receive data from multiple sources simultaneously, transform it and output it based on the output plugin, which is the elasticsearch plugin in this blogpost but could be anything from STDOUT, a unix pipe, a file, a file in CSV, HTTP, email, IRC, Jira, graphite, kafka, mongodb, nagios, S3, SolR, … really whatever you want.
– Kibana is an open source data visualisation plugin for Elasticsearch. When looking at Kibana, it looks quite much like the splunk interface.

Installing the ELK stack in a basic way is easy. In this blogpost I will install everything on the same host, everything being the ELK stack and an Oracle database installation. In reality you should have a log gatherer on every host (called 'filebeat') and a dedicated host which runs the rest of the stack (logstash, elasticsearch and kibana). The below install actions were executed on a Linux 64 bit host running Oracle Linux 6.8.
In order to make the installation really easy, I use the yum repository of the Elastic company. This is how to set that up (all done as root; '#' indicates a root prompt):
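A sketch of such a repository setup, following the repository definition Elastic documents for its 5.x packages (the 5.x version is my assumption, based on the logstash 5 style paths used further down; adjust it to the version you install):

# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
# cat > /etc/yum.repos.d/elastic.repo <<'EOF'
[elastic-5.x]
name=Elastic repository for 5.x packages
baseurl=https://artifacts.elastic.co/packages/5.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
EOF
# yum install -y elasticsearch logstash kibana filebeat

Mind that elasticsearch and logstash need a Java 8 runtime to be present on the host.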
After installing, logstash needs one or more configuration files describing its data pipeline. You can make logstash test its configuration before starting it:

# sudo -u logstash /usr/share/logstash/bin/logstash --path.settings /etc/logstash -t
Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
Configuration OK

If you see the 'Configuration OK' message, it means logstash could interpret the configuration files. It does not mean it will all work as desired; there could be runtime issues. Logstash uses upstart (meaning a startup script in /etc/init) instead of the legacy startup mechanism using the chkconfig and service utilities.
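As a minimal sketch of what such a pipeline configuration could look like (the filename, the beats port 5044 and the local elasticsearch address are my assumptions, not taken from the original setup), a file in /etc/logstash/conf.d that receives events from filebeat and forwards them to elasticsearch:

# cat /etc/logstash/conf.d/beats.conf
input {
  # listen for events shipped by filebeat (the beats protocol)
  beats {
    port => 5044
  }
}
output {
  # hand every event to the local elasticsearch instance
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}

Because of upstart, starting logstash is then done with initctl (# initctl start logstash) rather than with the service utility.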

The last part of the data pipeline is 'filebeat'. There are, and could be, multiple input products; in this blogpost I use 'filebeat', which keeps track of logfiles. We are going to look into linux and oracle auditing, so we need to keep track of a couple of files:

– /var/log/secure: this is the default linux logfile which contains all kinds of authentication messages, as defined in /etc/rsyslog.conf (authpriv.* /var/log/secure).
– /u01/app/oracle/admin/*/adump/*.aud: this is the default place where the oracle database stores its audit files. These audit files provide what is called 'mandatory auditing', which includes at least connections to the instance with administrator privilege, database startup and database shutdown. The default is a normal text based logfile; it could be set to XML.
– /var/log/audit/audit.log: this is the logfile of the linux kernel based audit facility. This is actually a lesser known hidden gem in Linux, and provides audit information from the Linux kernel; a quick way to peek at it is shown below.
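As an aside (not part of the setup itself): assuming auditd is running, that kernel audit trail can also be queried directly with ausearch from the audit package, for example to list today's login events:

# ausearch --start today -m USER_LOGIN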

These files need to be configured in filebeat, in the file /etc/filebeat/filebeat.yml. As the extension of the file indicates, this is a file organised in YAML syntax. The best way to configure the file is to move the original file out of the way, and create your own file with your desired configuration:

# mv /etc/filebeat/filebeat.yml /etc/filebeat/

First of all we add the output, which is logstash in our case. Please mind the default configuration of filebeat is direct output to elasticsearch, which means we don't have an option to enrich the data! Please mind the two spaces in front of 'hosts', which are mandatory for a YAML document! Next up we add the files to monitor in the configuration file.
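Putting both pieces together, a minimal filebeat.yml covering the files listed above could look as follows (a sketch: the prospector syntax is the filebeat 5.x style, and localhost with the beats port 5044 for the logstash output are my assumptions, matching the logstash sketch earlier):

filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/secure
    - /var/log/audit/audit.log
    - /u01/app/oracle/admin/*/adump/*.aud

output.logstash:
  # note: exactly two spaces of indentation before 'hosts'
  hosts: ["localhost:5044"]

Unlike logstash, filebeat came with a classic init script on this platform, so it should be startable with # service filebeat start.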
