How to configure your log collector on RedHat Enterprise Linux 8
Written by Alan Butcher

Introduction

What is a collector?

A collector is a device deployed within your environment that receives logs from internal nodes and forwards them securely to the Defense.com SIEM platform. The main software used by the collector is Logstash, a Java tool built by Elastic (the developers of Elasticsearch) which can collect logs from a variety of inputs, manipulate the logs (for example, by adding or removing fields) and then send them on to another location.

Why a collector is needed

Many devices, especially network devices, only support log output via syslog. By default, syslog uses UDP, which does not support encryption. Sending the logs to a collector within your environment allows them to be forwarded to the Defense.com SIEM platform securely.

Furthermore, a client SSL certificate is required in order to forward logs to the Defense.com SIEM platform. Having one device that connects to our platform avoids a situation where, when this certificate expires, you have to upload the new certificate to every device in your network.

What is an agent?

An agent is a piece of software which collects logs on your individual nodes and sends them to the collector.

For Windows nodes, we provide Winlogbeat, which collects Windows Event Logs, and Auditbeat, which does File Integrity and System-Level monitoring.

Auditbeat is also used on Linux nodes, but instead of Winlogbeat, Filebeat is used to collect logs from plain-text files.

The agents we provide are developed by Elastic and are designed to be very lightweight in terms of memory and CPU footprint. If you ever notice that one of the agents is causing excessive resource usage on one of your systems, please raise a ticket via the Defense.com portal.

Prerequisites

Set up a VM or dedicated node for the collector

The collector needs to be hosted on its own dedicated VM or physical node. Logstash is resource-intensive and will affect the performance of other applications if hosted on a shared node. Here are the system requirements for this node:

Resource      Minimum Requirement
OS            RedHat Enterprise Linux 8
CPU Cores     2
Memory        4GB
Disk Space    20GB

Once this node is set up, please provide Defense.com with its public IP address. If you’re unsure how to find this, you can run the below command on the collector:

curl ifconfig.co

Firewall Rules

Firewall rules are required between all logging agents and the collector, and between the collector and Defense.com infrastructure. Please see the requirements below:

Source                    Destination      Protocol  Port          Notes
Windows and Linux Agents  Collector        TCP       5044          Allows agents to send logs to the collector using TCP.
Syslog Devices            Collector        UDP       514 and 5514  Allows syslog devices to send logs using UDP.
Collector                 31.28.93.145/32  TCP       31090-31100   Allows the collector to send logs to the Defense.com SIEM platform.

Installation

  1. Defense.com will provide an install pack containing the scripts and software required to configure your collector. This install pack contains a folder called “collector”, which includes the install scripts (collector-install.sh and hardening.sh), your SSL certificate files (keystore.jks and truststore.jks) and the RPM packages for Logstash and the beats agents.


Please copy the entire collector folder to your RedHat server using a program like WinSCP.

  2. Next, open a terminal session to the RedHat server and log in as root.

  3. There are two scripts in the collector folder. These need to be marked as executable before they can be run. To do this, first navigate to the collector folder in your terminal. For example, if you’ve copied it to /home/admin/collector, run the below command:

    cd /home/admin/collector

  4. Next, run the below commands to mark the files as executable:

    chmod 700 collector-install.sh
    chmod 700 hardening.sh

  5. To run the collector installation script, enter the following command:

    ./collector-install.sh

This script will install and configure Logstash, Filebeat and Auditbeat on the collector automatically.

Note: By default, Logstash runs as an unprivileged user, so it doesn’t have sufficient permissions to open a listener on port 514 (ports below 1024 require root privileges). Port 5514 is used by default, but the collector-install.sh script will add a custom iptables rule to forward any traffic sent to port 514 to port 5514.

Once this has finished, you can verify that the services are running by entering the below commands:

service logstash status
service filebeat status
service auditbeat status


If all is well, each service will report a status of “active (running)”.


If any of the services aren’t running, please contact Defense.com for assistance.

(Optional) We also include a hardening script in the install pack. This secures file permissions, disables unneeded and insecure services, and configures SSH and auditd securely. Please bear in mind that this script disables root SSH login, so make sure you have another user account on the collector that you can log in with. To run this script, enter the following command:

./hardening.sh

Collector Configuration and Management

The Logstash configuration is located in the directory /etc/logstash. This directory contains the main Logstash config file, logstash.yml; a certs subdirectory containing the client certificates; and a conf.d subdirectory containing the pipeline configuration.

/etc/logstash/logstash.yml

The logstash.yml file contains the general configuration settings for Logstash. Most settings are commented out, meaning their default values are used, but you can uncomment lines to set custom values. In most cases the defaults are fine, but it’s useful to know which settings you may want to change.

  • queue.type – By default, logstash queues logs in memory before sending them to the Defense.com SIEM platform. This is efficient, but if the server were to suddenly crash, the queued logs would be dropped. If you change the queue.type to “persisted”, a persistent queue will be created in /var/lib/logstash/queue.

  • queue.max_bytes – If you decide to implement a persisted queue, you’ll also want to change this setting to define the maximum size you’d like the queue to be before it starts to delete old logs. By default, this is set to 1gb.
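For example, to enable a persisted queue capped at 4GB (the size here is an illustrative choice, not a recommendation), you would set these two lines in /etc/logstash/logstash.yml and then restart the logstash service:

queue.type: persisted
queue.max_bytes: 4gb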

/etc/logstash/certs/

This is the directory that the keystore.jks and truststore.jks files from the install pack are copied to by the collector-install.sh script.

The keystore.jks file is your collector’s client SSL certificate, used to authenticate with the Defense.com SIEM platform. This certificate is valid for 1 year. We will contact you when it’s time to replace this file.

The truststore.jks file contains our CA, so logstash can verify the certificate of our edge infrastructure. This expires in November 2024. We will contact you when it’s time to replace this file.

Both .jks files are password protected, with the passwords stored in /etc/logstash/conf.d/90-output.conf. To view the contents of these certificates, use the below commands:

For the Keystore:

KEYSTOREPASS=$(grep ssl_keystore_password /etc/logstash/conf.d/90-output.conf | awk -F'"' '{print $2}')

keytool -list -v -keystore /etc/logstash/certs/keystore.jks -storepass "$KEYSTOREPASS"

For the Truststore:

TRUSTSTOREPASS=$(grep ssl_truststore_password /etc/logstash/conf.d/90-output.conf | awk -F'"' '{print $2}')

keytool -list -v -keystore /etc/logstash/certs/truststore.jks -storepass "$TRUSTSTOREPASS"

These commands produce a lot of output, but towards the top you’ll see a line similar to the below, which indicates the issue date and expiry date of the certificate.

Valid from: Mon Mar 06 10:34:34 GMT 2023 until: Tue Mar 05 10:35:04 GMT 2024

/etc/logstash/conf.d/

This directory contains the “pipeline” configuration for logstash. A pipeline consists of some inputs, filters and outputs. These are located in the files 10-input.conf, 20-filter.conf and 90-output.conf respectively. The file names are prefixed with a number to make sure Logstash interprets them in the correct order.

Syntax

The config files in this directory use Logstash’s own configuration format. If changes are made that don’t follow this format, Logstash will fail to start.

You start by specifying whether the config you’re writing is for an input, filter or output. The contents of this are contained in curly brackets. For example:

input {

}

Next, you add the config for the specific input, filter or output within these curly brackets. For example, logstash has a beats input to listen for logs from filebeat, auditbeat or winlogbeat:

input {
  beats {
    port => 5044
  }
}

Notice how the “beats {“ line is indented with two spaces and the “port => 5044” line with four. For readability, each “subconfiguration” should be indented by two spaces from its parent configuration.

Also, each section needs to be enclosed in curly brackets, with the closing bracket at the same indentation as the line containing the opening bracket.

10-input.conf

The 10-input.conf file contains the default inputs we provide: a beats input listening on port 5044 (SSL is not enabled by default; this is covered later in this documentation) and a udp input for syslog connections.
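A minimal sketch of what these default inputs look like (the commented-out SSL settings and certificate paths are illustrative; the exact contents of your install pack’s file may differ):

input {
  beats {
    port => 5044
    # Uncomment and set paths to enable SSL (see “Internal SSL” below):
    #ssl => true
    #ssl_certificate => "/etc/logstash/certs/collector.pem"
    #ssl_key => "/etc/logstash/certs/collector.pkcs8.key"
  }
  udp {
    port => 5514
  }
}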

A full list of Logstash input plugins is available in the official Elastic documentation if you would like to add your own custom inputs.


20-filter.conf

You shouldn’t need to make any changes to this file, but it may be useful to understand what it does.

First, a “pipeline.collector.processed_at” field is added to each log. This is for internal use, so we can check for any delays between the logs being processed by the collector and eventually being stored in our platform.

Next, a “pipeline.collector.id” field is added. If you have multiple collectors, this identifies which collector each log has been processed by.

Finally, a “type” field is added to each log, so each log can be identified and processed accordingly once ingested into the Defense.com SIEM platform.

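As an illustration only (your 20-filter.conf will differ), filters along these lines would add such fields; the ruby timestamp code and the literal id and type values below are assumptions, not your real configuration:

filter {
  # Record when the collector processed this event
  ruby {
    code => "event.set('[pipeline][collector][processed_at]', Time.now.utc.iso8601)"
  }
  # Identify the collector and tag the log type
  mutate {
    add_field => {
      "[pipeline][collector][id]" => "collector-01"
      "type" => "syslog"
    }
  }
}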

90-output.conf

Finally, we have the output. The entry point to our SIEM platform is our Kafka cluster, so the kafka output plugin is used. You shouldn’t need to make any changes to this file, but in specific circumstances we may ask you to. Full instructions will be supplied in any such case.

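As a rough sketch, the kafka output looks along these lines; the broker address, topic name and passwords below are placeholders rather than real values, and the install pack sets the real ones for you:

output {
  kafka {
    bootstrap_servers => "31.28.93.145:31090"
    topic_id => "example-topic"
    security_protocol => "SSL"
    ssl_keystore_location => "/etc/logstash/certs/keystore.jks"
    ssl_keystore_password => "example-password"
    ssl_truststore_location => "/etc/logstash/certs/truststore.jks"
    ssl_truststore_password => "example-password"
  }
}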

Upgrading the collector

Our install pack provides RPM files which are used to install logstash and the beats agents. When upgrades are required, we will send you an upgrade pack containing a script and the new RPM files.

Syslog Port Forwarding

As previously mentioned, our install script sets up a persistent iptables rule to forward traffic from UDP port 514 to port 5514. To check whether this is still in place at any time, run the below command:

sudo iptables -t nat -L

If the rule is in place, you’ll see the below at the top of the output of this command:

Chain PREROUTING (policy ACCEPT)
target prot opt source destination
REDIRECT udp -- anywhere anywhere udp dpt:syslog redir ports 5514

If, for any reason, this is no longer configured, run the below commands to re-add the rule and make it persistent. Note that the redirection in the second command must itself run as root, hence the sh -c wrapper:

sudo iptables -t nat -A PREROUTING -p udp --dport 514 -j REDIRECT --to-port 5514
sudo sh -c 'iptables-save > /etc/sysconfig/iptables'

Internal SSL

Unfortunately, as syslog is a UDP protocol, encryption isn’t supported. You can, however, encrypt the connection between the beats agents and the collector if you wish.

If you have your own Certificate Authority, generate a certificate with a Subject Alternative Name (SAN) matching the IP address of the collector. The certificate needs to be in PEM format, with the private key in PKCS#8 format. You can use the following command to convert a private key to PKCS#8:

openssl pkcs8 -in /path/to/private/key -topk8 -nocrypt -out /path/to/newly/generated/key

Upload both of these to the /etc/logstash/certs/ directory on the collector, making sure they’re readable by the “logstash” user, for example by running the command below:

chown -R logstash:logstash /etc/logstash/certs

Note: If you don’t have your own Certificate Authority, we can provide a script to create one and generate a certificate automatically.

Next, configure the entries in /etc/logstash/conf.d/10-input.conf, removing the hash at the beginning of the SSL-related lines of this file and setting the certificate and key paths correctly.
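With hypothetical file names for the certificate and key you uploaded, the uncommented beats input would look along these lines:

input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/logstash/certs/collector.pem"
    ssl_key => "/etc/logstash/certs/collector.pkcs8.key"
  }
}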

Once this is done, restart the logstash service and SSL will now be enabled.

Once SSL is enabled on logstash itself, the beats agents will also need to be configured to use SSL. Each beats agent has a config file. Here are the locations:

  • Filebeat - /etc/filebeat/filebeat.yml

  • Auditbeat (Linux) - /etc/auditbeat/auditbeat.yml

  • Winlogbeat - C:\Program Files\winlogbeat\winlogbeat.yml

  • Auditbeat (Windows) - C:\Program Files\auditbeat\auditbeat.yml

In each of the files you’ll see the below section:

# Optional SSL. By default is off.

# List of root certificates for HTTPS server verifications

#ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

Upload the CA certificate that was used to sign the collector certificate to each node with a beats agent installed, then remove the hash from before the “ssl.certificate_authorities” line of the beats config files and update the path to match where you uploaded the CA.
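For example, with an assumed upload path on a Linux node, the uncommented line would read:

ssl.certificate_authorities: ["/etc/pki/tls/certs/defense-ca.pem"]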

Restart the beats services and they will now be using SSL when connecting to logstash.

Load Balancing

Multiple collectors can be installed for resiliency or for multi-environment systems. To configure agents to send logs to multiple hosts, edit the host definition in the beats config files. The IP addresses below are examples, and the port should match your collector’s beats input (5044 by default):

hosts: ["192.168.0.1:5044", "192.168.0.2:5044"]

A host from the list will be selected at start-up. If that host fails, one of the others will be used.

To load balance across multiple hosts at the same time, use the following configuration:

hosts: ["192.168.0.1:5044", "192.168.0.2:5044"]

loadbalance: true

Unfortunately, at this time it is not possible to select a preferred collector.
