

Elastic Stack Installation#

Elasticsearch is the distributed search and analytics engine at the heart of the Elastic Stack. The ElastiFlow Unified Flow Collector can be configured to store the collected, processed, and enriched flow records in Elasticsearch, which is where the indexing, search, and analysis happens.

Elasticsearch provides real-time search and analytics for all types of data. It efficiently indexes and stores records in a way that supports fast queries. As your data and query volume grows, the distributed nature of Elasticsearch enables your deployment to grow seamlessly along with it.

Kibana enables you to interactively explore, visualize, and share insights into your network flow data, as well as manage and monitor Elasticsearch.

This document describes in detail the installation of the ElastiFlow Unified Flow Collector and the Elastic Stack (Elasticsearch and Kibana) on a single server running Ubuntu Linux 20.04 LTS.

Sizing#

Elasticsearch can be deployed as a single-node server or as a multi-node cluster. The latter provides horizontal scaling to handle very high ingest rates and longer retention periods. For more information on properly sizing an Elasticsearch cluster, see Sizing.

Environment#

| Resource | Information |
| --- | --- |
| Hostname | ubuntu2004 |
| IP Address | 192.168.56.101 |
| CPU Cores | 4 |
| Memory | 32 GB |
| Storage | 1 TB |
| OS | Ubuntu Server 20.04 LTS |
| Elasticsearch | 7.14.1 |
| Kibana | 7.14.1 |
| ES Features | TLS, RBAC |
| ElastiFlow UFC | 5.1.7 |
important

The hostname and IP address above are for examples only. You must replace these values with those of your own server when executing any commands or editing any files.

Tune the Linux Kernel#

1. Add the parameters required by Elasticsearch#

Elasticsearch uses an mmapfs directory by default to store its indices. The default Linux limit on mmap counts is usually too low, which can result in out-of-memory exceptions. This limit should be raised to 262144.

Run the following command to add the file /etc/sysctl.d/70-elasticsearch.conf with the attribute vm.max_map_count=262144:

echo "vm.max_map_count=262144" | sudo tee /etc/sysctl.d/70-elasticsearch.conf > /dev/null

2. Tune network parameters for better throughput#

The default Linux network parameters are not optimal for high-throughput applications, in particular for a high volume of ingress UDP packets. This can result in dropped packets and lost data. Linux network performance for ElastiFlow can be optimized by changing the parameters below.

Run the following command to add the file /etc/sysctl.d/60-net.conf with the recommended changes.

echo -e "net.core.netdev_max_backlog=4096\nnet.core.rmem_default=262144\nnet.core.rmem_max=67108864\nnet.ipv4.udp_rmem_min=131072\nnet.ipv4.udp_mem=2097152 4194304 8388608" | sudo tee /etc/sysctl.d/60-net.conf > /dev/null

3. Apply Changes#

For changes to the above parameters to take effect, the system can be restarted. Alternatively, the following commands can be run to apply the changes without a reboot:

sudo sysctl -w vm.max_map_count=262144 && \
sudo sysctl -w net.core.netdev_max_backlog=4096 && \
sudo sysctl -w net.core.rmem_default=262144 && \
sudo sysctl -w net.core.rmem_max=67108864 && \
sudo sysctl -w net.ipv4.udp_rmem_min=131072 && \
sudo sysctl -w net.ipv4.udp_mem='2097152 4194304 8388608'
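Because both files were created under /etc/sysctl.d, another option is to reload all sysctl configuration files in one step and then confirm a value took effect:

sudo sysctl --system
sysctl vm.max_map_count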

Disable the Firewall#

The easiest way to get started is to disable the Linux firewall. Alternatively, the firewall can be configured to allow access to the required ports. Details of configuring the Linux firewall are beyond the scope of this document. However, if the firewall remains enabled, you will need to allow access to the following ports:

| Application | Port |
| --- | --- |
| Elasticsearch | TCP/9200 |
| Kibana | TCP/5601 |
| Unified Flow Collector | UDP/9995, or other configured port(s) |

To stop and disable the firewall, run the following command.

sudo systemctl stop ufw.service && sudo systemctl disable ufw.service
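If you prefer to keep ufw enabled instead, a minimal sketch of the equivalent rules (assuming the default ports listed above, including the collector's default UDP port 9995) would be:

sudo ufw allow 9200/tcp   # Elasticsearch REST API
sudo ufw allow 5601/tcp   # Kibana web UI
sudo ufw allow 9995/udp   # Unified Flow Collector (default flow port)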

Install Prerequisite Packages#

Run the following commands to install required packages.

sudo apt install -y apt-transport-https
sudo apt install -y unzip

Install Elasticsearch#

1. Add Elastic PGP Key#

Elastic signs all of their packages with the Elasticsearch Signing Key (PGP key D88E42B4, available from https://pgp.mit.edu) with fingerprint: 4609 5ACC 8548 582C 1A26 99A9 D27D 666C D88E 42B4

Download and install the public signing key.

wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

2. Add the Elastic Repository#

Add the Elastic repository definition to /etc/apt/sources.list.d/elastic-7.x.list by running the following command.

echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee /etc/apt/sources.list.d/elastic-7.x.list > /dev/null

3. Install Elasticsearch using apt#

Run the following commands to install the Elasticsearch package.

sudo apt update && sudo apt install -y elasticsearch
note

If two entries exist for the same Elasticsearch repository, you will see an error during apt update similar to Duplicate sources.list entry https://artifacts.elastic.co/packages/7.x/apt/ ...

Examine /etc/apt/sources.list.d/elastic-7.x.list for the duplicate entry, or locate the duplicate entry amongst the files in /etc/apt/sources.list.d/ and the /etc/apt/sources.list file.
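One quick way to find any duplicate is to search every apt source file for the Elastic repository URL:

grep -r "artifacts.elastic.co" /etc/apt/sources.list /etc/apt/sources.list.d/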

4. Configure JVM Heap Size#

If a JVM is started with unequal initial and max heap sizes, it may pause as the JVM heap is resized during system usage. For this reason it’s best to start the JVM with the initial and maximum heap sizes set to equal values.

Add the file heap.options to /etc/elasticsearch/jvm.options.d and set -Xms and -Xmx to about one third of the system memory, but do not exceed 31g. For this example we will use 12GB of the available 32GB of memory for JVM heap.

echo -e "-Xms12g\n-Xmx12g" | sudo tee /etc/elasticsearch/jvm.options.d/heap.options > /dev/null

5. Increase System Limits#

Increased system limits should be specified in a systemd attributes file for the elasticsearch service.

sudo mkdir /etc/systemd/system/elasticsearch.service.d && \
echo -e "[Service]\nLimitNOFILE=131072\nLimitNPROC=8192\nLimitMEMLOCK=infinity\nLimitFSIZE=infinity\nLimitAS=infinity" | \
sudo tee /etc/systemd/system/elasticsearch.service.d/elasticsearch.conf > /dev/null
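These limits only take effect once systemd reloads its configuration and the service is started, which is covered later in this guide. At that point you can verify the values systemd applied to the unit:

sudo systemctl show elasticsearch | grep -E 'LimitNOFILE|LimitNPROC|LimitMEMLOCK'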

6. Generate CA and Certificates#

There are numerous ways to generate certificates that can be used to secure communications using TLS. To simplify the process Elastic provides the elasticsearch-certutil tool. For more details about this tool, refer to Elastic's documentation.

It is first necessary to generate a certificate authority (CA) by running the following command.

sudo /usr/share/elasticsearch/bin/elasticsearch-certutil ca --pem

When you see Please enter the desired output file [elastic-stack-ca.zip]: press enter to accept the default.

The resulting file will be placed in /usr/share/elasticsearch. To unzip and move the CA key and cert to /etc/elasticsearch/certs run the following commands.

sudo mkdir /etc/elasticsearch/certs && \
sudo unzip /usr/share/elasticsearch/elastic-stack-ca.zip -d /etc/elasticsearch/certs

To generate certificates for the Elasticsearch node, create a file named /usr/share/elasticsearch/instances.yml similar to the following. Replace the values with those appropriate for your environment.

instances:
  - name: "node1"
    ip:
      - "192.0.2.1"
    dns:
      - "node1.mydomain.com"

For example, in the system used for this guide, the name of the server is ubuntu2004, the IP address is 192.168.56.101, and there is no name configured in DNS. The instances file would contain:

instances:
  - name: "ubuntu2004"
    ip:
      - "192.168.56.101"

Use elasticsearch-certutil to generate the certificates and keys from the CA and the instances file.

sudo /usr/share/elasticsearch/bin/elasticsearch-certutil cert --silent --in instances.yml --out certs.zip --pem --ca-cert /etc/elasticsearch/certs/ca/ca.crt --ca-key /etc/elasticsearch/certs/ca/ca.key

The resulting file will be placed in /usr/share/elasticsearch. To unzip and move the node keys and certs to /etc/elasticsearch/certs run the following commands.

sudo unzip /usr/share/elasticsearch/certs.zip -d /etc/elasticsearch/certs
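As a quick sanity check, the certs directory should now contain the CA certificate and key, plus a directory named after the name value in your instances.yml holding the node certificate and key:

sudo find /etc/elasticsearch/certs -type f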

7. Edit elasticsearch.yml#

Edit the Elasticsearch configuration file, /etc/elasticsearch/elasticsearch.yml, replacing the contents of the file with the following configuration. Edit as necessary for your environment.

cluster.name: elastiflow
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
bootstrap.memory_lock: true
network.host: 0.0.0.0
http.port: 9200
discovery.type: 'single-node'
indices.query.bool.max_clause_count: 8192
search.max_buckets: 250000
action.destructive_requires_name: 'true'
xpack.security.http.ssl.enabled: 'true'
xpack.security.http.ssl.verification_mode: 'none'
xpack.security.http.ssl.certificate_authorities: /etc/elasticsearch/certs/ca/ca.crt
xpack.security.http.ssl.key: /etc/elasticsearch/certs/ubuntu2004/ubuntu2004.key
xpack.security.http.ssl.certificate: /etc/elasticsearch/certs/ubuntu2004/ubuntu2004.crt
xpack.monitoring.enabled: 'true'
xpack.monitoring.collection.enabled: 'true'
xpack.monitoring.collection.interval: 30s
xpack.security.enabled: 'true'
xpack.security.audit.enabled: 'false'
note

If you want Elasticsearch data to be stored on a different mount point, you must first create the directory and assign ownership to the elasticsearch user. For example, to store data on /mnt/data0, run sudo mkdir /mnt/data0/elasticsearch && sudo chown -R elasticsearch:elasticsearch /mnt/data0/elasticsearch. Then edit the path.data option in elasticsearch.yml to specify this path.

8. Enable and Start Elasticsearch#

Execute the following commands to start Elasticsearch and enable it to run automatically when the server boots:

sudo systemctl daemon-reload && \
sudo systemctl enable elasticsearch && \
sudo systemctl start elasticsearch

Confirm Elasticsearch started successfully by executing:

sudo systemctl status elasticsearch
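If the service fails to start, the systemd journal and the Elasticsearch log (named after the cluster, elastiflow.log in this example) usually indicate the cause:

sudo journalctl -u elasticsearch --no-pager | tail -25
sudo tail -25 /var/log/elasticsearch/elastiflow.log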

9. Set Passwords for Elasticsearch Built-in Accounts#

Execute the following command to set up passwords for the various built-in accounts:

sudo /usr/share/elasticsearch/bin/elasticsearch-setup-passwords interactive

The following will be displayed:

Initiating the setup of passwords for reserved users elastic,apm_system,kibana,kibana_system,logstash_system,beats_system,remote_monitoring_user.
You will be prompted to enter passwords as the process progresses.
Please confirm that you would like to continue [y/N]

Answer y, then enter and confirm passwords for the built-in Elasticsearch accounts.

10. Verify Elasticsearch#

Ensure that the Elasticsearch REST API is available by running the following:

curl -XGET -k "https://elastic:PASSWORD@127.0.0.1:9200"

The output should be similar to the following:

{
  "name" : "ubuntu2004",
  "cluster_name" : "elastiflow",
  "cluster_uuid" : "S5Y3Z2USSq2sR2TyOkLe3A",
  "version" : {
    "number" : "7.14.1",
    "build_flavor" : "default",
    "build_type" : "deb",
    "build_hash" : "66b55ebfa59c92c15db3f69a335d500018b3331e",
    "build_date" : "2021-08-26T09:01:05.390870785Z",
    "build_snapshot" : false,
    "lucene_version" : "8.9.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}

Install Kibana#

1. Install Kibana using apt#

Run the following commands to install the Kibana package.

sudo apt update && sudo apt install -y kibana

2. Copy CA and Certificates#

Kibana will also require access to the CA, certificates, and keys. To use the same files that were created for Elasticsearch, copy them from /etc/elasticsearch to /etc/kibana.

sudo cp -r /etc/elasticsearch/certs /etc/kibana
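Note that the private key must be readable by the kibana user. If the permissions carried over by the copy are too restrictive, one way to adjust them (assuming the kibana group created by the package) is:

sudo chown -R root:kibana /etc/kibana/certs && \
sudo chmod -R g+rX /etc/kibana/certs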

3. Edit kibana.yml#

Edit the Kibana configuration file /etc/kibana/kibana.yml, replacing the contents of the file with the following configuration. Edit as necessary for your environment (especially elasticsearch.password).

telemetry.enabled: false
telemetry.optIn: false
newsfeed.enabled: false
server.host: '0.0.0.0'
server.port: 5601
server.maxPayload: 8388608
server.publicBaseUrl: 'https://192.168.56.101:5601'
server.ssl.enabled: true
server.ssl.certificateAuthorities: /etc/kibana/certs/ca/ca.crt
server.ssl.key: /etc/kibana/certs/ubuntu2004/ubuntu2004.key
server.ssl.certificate: /etc/kibana/certs/ubuntu2004/ubuntu2004.crt
elasticsearch.hosts: ['https://192.168.56.101:9200']
elasticsearch.username: 'kibana_system'
elasticsearch.password: 'PASSWORD'
elasticsearch.ssl.certificateAuthorities: /etc/kibana/certs/ca/ca.crt
elasticsearch.ssl.key: /etc/kibana/certs/ubuntu2004/ubuntu2004.key
elasticsearch.ssl.certificate: /etc/kibana/certs/ubuntu2004/ubuntu2004.crt
elasticsearch.ssl.verificationMode: 'certificate'
elasticsearch.requestTimeout: 132000
elasticsearch.shardTimeout: 120000
kibana.autocompleteTimeout: 2000
kibana.autocompleteTerminateAfter: 500000
monitoring.enabled: true
monitoring.kibana.collection.enabled: true
monitoring.kibana.collection.interval: 30000
monitoring.ui.enabled: true
monitoring.ui.min_interval_seconds: 20
xpack.maps.showMapVisualizationTypes: true
xpack.security.enabled: true
xpack.security.audit.enabled: false
xpack.encryptedSavedObjects.encryptionKey: 'ElastiFlow_0123456789_0123456789_0123456789'
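The xpack.encryptedSavedObjects.encryptionKey value must be at least 32 characters long. Rather than keeping the example value above, you can generate a random key and paste it into the file:

openssl rand -hex 32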

4. Enable and Start Kibana#

Execute the following commands:

sudo systemctl daemon-reload && \
sudo systemctl enable kibana && \
sudo systemctl start kibana

Confirm Kibana started successfully by executing:

sudo systemctl status kibana

You should now be able to access Kibana at https://IP_OF_KIBANA_HOST:5601. Since this HTTPS connection is using a self-signed certificate, you may see an error similar to the following.

[Screenshots: self-signed certificate warnings as displayed by Chrome, Firefox, and Safari]

You need to either create an exception in your browser, or import and trust the CA certificate on the system running the browser. This can usually be achieved by downloading the ca.crt file from the server. Double-clicking the file will usually prompt you to import the certificate. On macOS, after the certificate is configured to be trusted, it should appear as follows in the Keychain Access application.

[Screenshot: trusted CA certificate in macOS Keychain Access]

You should now be able to connect to Kibana after allowing an exception.

Install the ElastiFlow Unified Flow Collector#

The ElastiFlow Unified Flow Collector can be installed natively on Ubuntu and Debian Linux. The instructions are available here. In this section we will cover the primary configuration options for the Elasticsearch output.

The Unified Flow Collector options are configured using environment variables. To configure the environment variables, edit the file /etc/systemd/system/flowcoll.service.d/flowcoll.conf. For details on all of the configuration options, please refer to the Configuration Reference.

1. Request a Basic or Trial License#

Without a license key, the ElastiFlow Unified Flow Collector runs with a Community tier license. The Basic tier is also available at no cost and supports additional standard information elements. A license can be requested on the ElastiFlow website. Alternatively, a 30-day Premium trial may be requested, which increases the scalability of the collector and enables all supported vendor and standard information elements.

note

After requesting a license it can take up to 30 minutes for the email to arrive.

License keys are generated per account. EF_FLOW_ACCOUNT_ID must contain the Account ID for the License Key specified in EF_FLOW_LICENSE_KEY. The number of licensed cores will be 1 for a Basic license, and up to 64 for a 30-day Trial.

Environment="EF_FLOW_ACCOUNT_ID=FROM_THE_EMAIL"
Environment="EF_FLOW_LICENSE_KEY=FROM_THE_EMAIL"
Environment="EF_FLOW_LICENSED_CORES=1"

2. Copy CA Certificate#

The Unified Flow Collector will require access to the CA certificate to verify the Elasticsearch node. Copy the CA certificate from /etc/elasticsearch/certs/ca/ca.crt to /etc/elastiflow/ca/ca.crt.

sudo mkdir /etc/elastiflow/ca && \
sudo cp /etc/elasticsearch/certs/ca/ca.crt /etc/elastiflow/ca

3. Enable the Elasticsearch Output#

Set EF_FLOW_OUTPUT_ELASTICSEARCH_ENABLE to true to enable the Elasticsearch output.

Environment="EF_FLOW_OUTPUT_ELASTICSEARCH_ENABLE=true"

4. Specify a Schema#

The Unified Flow Collector outputs data using ElastiFlow's CODEX schema. Optionally you can choose to output data in Elastic Common Schema (ECS). To do so, set EF_FLOW_OUTPUT_ELASTICSEARCH_ECS_ENABLE to true.

Environment="EF_FLOW_OUTPUT_ELASTICSEARCH_ECS_ENABLE=true"

5. Source of @timestamp#

There are multiple possible sources for the value of the @timestamp field, which is the primary timestamp field used by Kibana. The supported options are:

| Value | Field Used | Description |
| --- | --- | --- |
| start | flow.start.timestamp | The flow start time indicated in the flow. |
| end | flow.end.timestamp | The flow end time (or last reported time). |
| export | flow.export.timestamp | The time from the flow record header. |
| collect | flow.collect.timestamp | The time that the collector processed the flow record. |

Usually end is the best setting. However, in the case of poorly behaving or misconfigured devices, collect may be the better option. The actual timestamp used may differ from the configured source depending on the content of the received records: if end is not available, the collector falls back to export; if export is not available, it falls back to collect.

Environment="EF_FLOW_OUTPUT_ELASTICSEARCH_TIMESTAMP_SOURCE=end"

6. Index Shards and Replicas#

For this small, single-node install, set the number of shards to 1 and replicas to 0.

Environment="EF_FLOW_OUTPUT_ELASTICSEARCH_INDEX_TEMPLATE_SHARDS=1"
Environment="EF_FLOW_OUTPUT_ELASTICSEARCH_INDEX_TEMPLATE_REPLICAS=0"
note

The optimum value for these settings will depend on a number of factors. The number of shards should be at least 1 for each Elasticsearch data node in a cluster. Larger nodes (16+ CPU cores) and higher ingest rates can benefit from 2 shards per node. The largest nodes (64 CPU cores, 8 memory channels and multiple SSD drives) can even benefit from 3 or 4 shards per node. In a multi-node cluster 1 or more replicas may be specified for redundancy.

7. ILM Lifecycle#

Set the Index Lifecycle Management (ILM) lifecycle to elastiflow. The lifecycle policy will also need to be added later in Kibana Management.

Environment="EF_FLOW_OUTPUT_ELASTICSEARCH_INDEX_TEMPLATE_ILM_LIFECYCLE=elastiflow"

8. Elasticsearch Server and Credentials#

Define the Elasticsearch node to which the collector should connect, as well as the credentials for which the password was set during the Elasticsearch installation.

Environment="EF_FLOW_OUTPUT_ELASTICSEARCH_ADDRESSES=192.168.56.101:9200"
Environment="EF_FLOW_OUTPUT_ELASTICSEARCH_USERNAME=elastic"
Environment="EF_FLOW_OUTPUT_ELASTICSEARCH_PASSWORD=changeme"

9. Encrypted Communications with TLS#

Enable TLS and specify the path to the CA certificate.

Environment="EF_FLOW_OUTPUT_ELASTICSEARCH_TLS_ENABLE=true"
Environment="EF_FLOW_OUTPUT_ELASTICSEARCH_TLS_CA_CERT_FILEPATH=/etc/elastiflow/ca/ca.crt"

10. Enable and Start the Unified Flow Collector#

Execute the following commands:

sudo systemctl daemon-reload && \
sudo systemctl enable flowcoll && \
sudo systemctl start flowcoll

Confirm the service started successfully by executing:

sudo systemctl status flowcoll

The collector is now ready to receive flow records from the network infrastructure.
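Once your network devices are exporting flows to the collector, you can confirm that records are being written by listing the ElastiFlow indices in Elasticsearch (this assumes the default index naming, which begins with elastiflow-):

curl -XGET -k "https://elastic:PASSWORD@127.0.0.1:9200/_cat/indices/elastiflow-*?v"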

Import Kibana Objects#

The last step is to import the Kibana saved objects and apply the recommended advanced settings. Follow the instuctions in the Kibana section of the documentation for detailed instructions.