Ubuntu/Debian
Elastic Stack Installation
Elasticsearch is the distributed search and analytics engine at the heart of the Elastic Stack, where the indexing, search, and analysis of your data happens. NetObserv Flow can be configured to store the collected, processed, and enriched flow records in Elasticsearch. Kibana enables you to interactively explore, visualize, and share insights into your data, as well as manage and monitor the stack.
This document describes in detail the installation of NetObserv Flow and the Elastic Stack (Elasticsearch and Kibana) on a single server running Ubuntu Server 22.04 LTS.
Sizing
Elasticsearch can be deployed as a single-node server or a multi-node cluster. The latter provides horizontal scaling to handle very high ingest rates and longer retention periods. For more information on properly sizing an Elasticsearch cluster, see Sizing.
Environment
Resource | Information |
---|---|
Hostname | myhost |
IP Address | 192.168.56.101 |
CPU Cores | 4 |
Memory | 32 GB |
Storage | 1 TB |
OS | Ubuntu Server 22.04 LTS |
Elasticsearch | 8.14.0 |
Kibana | 8.14.0 |
ES Features | TLS, RBAC |
ElastiFlow UFC | 7.4.0 |
The hostname and IP address above are for examples only. You MUST replace these values with those of your own server when executing any commands or editing any files.
Tune the Linux Kernel
1. Add Parameters Required by Elasticsearch
Elasticsearch uses a `mmapfs` directory by default to store its indices. The default Linux limit on mmap counts is usually too low, which can result in out-of-memory exceptions. This limit should be raised to `262144`.
Run the following command to add the file `/etc/sysctl.d/70-elasticsearch.conf` with the attribute `vm.max_map_count=262144`:
echo "vm.max_map_count=262144" | sudo tee /etc/sysctl.d/70-elasticsearch.conf > /dev/null
2. Tune Network Parameters
The default Linux network parameters are not optimal for high-throughput applications, in particular those receiving a high volume of ingress UDP packets. This can result in dropped packets and lost data. Linux network performance for ElastiFlow can be optimized by changing the parameters below.
Run the following command to add the file `/etc/sysctl.d/60-net.conf` with the recommended changes:
echo -e "net.core.netdev_max_backlog=4096\nnet.core.rmem_default=262144\nnet.core.rmem_max=67108864\nnet.ipv4.udp_rmem_min=131072\nnet.ipv4.udp_mem=2097152 4194304 8388608" | sudo tee /etc/sysctl.d/60-net.conf > /dev/null
3. Apply Changes
For the above parameters to take effect, the system can be restarted. Alternatively, run the following commands to apply the changes without a reboot:
sudo sysctl -w vm.max_map_count=262144 && \
sudo sysctl -w net.core.netdev_max_backlog=4096 && \
sudo sysctl -w net.core.rmem_default=262144 && \
sudo sysctl -w net.core.rmem_max=67108864 && \
sudo sysctl -w net.ipv4.udp_rmem_min=131072 && \
sudo sysctl -w net.ipv4.udp_mem='2097152 4194304 8388608'
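After applying the changes, you can confirm the values took effect by reading them back from `/proc/sys`. The sketch below assumes a Linux host; `sysctl_read` is an illustrative helper, not a standard tool:

```python
from pathlib import Path

def sysctl_read(name: str) -> str:
    # Kernel parameters appear under /proc/sys with dots mapped to slashes,
    # e.g. vm.max_map_count -> /proc/sys/vm/max_map_count
    return Path("/proc/sys", name.replace(".", "/")).read_text().strip()

for param in ("vm.max_map_count", "net.core.rmem_max"):
    print(param, "=", sysctl_read(param))
```

Running `sysctl vm.max_map_count` achieves the same check from the shell.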
Disable the Firewall
The easiest way to get started is to disable the Linux firewall. Alternatively, the firewall can be configured to allow access to the required ports. Details of configuring the Linux firewall are beyond the scope of this document. However, if the firewall remains enabled, you will need to allow access to the following ports:
Application | Port |
---|---|
Elasticsearch | TCP/9200 |
Kibana | TCP/5601 |
NetObserv Flow | UDP/9995 or other port(s) configured by EF_FLOW_SERVER_UDP_PORT |
sudo systemctl stop ufw.service && sudo systemctl disable ufw.service
Install Prerequisite Packages
Run the following commands to install required packages.
sudo apt install -y apt-transport-https
sudo apt install -y unzip
Install Elasticsearch
1. Add Elastic PGP Key
Elastic signs all of their packages with the Elasticsearch Signing Key (PGP key `D88E42B4`, available from https://pgp.mit.edu) with fingerprint: `4609 5ACC 8548 582C 1A26 99A9 D27D 666C D88E 42B4`
Download and install the public signing key.
curl -sS https://artifacts.elastic.co/GPG-KEY-elasticsearch | gpg --dearmor | sudo tee /usr/share/keyrings/elasticsearch-archive-keyring.gpg >/dev/null
2. Add the Elastic Repository
Add the Elastic repository definition to `/etc/apt/sources.list.d/elasticsearch.list` by running the following command.
echo "deb [signed-by=/usr/share/keyrings/elasticsearch-archive-keyring.gpg] https://artifacts.elastic.co/packages/8.x/apt stable main" | sudo tee /etc/apt/sources.list.d/elasticsearch.list >/dev/null
3. Install Elasticsearch using apt
Run the following commands to install the Elasticsearch package.
sudo apt update && sudo apt install -y elasticsearch
If two entries exist for the same Elasticsearch repository, you will see an error during `apt update` similar to `Duplicate sources.list entry https://artifacts.elastic.co/packages/8.x/apt/ ...` Examine `/etc/apt/sources.list.d/elasticsearch.list` for the duplicate entry, or locate the duplicate entry amongst the files in `/etc/apt/sources.list.d/` and the `/etc/apt/sources.list` file.
4. Configure JVM Heap Size
If a JVM is started with unequal initial and max heap sizes, it may pause as the JVM heap is resized during system usage. For this reason it’s best to start the JVM with the initial and maximum heap sizes set to equal values.
Add the file `heap.options` to `/etc/elasticsearch/jvm.options.d` and set `-Xms` and `-Xmx` to about one third of the system memory, but do not exceed `31g`. For this example we will use 12 GB of the available 32 GB of memory for the JVM heap.
echo -e "-Xms12g\n-Xmx12g" | sudo tee /etc/elasticsearch/jvm.options.d/heap.options > /dev/null
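The sizing rule above (one third of memory, capped at `31g`) can be expressed as a small sketch; `recommended_heap_gb` is an illustrative helper, not an Elastic tool:

```python
def recommended_heap_gb(total_mem_gb: int) -> int:
    # About one third of system memory, capped at 31 GB so the JVM
    # can keep using compressed object pointers.
    return min(round(total_mem_gb / 3), 31)

print(recommended_heap_gb(32))   # 11 (this guide rounds up to 12)
print(recommended_heap_gb(128))  # 31 (cap applies)
```

Rounding up to 12 GB, as this guide does for a 32 GB server, is still within the guideline.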
5. Increase System Limits
Increased system limits should be specified in a `systemd` attributes file for the `elasticsearch` service.
sudo mkdir /etc/systemd/system/elasticsearch.service.d && \
echo -e "[Service]\nLimitNOFILE=131072\nLimitNPROC=8192\nLimitMEMLOCK=infinity\nLimitFSIZE=infinity\nLimitAS=infinity" | \
sudo tee /etc/systemd/system/elasticsearch.service.d/elasticsearch.conf > /dev/null
6. Generate CA and Certificates
There are numerous ways to generate certificates that can be used to secure communications using TLS. To simplify the process, Elastic provides the `elasticsearch-certutil` tool. For more details about this tool, refer to Elastic's documentation.
It is first necessary to generate a certificate authority (CA) by running the following command.
sudo /usr/share/elasticsearch/bin/elasticsearch-certutil ca --pem
When you see `Please enter the desired output file [elastic-stack-ca.zip]:` press enter to accept the default.
The resulting file will be placed in `/usr/share/elasticsearch`. To unzip and move the CA key and cert to `/etc/elasticsearch/certs`, run the following commands.
sudo mkdir /etc/elasticsearch/certs && \
sudo unzip /usr/share/elasticsearch/elastic-stack-ca.zip -d /etc/elasticsearch/certs
To generate certificates for the Elasticsearch node, create a file named `/usr/share/elasticsearch/instances.yml` similar to the following. Replace the values with those appropriate for your environment.
instances:
  - name: "myhost"
    ip:
      - "192.0.2.1"
    dns:
      - "myhost.mydomain.com"
For example, in the system used for this guide, the name of the server is `myhost`, the IP address is `192.168.56.101`, and there is no name configured in DNS. The instances file would contain:
instances:
  - name: "myhost"
    ip:
      - "192.168.56.101"
Use `elasticsearch-certutil` to generate the certificates and keys from the CA and instances file.
sudo /usr/share/elasticsearch/bin/elasticsearch-certutil cert --silent --in instances.yml --out certs.zip --pem --ca-cert /etc/elasticsearch/certs/ca/ca.crt --ca-key /etc/elasticsearch/certs/ca/ca.key
The resulting file will be placed in `/usr/share/elasticsearch`. To unzip and move the node keys and certs to `/etc/elasticsearch/certs`, run the following command.
sudo unzip /usr/share/elasticsearch/certs.zip -d /etc/elasticsearch/certs
7. Edit elasticsearch.yml
Edit the Elasticsearch configuration file, `/etc/elasticsearch/elasticsearch.yml`, replacing the contents of the file with the following configuration. Edit as necessary for your environment.
cluster.name: elastiflow
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
bootstrap.memory_lock: true
network.host: 0.0.0.0
http.port: 9200
discovery.type: 'single-node'
indices.query.bool.max_clause_count: 8192
search.max_buckets: 250000
action.destructive_requires_name: 'true'
xpack.security.http.ssl.enabled: 'true'
xpack.security.http.ssl.verification_mode: 'none'
xpack.security.http.ssl.certificate_authorities: /etc/elasticsearch/certs/ca/ca.crt
xpack.security.http.ssl.key: /etc/elasticsearch/certs/myhost/myhost.key
xpack.security.http.ssl.certificate: /etc/elasticsearch/certs/myhost/myhost.crt
xpack.monitoring.enabled: 'true'
xpack.monitoring.collection.enabled: 'true'
xpack.monitoring.collection.interval: 30s
xpack.security.enabled: 'true'
xpack.security.audit.enabled: 'false'
If you want Elasticsearch data to be stored on a different mount point, you must first create the directory and assign ownership to the `elasticsearch` user. For example, to store data on `/mnt/data0`, run `sudo mkdir /mnt/data0/elasticsearch && sudo chown -R elasticsearch:elasticsearch /mnt/data0/elasticsearch`. Then edit the `path.data` option in `elasticsearch.yml` to specify this path.
8. Enable and Start Elasticsearch
Execute the following commands to start Elasticsearch and enable it to run automatically when the server boots:
sudo systemctl daemon-reload && \
sudo systemctl enable elasticsearch && \
sudo systemctl start elasticsearch
Confirm Elasticsearch started successfully by executing:
sudo systemctl status elasticsearch
9. Set Passwords for Elasticsearch Built-in Accounts
Execute the following command to set up passwords for the various built-in accounts:
sudo /usr/share/elasticsearch/bin/elasticsearch-setup-passwords interactive
The following will be displayed:
Initiating the setup of passwords for reserved users elastic,apm_system,kibana,kibana_system,logstash_system,beats_system,remote_monitoring_user.
You will be prompted to enter passwords as the process progresses.
Please confirm that you would like to continue [y/N]
Answer `y`, then enter and confirm passwords for the built-in Elasticsearch accounts.
10. Verify Elasticsearch
Ensure that the Elasticsearch REST API is available by running the following:
curl -XGET -k "https://elastic:PASSWORD@127.0.0.1:9200"
The output should be similar to the following:
{
"name" : "myhost",
"cluster_name" : "elastiflow",
"cluster_uuid" : "S5Y3Z2USSq2sR2TyOkLe3A",
"version" : {
"number" : "8.14.0",
"build_flavor" : "default",
"build_type" : "deb",
"build_hash" : "66b55ebfa59c92c15db3f69a335d500018b3331e",
"build_date" : "2021-08-26T09:01:05.390870785Z",
"build_snapshot" : false,
"lucene_version" : "8.9.0",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
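The same check can be scripted by parsing the JSON body returned by the root endpoint. The sketch below uses an abbreviated copy of the sample response above as its input; the field values will differ per install:

```python
import json

# Abbreviated sample of the root-endpoint response shown above
body = '''
{
  "name": "myhost",
  "cluster_name": "elastiflow",
  "version": { "number": "8.14.0" },
  "tagline": "You Know, for Search"
}
'''

info = json.loads(body)
print(f"node={info['name']} cluster={info['cluster_name']} "
      f"es={info['version']['number']}")
```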
Install Kibana
1. Install Kibana using apt
Run the following commands to install the Kibana package.
sudo apt update && sudo apt install -y kibana
2. Copy CA and Certificates
Kibana will also require access to the CA, certificates, and keys. To use the same files that were created for Elasticsearch, copy them from `/etc/elasticsearch` to `/etc/kibana`.
sudo cp -r /etc/elasticsearch/certs /etc/kibana
3. Edit kibana.yml
Edit the Kibana configuration file `/etc/kibana/kibana.yml`, replacing the contents of the file with the following configuration. Edit as necessary for your environment (especially `elasticsearch.password`).
telemetry.enabled: false
telemetry.optIn: false
newsfeed.enabled: false
server.host: '0.0.0.0'
server.port: 5601
server.maxPayload: 8388608
server.publicBaseUrl: 'https://192.168.56.101:5601'
server.ssl.enabled: true
server.ssl.certificateAuthorities: /etc/kibana/certs/ca/ca.crt
server.ssl.key: /etc/kibana/certs/myhost/myhost.key
server.ssl.certificate: /etc/kibana/certs/myhost/myhost.crt
elasticsearch.hosts: ['https://192.168.56.101:9200']
elasticsearch.username: 'kibana_system'
elasticsearch.password: 'PASSWORD'
elasticsearch.ssl.certificateAuthorities: /etc/kibana/certs/ca/ca.crt
elasticsearch.ssl.key: /etc/kibana/certs/myhost/myhost.key
elasticsearch.ssl.certificate: /etc/kibana/certs/myhost/myhost.crt
elasticsearch.ssl.verificationMode: 'certificate'
elasticsearch.requestTimeout: 132000
elasticsearch.shardTimeout: 120000
kibana.autocompleteTimeout: 2000
kibana.autocompleteTerminateAfter: 500000
monitoring.enabled: true
monitoring.kibana.collection.enabled: true
monitoring.kibana.collection.interval: 30000
monitoring.ui.enabled: true
monitoring.ui.min_interval_seconds: 20
xpack.maps.showMapVisualizationTypes: true
xpack.security.enabled: true
xpack.security.audit.enabled: false
xpack.encryptedSavedObjects.encryptionKey: 'ElastiFlow_0123456789_0123456789_0123456789'
4. Enable and Start Kibana
Execute the following commands:
sudo systemctl daemon-reload && \
sudo systemctl enable kibana && \
sudo systemctl start kibana
Confirm Kibana started successfully by executing:
sudo systemctl status kibana
You should now be able to access Kibana at `https://IP_OF_KIBANA_HOST:5601`. Since this HTTPS connection is using a self-signed certificate, your browser (Chrome, Firefox, or Safari) will display a security warning.
You need to either create an exception in your browser, or import and trust the CA certificate on the system running the browser. This can usually be achieved by downloading the `ca.crt` file from the server and double-clicking it, which will usually prompt you to import the certificate. On macOS, the certificate will appear in the Keychain Access application once it is imported and configured to be trusted.
You should now be able to connect to Kibana after allowing an exception. To log in, use the user `elastic` and the password you defined earlier for this user.
Install NetObserv Flow
NetObserv Flow can be installed natively on Ubuntu and Debian Linux. The instructions are available here. In this section we will cover the primary configuration options for the Elasticsearch output.
The NetObserv Flow options are configured using YAML. To configure the collector, edit the file `/etc/elastiflow/flowcoll.yml`. For details on the configuration options, please refer to the Configuration Reference.
1. Request a Basic or Trial License
Without a license key, NetObserv Flow runs with a Community tier license. The Basic tier is also available at no cost and supports additional standard information elements. A license can be requested on the ElastiFlow website. Alternatively, a 30-day Premium trial may be requested, which increases the scalability of the collector and enables all supported vendor and standard information elements.
After requesting a license it can take up to 30 minutes for the email to arrive.
License keys are generated per account. `EF_ACCOUNT_ID` must contain the Account ID for the License Key specified in `EF_FLOW_LICENSE_KEY`. The number of licensed units will be `1` for a Basic license, and up to `64` for a 30-day Trial. The ElastiFlow EULA must also be accepted to use the software.
Environment="EF_LICENSE_ACCEPTED=true"
Environment="EF_ACCOUNT_ID=FROM_THE_EMAIL"
Environment="EF_FLOW_LICENSE_KEY=FROM_THE_EMAIL"
Environment="EF_FLOW_LICENSED_UNITS=1"
2. Copy CA Certificate
NetObserv Flow will require access to the CA certificate to verify the Elasticsearch node. Copy the CA certificate from `/etc/elasticsearch/certs/ca/ca.crt` to `/etc/elastiflow/ca/ca.crt`.
sudo mkdir /etc/elastiflow/ca && \
sudo cp /etc/elasticsearch/certs/ca/ca.crt /etc/elastiflow/ca
3. Enable the Elasticsearch Output
Set `EF_OUTPUT_ELASTICSEARCH_ENABLE` to `true` to enable the Elasticsearch output.
Environment="EF_OUTPUT_ELASTICSEARCH_ENABLE=true"
4. Specify a Schema
NetObserv Flow outputs data using ElastiFlow's CODEX schema. Optionally you can choose to output data in Elastic Common Schema (ECS). To do so, set `EF_OUTPUT_ELASTICSEARCH_ECS_ENABLE` to `true`.
Environment="EF_OUTPUT_ELASTICSEARCH_ECS_ENABLE=true"
5. Source of @timestamp
There are multiple possible sources for the value of the `@timestamp` field, which is the primary timestamp field used by Kibana. The supported options are:
Value | Field Used | Description |
---|---|---|
start | flow.start.timestamp | The flow start time indicated in the flow. |
end | flow.end.timestamp | The flow end time (or last reported time). |
export | flow.export.timestamp | The time from the flow record header. |
collect | flow.collect.timestamp | The time that the collector processed the flow record. |
Usually `end` is the best setting. However, in the case of poorly behaving or misconfigured devices, `collect` may be the better option. The actual timestamp used may differ from the configured source depending on the content of the received records: if `end` is not available the collector will fall back to `export`, and if `export` is not available it will fall back to `collect`.
Environment="EF_OUTPUT_ELASTICSEARCH_TIMESTAMP_SOURCE=collect"
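The fallback behavior described above can be sketched as follows; `pick_timestamp` is an illustrative helper, not actual collector code:

```python
def pick_timestamp(record: dict) -> str:
    # Documented fallback when the configured source is 'end':
    # flow.end.timestamp -> flow.export.timestamp -> flow.collect.timestamp
    for field in ("flow.end.timestamp",
                  "flow.export.timestamp",
                  "flow.collect.timestamp"):
        if field in record:
            return record[field]
    raise KeyError("no usable timestamp field in record")

# A record missing flow.end.timestamp falls back to the export time
record = {"flow.export.timestamp": "2024-06-01T12:00:00Z",
          "flow.collect.timestamp": "2024-06-01T12:00:01Z"}
print(pick_timestamp(record))  # 2024-06-01T12:00:00Z
```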
6. Index Shards and Replicas
For this small single-node install, set the number of shards to `1` and replicas to `0`.
Environment="EF_OUTPUT_ELASTICSEARCH_INDEX_TEMPLATE_SHARDS=1"
Environment="EF_OUTPUT_ELASTICSEARCH_INDEX_TEMPLATE_REPLICAS=0"
The optimum value for these settings will depend on a number of factors. The number of shards should be at least 1 for each Elasticsearch data node in a cluster. Larger nodes (16+ CPU cores) and higher ingest rates can benefit from 2 shards per node. The largest nodes (64 CPU cores, 8 memory channels and multiple SSD drives) can even benefit from 3 or 4 shards per node. In a multi-node cluster 1 or more replicas may be specified for redundancy.
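The per-node guidance above can be summarized as a rough heuristic; `shards_per_node` is an illustrative sketch, not an official sizing formula:

```python
def shards_per_node(cpu_cores: int) -> int:
    # Rough reading of the guidance above: 1 shard per data node,
    # 2 for larger nodes (16+ cores), up to 4 for the largest (64+ cores).
    if cpu_cores >= 64:
        return 4
    if cpu_cores >= 16:
        return 2
    return 1

print(shards_per_node(4))   # 1 -- small single node, as in this guide
```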
7. Index Lifecycle Management (ILM)
Index Lifecycle Management (ILM) can be used to roll over the indices which store the ElastiFlow data, preventing issues that can occur when shards become too large. Enable rollover by setting `EF_OUTPUT_ELASTICSEARCH_INDEX_PERIOD` to `rollover`. When enabled, the collector will automatically bootstrap the initial index and write alias.
Environment="EF_OUTPUT_ELASTICSEARCH_INDEX_PERIOD=rollover"
The default Index Lifecycle Management (ILM) lifecycle is `elastiflow`. If this lifecycle doesn't exist, a basic lifecycle will be added which will remove data after 7 days. This lifecycle can be edited later via Kibana or the Elasticsearch ILM API.
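For illustration, a rollover-plus-retention policy of this kind could be defined via the Elasticsearch ILM API as shown below (Kibana Dev Tools console syntax). This is a hypothetical sketch with example thresholds, not necessarily the exact lifecycle the collector creates:

```json
PUT _ilm/policy/elastiflow
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": { "max_primary_shard_size": "50gb", "max_age": "1d" }
        }
      },
      "delete": {
        "min_age": "7d",
        "actions": { "delete": {} }
      }
    }
  }
}
```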
8. Elasticsearch Server and Credentials
Define the Elasticsearch node to which the collector should connect and the credentials for which the password was defined during the Elasticsearch installation.
Environment="EF_OUTPUT_ELASTICSEARCH_ADDRESSES=192.168.56.101:9200"
Environment="EF_OUTPUT_ELASTICSEARCH_USERNAME=elastic"
Environment="EF_OUTPUT_ELASTICSEARCH_PASSWORD=changeme"
9. Encrypted Communications with TLS
Enable TLS and specify the path to the CA certificate.
Environment="EF_OUTPUT_ELASTICSEARCH_TLS_ENABLE=true"
Environment="EF_OUTPUT_ELASTICSEARCH_TLS_CA_CERT_FILEPATH=/etc/elastiflow/ca/ca.crt"
10. Enable and Start NetObserv Flow
Execute the following commands:
sudo systemctl daemon-reload && \
sudo systemctl enable flowcoll && \
sudo systemctl start flowcoll
Confirm the service started successfully by executing:
sudo systemctl status flowcoll
The collector is now ready to receive flow records from the network infrastructure.
Import Kibana Objects
The last step is to import the Kibana saved objects and apply the recommended advanced settings. Follow the instructions in the Kibana section of the documentation for detailed instructions.