Install Guide: Using NetObserv Flow and Elasticsearch
Purpose
This guide is for a specific, though common, scenario:
- installing NetObserv Flow
- using the container version of NetObserv Flow
- installing Elasticsearch/Kibana
- installing both on the same Ubuntu machine
- assuming around 500 FPS (flows per second)
It is assumed that you will adapt any steps from this guide to suit your specific needs.
We are not Elastic, so we are not responsible for Elasticsearch itself, should it cause any issues for you. Refer to Elastic's documentation for their own recommendations and guides. However, we want to provide this guide nonetheless to help kickstart your experience with NetObserv Flow.
For more complete and specific documentation on installing NetObserv Flow itself, see either our Linux install guide or our Docker install guide.
Prerequisites
- Internet-connected, clean, unused, dedicated Ubuntu 22 (or greater) Linux server with root access
- An Ubuntu VM provisioned with 16 GB of RAM, 8 CPU cores, and 500 GB of disk space. This will allow you to store roughly 1 month of flow data at 500 FPS (flows per second).
- Good copying and pasting skills
- Non-snap-based Docker. To check whether a Docker snap is installed, run:
snap list | grep -q '^docker\s'
If a Docker snap is installed, you'll need to remove it first. If it holds container images not backed up anywhere else, save those images first. Then uninstall the snap version of Docker with this command:
sudo snap remove --purge docker
You can then install Docker with the following convenience script:
curl -fsSL https://get.docker.com | sudo sh
Automated Script
You can run this convenience script:
sudo bash -c "$(wget -qLO - https://raw.githubusercontent.com/elastiflow/ElastiFlow-Tools/main/docker_install/install.sh)"
Or, you can follow the below steps manually.
1) Add the following recommended kernel tuning parameters to /etc/sysctl.conf
# Memory mapping limits for Elasticsearch
vm.max_map_count=262144
# Network settings for high performance
net.core.netdev_max_backlog=4096
net.core.rmem_default=262144
net.core.rmem_max=67108864
net.ipv4.udp_rmem_min=131072
net.ipv4.udp_mem=2097152 4194304 8388608
To activate the settings, run:
sudo sysctl -p
You can instead use the following one-liner to do everything:
echo -e "\n# Memory mapping limits for Elasticsearch\nvm.max_map_count=262144\n# Network settings for high performance\nnet.core.netdev_max_backlog=4096\nnet.core.rmem_default=262144\nnet.core.rmem_max=67108864\nnet.ipv4.udp_rmem_min=131072\nnet.ipv4.udp_mem=2097152 4194304 8388608" | sudo tee -a /etc/sysctl.conf > /dev/null && sudo sysctl -p
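Note that the one-liner above appends with tee -a, so running it twice writes the block twice. The following is a minimal idempotent sketch of the same step (not from the official docs): it skips the append when the first key is already present. For safety it writes to a temp file here; point CONF at /etc/sysctl.conf and run with sudo to apply it for real.

```shell
# Sketch: append the tuning block only if it is not already present,
# so re-running does not duplicate entries. Writes to a temp file here
# for safety; set CONF=/etc/sysctl.conf (with sudo) to apply for real.
CONF=$(mktemp)
add_tuning() {
  # Skip if the first key is already in the file
  grep -q '^vm.max_map_count=262144' "$1" 2>/dev/null && return 0
  cat >> "$1" <<'EOF'
# Memory mapping limits for Elasticsearch
vm.max_map_count=262144
# Network settings for high performance
net.core.netdev_max_backlog=4096
net.core.rmem_default=262144
net.core.rmem_max=67108864
net.ipv4.udp_rmem_min=131072
net.ipv4.udp_mem=2097152 4194304 8388608
EOF
}
add_tuning "$CONF"
add_tuning "$CONF"   # second call is a no-op
grep -c '^vm.max_map_count' "$CONF"   # prints 1, not 2
```

After applying to the real file, `sudo sysctl -p` loads the values as before.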
Explanation of parameters
- vm.max_map_count=262144: Sets the maximum number of memory map areas per process, which Elasticsearch needs in order to handle memory-mapped files. The default is often lower, so 262144 is needed for smooth operation.
- net.core.netdev_max_backlog=4096: Defines the maximum number of packets queued at the network interface. A higher value (4096) helps prevent packet drops on systems with high traffic.
- net.core.rmem_default=262144: Sets the default socket receive buffer size (262144 bytes). Useful for applications like Elasticsearch that handle large amounts of data.
- net.core.rmem_max=67108864: Defines the maximum socket receive buffer size (up to 64 MB) for high-throughput applications.
- net.ipv4.udp_rmem_min=131072: Sets the minimum UDP socket receive buffer (131072 bytes), ensuring adequate space for UDP traffic without dropping packets.
- net.ipv4.udp_mem=2097152 4194304 8388608: Defines system-wide UDP memory thresholds, in pages. Below the first value the kernel applies no pressure, above the second it begins moderating UDP memory use, and the third is the hard maximum. Helps manage high UDP traffic.
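Because the udp_mem thresholds are counted in memory pages rather than bytes, the actual sizes are easy to misread. A quick conversion, assuming the common 4096-byte page size (verify yours with getconf PAGE_SIZE):

```shell
# Convert the udp_mem thresholds from pages to GiB, assuming a
# 4096-byte page size (check with: getconf PAGE_SIZE).
PAGE=4096
for pages in 2097152 4194304 8388608; do
  echo "$pages pages = $(( pages * PAGE / 1024 / 1024 / 1024 )) GiB"
done
```

So the three thresholds correspond to 8, 16, and 32 GiB on a 4 KiB-page system.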
2) Configure Docker
Disable swapping and limit log file size
- Create or edit daemon.json
sudo nano /etc/docker/daemon.json
- Add the following text
{
  "default-ulimits": {
    "memlock": {
      "name": "memlock",
      "soft": -1,
      "hard": -1
    }
  },
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
- Restart docker daemon
sudo systemctl restart docker
You can instead use a one-liner to do everything:
echo '{"default-ulimits":{"memlock":{"name":"memlock","soft":-1,"hard":-1}},"log-driver":"json-file","log-opts":{"max-size":"10m","max-file":"3"}}' | sudo tee /etc/docker/daemon.json > /dev/null && sudo systemctl restart docker
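Because dockerd refuses to start when daemon.json is malformed, it is worth validating the file before restarting the daemon. Here is a small sketch using python3's built-in JSON tool; it demonstrates on a temp copy of the merged settings, and you can point DAEMON_JSON at /etc/docker/daemon.json to check the real file.

```shell
# Sketch: validate daemon.json before restarting Docker; a malformed
# file prevents dockerd from starting. Demonstrated on a temp copy of
# the merged settings; set DAEMON_JSON=/etc/docker/daemon.json for real.
DAEMON_JSON=$(mktemp)
cat > "$DAEMON_JSON" <<'EOF'
{
  "default-ulimits": {
    "memlock": { "name": "memlock", "soft": -1, "hard": -1 }
  },
  "log-driver": "json-file",
  "log-opts": { "max-size": "10m", "max-file": "3" }
}
EOF
if python3 -m json.tool "$DAEMON_JSON" > /dev/null 2>&1; then
  echo "daemon.json: valid JSON"
else
  echo "daemon.json: INVALID JSON, fix before restarting Docker" >&2
fi
```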
3) Download Docker Compose files
Create a new directory on your server and download elasticsearch_kibana_compose.yml, elastiflow_flow_compose.yml, and .env from here.
Or run the following in a terminal session:
sudo wget "https://raw.githubusercontent.com/elastiflow/ElastiFlow-Tools/main/docker_install/.env" && sudo wget "https://raw.githubusercontent.com/elastiflow/ElastiFlow-Tools/main/docker_install/elasticsearch_kibana_compose.yml" && sudo wget "https://raw.githubusercontent.com/elastiflow/ElastiFlow-Tools/main/docker_install/elastiflow_flow_compose.yml"
4) Download required ElastiFlow NetObserv Flow support files
These 'support files' can be obtained from our .deb installer. Even if you use the Docker Compose version of NetObserv Flow (as this guide does), you will want to keep some configuration files on your host machine, so that your configuration changes persist even after container recreation.
Download NetObserv Flow (deb file) and extract the contents of /etc/elastiflow/ in the archive to /etc/elastiflow/ on your ElastiFlow NetObserv Flow server.
You can instead use a one-liner to do everything:
sudo wget -O flow-collector_7.9.0_linux_amd64.deb https://elastiflow-releases.s3.us-east-2.amazonaws.com/flow-collector/flow-collector_7.9.0_linux_amd64.deb && sudo mkdir -p elastiflow_extracted && sudo dpkg-deb -x flow-collector_7.9.0_linux_amd64.deb elastiflow_extracted && sudo mkdir -p /etc/elastiflow && sudo cp -r elastiflow_extracted/etc/elastiflow/. /etc/elastiflow
5) Configure memory in .env file
Use the following as inspiration.
# Set heap size to about one-third of the system memory, but do not exceed 31g. Assuming 16GB of system memory, we'll set this to 5GB
JVM_HEAP_SIZE=5
# Set the memory limit to 2x the heap size (currently set to 10GB)
MEM_LIMIT_ELASTIC=10737418240
# Set the memory limit to 2GB for small to medium workloads (currently set to 2GB)
MEM_LIMIT_KIBANA=2147483648
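The sizing rule in the comments above (heap of roughly one-third of system RAM, capped at 31 GB, with the Elasticsearch container limit at twice the heap) can be computed rather than hand-derived. A sketch for a 16 GB host follows; TOTAL_GB is hard-coded here, and on a live host you might read it from /proc/meminfo instead.

```shell
# Sketch of the sizing rule above: heap = min(RAM/3, 31) GB,
# MEM_LIMIT_ELASTIC = 2x heap, expressed in bytes for the .env file.
TOTAL_GB=16
HEAP_GB=$(( TOTAL_GB / 3 ))
if [ "$HEAP_GB" -gt 31 ]; then HEAP_GB=31; fi
MEM_LIMIT_ELASTIC=$(( HEAP_GB * 2 * 1024 * 1024 * 1024 ))
echo "JVM_HEAP_SIZE=${HEAP_GB}"               # prints JVM_HEAP_SIZE=5
echo "MEM_LIMIT_ELASTIC=${MEM_LIMIT_ELASTIC}" # prints MEM_LIMIT_ELASTIC=10737418240
```

The two printed values match the 16 GB example in the .env snippet above.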
OPTIONAL: Geo and ASN Enrichment
If you would like to enable GeoIP and ASN enrichment, do the following:
- Sign up for GeoLite2 database access.
- Download the gzipped database files (GeoLite2 ASN and GeoLite2 City).
- Extract their contents to /etc/elastiflow/maxmind/.
- Enable Geo and ASN enrichment in elastiflow_flow_compose.yml:
EF_PROCESSOR_ENRICH_IPADDR_MAXMIND_GEOIP_ENABLE: 'true'
EF_PROCESSOR_ENRICH_IPADDR_MAXMIND_ASN_ENABLE: 'true'
To automate the download and extraction steps, you can run the following commands on your server. Be sure to replace YOUR_LICENSE_KEY with your GeoLite2 license key.
sudo mkdir -p /etc/elastiflow/maxmind
sudo wget -O ./GeoLite2-ASN.tar.gz "https://download.maxmind.com/app/geoip_download?edition_id=GeoLite2-ASN&license_key=YOUR_LICENSE_KEY&suffix=tar.gz"
sudo wget -O ./GeoLite2-City.tar.gz "https://download.maxmind.com/app/geoip_download?edition_id=GeoLite2-City&license_key=YOUR_LICENSE_KEY&suffix=tar.gz"
sudo tar -xvzf GeoLite2-ASN.tar.gz --strip-components 1 -C /etc/elastiflow/maxmind/
sudo tar -xvzf GeoLite2-City.tar.gz --strip-components 1 -C /etc/elastiflow/maxmind/
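After extracting, you can sanity-check that both databases landed where the collector expects them. This sketch assumes the standard GeoLite2-ASN.mmdb and GeoLite2-City.mmdb filenames inside the tarballs; it simulates the extracted files in a temp directory for demonstration, and you would set MAXMIND_DIR=/etc/elastiflow/maxmind to check the real path.

```shell
# Sketch: verify both GeoLite2 databases are present. Simulated here in
# a temp directory; set MAXMIND_DIR=/etc/elastiflow/maxmind on your
# server. The .mmdb filenames are the usual MaxMind names (assumption).
MAXMIND_DIR=$(mktemp -d)
touch "$MAXMIND_DIR/GeoLite2-ASN.mmdb" "$MAXMIND_DIR/GeoLite2-City.mmdb"  # simulate extraction
missing=0
for db in GeoLite2-ASN.mmdb GeoLite2-City.mmdb; do
  if [ -f "$MAXMIND_DIR/$db" ]; then
    echo "found: $db"
  else
    echo "missing: $db" >&2
    missing=1
  fi
done
```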
6) Deploy
From the directory where you downloaded the yml and .env files, run:
sudo docker compose -f elasticsearch_kibana_compose.yml -f elastiflow_flow_compose.yml up -d
7) Log in to Kibana
After a few minutes, browse to http://IP_of_your_host:5601.
Log in with:
- Username: elastic
- Password: elastic
8) Install ElastiFlow NetObserv Flow dashboards
- Download this dashboards file to your local machine.
- Log in to Kibana.
- Click the menu, then "Stack Management", and under the "Kibana" heading, click "Saved Objects".
- Browse for and upload the ndjson file you downloaded. Choose "import" and "overwrite".
9) Send flow data
Option 1: (Best)
Send flow data to IP_of_your_host:9995. Refer to your network hardware vendor's documentation for how to configure NetFlow v5/v7/v9, IPFIX, sFlow, or jFlow export.
Option 2: (OK)
Generate flow data from one of your hosts (either the same machine running NetObserv or a different one).
- Install pmacct on a machine somewhere:
sudo apt-get install pmacct
- Add the following pmacct configuration to a new file at /etc/pmacct/pmacctd.conf. Be sure to replace NETWORK_INTERFACE_TO_MONITOR with the name of an interface and ELASTIFLOW_NETOBSERV_FLOW_IP with the IP address of your ElastiFlow NetObserv Flow server.
daemonize: false
pcap_interface: NETWORK_INTERFACE_TO_MONITOR
aggregate: src_mac, dst_mac, src_host, dst_host, src_port, dst_port, proto, tos
plugins: nfprobe, print
nfprobe_receiver: ELASTIFLOW_NETOBSERV_FLOW_IP:9995
nfprobe_version: 9
nfprobe_timeouts: tcp=15:maxlife=1800
- Run pmacct:
sudo pmacctd -f /etc/pmacct/pmacctd.conf
Option 3: (For Testing Purposes)
You can generate sample flow data with this approach. Be sure to replace ELASTIFLOW_NETOBSERV_FLOW_IP
with the IP address of your ElastiFlow NetObserv Flow server.
sudo docker run -it --rm networkstatic/nflow-generator -t ELASTIFLOW_NETOBSERV_FLOW_IP -p 9995
10) Visualize your Flow Data
In Kibana (http://IP_OF_YOUR_SERVER:5601), do a global search (at the top) for the dashboard "ElastiFlow (flow): Overview" and open it. It may take a few minutes for flow records to populate, as the system waits for flow templates to arrive.
11) Update Credentials
Now that you have ElastiFlow NetObserv Flow up and running, we advise changing your Elasticsearch and Kibana passwords from elastic to something complex as soon as possible. Here's how:
- Open your .env file in a text editor such as nano.
- Specify a new ELASTIC_PASSWORD and KIBANA_PASSWORD, then save your changes.
- Redeploy Elasticsearch, Kibana, and ElastiFlow NetObserv Flow:
sudo docker compose -f elasticsearch_kibana_compose.yml -f elastiflow_flow_compose.yml down && sudo docker compose -f elasticsearch_kibana_compose.yml -f elastiflow_flow_compose.yml up -d
More enrichments and functionality are available with a free Basic license. You can also request a 30-day Premium license, which unlocks broader device support, much higher flow rates, and all of the NetIntel enrichments.
Optional Enrichments
ElastiFlow NetObserv Flow is able to enrich flow records with many different pieces of data, making those records even more valuable, from app id, to threat information, geolocation, DNS hostnames, and more. Please click here for information on how to enable various enrichments.
Notes
- If you need to make any ElastiFlow NetObserv Flow configuration changes (such as turning options on and off, adding your license information, etc.), edit elastiflow_flow_compose.yml and then run the following command:
sudo docker compose -f elastiflow_flow_compose.yml down && sudo docker compose -f elastiflow_flow_compose.yml up -d
- After making configuration changes, or for troubleshooting, check the logs by running:
sudo docker logs flow-collector -f
- If your server has a different amount of RAM than 16 GB, see the .env file for guidance on values for the following keys:
  - JVM_HEAP_SIZE
  - MEM_LIMIT_ELASTIC
  - MEM_LIMIT_KIBANA
- Questions? You can find more helpful content in our Community Forum and Community Slack Workspace.
- Code in this folder may contain code from Elastic's GitHub repo.