Using the ELK stack for vulnerability management

Have you ever tried to keep all the data about discovered vulnerabilities in one place? When you have about 10 different tools, plus manual records in a task tracker, it becomes a bit of a nightmare.

My main needs were:

  1. Have all types of data in the same place – vulnerable computers, pentest findings, automated scan discoveries, software composition analysis results, etc.
  2. Have visibility over time in each category for each product – I need graphs and good visualizations to make my points when talking to people.
  3. Have a bird's-eye view of the state of security in products – to be able to act on a trend before it materializes into a flaw or a breach.
  4. Easily add new metrics.
  5. The solution needs to be free or inexpensive.

After weeks of thinking and looking for a tool that would save me, I realized the solution had been right next to me all along. I simply spun up an instance of ELK (Elasticsearch, Logstash, Kibana) and set up a few simple rules with Logstash filters. They work as data hoovers, sucking in everything that is sent to them as JSON payloads. The transformed data ends up in Elasticsearch, and Kibana turns the vulnerabilities into trends and shows them on real-time dashboards. It also has a powerful search engine where I can find specific data in a matter of seconds.

I wanted to create a configuration that can process a generic JSON payload assembled by a script that parses the report of any vulnerability detection tool.
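Such a payload might look like the sketch below. Everything except the date field is hypothetical – the pipeline shown next indexes whatever fields the parsing script emits, and only date is interpreted specially:

# Hypothetical example of a finding assembled by a report-parsing script.
# Only the "date" field (ISO8601) is required by the pipeline below;
# the other field names are made up for illustration.
cat > finding.json <<'EOF'
{
  "date": "2017-05-02T14:30:00Z",
  "product": "webshop",
  "branch": "master",
  "source": "oss-analysis",
  "component": "commons-collections-3.2.1",
  "severity": "high"
}
EOF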

Here is an example of the Logstash pipeline configuration:

input {
  # Accept findings as HTTP POSTs (the http input listens on port 8080 by default)
  http { }
}

filter {
  # Parse the request body (the "message" field) as JSON
  json {
    source => "message"
  }
}

filter {
  # Set @timestamp from the "date" field of the payload (ISO8601)
  date {
    match => [ "date", "ISO8601" ]
  }
}

output {
  elasticsearch {
    hosts => "localhost"
    index => "vuln-%{+YYYY.MM.dd}"
  }
}

Here, we ingest HTTP requests posted to the Logstash endpoint as JSON, set @timestamp to the "date" field from the payload (it has to be submitted in ISO8601 format), and send the data to the Elasticsearch instance, into an index named "vuln-<today's date>".
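To illustrate, a finding like the one sketched above could be submitted as follows (assuming the http input is listening on its default port, 8080, on the same host). The body is sent as plain text so that it lands in the "message" field, which the json filter then parses:

# Post a finding to the Logstash HTTP input (default port 8080 assumed)
curl -X POST http://localhost:8080 \
  -H 'Content-Type: text/plain' \
  --data-binary @finding.json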

Let's say we have 3 branches that submit open-source component analysis results on every build. Here is how the resulting report may look (the Y-axis represents discovered vulnerable components, the X-axis represents scan dates, split into 3 sub-plots):

[Figure: vulnerable components per scan date, one sub-plot per branch]

Here, each of the 3 sub-plots shows the analysis dynamics for a separate branch.

Or, for separate branches:

[Figure: the same data charted separately per branch]

And finally, we can build some nice data tables:

[Figure: a data table of the findings]

This example covers only one case – OSS vulnerability reports – but all kinds of vulnerability data can be submitted to ELK and then aggregated into powerful views for efficient management.

Pandora FMS server with docker-compose

Docker-compose is amazing: it lets you deploy complex clusters of containers with a single command. I previously had a seamless experience running ELK (Elasticsearch, Logstash, Kibana) with docker-compose, so I decided to give Pandora a try with it as well.

Pandora FMS is a great tool for monitoring and securing infrastructure, since it provides insight into anomalies that may happen on your servers. And it is open source and free!

To start with, I found the 3 containers required to run the Pandora server on the Pandora FMS Docker Hub: a MySQL DB instance initialized with the Pandora DB, the Pandora Server, and the Pandora Console.

All the deployment magic happens within each container, so my task was only to create some infrastructure and orchestration around them with the help of docker-compose.

I put the result on GitHub – pandora-docker-compose.

Here is a quick overview of what it does (a short usage sketch follows the list):

  • Creates a dedicated network and assigns IPs to containers
  • Configures Postfix for sending emails to admins (in the default container it was not working)
  • Synchronizes time with Docker host
  • Maps Pandora DB files to local host folder so that you can back them up and restore
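
Bringing the stack up then boils down to cloning the repository and starting it. A minimal sketch (the clone URL is a placeholder – use the actual repository address):

# Clone the compose project (placeholder URL) and start all containers in the background
git clone https://github.com/<your-account>/pandora-docker-compose.git
cd pandora-docker-compose
docker-compose up -d

# Verify that the MySQL, Pandora Server, and Pandora Console containers are up
docker-compose ps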


Removing unused Docker containers

Those who work with Docker know how fast free space disappears, especially if you are experimenting with new features and don't pay much attention to cleaning up. As a result, a lot of leftover containers and untagged images can eat up tens of gigabytes of space.

Here is the one-liner I found to be very useful for removing old stuff:

docker ps -a | grep 'weeks ago' | awk '{print $1}' | xargs --no-run-if-empty docker rm

Admittedly, I'm not the one who came up with this one-liner, but I did spend a few hours trying different commands and googling before finding it.
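
The same pattern works for dangling (untagged) images, which pile up just as quickly. A sketch along the same lines – note that docker rmi will refuse to remove images still referenced by existing containers:

# Remove all dangling (untagged) images
docker images --filter dangling=true --quiet | xargs --no-run-if-empty docker rmi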

Here is a full Docker cheat sheet that I bumped into and found quite good in its own right:

https://github.com/wsargent/docker-cheat-sheet