Using the ELK stack for vulnerability management

Have you ever tried to keep all of your discovered-vulnerability data in one place? When you have about 10 different tools, plus manual records in a task tracker, it becomes a bit of a nightmare.

My main needs were:

  1. Have all types of data in the same place – vulnerable computers, pentest findings, automated scan discoveries, software composition analysis results, etc.
  2. Have visibility over time in each category for each product – I need graphs and good visualizations to make my points when talking to people.
  3. Have a bird's-eye view of the state of security in each product – to be able to act on a trend before it materializes into a flaw or a breach.
  4. Easily add new metrics.
  5. The solution needs to be free or inexpensive.

After weeks of thinking and searching for a tool that would save me, I realized the solution had been right next to me all along. I simply spun up an ELK instance (Elasticsearch, Logstash, Kibana) and set up a few simple rules with Logstash filters. They work as data hoovers – sucking in everything that is sent to them as JSON payloads. The transformed data ends up in Elasticsearch, and Kibana turns the vulnerabilities into trends and shows them on real-time dashboards. It also offers a powerful search engine, so I can find specific data in a matter of seconds.

I wanted a configuration that could process a generic JSON payload, assembled by a script that parses the report of any vulnerability detection tool.
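For illustration, such a payload might look like the following; the field names besides "date" are my own assumptions, not a fixed schema:

{
  "date": "2020-05-04T10:15:00Z",
  "product": "webshop",
  "branch": "master",
  "tool": "oss-scanner",
  "component": "lodash-4.17.11",
  "severity": "high"
}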

Here is an example Logstash pipeline configuration:

# accept events posted over HTTP (port 8080 by default)
input {
  http { }
}

filter {
  # parse the request body as JSON
  json {
    source => "message"
  }

  # use the payload's "date" field (ISO8601) as @timestamp
  date {
    match => [ "date", "ISO8601" ]
  }
}

output {
  # write into a daily index, e.g. vuln-2020.05.04
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "vuln-%{+YYYY.MM.dd}"
  }
}

Here, we ingest HTTP requests posted to the Logstash endpoint as JSON, set @timestamp to the "date" field from the payload (which has to be submitted in ISO8601 format), and send the data to the Elasticsearch instance, into an index named "vuln-" plus the event's date (the %{+YYYY.MM.dd} pattern is formatted from @timestamp, e.g. vuln-2020.05.04).
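As a quick sanity check, here is a minimal sketch (in Python, standard library only) of how a report-parsing script could submit such a payload. The port assumes the http input plugin's default of 8080, and the payload fields are the illustrative ones from above:

import json
import urllib.request

# An illustrative payload; only "date" (ISO8601) matters to the
# pipeline above, the rest of the fields are up to your own schema.
payload = {
    "date": "2020-05-04T10:15:00Z",
    "product": "webshop",
    "branch": "master",
    "tool": "oss-scanner",
    "component": "lodash-4.17.11",
    "severity": "high",
}

req = urllib.request.Request(
    "http://localhost:8080",  # Logstash http input listens on 8080 by default
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.status)  # expect HTTP 200 from Logstash on success

Depending on the request's content type, the http input may decode the JSON body by itself; the json filter in the pipeline also covers payloads that arrive as plain text in the message field.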

Let's say we have 3 branches submitting open-source component analysis results on every build. Here is what the resulting report may look like (the Y-axis represents the number of discovered vulnerable components, the X-axis represents scan dates, split into 3 sub-plots):

[figure v1: discovered vulnerable components per scan date, three sub-plots]

Here, each of the 3 sub-plots shows the analysis dynamics of a separate branch.
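Under the hood, a view like this boils down to a date histogram split by branch. For reference, here is roughly the equivalent Elasticsearch aggregation as you could run it from Kibana's Dev Tools console – field names follow the illustrative payload above, and on older Elasticsearch versions the histogram parameter is interval rather than calendar_interval:

GET vuln-*/_search
{
  "size": 0,
  "aggs": {
    "per_branch": {
      "terms": { "field": "branch.keyword" },
      "aggs": {
        "per_day": {
          "date_histogram": { "field": "@timestamp", "calendar_interval": "day" }
        }
      }
    }
  }
}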

Or, the same data viewed per branch:

[figure v2: per-branch view of the same data]

And finally, we can build some nice data tables:

[figure v3: data table of discovered vulnerable components]

This example covers only one case – OSS vulnerability reports. But all kinds of vulnerability data can be submitted to ELK and then aggregated into powerful views for efficient management.
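For instance, a manual pentest finding could go through the very same pipeline with a payload like this (again, the fields beyond "date" are just an assumption for illustration):

{
  "date": "2020-05-06T16:30:00Z",
  "product": "webshop",
  "tool": "manual-pentest",
  "finding": "stored XSS in order comments",
  "severity": "critical"
}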