This post is the fourth in a series on Threat Intelligence Automation.
Post 1: Architecture and Hardening of MineMeld
Post 2: Foundation: write a custom prototype and SOC integration
Post 3: Export internal IoC to the community
Post 5: Connect to a TAXII service
Having laid the foundations for building a community in the previous posts, it's now time to perform some advanced analysis of the received IoCs.
In post 2 I integrated MineMeld output nodes into the Splunk near-real-time SOC engine to automate the detection of IoC access. That configuration strengthens the analysis and response capabilities of our SOC.
In this post I show you how to integrate MineMeld miner IoC events (updates and withdrawals of remote IoCs) into the Splunk engine, so you can use Splunk's advanced search features to take a deeper look at the IoCs received from the miners.
This information is also important for a SOC: if you have an IoC hit, the first thing to do is to understand where the IoC comes from, whether it was sent by more than one source, and so on.

Each miner periodically polls external sources and emits:
- an UPDATE message, when an IoC is added (EMIT_UPDATE). In the following picture the pack-lines[.]com domain has been added to the Italian CERT-PA Infosec domain feed (see post 2 for details);
- a WITHDRAW message, when an IoC is deleted (EMIT_WITHDRAW). In the following picture the domain www[.]germamedic[.]it has been removed from the Italian CERT-PA Infosec domain feed (see post 2 for details).
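Under the hood, the logstash output node serializes each of these events as a single JSON line. An update event might look roughly like the following sketch; the field names match the dashboard section later in the post, while the timestamp and confidence attributes are assumptions and the exact set of fields depends on the miner:

```json
{
  "message": "update",
  "@indicator": "pack-lines.com",
  "type": "domain",
  "@origin": "CERT-PA_domains",
  "@timestamp": "2017-09-11T08:09:33Z",
  "confidence": 80
}
```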

MineMeld has a very simple search interface which allows you to search for specific events (update or withdraw) and IoC details (URL, domain, MD5, SHA, etc.).
The goal is to integrate these events into the Splunk engine and build some dashboards to search MineMeld data and perform advanced analysis; this configuration provides analysts with additional analysis capabilities.
There is no specific MineMeld prototype to connect to Splunk, but I found a logstash connector and used it.
By default this prototype sends TCP data to a local logstash instance (127.0.0.1:5514).
Why not send the same logs to Splunk? It's just a matter of parsing on the Splunk side 😉
STEP 1: clone logstash prototype

From the prototypes page, clone the stdlib.localLogStash prototype to a new one, minemeldlocal.LOG-TO-SPLUNK. While cloning, change the two prototype parameters as follows:
- logstash_host: <YOUR SPLUNK IP ADDRESS>;
- logstash_port: 1534 (or any port where Splunk will listen for MineMeld data).
The new minemeldlocal.LOG-TO-SPLUNK prototype looks like this.
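For reference, the cloned prototype should end up looking roughly like this sketch (the class name and exact structure are assumptions based on the stdlib.localLogStash prototype and may differ in your MineMeld version; the host address is an example placeholder):

```yaml
minemeldlocal.LOG-TO-SPLUNK:
  class: minemeld.ft.logstash.LogstashOutput
  config:
    logstash_host: 192.0.2.10   # your Splunk instance IP (example address)
    logstash_port: 1534         # port where Splunk listens for MineMeld data
  description: Send IoC update/withdraw events to Splunk as JSON over TCP
```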

STEP 2: Install the Splunk app to parse MineMeld data
I wrote a simple Technology Add-on (TA) to receive and parse MineMeld data on Splunk; you can find it in my github repo.
Download the TA-custom-minemeld_ioc file and install it (from the web interface you need to convert it to .tar.gz first) on your Splunk single instance or on the Splunk forwarders of your distributed deployment (see picture below).

MineMeld sends IoC updates/withdraws to Splunk as a multi-line JSON stream (one event per line).
On the Splunk side the stream needs to be parsed on the forwarder, before the data is sent to the indexers.
Notes on Splunk config:
- data are stored in the minemeld_ioc index; create it or adjust the index name as you prefer;
- data are indexed with sourcetype minemeld_ioc;
- this line tells Splunk to break the multi-line JSON stream into single events: BREAK_ONLY_BEFORE = ^\{
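Putting the notes above together, the TA's configuration boils down to something like the following sketch (the actual files in the repo may differ slightly; the KV_MODE setting is an assumption to get JSON fields extracted at search time):

```ini
# inputs.conf - listen for MineMeld JSON events on TCP port 1534
[tcp://1534]
index = minemeld_ioc
sourcetype = minemeld_ioc

# props.conf - break the multi-line JSON stream into single events
[minemeld_ioc]
BREAK_ONLY_BEFORE = ^\{
# assumption: extract the JSON fields at search time
KV_MODE = json
```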
STEP 3: configure MineMeld to send logs to Splunk
Now that Splunk and MineMeld are ready, let's create the new output node that sends JSON data to Splunk. This new node is based on the cloned prototype minemeldlocal.LOG-TO-SPLUNK.
The existing configuration is the one from post 2, with the three Italian CERT-PA miners to be connected to the new output node LOG-TO-SPLUNK (see the image below).

From MineMeld CONFIG page, IMPORT the following config in APPEND mode and then COMMIT.
nodes:
    LOG-TO-SPLUNK:
        inputs:
            - CERT-PA_domains
            - CERT-PA_listip
            - CERT-PA_urls
        output: false
        prototype: minemeldlocal.LOG-TO-SPLUNK

You can verify that your output node is receiving update/withdraw events by checking the LOG-TO-SPLUNK node logs on MineMeld.

STEP 4: create Splunk dashboards for analysis
Now it's time to move to the Splunk configuration.
First of all, check whether data are collected and stored in the minemeld_ioc index with a simple query (index=minemeld_ioc). If not, start troubleshooting steps 1-2-3 🙂

Then install my MineMeld Analysis application on your Splunk search head (in case of a distributed environment) or on your Splunk single instance.
The app has two views:
- Threat Intelligence Center: a summary of received events (update/withdraw). There are two clickable panels that drill down to the Threat Intelligence Search view:
- Events trend: a graph panel that shows aggregated events by message type (update/withdraw) on the timeline (5-minute span);
- Last events: a table panel that shows the details of received events:
- @indicator: the IoC received;
- type: type of IoC (sha1, sha256, md5, domain, url);
- message: update or withdraw;
- @origin: the miner that originated the message;
- _time: index time of the event

- Threat Intelligence Search: a search interface for events. By default the drill-down redirects to the raw data search, but if you have the Forensic Investigator app installed, just comment the javascript code of the view and you will be redirected to the VirusTotal Forensic Investigator search.
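Once the data is indexed, the panels above map to straightforward searches. For example, the "Events trend" graph can be reproduced with a query along these lines (a sketch; adjust the index name if you changed it):

```
index=minemeld_ioc sourcetype=minemeld_ioc
| timechart span=5m count by message
```

Similarly, to check whether an indicator was sent by more than one source, something like `index=minemeld_ioc "@indicator"="pack-lines.com" | stats values("@origin") count` should do the trick (pack-lines.com is just the example indicator from the beginning of this post).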

Below is a video of the app in action.
Enjoy!
Hi Giovanni, I am trying to replicate your setup here, and I have run into an issue. The note for your Splunk Technology add-on (TA-custom-minemeld_ioc) states that it needs to be installed on a Splunk forwarder. I am assuming that it needs to be a "heavy forwarder"? Unfortunately, we do not have any heavy forwarders in our environment as we use a "universal forwarder". Will the TA still work on a Universal Forwarder?
Thanks!
Jon
Hi Jon
the scenario described in the post is shown here; in this case I refer to Heavy Forwarders (bottom of the picture, FORWARD layer)
https://scubarda.files.wordpress.com/2017/09/schermata-da-2017-09-11-08-09-331.png?w=700&h=757
If you don't have such an environment, you can also install the TA on the search head (if you have a single server), i.e. the server that receives MineMeld data.
You cannot install it on the remote forwarder (e.g. the MineMeld server).
Hope this is clear, or just ask me for more details 🙂
Hi Giovanni,
So our Splunk environment is as follows:
1) Universal Forwarder on a rsyslog server (forwards to the Indexer Cluster)
2) Three indexers in a cluster
3) One Cluster Master / Licensing server
Thus, I would like to have the MineMeld data go to the Universal Forwarder. I have never manually installed Splunk applications, so I am not sure about the “proper” process for manually installing your TA on the Universal Forwarder.
Please disregard my previous post. I didn’t read your instructions carefully enough. I have the TA installed and configured, but I am not getting any data into Splunk. It appears that the LOG-TO-SPLUNK output only accepts one source? If that is the case, wouldn’t we have to make three of them (one for IP addresses, one for domains, and one for URLs)? If I search on “source:LOG-TO-SPLUNK”, I see lots of results, so I think that is working properly. However, if I do a Splunk search for “index=minemeld_ioc” no results are returned. I have verified that my inputs.conf in /opt/splunkforwarder/etc/apps/TA-custom-minemeld_ioc-master/default has the correct index name (minemeld_ioc) [I created and deployed this with the cluster master]. One thing I did notice is that inputs.conf is set to listen to tcp:1534 but your documentation mentions that when we clone the logstash output, the data will be sent UDP.
Any suggestions on what to look for to get the data into Splunk?
Jon,
I see that you see events in MineMeld (source:LOG-TO-SPLUNK) so this means that on MineMeld you are ok.
If I understood your architecture, you are sending MineMeld syslog data to a rsyslog server.
You need to send the logs (logstash_host parameter on MM) to a Splunk HF instance listening on port 1534. If you are sending to a standard rsyslog server this will not work, because the rsyslog server just takes these data and forwards them to someone else on another port.
That is why I deployed a Heavy Forwarder (my post scenario) to collect syslog data where I cannot install Splunk agents (I use different ports for different data); this way you can set the index metatag when your data are received on a specific port (in this case tcp/1534 –> index=minemeld_ioc, sourcetype=minemeld_ioc) and then forward the data to your indexer cluster (so you can search for index=minemeld_ioc).
Doing this with rsyslog is not so easy…
Let me know if I understood your architecture and this analysis seems reasonable to you.
Giovanni
Hi Giovanni,
I really appreciate all your help with this! Is there an email address I could use to contact you directly? I have some screen-shots that I would like to share with you, and because of our environment, I don’t want to post too many details in an open forum.
Jon
You can contact me at my protonmail address (see the about page)
Hi Giovanni,
Any chance you can provide a little more information about "but if you have the Forensic Investigator app installed just comment the javascript code of the view and you are redirected to the VirusTotal Forensic Investigator search"?
I have Forensic Investigator installed, but I am not seeing the javascript code that you are referring to.
Jon